Scientists Warn That AI Threatens Science Itself

What role should text-generating large language models (LLMs) have in the scientific research process? According to a team of Oxford scientists, the answer — at least for now — is: pretty much none.

In a new essay, researchers from the Oxford Internet Institute argue that scientists should abstain from using LLM-powered tools like chatbots to assist in scientific research, on the grounds that AI’s penchant for hallucinating and fabricating facts, combined with the human tendency to anthropomorphize these human-mimicking word engines, could lead to larger information breakdowns — a fate that could ultimately threaten the fabric of science itself.

“Our tendency to anthropomorphize machines and trust models as human-like truth-tellers, consuming and spreading the bad information that they produce in the process,” the researchers write in the essay, which was published this week in the journal Nature Human Behaviour, “is uniquely worrying for the future of science.”

The scientists’ argument hinges on the reality that LLMs and the many bots the technology powers aren’t primarily designed to be truthful. As they write in the essay, sounding truthful is but “one element by which the usefulness of these systems is measured.” Characteristics including “helpfulness, harmlessness, technical efficiency, profitability, [and] customer adoption” matter, too.

“LLMs are designed to produce helpful and convincing responses,” they continue, “without any overriding guarantees regarding their accuracy or alignment with fact.”

Put simply, if a large language model — which, above all else, is taught to be convincing — comes up with an answer that’s persuasive but not…
