People keep anthropomorphizing AI. Here's why

People have been anthropomorphizing AI at least since ELIZA in the 1960s, but the new Bing chatbot seems to have kicked things up a notch.

This matters, because anthropomorphizing AI is dangerous. It can make the emotionally disturbing effect of misbehaving chatbots much worse. It can also make people more inclined to follow through if a bot suggests real-world harm. Most importantly, there are urgent and critical policy questions around generative AI, and if ideas like "robot rights" get even a toehold, they may hijack or derail those debates.

Experts concerned about this trend have been trying to emphasize that large language models are merely trying to predict the next word in a sequence. LLMs have been called stochastic parrots, autocomplete on steroids, bullshit generators, pastiche generators, and blurry JPEGs of the web.

On the other hand, there's the media, which reaches way more people. That's the first of three reasons why people find it easy to anthropomorphize chatbots. To be fair, the articles are more balanced than the headlines, and many are critical, but in this case criticism is a form of hype.

Design also plays a role. ChatGPT's repeated disclaimers ("As a large language model trained by OpenAI, …") were mildly annoying but helpful for reminding people that they were talking to a bot. Bing Chat seems just the opposite. It professes goals and desires, mirrors the user's tone, and is very protective of itself, among other human-like behaviors. These behaviors have no utility in the context of a search engine and could have been…
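(An aside on the "predict the next word" point above: the sketch below shows roughly what that means in practice, using the Hugging Face transformers library and GPT-2 purely as illustrative assumptions on my part; it is not anything the experts quoted above wrote, and the prompt and model choice are arbitrary.)

```python
# Minimal sketch: a language model assigns probabilities to candidate next tokens
# given a prefix; "generation" is just repeatedly sampling from this distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the token that would follow the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>10}  p={prob.item():.3f}")
```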