AI Chatbots Are Only Useful If You Think They Are, Scientists Find

Our experience with AI chatbots so far has been incredibly mixed. One moment, it can feel like talking to an actual person capable of genuine insight and advice; the next, a conversation ends in frustration, with overeager algorithms stumbling into nonsense or outright false claims.

But what if our experience reflects the expectations we bring into these conversations? In other words, what if AI is simply reflecting our own beliefs back at us, something many have suspected for a while now?

In a new study published in the journal Nature Machine Intelligence, a team of researchers from the MIT Media Lab found that subjects who were “primed” for a specific AI experience almost always ended up having that experience. That’s a striking result: it suggests that much of the attention-grabbing capability of chatbots can be explained by users projecting their expectations onto the systems.

“AI is a mirror,” MIT Media Lab’s Pat Pataranutaporn, co-author of the study, told Scientific American. “We wanted to quantify the effect of AI placebo, basically,” he added. “We wanted to see what happened if you have a certain imagination of AI: How would that manifest in your interaction?”

In one experiment, the team divided 300 participants into three groups. All participants were asked to use an AI to receive mental health support and to gauge how effective it was at providing it. However, the three groups were each told to expect different experiences, despite the fact that all…