ChatGPT and other language AIs are just as irrational as we are

The past few years have seen an explosion of progress in large language model artificial intelligence systems that can do things like write poetry, conduct humanlike conversations and pass medical school exams. This progress has yielded models like ChatGPT that could have major social and economic ramifications, ranging from job displacement and increased misinformation to massive productivity boosts.

Despite their impressive abilities, large language models don't actually think. They tend to make elementary mistakes and even make things up. However, because they generate fluent language, people tend to respond to them as though they do think.

This has led researchers to study the models' "cognitive" abilities and biases, work that has grown in importance now that large language models are widely accessible. This line of research dates back to early large language models such as Google's BERT, which is integrated into Google's search engine, and so the field has been coined BERTology. It has already revealed a lot about what such models can do and where they go wrong.

For instance, cleverly designed experiments have shown that many language models have trouble dealing with negation – for example, a question phrased as "what is not" – and with doing simple calculations. They can be overly confident in their answers, even when wrong. Like other modern machine learning algorithms, they have trouble explaining themselves when asked why they answered a certain way.

Words and thoughts

Inspired by the growing body of research in BERTology and related fields like cognitive science, my student Zhisheng Tang and I set out to answer a seemingly simple question about large language models: Are they rational?…
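To give a flavor of the kind of probing experiment mentioned above, the snippet below is a minimal sketch of a negation check, assuming the Hugging Face transformers library is installed; the model name and prompts are illustrative placeholders, not the carefully controlled stimuli used in the actual studies.

# Minimal sketch of a negation probe, assuming the Hugging Face
# "transformers" library; model and prompts are illustrative only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Compare top completions for an affirmative prompt and its negated twin.
for prompt in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    guesses = fill(prompt, top_k=3)
    print(prompt, "->", [g["token_str"] for g in guesses])

# If the model largely ignores "not", both prompts yield similar
# completions (e.g. "bird"), illustrating the trouble with negation.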
