{"id":3228,"date":"2023-04-10T16:42:53","date_gmt":"2023-04-10T16:42:53","guid":{"rendered":"https:\/\/www.godefy.com\/chatgpt-and-other-language-ais-are-just-as-irrational-as-we-are"},"modified":"2023-04-10T16:42:53","modified_gmt":"2023-04-10T16:42:53","slug":"chatgpt-and-other-language-ais-are-just-as-irrational-as-we-are","status":"publish","type":"post","link":"https:\/\/www.godefy.com\/chatgpt-and-other-language-ais-are-just-as-irrational-as-we-are\/","title":{"rendered":"ChatGPT and other language AIs are just as irrational as we are"},"content":{"rendered":"

The past few years have seen an explosion of progress in large language model artificial intelligence systems that can do things like write poetry, conduct humanlike conversations and pass medical school exams. This progress has yielded models like ChatGPT that could have major social and economic ramifications, ranging from job displacement and increased misinformation to massive productivity boosts. Despite their impressive abilities, large language models don\u2019t actually think. They tend to make elementary mistakes and even make things up. However, because they generate fluent language, people tend to respond to them as though they do think. This has led researchers to study the models\u2019 \u201ccognitive\u201d abilities and biases, work that has grown in importance now that large language models are widely accessible. This line of research dates back to early large language models such as Google\u2019s BERT, which is integrated into its search engine; the field has accordingly been dubbed BERTology. This research has already revealed a lot about what such models can do and where they go wrong. For instance, cleverly designed experiments have shown that many language models have trouble dealing with negation \u2013 for example, a question phrased as \u201cwhat is not\u201d \u2013 and with doing simple calculations. They can be overly confident in their answers, even when wrong. Like other modern machine learning algorithms, they have trouble explaining themselves when asked why they answered a certain way. Words and thoughts: Inspired by the growing body of research in BERTology and related fields like cognitive science, my student Zhisheng Tang and I set out to answer a seemingly simple question about large language models: Are they rational?…ChatGPT and other language AIs are just as irrational as we are<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"

The past few years have seen an explosion of progress in large language model artificial intelligence systems that can do things like write poetry, conduct humanlike conversations and pass medical school exams. This progress… <\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[121,29,112,300,120,1420,87,206,706,12,295],"_links":{"self":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/posts\/3228"}],"collection":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/comments?post=3228"}],"version-history":[{"count":0,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/posts\/3228\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/media?parent=3228"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/categories?post=3228"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/tags?post=3228"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}