Yes ChatGPT Lies

I’ve published well over 5,000 words about ChatGPT getting things wrong. In one article, ChatGPT gave me the wrong date and title for a blog post about a Neil Gaiman novel. I know it’s wrong because Gaiman responded to me with a link to the correct post. In another article, I wrote about ChatGPT source-laundering a piece from a small blog and attributing it to the United Nations. The only reason I could prove ChatGPT plagiarized and obfuscated the source is that the blog’s author had made a factual error. Large language models are essentially built by plagiarizing text that humans wrote, and humans lie, get facts wrong, and are generally unreliable.

I also ranted a bit about the fact that AI-generated content will eventually start getting pulled into generative chat AIs. Blackhat SEOs and large media companies like CNET creating webpages with AI-written content will make it impossible for the AIs’ creators to filter for only human-written words.

“Why ChatGPT and Bing Chat are so good at making things up” by Benj Edwards, April 6, 2023, arstechnica.com

The article from Ars Technica covers some far more harmful examples of AI-created misinformation: an Australian mayor who allegedly found that ChatGPT said he went to prison for bribery, and a “law professor who discovered that ChatGPT had placed him on a list of legal scholars who had sexually harassed someone.” This and other reports have sparked a linguistic debate about calling ChatGPT a liar. Techmeme has…
