Today’s links

* Humans are not perfectly vigilant: And that’s bad news for AI.
* Hey look at this: Delights to delectate.
* This day in history: 2004, 2009, 2014, 2019, 2023
* Upcoming appearances: Where to find me.
* Recent appearances: Where I’ve been.
* Latest books: You keep readin’ ’em, I’ll keep writin’ ’em.
* Upcoming books: Like I said, I’ll keep writin’ ’em.
* Colophon: All the rest.


Humans are not perfectly vigilant (permalink)

Here’s a fun AI story: a security researcher noticed that large companies’ AI-authored source code repeatedly referenced a nonexistent library (an AI “hallucination”), so he created a (defanged) malicious library with that name and uploaded it, and thousands of developers automatically downloaded and incorporated it as they compiled the code:

https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/

These “hallucinations” are a stubbornly persistent feature of large language models, because these models only give the illusion of understanding; in reality, they are just sophisticated forms of autocomplete, drawing on huge databases to make shrewd (but reliably fallible) guesses about which word comes next:

https://dl.acm.org/doi/10.1145/3442188.3445922

Guessing the next word without understanding the meaning of the resulting sentence makes unsupervised LLMs unsuitable for high-stakes tasks. The whole AI bubble is based on convincing investors that one or more of the following is true:

I. There are low-stakes, high-value tasks that will recoup the massive costs of AI training and operation;

II. There are high-stakes, high-value tasks that can be made cheaper by adding an AI to a human operator;

III. Adding more training data to an AI will make it stop…
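
To make the hallucinated-package attack in the story above concrete, here is a minimal sketch of one defensive habit: before installing a dependency an AI assistant suggested, check that the name actually exists on the package index and glance at its provenance, instead of trusting the suggestion. This assumes Python, the `requests` library, and PyPI’s public JSON API; the script and function names are my own illustration, not anything from the Register story.

```python
# Minimal sketch: verify AI-suggested dependencies against PyPI before installing.
# Assumes the `requests` library; the function name is hypothetical.
import sys

import requests

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API


def check_suggested_package(name: str) -> None:
    """Report whether an AI-suggested dependency actually exists on PyPI."""
    resp = requests.get(PYPI_JSON.format(name=name), timeout=10)
    if resp.status_code == 404:
        # Either the model hallucinated the name, or nobody has squatted it
        # yet -- in both cases, installing it blindly is the mistake.
        print(f"{name}: not on PyPI (possible hallucinated dependency)")
        return
    resp.raise_for_status()
    info = resp.json()["info"]
    # A real package still deserves a look: a freshly registered, near-empty
    # project with a name your AI invented is the pattern the researcher exploited.
    print(f"{name}: found, latest version {info['version']}")
    print(f"  summary: {info.get('summary') or '(none)'}")
    print(f"  author:  {info.get('author') or '(unknown)'}")


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        check_suggested_package(pkg)
```

Run it against each dependency the assistant proposed; a 404 means the name is either hallucinated or still free for an attacker to register, which is exactly the gap the researcher’s defanged upload demonstrated.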