A misleading open letter about sci-fi AI dangers ignores the real risks

The Future of Life Institute released an open letter asking for a 6-month pause on training language models “more powerful than” GPT-4. Over 1,000 researchers, technologists, and public figures have already signed the letter. The letter raises alarm about many AI risks:

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” (source; emphasis in original)

We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk while ignoring the version of the problem that is already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate.

Speculative harm 1: Malicious disinformation campaigns

“Should we let machines flood our information channels with propaganda and untruth?”

The letter refers to a common claim: LLMs will lead to a flood of propaganda because they give malicious actors the tools to automate the creation of disinformation. But as we’ve argued, creating disinformation is not enough to spread it. Distributing disinformation is the hard part. Open-source LLMs powerful enough to generate disinformation have also been around for a while; we haven’t seen…
