Some thoughts on the current state of AI from a disinformation research perspective

Artificial intelligence technologies grow more advanced with each passing week, and large language models such as ChatGPT and image generation models such as Stable Diffusion have progressed particularly rapidly in recent months (or, at least, progressed in ways that are publicly obvious and widely discussed). This progress has been accompanied by a mix of grandiose utopian predictions and apocalyptic fearmongering about the potential social, political, and physical consequences of these technologies. While some of these concerns are overblown (ChatGPT is not going to spontaneously evolve into SkyNet, for example), recent advances in AI do present a variety of risks. Here are four of my current/near-future concerns from a disinformation research perspective:

[Image caption: These images were generated with a Python script that fed simple phrases into Stable Diffusion. Also, the rabbit is creepy.]

Mass account creation tools will become better at generating accounts that look “real”

The combination of increasingly powerful text-to-image models such as Stable Diffusion and large language models such as ChatGPT will enable those in the business of writing mass account creation tools to substantially improve their products. Stable Diffusion runs sufficiently well on a decent MacBook to generate unique profile images for thousands of accounts per day (a minimal sketch of such a script follows below), and large language models can provide an endless supply of organic-looking unique text snippets for biographies and initial posts. Although the output of current text-to-image models still contains plenty of “uncanny valley” artifacts that become obvious when images are closely inspected, the generated images are less obviously similar to one another than previous types of…
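By way of illustration, here is a minimal sketch of this kind of image-generation script, assuming the Hugging Face diffusers library and a Stable Diffusion v1.5 checkpoint; the checkpoint name, prompts, and file names are illustrative assumptions, not the actual script used to generate the images above.

    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint (the checkpoint name here is an
    # illustrative assumption; any compatible checkpoint works).
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("mps")  # Apple Silicon GPU; use "cuda" or "cpu" elsewhere

    # Feed simple phrases into the model, saving one image per phrase.
    prompts = [
        "a closeup portrait photo of a smiling person",
        "a white rabbit sitting in a field",
    ]
    for i, prompt in enumerate(prompts):
        image = pipe(prompt).images[0]  # the pipeline returns PIL images
        image.save(f"generated_{i:04d}.png")

Looped over a large list of phrases, a script along these lines can churn out a steady stream of distinct profile images, which is exactly the property that makes mass-created accounts harder to spot.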
