CEO Suggests That Humans Could Be "Adversarially Attacked" Like Neural Networks

Apropos of almost nothing, the CEO of an AI startup has suggested that a bad actor could use a mysterious image to "attack" human brains the way researchers have done with neural networks.

The kerfuffle began when Florent Crivello, CEO of the AI staffing startup Lindy, agreed with researcher Tomáš Daniš's take that "there is no evidence humans can't be adversarially attacked like neural networks can." "There could be," the Germany-based AI researcher wrote on X-formerly-Twitter, "an artificially constructed sensory input that makes you go insane forever."

Tonally reminiscent of former OpenAI chief scientist Ilya Sutskever's infamous claim two years ago that "it may be that today's large neural networks are slightly conscious," Daniš's post also appeared out of the ether with little context, and Crivello, while adding citations to the theory, didn't do much to lend it more credence.

In his own post, the Lindy CEO referenced a 2015 Google study which found that overlaying a nearly imperceptible layer of noise atop a photo of a panda caused a neural network to misidentify it as a gibbon. To his mind, it seems "completely obvious" that such an attack, known in AI research as an "adversarial example," could be used on humans as well. "I'm always surprised to find that this isn't completely obvious to everyone," he wrote, pointing to precedent like the Pokémon episode that aired in 1990s Japan, in which a Pikachu attack caused the screen to flash red and blue at 12 Hz…
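The study Crivello pointed to appears to be Google's "Explaining and Harnessing Adversarial Examples," which produced the doctored panda with the fast gradient sign method: nudge every pixel a tiny step in the direction that most increases the classifier's error. Here is a minimal sketch of that idea in PyTorch; the model, tensors, and epsilon value are illustrative stand-ins, not code from the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    # Fast gradient sign method (Goodfellow et al., 2015).
    # `model` is assumed to be any differentiable image classifier;
    # `image` is a (1, C, H, W) tensor with pixels in [0, 1];
    # `label` holds the true class index as a shape-(1,) tensor.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()  # populates image.grad with d(loss)/d(pixel)
    # Step each pixel by epsilon toward higher loss: imperceptible
    # to a human, but often enough to flip the model's prediction.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

In the paper's famous figure, an epsilon of .007 was enough to take the classifier from "panda" at 57.7 percent confidence to "gibbon" at 99.3 percent, even though the perturbed image looked unchanged to human eyes.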
