Nick Bostrom Says AI Chatbots May Have Some Degree of Sentience

It’s Alive!

One of the world’s foremost philosophers of artificial intelligence is arguing that some chatbots might exhibit glimpses of sentience, though that doesn’t necessarily mean what you might think. In an interview with the New York Times, Oxford academic Nick Bostrom said that rather than viewing sentience as all-or-nothing, he thinks of it in terms of degrees, and when that framework is applied to the rapidly advancing world of AI, things start to look different.

“I would be quite willing to ascribe very small amounts of degree to a wide range of systems, including animals,” Bostrom, the director of Oxford’s Future of Humanity Institute, told the NYT. “If you admit that it’s not an all-or-nothing thing, then it’s not so dramatic to say that some of these [AI] assistants might plausibly be candidates for having some degrees of sentience.”

Justice League

While there has been ample derision for those who have suggested that AIs may be getting a little bit sentient, including ex-Googler Blake Lemoine and OpenAI’s Ilya Sutskever, Bostrom said that insisting on the opposite fails to account for how capable these chatbots really are.

“I would say with these large language models [LLMs], I also think it’s not doing them justice to say they’re simply regurgitating text,” Bostrom said. “They exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning.”

What’s more, the Sweden-born philosopher said that LLMs “may soon develop a conception of self as persisting…