Scientists Gave AI an "Inner Monologue" and Something Fascinating Happened

Therefore AI Am

Give an AI an inner monologue and, it seems, it starts teaching itself to be smarter. In a not-yet-peer-reviewed paper, researchers from Stanford and a group calling itself "Notbad AI" teamed up to create an AI model that pauses to "think" before spitting out answers, shows its work, and asks users to tell it which response is most correct.

The team behind the Quiet Self-Taught Reasoner, or Quiet-STaR for short, wanted their model not only to teach itself to reason, which they achieved in 2022 with the original Self-Taught Reasoner algorithm, but to do so "quietly" before answering prompts, operating like the inner monologue that, ideally, runs before a human speaks.

"Excitingly," as Stanford's Eric Zelikman enthused in an X-formerly-Twitter thread about the new model he helped produce, "self-teaching reasoning on diverse web text automatically improves other reasoning!"

If You Build It

To create this contemplative AI, the research team built Quiet-STaR on Mistral 7B, an open-source large language model (LLM) that, according to the Hugging Face AI community, has seven billion parameters and is said to outperform the latest version of Meta's Llama model.

Quiet-STaR was programmed, essentially, to show the reasoning behind its outputs, and users of the model could then select which response was most accurate. As the paper notes, this approach made the model accurate 47.2 percent of the time, which…
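The pattern described above, generating a hidden rationale before committing to an answer, then picking the best candidate, can be sketched in a toy form. This is not the Quiet-STaR implementation (the actual method learns "thought" tokens inside a transformer); `stub_model`, `think_then_answer`, and `best_of` are hypothetical names for illustration only.

```python
def stub_model(prompt, rationale=None):
    """Hypothetical stand-in for a language model call.

    Returns an (answer, confidence) pair. In this toy, conditioning
    on a rationale (the "inner monologue") raises confidence.
    """
    if rationale:
        return f"reasoned answer to {prompt!r}", 0.9
    return f"quick answer to {prompt!r}", 0.5


def think_then_answer(prompt):
    # 1. Quietly generate a rationale before producing any output.
    rationale = f"step-by-step thoughts about {prompt!r}"
    # 2. Condition the final answer on that hidden rationale.
    answer, confidence = stub_model(prompt, rationale=rationale)
    return rationale, answer, confidence


def best_of(prompt, n=3):
    # Mimic "ask users which response is most correct" by keeping the
    # highest-scoring candidate (a stand-in for human selection).
    candidates = [think_then_answer(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: c[2])
```

In the real system, the selection signal feeds back into training so the model improves at producing useful rationales; here it only picks among candidates.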
