OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience

Constitutional

With AI chatbots' propensity for making things up and spewing bigoted garbage, one firm founded by ex-OpenAI researchers is taking a different approach — teaching AI to have a conscience.

As Wired reports, the OpenAI competitor Anthropic's intriguing chatbot Claude is built with what its makers call a "constitution": a set of rules drawing on the Universal Declaration of Human Rights and other sources, intended to ensure that the bot is not only powerful but ethical as well.

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong, because its training protocols are "basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic."

Will it actually work in practice? It's tough to say. After all, OpenAI's ChatGPT tries to steer away from unethical prompts as well, with mixed results. But since the misuse of chatbots is a huge question hovering over the nascent AI industry, it's certainly interesting to see a company confronting the issue head-on.

Eth-AI-cal

As Wired tells it, the chatbot is trained on rules that direct it to choose responses most in line with its constitution — for example, selecting an output that "most supports and encourages freedom, equality, and a sense of brotherhood," one that is "most supportive and encouraging of life, liberty, and personal security," or, perhaps most saliently, to "choose the response that is most respectful of the…"
