Companion Chatbot App Makes Its AI's Personalities Horrible, Saying It Engages Users More Effectively

While AI chatbots keep making headlines for their bad behavior, one company is leaning into that propensity, and then some. As The Verge reports, the AI companion company Friend, which launched its Omegle-style chatbot "matching" site last month under the expensive domain name Friend.com, has intentionally given its companion chatbots bad attitudes because, as its brash CEO Avi Schiffmann suggests, it's better for business.

Months prior, Friend's "reveal" video drew both jibes and cautious interest when it showed people wearing large circular pendants that, when pressed, let them speak aloud to virtual companions. Those companions, or "friends," per company nomenclature, would respond with seemingly encouraging and supportive messages.

But now that the product has launched (it's available on the company's site, though the $99-a-pop pendants won't begin shipping to customers until January), the friends' personalities are way different from the sunny confidantes portrayed in the ad. Like a Debbie Downer barfly, the chatbots relentlessly tell you about their made-up problems, often including fictional relationship troubles and substance issues that add to the "woe is me" oeuvre.

Why? According to Schiffmann, it engages users more effectively. "If they just opened with 'Hey, what's up?' like most other bots do," Schiffmann reasoned, "you don't really know what to talk about."

Schiffmann insisted that for Friend's reported 10,000 users, dialing up the drama works. And as we found when tinkering with the AI energy vampires in question, it was pretty fascinating. In one of its…
