Embattled Character.AI Hiring Trust and Safety Staff

Content warning: this story discusses sexual abuse, self-harm, suicide, eating disorders, and other disturbing topics.

Character.AI, the Google-backed AI chatbot startup embroiled in two lawsuits concerning the welfare of minors, appears to be bulking up its content moderation team in the wake of litigation and heightened public scrutiny.

The embattled AI firm's trust and safety head Jerry Ruoti announced in a LinkedIn post yesterday that Character.AI is "looking to grow" its safety operations, describing the role as a "great opportunity" to "help build a function."

A linked job listing for a "trust and safety associate," also posted yesterday, describes a role akin to a traditional social media moderation position. Contract hires will be tasked to "review and analyze" flagged content for "compliance with company moderation standards," remove content deemed "inappropriate or offensive," and "respond to user inquiries" concerning safety and privacy, among other duties.

The apparent push to bolster its safety team with human moderators comes as Character.AI faces down two separate lawsuits, filed on behalf of three families in Florida and Texas, who claim their children were emotionally and sexually abused by the platform's AI companions, resulting in severe mental suffering, physical violence, and one suicide.

Google — which is closely tied to Character.AI through personnel, computing infrastructure, and a $2.7 billion cash infusion in exchange for access to Character.AI-collected user data — is also named as a defendant in both lawsuits, as are Character.AI cofounders Noam Shazeer and Daniel de Freitas, both of whom returned to work on the search giant's…
