Grief tech seems as predatory as a Victorian-era séance

Many so-called grief tech startups are popping up. These companies promise the impossible: interactive digital immortality powered by AI. But training a large language model on the writings of a deceased loved one does not and cannot bring that person back.

Oliver Bateman wrote an article for UnHerd about attempting to “resurrect” his dad using GPT-4. The AI, trained on thousands of emails, generated text that bore some resemblance to his father’s writing, but the chat program was not the man. The text was off in both substance and style: the typo-laden, idiosyncratic human writing that went in came out as clean, clear sentences.

Image: KnowTechie

For Futurism, Maggie Harrison chronicled an experience with a startup called Seance AI. Like Bateman, Harrison interacted with a pale reflection of her father. The chat program ultimately just echoed her messages back, a half-hearted active-listening technique, telling her what she had just said.

AIs have the simple objective of satisfying a user, and large language models are prone to sycophancy and sandbagging: they answer subjective questions by flattering their users’ stated beliefs, and they endorse common misconceptions when users appear uneducated. Grief tech is a use case especially likely to trigger both behaviors. After all, the AI is trying not only to pass the Turing test but to pass it as someone’s deceased loved one. As thousands of years of mentalism and claimed communication with lost souls demonstrate, love is blind, and grief is blinding. The late James Randi devoted much of his life to debunking those who…
