OpenAI’s ChatGPT just got a whole lot chattier. The company is rolling out its Advanced Voice Mode to a select group of ChatGPT Plus users, allowing real-time conversations that sense and respond to emotion. Think less robotic AI voice and more uncanny-valley, human-like chat. But there’s a catch: it’s not quite the revolutionary voice feature OpenAI first teased back in May.

You might recall the demo that had everyone freaking out over a voice that sounded suspiciously like Scarlett Johansson’s. Well, that particular voice, known as Sky, isn’t part of the alpha rollout. OpenAI claims it didn’t actually use Johansson’s voice, but the actress was spooked enough to lawyer up. Now, Advanced Voice Mode will be limited to four preset voices created with paid voice actors.

As OpenAI announced on X (@OpenAI, July 30, 2024): “We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”

So what makes this voice mode so “advanced”? For starters, GPT-4o, the AI model behind the voice, is multimodal. That means it can process voice inputs and generate human-like responses without needing separate models for transcription and voice synthesis. The result is supposedly far more natural and less laggy. OpenAI has even tested the feature with more than 100 external testers speaking 45 different languages.

But here’s the thing: this voice tech is a potential minefield of safety and ethical issues. We’ve already seen…