OpenAI has started rolling out its advanced Voice Mode feature. Starting today, a small number of paying ChatGPT users will be able to have a tête-à-tête with the AI chatbot. All ChatGPT Plus members should receive access to the expanded toolset by this fall.
In an announcement on X, the company said this advanced version of its Voice Mode “offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”
Support for voice conversations arrived in ChatGPT last September, and the more advanced version got a public demo in May. GPT-4o handles voice with a single multimodal model rather than the three separate models used by the previous audio pipeline, reducing latency in conversations with the chatbot.
OpenAI drew a lot of criticism at the May demo for debuting a voice option that sounded uncannily like Scarlett Johansson, whose acting career includes voicing the AI character Samantha in Spike Jonze’s film Her. The release of advanced Voice Mode was delayed shortly after the backlash. Even though the company insisted that the voice actor was not imitating Johansson’s performance, the similar-sounding voice has since been removed.