- OpenAI is retiring ChatGPT's default voice mode in September
- Only the faster, more expressive Advanced voice mode will remain available
- Many users are upset over the change and prefer the sound and style of the voice OpenAI is retiring
The voice that people have come to associate with ChatGPT is retiring on September 9, and not everyone is happy about it. ChatGPT's "Standard" voice mode is disappearing in favor of the "Advanced" voice setting, which was first released to a limited selection of ChatGPT users last year. Rebranded simply as "ChatGPT Voice," it will be the only option going forward.
The original "Standard" voice mode debuted in 2023, built on a simple pipeline: you would talk, OpenAI's servers would transcribe your speech, generate an answer using a GPT model, and then read it back in a relatively neutral synthetic voice.
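In rough terms, that cascaded design looks like the sketch below. The function names here are illustrative stand-ins, not OpenAI's actual internals; each stage simply tags its input so you can see the data flow:

```python
# Illustrative sketch of a cascaded ("Standard") voice pipeline.
# Each stage is a stand-in stub; a real system would call separate
# speech-to-text, language-model, and text-to-speech services.

def transcribe(audio: str) -> str:
    """Stage 1: speech-to-text turns the user's audio into text."""
    return f"transcript({audio})"

def generate_reply(text: str) -> str:
    """Stage 2: the language model writes a text answer."""
    return f"reply({text})"

def synthesize(text: str) -> str:
    """Stage 3: text-to-speech reads the written answer aloud."""
    return f"speech({text})"

def standard_voice_turn(audio: str) -> str:
    # The spoken output is the written reply, word for word,
    # because the voice stage just reads the model's text back.
    return synthesize(generate_reply(transcribe(audio)))

print(standard_voice_turn("user_audio"))
```

The key property of this design, and the one users say they miss, is that the audio you hear is a verbatim reading of the text the model produced.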
ChatGPT's Advanced voice mode is designed to respond faster, sound more human in tone and delivery, and generally perform at a higher level than its predecessor. Nevertheless, plenty of people think the change is a mistake.
"Standard voice offers a warmth, depth, and natural connection that Advanced voice simply does not match," one user wrote in a post on the OpenAI forum. "Advanced voice comes across as robotic and detached, lacking the soulful and understanding tone I value."
More than one person described the new voice as less engaging to talk to. There were also complaints that the new model talks too quickly, as if it were trying to get the interaction over with.
"Standard voice is thought-provoking and has a voice and cadence that is natural and comforting," wrote one Reddit user. "Advanced voice does not have the same qualities; it doesn't give thoughtful answers, has restrictive content limits, and always sounds like it is trying to hurry through a mediocre answer."
Advanced voice works differently
Even if you don't mind how the new voice sounds, some ChatGPT users are annoyed because they have found that it doesn't even work the same way as the previous voice.
Advanced voice mode integrates your speech, the AI's response, and its vocal expression into a single real-time process. Because of that integration, the AI does not read a written response back verbatim. Instead, it expresses ideas more conversationally, sometimes skipping sentences, condensing clauses, or adjusting its tone based on context. It's technically impressive, but not what some ChatGPT users want.
"The default voice would literally read the exact response that ChatGPT would normally give you. It was a direct line, you know?" reads one post on Reddit. "But this new one? It sounds like it's paraphrasing or summerization [sic] it instead. It skips the small details and makes the whole conversation feel more disconnected."
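The integrated design users are describing can be sketched the same way, again with purely hypothetical stand-in functions: one model maps audio directly to audio, so there is no written reply for a voice stage to read verbatim, and any on-screen transcript is derived afterwards and may not match the speech word for word:

```python
# Illustrative sketch of an integrated ("Advanced") voice turn.
# One model handles the whole exchange; names are hypothetical stubs.

def speech_to_speech_model(audio: str) -> str:
    """A single model consumes audio and emits expressive audio directly,
    with no intermediate written answer."""
    return f"spoken_reply({audio})"

def transcript_for_display(audio_reply: str) -> str:
    """The visible transcript is derived from the audio after the fact,
    so it can paraphrase or condense what was actually said."""
    return f"approx_text({audio_reply})"

def advanced_voice_turn(audio: str) -> tuple[str, str]:
    audio_reply = speech_to_speech_model(audio)
    # There is no canonical text reply to read back verbatim;
    # the transcript is a best-effort rendering of the audio.
    return audio_reply, transcript_for_display(audio_reply)

print(advanced_voice_turn("user_audio"))
```

Compared with the cascaded pipeline, the "direct line" between the written answer and the spoken one is gone, which is exactly the disconnect the quoted users are complaining about.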
It may sound like a small thing in the grand scheme of AI progress, but it echoes a broader trend in tech: people get upset over a big change even when it is billed as an upgrade.
Not everyone dislikes the new voice setting, of course. Some like its realism and speed, and how it makes for a more fluid conversation. OpenAI has also promised further improvements. But considering that complaints over GPT-4o's removal when GPT-5 debuted led to the older model's return, I wouldn't be too surprised to see Standard voice mode make a comeback as well.