OpenAI is rolling out a significant update to ChatGPT that will enable the chatbot to engage in voice conversations with users and interact using images, bringing it closer to established AI assistants like Apple’s Siri.
According to OpenAI’s blog post on Monday, the addition of the voice feature “opens doors to many creative and accessibility-focused applications.” With the update, ChatGPT can narrate bedtime stories, settle debates, and read users’ text input aloud, among other things.
AI assistants such as Siri, Google Assistant, and Amazon’s Alexa integrate deeply with the devices on which they run, and people commonly use them for tasks like setting alarms, providing reminders, and answering questions with information from the internet.
Since its debut, ChatGPT has found adoption in various industries for tasks ranging from summarizing documents to writing computer code, leading to a competitive race among Big Tech companies to launch their own generative AI-based offerings.
The technology behind ChatGPT’s voice feature is also finding uses beyond the chatbot itself: Spotify is using it to translate podcast content into different languages.
With the introduction of image support, users can take pictures of objects or scenes and ask the chatbot for assistance, such as troubleshooting a malfunctioning appliance, planning meals based on the contents of a fridge, or analyzing a complex data graph for work. Alphabet’s Google Lens is currently a popular choice for obtaining information from images.
The new features will roll out to subscribers of ChatGPT’s Plus and Enterprise plans over the next two weeks, offering users a more versatile and interactive AI experience.