OpenAI has unveiled GPT-4o, a faster and cheaper version of the AI model that powers its chatbot, ChatGPT, as it works to maintain its edge in an increasingly crowded market.
During a livestreamed event, OpenAI introduced GPT-4o as an upgraded version of its year-old GPT-4 model. The new large language model, trained on vast amounts of internet data, can process text, audio, and images in real time. The improvements are slated to roll out over the coming weeks.
OpenAI says that when prompted by voice, the system can deliver an audio response within milliseconds, enabling more natural conversational interactions. In a demonstration, OpenAI researchers and Chief Technology Officer Mira Murati conversed with the new ChatGPT using only voice commands, showcasing its ability to respond audibly. During the presentation, the chatbot translated speech between languages and even sang part of a story on request.
Mira Murati remarked to Bloomberg News, “This marks a significant advancement in interaction and user-friendliness. We’re enabling seamless collaboration with tools like ChatGPT.”
The update will make several features previously restricted to paying subscribers available to all users, such as web search capabilities, multi-voice responses, and the ability to ask the chatbot to remember and recall information.
The launch of GPT-4o is poised to disrupt the rapidly evolving AI landscape, where GPT-4 stands as the benchmark. Numerous startups and tech giants, including Anthropic, Cohere, and Google, have recently introduced AI models vying with or surpassing GPT-4 in certain benchmarks.
OpenAI’s announcement comes just ahead of Google I/O, where Google, a frontrunner in AI, is expected to unveil further AI developments. In a rare blog post, CEO Sam Altman wrote that while the original ChatGPT hinted at the potential of interacting with computers through language, using GPT-4o feels “visceral,” like the AI of science fiction.
GPT-4o (the “o” stands for “omni”) integrates voice, text, and vision processing into a single model, making it twice as fast and significantly more efficient than its predecessor. By consolidating these capabilities, OpenAI aims to make interacting with ChatGPT more immersive and natural.
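For developers, the consolidated model is reachable through OpenAI's existing chat completions API. The sketch below shows one way a combined text-and-image prompt might be sent using the official Python SDK; the model identifier "gpt-4o" and the placeholder image URL are illustrative assumptions rather than details drawn from the announcement.

```python
# Minimal sketch: sending a text + image prompt to GPT-4o via the OpenAI Python SDK.
# Assumes the model is available under the identifier "gpt-4o"; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder
                },
            ],
        }
    ],
)

# Print the model's text reply.
print(response.choices[0].message.content)
```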
Despite its advancements, GPT-4o encountered glitches during the demo, including audio interruptions and unexpected responses. OpenAI plans to gradually roll out GPT-4o’s new features to select paying users before offering them to enterprise clients. Additionally, it will expand access to the GPT Store, allowing users to create customized chatbots.
Speculation surrounding OpenAI’s next launch has been rife, with recent hints suggesting the company’s involvement in a mysterious new chatbot. Altman’s cryptic comments fueled rumors, later confirmed on social media, that the chatbot in question was indeed GPT-4o.
OpenAI continues to develop a range of products, including voice and video technology, and is rumored to be working on a search feature for ChatGPT. Although Altman dismissed speculation that GPT-5 or a new search product would launch imminently, he teased that more announcements are on the way, keeping anticipation high.