OpenAI’s next flagship AI model is reportedly struggling to outperform older models on some tasks


OpenAI is rumored to be working on the next generation of its flagship large language models (LLMs); however, the effort may be hitting a wall. The San Francisco-based AI firm is struggling to significantly upgrade the capabilities of its next AI model, internally codenamed Orion, according to a report. The model is said to outperform older models on language-based tasks but is weaker in areas such as coding. Notably, the company is also struggling to accumulate enough training data to properly train its AI models.

OpenAI’s Orion AI model reportedly fails to show significant improvement

The AI firm's next flagship LLM, Orion, is not performing up to expectations, the report said. Citing unnamed employees, the report claims that the model shows a considerable upgrade on language-based tasks but remains weak in areas such as coding.

This is considered a major issue, as Orion is reportedly more expensive to run in OpenAI's data centres than older models such as GPT-4 and GPT-4o. The cost-to-performance ratio of the upcoming LLM may make it harder for the company to position the model attractively to enterprises and consumers.

Additionally, the report claims that the jump in overall quality between GPT-4 and Orion is smaller than the jump between GPT-3 and GPT-4. This is a worrying development, as the same trend can be seen in recently released AI models from competitors such as Anthropic and Mistral.

For example, benchmark scores for Claude 3.5 Sonnet show that the quality gains with each new foundation model are becoming more iterative. However, competitors have largely kept this slowdown out of the spotlight by focusing instead on developing new capabilities, such as agentic AI.

The report also highlights that, as a way to tackle this challenge, the industry is choosing to improve AI models after their initial training is complete. This can be done by fine-tuning the model's output, for instance by adding additional filters. However, this is a workaround and does not compensate for the limitations caused by the lack of a better training framework or of sufficient data.
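For context, post-training refinement of this kind is already possible through OpenAI's public fine-tuning API. The minimal sketch below shows the general shape of such a workflow; the file name and base model are illustrative placeholders, and the report does not describe OpenAI's internal process for Orion.

```python
# Minimal sketch of post-training improvement via supervised fine-tuning,
# using the OpenAI Python SDK. File name and base model are illustrative
# placeholders, not details from the report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of example conversations targeting a weak area
# (e.g., coding tasks), formatted per OpenAI's fine-tuning chat schema.
training_file = client.files.create(
    file=open("coding_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job on an existing base model. This adjusts an
# already-trained model rather than training a larger one from scratch.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # example base model, not Orion
)
print(job.id, job.status)
```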

While the former is a technical, research-based challenge, the latter is largely down to the limited availability of free and licensed data. To solve this, OpenAI has reportedly created a foundation team tasked with finding ways to deal with the lack of training data. However, there is no telling whether the team will be able to source enough data in time to further train Orion and improve its capabilities.


