OpenAI's New Reasoning AI Models Hallucinate More
OpenAI's recently launched o3 and o4-mini models set industry-leading marks in many respects, yet they still suffer from the "hallucination" problem, and to a greater degree than OpenAI's earlier models. According to OpenAI's internal tests, o3 and o4-mini hallucinate more often than the company's previous reasoning models (o1, o1-mini, and o3-mini) and even more than its non-reasoning models. In the technical report for the two models, OpenAI wrote that "more research is needed" to understand why hallucinations get worse as reasoning models are scaled up. Although o3 and o4-mini outperform their predecessors on tasks such as programming and mathematics, they also make more claims overall, which means they produce more accurate claims but, at the same time, more inaccurate and hallucinated ones.
—— TechCrunch
via Windvane Reference Express (Telegram channel)