Researchers Believe Large Language Models Neither Think Nor Reason
2025-05-30 14:18 by Brave New World
Popular large AI models have begun to display "reasoning processes" — generating lengthy intermediate text before producing a final answer, superficially resembling a human's reasoning draft. Researchers at Arizona State University have published a paper on the preprint platform arXiv arguing that this behavior of large language models should not be described as "reasoning" or "thinking," because such anthropomorphic descriptions foster harmful misunderstandings about how the models actually work. Although reasoning models such as DeepSeek R1 achieve higher performance, the researchers contend that they neither think nor reason, and that no evidence of a genuine reasoning process has been found. What passes for thinking in large language models is in fact the discovery of correlations, and correlation does not equal causation. The researchers warn that treating the intermediate outputs of large language models as reasoning gives users false confidence in the models' problem-solving mechanisms.
arxiv.org/pdf/2504.09762
Researchers Warn Against Treating AI Outputs as Human-Like Reasoning
#Artificial Intelligence
via Solidot - Telegram Channel