https://machinelearning.apple.com/research/illusion-of-thinking
The document investigates the capabilities and limitations of Large Reasoning Models (LRMs), a new generation of language models designed for complex problem-solving. It critiques current evaluation methods, which rely heavily on mathematical benchmarks prone to data contamination, and instead proposes controllable puzzle environments in which problem complexity can be varied systematically. The research identifies three distinct performance regimes: at low complexity, standard models can outperform LRMs; at medium complexity, LRMs show a clear advantage; at high complexity, both collapse. Crucially, as problems approach this collapse point, LRMs counter-intuitively reduce their reasoning effort even though token budget remains available. They also show surprising limitations in executing exact algorithms, and their reasoning is inconsistent across different puzzle types.
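The summary does not name the puzzles, but Tower of Hanoi is one of the environments the paper uses. As a hedged illustration of what a "controllable puzzle environment" means in practice, the sketch below lets a single parameter (the number of disks, `n`) set the problem's compositional depth, and a verifier checks any proposed move sequence against the rules. Function names here are illustrative, not taken from the paper.

```python
# Minimal sketch of a controllable puzzle environment (Tower of Hanoi).
# Complexity is tuned by a single knob: the optimal solution length
# grows as 2**n - 1 moves for n disks.

def hanoi_moves(n, src=0, aux=1, dst=2):
    """Generate the optimal move sequence for n disks from src to dst."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

def verify_solution(n, moves):
    """Check a proposed move list against the puzzle rules."""
    pegs = [list(range(n, 0, -1)), [], []]  # disk n at bottom, disk 1 on top
    for src, dst in moves:
        if not pegs[src]:
            return False                      # illegal: move from an empty peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                      # illegal: larger disk on smaller
        pegs[dst].append(disk)
    return pegs[2] == list(range(n, 0, -1))   # solved iff all disks reach dst

if __name__ == "__main__":
    # Sweep the complexity knob; a model's answer would replace hanoi_moves(n).
    for n in range(1, 6):
        moves = hanoi_moves(n)
        print(f"n={n}: {len(moves)} moves, valid={verify_solution(n, moves)}")
```

Because the verifier is exact and the complexity knob is explicit, an evaluation built this way avoids the data-contamination concern raised about mathematical benchmarks: correctness is checked mechanically rather than against memorized answers.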
Information
- Show
- Frequency: updated daily
- Published: June 6, 2025, 18:57 UTC
- Length: 17 min
- Rating: Clean (suitable for children)