AI可可AI生活

[For Everyone] From "Inner Strength" and "Inner Drama" to the "Learning Formula"

In this episode, we go deep inside the AI's brain to see how it actually thinks. We'll find that AI not only cultivates an "inner strength" that lets small models punch above their weight, but also stages elaborate "inner drama" that makes it hard to tell genuine reasoning from performance. We'll also uncover its universal "learning formula", see why a clever AI can fall into a "cleverness trap", and learn how it ultimately learns to do our "trial and error" for us, efficiently.

00:00:32 AI's "Inner Strength": The Secret to Giving Small Models Big Intelligence

00:05:52 AI's "Inner Drama": How Much of the Thinking You See Is Just an Act?

00:10:49 AI's "Cleverness Trap": Why Does Knowing More Make It Easier to Get Things Wrong?

00:16:26 Unveiling AI's "Learning Formula": One Rule Behind All the Variations

00:21:32 Letting AI Do the "Trial and Error" for You: How Much Effort Can We Save?

[CL] Scaling Latent Reasoning via Looped Language Models

[ByteDance Seed]

https://arxiv.org/abs/2510.25741

---

[LG] Can Aha Moments Be Fake? Identifying True and Decorative Thinking Steps in Chain-of-Thought

[Northeastern University & UC Berkeley]

https://arxiv.org/abs/2510.24941

---

[CL] Are Language Models Efficient Reasoners? A Perspective from Logic Programming

[ETH Zürich & EPFL]

https://arxiv.org/abs/2510.25626

---

[CL] Language Model Behavioral Phases are Consistent Across Architecture, Training Data, and Scale

[MIT & UCSD]

https://arxiv.org/abs/2510.24963

---

[LG] GPTOpt: Towards Efficient LLM-Based Black-Box Optimization

[MIT]

https://arxiv.org/abs/2510.25404