This episode is packed with substance as we dive into the deep waters of AI thinking. We'll see how AI learns to "think backwards" like a top expert, working out the ending before putting pen to paper; and we'll discover why a tiny 1% edge on the AI track can snowball into a decisive win over a "marathon"-length task. We'll also play a round of "truth or dare" to peek into AI's "inner world" and check whether its words and actions actually match; and we'll unpack a recipe for raising a language prodigy: learn the "standard language" first, then pick up the "dialects". Finally, I'll expose a kind of "new witchcraft" of the AI era that may make you put a big question mark over many so-called "scientific conclusions". Great content, starting now!
00:00:40 Reverse-engineering AI writing: masters settle on the ending before they start to write
00:05:18 AI's "marathon" trap: why a small lead can end up winning by a full lap
00:10:10 AI's "truth or dare": your mouth says no, but your actions say yes?
00:15:06 Raising an AI "language prodigy": learn the "standard language" first, then master the "dialects"
00:20:12 The "new witchcraft" of the AI era: swap your tools often enough and you can dig up whatever result you want
Papers covered in this episode:
[LG] Reverse-Engineered Reasoning for Open-Ended Generation
[ByteDance Seed]
https://arxiv.org/abs/2509.06160
---
[LG] The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs
[University of Cambridge & University of Stuttgart & Max Planck Institute for Intelligent Systems]
https://arxiv.org/abs/2509.09677
---
[LG] Probing the Preferences of a Language Model: Integrating Verbal and Behavioral Tests of AI Welfare
[Future Impact Group (FIG) & Ruhr-University Bochum]
https://arxiv.org/abs/2509.07961
---
[CL] mmBERT: A Modern Multilingual Encoder with Annealed Language Learning
[Johns Hopkins University]
https://arxiv.org/abs/2509.06888
---
[CL] Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation
[Bocconi University & University of Zurich]
https://arxiv.org/abs/2509.08825