AI可可AI生活

[For Everyone] Five Lessons: From Rote Memorization to Flexible Generalization

00:00:27 When Experts Spar: Copy the Answer Key or Work It Out Yourself?

00:04:38 Does AI Also "Favor the New and Forget the Old"? How the Pros Learn Without Forgetting

00:09:36 Teaching AI to "Fill in the Blanks": The Secret Behind a 5x Speedup

00:13:45 AI's "Aha Moment": How Do You Get a Machine to Generalize from One Example?

00:17:53 Too Clever for Its Own Good? AI's "Dumb but Effective" Approach

The five papers covered in this episode:

[LG] Towards a Unified View of Large Language Model Post-Training  

[Tsinghua University]  

https://arxiv.org/abs/2509.04419 

---

[LG] RL's Razor: Why Online Reinforcement Learning Forgets Less  

[MIT]  

https://arxiv.org/abs/2509.04259 

---

[LG] Set Block Decoding is a Language Model Inference Accelerator  

[FAIR at Meta]  

https://arxiv.org/abs/2509.04185 

---

[LG] ArcMemo: Abstract Reasoning Composition with Lifelong LLM Memory  

[University of California, San Diego]  

https://arxiv.org/abs/2509.04439 

---

[LG] Learning When to Plan: Efficiently Allocating Test-Time Compute for LLM Agents  

[University College London & University of Oxford]  

https://arxiv.org/abs/2509.03581