AI可可AI生活

[For Everyone] Giving AI a Brain Scan, Building It a Memory Palace, and Teaching It to Teach Itself

Today we dive into AI's "inner world" to see how it becomes smarter. Using methods from the latest papers, we'll give AI a "brain CT scan" to reveal the hidden costs of capability upgrades, and teach a "tone-deaf" model to mentally "fill in" the texture of sound. Then we'll see how AI can say goodbye to forgetting and bloat, much like building a "memory palace" and running an "information compressor." Finally, we'll witness how AI can leave its human teachers behind and become "self-taught" by predicting an author's train of thought!

00:00:31 Giving AI a "Brain CT Scan": Beyond the Leaderboard, How Do We See a Model's True Abilities?

00:05:50 AI's "Memory Palace": However Long the Conversation, How Can It Remember What Matters?

00:11:19 Fitting AI with an "Information Compressor": However Much Input, It Grasps the Point in Seconds

00:16:27 AI Learns to "Imagine" Sound: With Eyes Closed, How Can It Understand the Whole World by Ear?

00:21:44 "Self-Taught" AI: With No Teacher, How Does It Master Its Craft?

Papers covered in this episode:

[CL] Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing  

[HBKU]  

https://arxiv.org/abs/2509.18792  

---

[CL] EpiCache: Episodic KV Cache Management for Long Conversational Question Answering  

[Apple]  

https://arxiv.org/abs/2509.17396  

---

[CL] CompLLM: Compression for Long Context Q&A  

[Amazon]  

https://arxiv.org/abs/2509.19228  

---

[CL] AuditoryBench++: Can Language Models Understand Auditory Knowledge without Hearing?  

[Pohang University of Science and Technology & HJ AILAB]  

https://arxiv.org/abs/2509.17641  

---

[CL] Reinforcement Learning on Pre-Training Data  

[Tencent]  

https://arxiv.org/abs/2509.19249