This paper focuses on modeling the behavior of Transformer models during training, particularly their in-context learning (ICL), which exhibits a transition from generalizing to memorizing. The authors use a Bayesian model built from two predictors, a Memorizing predictor (M) and a Generalizing predictor (G), and show that it accurately captures the Transformer's observed behavior on tasks such as linear regression and classification. The paper examines how training steps and task diversity determine which predictor dominates, concluding that specific parameters relating to Kolmogorov complexity and sample efficiency are needed to explain the observed transient-generalization phenomenology. The visual data, presented as heatmaps, illustrates how these factors shift the model between generalization (blue) and memorization (red) over the course of training.
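A minimal sketch of the kind of two-predictor Bayesian account described above, not the paper's actual implementation: it assumes each predictor's prior weight decays with a complexity proxy (standing in for Kolmogorov complexity) and that its likelihood compounds with the number of training steps according to a per-step fit term (standing in for sample efficiency). All names and numerical values (`complexity_G`, `per_step_log_lik_M`, the step counts, etc.) are hypothetical choices for illustration only.

```python
import numpy as np

def log_posterior(log_prior, per_step_log_lik, n_steps):
    """Unnormalized log posterior weight of a predictor after n_steps (illustrative assumption)."""
    return log_prior + n_steps * per_step_log_lik

# Hypothetical settings: G is simpler (higher prior) but fits each training
# example slightly worse than M, which memorizes the finite set of training tasks.
complexity_G, complexity_M = 5.0, 50.0                   # proxy for description length
log_prior_G, log_prior_M = -complexity_G, -complexity_M
per_step_log_lik_G, per_step_log_lik_M = -0.10, -0.05    # M is more sample-efficient here

for n in [0, 100, 500, 1000, 2000]:
    lg = log_posterior(log_prior_G, per_step_log_lik_G, n)
    lm = log_posterior(log_prior_M, per_step_log_lik_M, n)
    # Posterior probability of the Generalizing predictor (softmax over the two predictors).
    p_G = 1.0 / (1.0 + np.exp(lm - lg))
    print(f"step {n:5d}: P(G) = {p_G:.3f} -> {'generalizing' if p_G > 0.5 else 'memorizing'}")
```

Under these toy settings the Generalizing predictor dominates early (its prior advantage outweighs its poorer fit) and the Memorizing predictor takes over after enough training steps, which is the transient-generalization pattern the heatmaps depict; changing the complexity gap or the per-step likelihoods moves the crossover point.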