Explore how sparse autoencoders and transcoders unveil the inner workings of GPT-2 by revealing functional features and computational circuits. Discover breakthrough methods that shift from observing raw network activations to mapping the model's actual computation, making AI behavior more interpretable than ever.
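As background for the episode's topic, here is a minimal sketch of what a sparse autoencoder does to a model activation: project it into an overcomplete feature space with a sparsity-inducing nonlinearity, then reconstruct it. All dimensions, weights, and the `sae` helper are illustrative assumptions, not details from the episode.

```python
import numpy as np

# Minimal sparse-autoencoder sketch over a stand-in activation vector.
# Shapes and initialization are illustrative, not from the episode.
rng = np.random.default_rng(0)

d_model, d_hidden = 8, 32          # activation dim, overcomplete feature dim
W_enc = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

def relu(x):
    return np.maximum(x, 0.0)

def sae(x):
    """Encode an activation into sparse features, then reconstruct it."""
    f = relu(x @ W_enc + b_enc)    # feature activations; ReLU zeroes many out
    x_hat = f @ W_dec + b_dec      # linear reconstruction of the activation
    return f, x_hat

x = rng.normal(size=d_model)       # stand-in for a GPT-2 residual-stream vector
features, reconstruction = sae(x)
print(features.shape, reconstruction.shape)
```

In practice such an autoencoder is trained to minimize reconstruction error plus a sparsity penalty on `features`; a transcoder follows the same encode/decode shape but is trained to predict a layer's *output* from its *input*, approximating the computation rather than the activations.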
Information
- Program
- Frequency: Weekly
- Published: 26 January 2026 at 00:00 UTC
- Length: 11 min
- Rating: Kid-friendly
