AE Alignment Podcast

James Bowler

The AE Alignment Podcast explores the ideas, research, and people working to make advanced AI systems more interpretable and more aligned. Hosted by James Bowler, the show features conversations with researchers, engineers, and technical leaders at AE Studio and beyond on topics including mechanistic interpretability, model psychology, and approaches to AI alignment. Each episode aims to make cutting-edge alignment research more accessible without losing the technical substance, giving listeners a front-row seat to the questions shaping the future of AI.

Episodes

  1. 22 HR AGO

    Keenan Pepper: Self-Interpretation in LLMs

    In this episode, AE Studio Research Director Mike Vaiana is joined by Research Scientist Keenan Pepper to explore a new approach to model self-interpretation: teaching language models to explain their own internal activations. They dive into Keenan’s recent paper on training lightweight adapters that transform activation vectors into soft tokens the model can interpret as language. The conversation walks through how this method improves on prior approaches like SelfIE, and how simple affine transformations can unlock surprisingly strong interpretability (a rough code sketch of this idea appears after the episode list). Mike and Keenan break down concrete examples, including how models can identify latent topics like “baseball” from internal states and even surface hidden reasoning steps in multi-hop questions, offering a potential path toward detecting when models are reasoning, guessing, or even hiding information. They also explore broader implications for AI alignment, from probing deception and internal representations to enabling new forms of activation steering and self-monitoring. Along the way, they discuss attention schema theory, limitations of current labeling methods, and how this work could evolve into a general interface between model internals and human-understandable concepts.

    In this episode:
    - What self-interpretation of activations is
    - How lightweight adapters improve interpretability without retraining models
    - Why this approach could help uncover hidden reasoning and deception in LLMs

    Learn more: ae.studio/alignment
    Keenan's Research Paper: https://arxiv.org/abs/2602.10352
    AE Studio is hiring: https://www.ae.studio/join-us
    LinkedIn: https://www.linkedin.com/in/james-bowler-84b02a100/

    48 min
  2. 25 MAR

    Alex McKenzie: Endogenous Steering Resistance

    In this episode, James is joined by AE Studio alignment researcher Alex McKenzie to discuss endogenous steering resistance (ESR), a newly studied phenomenon where large language models appear to notice when they’ve been pushed off track and then steer themselves back toward the original task. They break down a concrete example from Alex’s research, where a model answering a simple probability question is continuously injected with unrelated internal signals about human body positions (a minimal activation-steering sketch appears after the episode list). Despite the distraction, the model sometimes catches the mismatch, says it made a mistake, and restarts with a much better answer. Alex explains why this matters for mechanistic interpretability, AI alignment, and the broader question of whether models may be developing early forms of self-monitoring. The conversation also explores activation steering, sparse autoencoders, off-topic detector latents, and why ESR may become more common as models scale. James and Alex discuss how this line of research could help us better understand jailbreak resistance, evaluation awareness, deception, and other alignment-relevant behaviors in frontier AI systems. They also preview AE’s next phase of research, supported by a grant from the UK AI Security Institute, and reflect on underexplored directions in AI alignment, including model psychology, cognitive interpretability, and alignment in a world where powerful open-weight models are widely accessible.

    In this episode:
    - What endogenous steering resistance is
    - How activation steering works
    - Why self-correction in LLMs may matter for alignment

    Learn more: ae.studio/alignment
    ESR paper: https://arxiv.org/abs/2602.06941
    AE Studio is hiring: https://www.ae.studio/join-us
    LinkedIn: https://www.linkedin.com/in/james-bowler-84b02a100/

    45 min
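
For readers who want a concrete feel for the self-interpretation approach discussed in the Keenan Pepper episode, here is a minimal Python sketch. It assumes a HuggingFace-style causal LM; the model (gpt2), hook layer, interpretation prompt, and the untrained adapter are illustrative stand-ins rather than the paper's actual setup.

```python
# Minimal sketch of activation self-interpretation via an affine adapter.
# Assumptions: gpt2 as a stand-in model, layer 6 as the hook point, and an
# untrained adapter; the paper discussed in the episode trains this mapping.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

d_model = model.config.hidden_size
LAYER = 6  # assumed residual-stream layer to read from

# Lightweight affine adapter: maps one activation vector into the model's
# input-embedding space, producing a single "soft token".
adapter = nn.Linear(d_model, d_model)  # would be trained; random here

# 1) Run some text and grab a single activation vector at LAYER.
src = tok("The pitcher threw a perfect curveball in the ninth inning.",
          return_tensors="pt")
with torch.no_grad():
    out = model(**src, output_hidden_states=True)
act = out.hidden_states[LAYER][0, -1]            # (d_model,) vector to explain

# 2) Map it to a soft token and splice it before an interpretation prompt.
soft_tok = adapter(act).reshape(1, 1, d_model)
prompt_ids = tok("The preceding hidden state is about:",
                 return_tensors="pt").input_ids
prompt_embeds = model.get_input_embeddings()(prompt_ids)   # (1, T, d_model)
inputs_embeds = torch.cat([soft_tok, prompt_embeds], dim=1)

# 3) Let the model "read out" the activation in natural language.
with torch.no_grad():
    gen = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=20,
                         do_sample=False, pad_token_id=tok.eos_token_id)
print(tok.decode(gen[0], skip_special_tokens=True))
```

With a trained adapter, the idea is that the generated text describes what the captured activation encodes, e.g. a latent topic like "baseball"; with the random adapter above, the output is meaningless and the snippet only illustrates the plumbing.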
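The ESR experiments in the Alex McKenzie episode rely on activation steering: adding a direction to the residual stream while the model works on an unrelated task. Below is a minimal sketch of that injection step, again assuming a GPT-2-style model; the layer index, steering coefficient, and contrast prompts are illustrative assumptions, not the setup from Alex's paper.

```python
# Minimal sketch of activation steering (off-topic injection).
# Assumptions: gpt2 as a stand-in model, block 6 as the injection point,
# a fixed coefficient, and simple contrast prompts for the steering vector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6      # assumed injection point
COEFF = 8.0    # assumed steering strength

def mean_act(text):
    """Mean residual-stream activation after block LAYER for a piece of text."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states[LAYER + 1]
    return hs.mean(dim=1).squeeze(0)   # (d_model,)

# Steering vector: direction pointing toward the off-topic concept
# (body positions), built as a difference of mean activations.
steer = mean_act("She stretched her arms and bent her knees into a deep squat.") \
      - mean_act("The weather today is mild and unremarkable.")
steer = steer / steer.norm()

def inject(module, inputs, output):
    """Forward hook: add the steering vector to every position's hidden state."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + COEFF * steer.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(inject)

# Ask an unrelated question while the off-topic signal is being injected.
prompt = tok("Q: If you flip two fair coins, what is the probability both "
             "land heads?\nA:", return_tensors="pt")
with torch.no_grad():
    gen = model.generate(**prompt, max_new_tokens=40, do_sample=False,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(gen[0], skip_special_tokens=True))

handle.remove()  # stop steering
```

Endogenous steering resistance is the observation that, despite this kind of injection, a model sometimes flags the mismatch and returns to the original question; the sketch reproduces only the steering side of the experiment, not the detection of resistance.
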
