Intelligence Unbound

Petri: An Open-Source AI Safety Auditing Tool

This episode introduces Petri (Parallel Exploration Tool for Risky Interactions), an open-source framework developed by Anthropic to accelerate AI safety research through automated auditing. Petri uses specialized AI auditor agents and LLM judges to test target models across diverse, multi-turn scenarios, which human researchers define via seed instructions.