The Turing Podcast

Turing

The Turing Podcast is a weekly dispatch from Founder & CEO Jonathan Siddharth and Turing experts on the future of AI infrastructure, research, and frontier model development. Each episode explores how labs are building, training, and evaluating the next generation of intelligent systems, with an eye on speed, neutrality, and real-world scale.

Episodes

  1. August 14

    Learning from Reid Hoffman’s AI Clone | The Multimodal Frontier

    What can we learn from Reid Hoffman's AI clone? In this episode of the Turing Podcast, Mahesh Joshi, Head of Data and AI at Turing, explores how multimodal AI (systems that integrate text, audio, video, and more) is redefining what's possible in artificial intelligence. From the rapid progress in audio generation to the more complex challenges of video generation, Mahesh breaks down where the technology stands today, the realistic benchmarks needed to measure its true capabilities, and the enterprise opportunities emerging from these breakthroughs. The conversation also looks ahead to the quest for embodied AI, where digital intelligence could interact with the world in human-like ways. Whether you're fascinated by the idea of an AI clone or looking to understand the cutting edge of generative AI applications, this episode offers a clear-eyed view of the multimodal frontier and Turing's role in pushing it forward.

    Episode Highlights:
    - Multimodal systems are becoming increasingly important in AI research.
    - The advancements in audio generation are significant, but challenges remain.
    - Video generation technology is still developing and has a long way to go.
    - Realistic benchmarks are essential for evaluating AI models effectively.
    - Enterprises are eager to adopt AI technologies for practical applications.
    - Turing plays a crucial role in advancing AI research and applications.
    - The quest for embodied AI is a key focus for future developments.
    - AI-generated content must meet high standards to be considered effective.
    - The integration of audio and video capabilities is a complex challenge.
    - AI can significantly enhance productivity in various enterprise settings.

    Chapters:
    [00:00] Introduction to Multimodal AI
    [01:08] The Rise of Multimodal Systems
    [02:59] State of the Art in Audio and Video
    [07:37] Challenges in Video Generation
    [10:14] Opportunities for Incumbents in Video AI
    [14:11] Benchmarking AI: The Turing Test and Beyond
    [18:22] Defining Human Interaction with AI
    [24:45] Future of Multimodal Applications
    [27:04] Enterprise Adoption of Multimodal AI
    [30:06] Turing's Role in AI Advancement
    [33:16] Research Focus: Embodied AI

    34 min
  2. August 8

    GPT-5 and SWE-Bench: A Launchpad for O5-Level Code Reasoning

    In this episode, Lilin Wang, Engineering Director at Turing, discusses SWE Bench, a benchmark designed to evaluate the software engineering reasoning capabilities of large language models. She explores the motivation behind SWE Bench, its structure, and how it differs from traditional coding benchmarks. Lilin explains Turing's approach to enhancing model performance through data expansion and trajectory data, as well as the challenges posed by SWE Bench compared to other benchmarks. The episode concludes with insights into the future of software engineering with AI and the evolving role of engineers. (A minimal sketch of a SWE-Bench-style task record appears after the episode list.)

    Highlights:
    - SWE Bench evaluates the capability of large language models in real-world software engineering tasks.
    - The benchmark moves beyond simple coding tasks to include bug fixing and feature development.
    - SWE Bench leverages high-quality data from GitHub repositories for evaluation.
    - The model's ability to understand context is crucial for solving complex problems.
    - Turing aims to expand the SWE Bench dataset for better model training.
    - Trajectory data helps in understanding and correcting model failures.
    - SWE Bench presents unique challenges compared to other benchmarks like HumanEval.
    - The future of software engineering may see models acting as junior engineers.
    - Engineers will shift to supervisory roles, focusing on high-level planning.
    - Improving model capabilities will enhance efficiency in software development.

    Chapters:
    00:00 Introduction and Model Breaking Prompts
    03:52 Understanding SWE Bench: Motivation and Structure
    06:58 Evaluating Tasks: Solvable vs. Hard
    10:04 Turing's Approach to Multi-Step Code Reasoning
    16:23 Challenges of SWE Bench vs. Other Benchmarks
    20:16 Future of AI in Software Engineering
    27:04 Conclusion and Future Prospects

    32 min
  3. August 2

    Inside RL Gyms: From Function Calls to Simulated Universes

    This week on The Turing Podcast, Anshul Bhagi, Turing's RL Gym expert, discusses how reinforcement learning environments are built and why they matter right now. This episode lays out where reinforcement learning fits into the AI stack and how Turing's RL Gyms are helping elite labs build strength with every cycle.

    Highlights:
    - The two main types of RL environments and how each has evolved
    - What separates a good RL environment from a great one, and why that difference matters for training
    - When reinforcement learning is the right tool for an AI problem, and when it is not
    - How future RL gyms could simulate entire businesses or train personalized agents in rich virtual environments

    To move fast and stay ahead, AI teams need to strengthen their capabilities. Turing's RL Gyms are designed for that purpose. They are environments where researchers, agents, and systems improve with every iteration. The result is stronger, more capable models and faster progress. If you are working on complex model training, AGI or ASI development, or building AI-native systems, this episode offers an inside view into the future of AI training infrastructure. (A minimal sketch of the standard RL environment interface appears after the episode list.)

    Chapters:
    (00:00) Introduction to RL Environments
    (04:14) Types of RL Environments
    (07:05) Evolution of RL Environments
    (09:59) Human Involvement in RL Design
    (10:54) When Not to Use RL
    (21:40) Accuracy in RL Environments
    (24:46) Future of RL Environments
    (27:31) Complexity of RL Environments
    (30:37) Future Directions

    33 min
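
For a concrete picture of what episode 2 describes, here is a minimal, illustrative sketch of a SWE-Bench-style task record and a naive "resolved" check. The class, field names, and helper below are assumptions made for illustration only; they are not the official SWE-Bench schema or evaluation harness.

```python
from dataclasses import dataclass, field

@dataclass
class SweBenchStyleTask:
    """Illustrative task record; field names are assumed, not the official schema."""
    instance_id: str          # unique ID for the task
    repo: str                 # GitHub repository the issue comes from
    base_commit: str          # commit the model starts from
    problem_statement: str    # issue text describing the bug or feature request
    fail_to_pass: list = field(default_factory=list)  # tests that must flip to passing
    pass_to_pass: list = field(default_factory=list)  # tests that must keep passing

def is_resolved(failing_after: set, task: SweBenchStyleTask) -> bool:
    """Resolved = every targeted failing test now passes and no required test regressed."""
    targeted_fixed = all(t not in failing_after for t in task.fail_to_pass)
    no_regressions = all(t not in failing_after for t in task.pass_to_pass)
    return targeted_fixed and no_regressions

# Hypothetical usage: after applying a model-generated patch and rerunning the tests.
task = SweBenchStyleTask(
    instance_id="example__demo-1",
    repo="example-org/example-repo",   # hypothetical repository
    base_commit="abc123",
    problem_statement="Division helper crashes on zero denominators.",
    fail_to_pass=["tests/test_div.py::test_zero_denominator"],
    pass_to_pass=["tests/test_div.py::test_basic_division"],
)
print(is_resolved(failing_after=set(), task=task))  # True: nothing fails after the patch
```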
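
Episode 3 repeatedly refers to RL environments as the unit agents train against. The sketch below shows the reset()/step() loop that most modern RL environments expose (the Gymnasium-style convention); the counting task itself is invented for illustration and is not one of Turing's RL Gym environments.

```python
import random

class CountToTargetEnv:
    """Toy RL environment following the common reset()/step() convention.
    The agent adds +1 or -1 each step and is rewarded for landing on a target number."""

    def __init__(self, target: int = 5, max_steps: int = 20):
        self.target = target
        self.max_steps = max_steps

    def reset(self, seed=None):
        if seed is not None:
            random.seed(seed)
        self.value = random.randint(-3, 3)
        self.steps = 0
        return self.value, {}  # observation, info

    def step(self, action: int):
        # action: 0 -> decrement, 1 -> increment
        self.value += 1 if action == 1 else -1
        self.steps += 1
        terminated = self.value == self.target      # reached the goal
        truncated = self.steps >= self.max_steps    # ran out of time
        reward = 1.0 if terminated else -0.01       # small penalty per step
        return self.value, reward, terminated, truncated, {}

# Random-policy rollout: the basic loop any RL gym has to support.
env = CountToTargetEnv()
obs, info = env.reset(seed=0)
done = False
while not done:
    action = random.choice([0, 1])
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```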

Ratings & Reviews

5 out of 5
2 ratings
