#229 Mitesh Agrawal: Why Lambda Labs’ AI Cloud Is a Game-Changer for Developers

Eye On A.I.

This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.

In this episode of the Eye on AI podcast, we dive into the transformative world of AI compute infrastructure with Mitesh Agrawal, Head of Cloud/COO at Lambda.

Mitesh takes us on a journey from Lambda Labs' early days building a style transfer app to its rise as a leader in scalable deep learning infrastructure. Learn how Lambda Labs is reshaping AI compute by delivering cutting-edge GPU solutions and accessible cloud platforms tailored to developers, researchers, and enterprises alike.

Throughout the episode, Mitesh unpacks Lambda Labs’ unique approach to optimizing AI infrastructure—from reducing costs with transparent pricing to tackling the global GPU shortage through innovative supply chain strategies. He explains how the company supports deep learning workloads, including training and inference, and why their AI cloud is a game-changer for scaling next-gen applications.

We also explore the broader landscape of AI, touching on the future of AI compute, the role of reasoning and video models, and the potential for localized data centers to meet the growing demand for low-latency solutions. Mitesh shares his vision for a world where AI applications, powered by Lambda Labs, drive innovation across industries.

Tune in to discover how Lambda Labs is democratizing access to deep learning compute and paving the way for the future of AI infrastructure.

Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest in AI, deep learning, and transformative tech!

Stay Updated:

Craig Smith Twitter: https://twitter.com/craigss

Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Introduction and Lambda Labs' Mission

(01:37) Origins: From DreamScope to AI Compute Infrastructure

(04:10) Pivoting to Deep Learning Infrastructure

(06:23) Building Lambda Cloud: An AI-Focused Cloud Platform

(09:16) Transparent Pricing vs. Hyperscalers

(12:52) Managing GPU Supply and Demand

(16:34) Evolution of AI Workloads: Training vs. Inference

(20:02) Why Lambda Labs Sticks with NVIDIA GPUs

(24:21) The Future of AI Compute: Localized Data Centers

(28:30) Global Accessibility and Regulatory Challenges

(32:13) China’s AI Development and GPU Restrictions

(39:50) Scaling Lambda Labs: Data Centers and Growth

(45:22) Advancing AI Models and Video Generation

(50:24) Optimism for AI's Future

(53:48) How to Access Lambda Cloud
