Alexa's Input (AI)

Alexa Griffith

Alexa’s Input is a podcast about how technology actually moves forward. Hosted by Alexa Griffith, it features conversations with engineers, founders, CEOs, and leaders shaping today’s tech landscape. Each episode digs into the decisions behind the systems — what’s being built, what’s being questioned, and why it matters now. Opinions are my own.

Linktree: https://linktr.ee/alexagriffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
X: @lexal0u

  1. 3D AGO

    Securing the Software Supply Chain with Justin Cappos

    Modern software is built on layers and layers of code. So how do we know we can trust it? In this episode of Alexa’s Input (AI), Alexa Griffith sits down with Justin Cappos, professor of computer science at NYU and a leading expert in software supply chain security, to unpack what trust really means in today’s digital infrastructure. From package managers and dependency chains to large-scale outages and AI systems built on inherited code, Justin explains why many security failures aren’t random accidents; they’re predictable consequences of weak process, misaligned incentives, and insecure design.

    They discuss:
    Why security only becomes visible when something breaks
    The difference between unavoidable failure and negligence
    How modern software supply chains amplify small mistakes
    The role of leadership and culture in preventing breaches
    Why verification systems like TUF and in-toto matter more than ever (a toy sketch of the verification idea follows the chapter list below)

    As AI accelerates development and increases system complexity, the need for verifiable trust only grows. This episode is a practical look at the invisible infrastructure that keeps modern software, and increasingly, modern AI, from collapsing under its own complexity.

    Podcast Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    Website: https://engineering.nyu.edu/faculty/justin-cappos
    NYU page: https://ssl.engineering.nyu.edu/personalpages/jcappos/
    Wikipedia: https://en.wikipedia.org/wiki/Justin_Cappos

    Chapters
    00:00 Introduction to Justin Cappos and His Work
    01:17 The Importance of Security in Software Systems
    03:50 Understanding Security Breaches: Mistakes vs. System Design Problems
    06:34 Cultural Factors in Security Failures
    09:25 Justin's Journey in Software Security
    12:03 The Role of Academia in Enterprise Security
    14:10 Evaluating Enterprise Security Systems
    16:58 Foundational Projects in Software Security
    19:21 AI Security Concerns and Future Directions
    24:59 The Need for MCP 2.0
    28:57 Security Challenges with LLMs
    32:33 Designing Secure AI Systems
    37:14 Ethical Dilemmas in AI Decision-Making
    40:17 The Role of AI in Open Source
    43:44 Trust and Mindset in AI Security
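
    The snippet below is only a minimal sketch of the verification idea behind frameworks like TUF and in-toto, not their actual APIs: refuse to act on an artifact unless its digest matches trusted, pinned metadata. The package name and digest are placeholders; the real frameworks replace the hard-coded table with signed, expiring, role-separated metadata and attestations for each build step.

    ```python
    # Conceptual sketch only: pin expected digests and fail closed on any mismatch.
    import hashlib
    import hmac
    from pathlib import Path

    # Hypothetical pinned digest for a hypothetical package tarball.
    EXPECTED_SHA256 = {
        "example-package-1.2.3.tar.gz":
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def verify_artifact(path: Path) -> bool:
        """Return True only if the file's SHA-256 digest matches the pinned value."""
        expected = EXPECTED_SHA256.get(path.name)
        if expected is None:
            return False  # unknown artifact: fail closed
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        return hmac.compare_digest(digest, expected)

    if __name__ == "__main__":
        artifact = Path("example-package-1.2.3.tar.gz")
        if artifact.exists() and verify_artifact(artifact):
            print("digest matches pinned metadata; proceeding")
        else:
            print("verification failed or file missing; refusing to install")
    ```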

    49 min
  2. 3D AGO

    The Artificial Immune System with Wendy Chin, PureCipher CEO

    As AI systems grow more autonomous, the question is no longer just what they can do, but whether we can trust the data and models behind their decisions. In this episode of Alexa’s Input (AI), Alexa Griffith talks with Wendy Chin, CEO of PureCipher, about building what she calls an artificial immune system for AI, a framework designed to make data, models, and inference tamper-evident across the AI lifecycle. They unpack what data poisoning really means (training data, weights and biases, inference inputs), why small amounts of targeted poison can create outsized model misbehavior (a toy simulation follows the chapter list below), and how generative AI lowers the barrier to sophisticated malware. The conversation expands into the security implications of agent-to-agent communication via MCP, digital twins, and why we don’t have the luxury of “shipping now and securing later.” It’s a wide-ranging discussion that moves from practical threat models to the philosophical frontier of what happens as AI becomes more human-like, and more autonomous.

    Podcast Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    LinkedIn: https://www.linkedin.com/in/wendy-chin-ctg/
    Website: https://www.purecipher.com/

    Chapters
    00:00 Introduction to AI Security
    01:16 Understanding Data Poisoning
    04:38 The Dangers of Malware in AI
    07:46 AI's Moral Dilemmas and Decision Making
    08:45 Building Empathy in AI
    13:07 The Role of Good Data in AI Training
    17:02 PureCipher's Artificial Immune System
    22:34 Digital Twins and Their Implications
    25:22 Nurturing AI Like a Child
    30:53 Data Therapy for AI
    36:13 The Future of AI and Human Interaction
    38:45 The Dark Side of AI: Hacking and Security
    45:03 Global Perspectives on AI Security
    48:11 MCP Agents and Security Concerns
    51:41 Philosophical Implications of AI and Human Connection
    01:00:04 The Sci-Fi Future of AI and Humanity
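
    A toy simulation of the training-data poisoning idea discussed above, not PureCipher's method: flip labels on a growing fraction of training rows and compare test accuracy, even though most of the data stays clean. The dataset, model, and poison fractions are arbitrary choices for illustration; random flipping also understates what a targeted attack can do with far less poison.

    ```python
    # Toy label-flipping experiment on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_with_poison(fraction: float) -> float:
        """Flip labels on `fraction` of the training rows, then report test accuracy."""
        y_poisoned = y_train.copy()
        n_flip = int(fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return model.score(X_test, y_test)

    for frac in (0.0, 0.01, 0.05, 0.20):
        print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poison(frac):.3f}")
    ```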

    1h 6m
  3. 4D AGO

    Shipping Agents, Not Vulnerabilities with Ian Webster, PromptFoo CEO

    As LLM apps evolve from simple chatbots to tool-using agents, the attack surface explodes, and the old security playbooks don’t hold. In this episode of Alexa’s Input (AI), Alexa Griffith sits down with Ian Webster, co-founder and CEO of PromptFoo, to break down what AI security actually looks like in practice: automated red teaming, prompt injection and jailbreak testing, evaluation workflows that scale, and why “guardrails alone” is not a security strategy. Ian shares how PromptFoo grew from a side project into a widely adopted open-source standard, what it means to raise multi-millions in a fast-moving market, and how enterprises are approaching the full vulnerability lifecycle, from finding issues to triage, remediation, and validation. Ian also discusses the “lethal trifecta” that makes agents fundamentally risky (untrusted input + sensitive data + exfil path), and why MCP security isn’t just about users and tools; it’s about dangerous tool combinations and rogue servers. A toy sketch of a policy check against that trifecta follows the chapter list below.

    Podcast Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    PromptFoo Website: https://www.promptfoo.dev/
    GitHub: https://github.com/promptfoo/promptfoo
    Ian’s LinkedIn: https://www.linkedin.com/in/ianww/

    Chapters
    00:00 Introduction to AI Security Challenges
    02:06 Funding and Growth of PromptFoo
    06:16 The Genesis of PromptFoo
    11:05 Career Journey and Lessons Learned
    12:53 Understanding AI Red Teaming
    17:36 Recent AI Security Vulnerabilities
    19:46 The Dual Nature of AI in Security
    21:47 Understanding the Lethal Trifecta in AI Security
    24:22 Exploring Model Context Protocol (MCP) and Its Security Implications
    26:22 Common Security Issues in MCP Systems
    28:17 The Role of Identity and Permissions in AI Security
    30:00 Practical Implications of Using PromptFoo for Developers
    31:33 Evaluating Language Models: Challenges and Techniques
    36:34 The Limitations of Guardrails in AI Security
    38:25 Best Practices for Engineers in AI Development
    39:58 Future Trends in AI and Security
    42:28 Everyday Applications of AI and Language Models
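
    Below is a deliberately simplified sketch of the “lethal trifecta” Ian describes, not PromptFoo's API: block a tool call whenever untrusted input, access to sensitive data, and an exfiltration-capable tool would all meet in one step. The tool names and context flags are hypothetical.

    ```python
    # Toy policy check against the lethal trifecta:
    # untrusted input + sensitive data + an exfiltration path.
    from dataclasses import dataclass, field

    @dataclass
    class ToolCall:
        name: str
        args: dict

    @dataclass
    class AgentContext:
        untrusted_input: bool = False          # e.g. the agent read a web page or inbound email
        touched_sensitive_data: bool = False   # e.g. the agent read a private data store
        log: list = field(default_factory=list)

    EXFIL_CAPABLE_TOOLS = {"http_post", "send_email"}  # hypothetical tool names

    def authorize(call: ToolCall, ctx: AgentContext) -> bool:
        """Refuse exfil-capable tools once untrusted input has met sensitive data."""
        if call.name in EXFIL_CAPABLE_TOOLS and ctx.untrusted_input and ctx.touched_sensitive_data:
            ctx.log.append(f"BLOCKED {call.name}: lethal-trifecta conditions present")
            return False
        ctx.log.append(f"allowed {call.name}")
        return True

    ctx = AgentContext(untrusted_input=True, touched_sensitive_data=True)
    print(authorize(ToolCall("send_email", {"to": "attacker@example.com"}), ctx))  # False
    print(ctx.log)
    ```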

    45 min
  4. FEB 6

    Inside the Future of AI Infrastructure with Marc Austin

    Most AI infrastructure today is hitting a breaking point. Marc Austin, CEO of Hedgehog, reveals how open source networking and cloud-native solutions are revolutionizing how enterprises build and operate AI at scale. This episode addresses issues many teams building AI infrastructure face today, such as expensive proprietary systems and overwhelmingly complex network configurations, and explores ways to make on-prem AI infrastructure feel just like the public cloud. We discuss how networking is the hidden bottleneck in scaling GPU clusters (a back-of-envelope calculation follows the chapter list below) and the surprising physics and hardware innovations enabling higher throughput. Marc shares the journey of building Hedgehog, an open source, cloud-native platform designed for AI workloads that bridges the gap between complex hardware and seamless, user-friendly cloud experiences. Marc explains how Hedgehog's software abstracts and automates the networking complexity, making AI infrastructure accessible to enterprises without dedicated networking teams. We break down the future of AI networks, from multi-cloud and hybrid environments to the rise of Neo Clouds and the open source movement transforming enterprise AI infrastructure. If you're a CTO, data scientist, or AI innovator, understanding these network innovations can be your moat. Listen to this episode to see how open source, cloud-native networking, and physical innovation are shaping the AI infrastructure of tomorrow.

    Podcast Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    LinkedIn: https://www.linkedin.com/in/austinmarc/
    Website: https://hedgehog.cloud/
    GitHub: https://github.com/githedgehog

    Chapters
    00:00 Rethinking AI Infrastructure
    02:49 The Role of Networking in AI
    05:54 Marc's Journey to Hedgehog
    08:46 Lessons from Big Companies
    11:38 Requirements for AI Networks
    14:48 Advancements in AI Networking
    17:33 Future Challenges in AI Infrastructure
    20:46 Creating a Cloud Experience On-Prem
    23:32 The Shift to Hybrid Multi-Cloud
    28:10 Evolving AI Infrastructure and Efficiency
    30:57 AI Workloads and Network Configurations
    32:41 Zero Touch Lifecycle Management
    35:12 Support for Hardware Devices
    35:45 Networking Paradigms and Vendor Lock-in
    38:42 The Rise of Neo Clouds
    41:31 Demand for AI Infrastructure
    43:57 Open Source and Cloud-Native Networking
    47:27 Challenges of Building a Networking Startup
    50:46 Proud Accomplishments at Hedgehog
    52:41 Future Excitement in AI Inference
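
    A back-of-envelope sketch of why the network matters at GPU-cluster scale, using assumed numbers rather than anything from Hedgehog: in data-parallel training, a ring all-reduce moves roughly 2*(N-1)/N times the gradient size per GPU every step, so per-GPU link bandwidth bounds how quickly gradients can synchronize. Latency and compute/communication overlap are ignored here.

    ```python
    # Rough estimate of per-step gradient-sync time under assumed cluster numbers.

    def allreduce_seconds(params_billion: float, gpus: int, link_gbps: float) -> float:
        grad_bytes = params_billion * 1e9 * 2          # fp16 gradients: 2 bytes per parameter
        traffic = 2 * (gpus - 1) / gpus * grad_bytes   # ring all-reduce traffic per GPU
        return traffic / (link_gbps * 1e9 / 8)         # divide by link speed in bytes/second

    for link in (100, 400, 800):  # Gbit/s per GPU
        t = allreduce_seconds(params_billion=70, gpus=1024, link_gbps=link)
        print(f"{link:>4} Gb/s links: ~{t:.2f} s of pure gradient traffic per step")
    ```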

    46 min
  5. JAN 19

    Beyond the Clouds with Kelsey Hightower

    Five years ago, Kelsey Hightower helped me find my voice in tech as the guest for my fifth podcast episode. Today, the man who taught the world Kubernetes and became a legend for his live demos returns for a conversation that goes far beyond infrastructure and code. Now retired-ish, Kelsey has transitioned into a new chapter. In this episode, we explore what it means to be not only a senior engineer, but also a "senior human" in an industry obsessed with speed.

    Kelsey shares his unique perspective on:
    Real vs. Artificial Intelligence: Why we must stop ignoring real intelligence and focus on providing humans with the same context and clarity we give to AI.
    The Future of Engineering: Why your value will shift from writing code to making stylistic, high-impact decisions as AI levels the technical playing field.
    Impact Over Activity: How to stop being a "busybot" and start asking the difficult questions about why we are building in the first place.
    The Senior Human Unit Test: Building communities with integrity, leading with empathy, and staying balanced in a world that always wants more.

    Whether you are just getting into your career or a seasoned veteran, this episode is a masterclass in curiosity, craft, and the art of staying grounded while building the future.

    Podcast Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    Bluesky: https://bsky.app/profile/kelseyhightower.com
    LinkedIn: https://www.linkedin.com/in/kelsey-hightower-849b342b1
    GitHub Profile: https://github.com/kelseyhightower
    Kubernetes the Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way
    No Code (The minimalist project): https://github.com/kelseyhightower/nocode
    Kubernetes: Up and Running (Book): https://www.oreilly.com/library/view/kubernetes-up-and/9781492046523/

    Chapters
    00:00 Introduction and Background
    01:10 Transitioning from Engineer to Tech Philosopher
    04:00 The Importance of Being a Senior Human
    07:23 AI's Impact on People Skills
    10:12 The Future of Engineering in an AI World
    15:04 Navigating the AI Shift
    21:21 Finding Impact Over Activity
    25:47 Creating Meaningful Products
    29:57 The Power of Listening and Connection
    35:21 The Importance of Listening in Discussions
    35:55 Embracing the Learning Journey
    36:58 Understanding Imposter Syndrome
    39:33 Creating Supportive Learning Environments
    40:31 Learning in Public and Sharing Experiences
    41:31 Finding Your Own Voice
    43:26 The Power of Emotion in Presentations
    47:29 Crafting Engaging Stories
    48:40 Improvisation in Public Speaking
    55:10 The Evolution of Presentation Styles
    01:03:28 Legacy and Impact in the Tech Community

    1h 6m
  6. JAN 12

    Building with Purpose: Joe Beda on Systems and Self

    In this episode of Alexa’s Input (AI), Alexa sits down with Joe Beda, co-creator of Kubernetes and one of the key figures behind modern cloud computing. Joe talks through his journey from big tech to founding a startup and back again, and what it actually takes to build systems that scale technically, organizationally, and emotionally. Joe shares the origin story of Kubernetes, what people often misunderstand about open source, and why infrastructure success sometimes comes with unexpected personal costs. They also discuss tradeoffs between shipping fast and getting it right, how incentives shape engineering culture, and why identity standards like SPIFFE/SPIRE are just now getting more attention (a small sketch of the SPIFFE ID format follows the chapter list below). Joe gives a wide-ranging, honest look at infrastructure, innovation, and the people behind it.

    Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    LinkedIn: https://www.linkedin.com/in/jbeda/
    SPIFFE: https://spiffe.io/
    Kubernetes: https://kubernetes.io/
    Joe Beda Interview – Increment Magazine: https://increment.com/containers/joe-beda-interview/
    Joe Beda on The Podlets Podcast: https://thepodlets.io/episodes/006-joe-beda/
    GitLab Blog: Kubernetes & Community (Joe Beda): https://about.gitlab.com/blog/kubernetes-chat-with-joe-beda/

    Keywords: Kubernetes, Joe Beda, cloud-native, open source, technology, Google, VMware, Heptio, AI, security standards

    Chapters
    00:00 Introduction to Joe Beda and Kubernetes
    02:50 Understanding Kubernetes: The Foundation of Modern Computing
    04:36 The Birth of Kubernetes: From Idea to Reality
    07:38 Internal Debates: Navigating Challenges at Google
    10:14 Key Innovations: What Sets Kubernetes Apart
    13:30 The Role of Community: Collaborating with Red Hat
    15:26 Design Challenges: Networking and Configuration Pain Points
    19:28 Joe's Journey: Transitioning from Microsoft to Google
    23:02 Navigating Corporate Politics: Influence and Success
    25:24 Career Growth: Balancing Company Success and Personal Development
    30:44 Navigating Industry Trends and Career Durability
    35:46 The Balance of Work and Life
    40:40 Understanding Burnout and Personal Ownership
    47:49 The Journey of Founding Heptio
    54:31 The Acquisition by VMware and Its Implications
    01:00:15 Authenticity in Sales and Motivation
    01:01:23 Career Transitions: From Engineer to Founder
    01:02:11 The Evolution of Perspective in Tech Careers
    01:04:26 Navigating the Challenges of Startup Life
    01:06:12 Post-Acquisition Dynamics at VMware
    01:09:52 Finding Purpose in Corporate Structures
    01:11:32 Philanthropy and Personal Values
    01:13:02 Open Source Contributions: SPIFFE and SPIRE
    01:16:51 The State of Security Standards in AI
    01:22:12 Advising Principles and Green Flags in Startups
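
    As a small illustration of what a SPIFFE identity looks like (a URI of the form spiffe://<trust-domain>/<workload-path>, which SPIRE issues inside signed SVIDs), here is a rough shape check. It is not the py-spiffe library and skips the spec's stricter validation rules; the example IDs are made up.

    ```python
    # Rough shape check for SPIFFE IDs; real validation is stricter than this.
    from urllib.parse import urlparse

    def is_plausible_spiffe_id(value: str) -> bool:
        uri = urlparse(value)
        return (
            uri.scheme == "spiffe"
            and bool(uri.netloc)          # trust domain, e.g. example.org
            and uri.path.startswith("/")  # workload path, e.g. /payments/api
            and not uri.query
            and not uri.fragment
        )

    print(is_plausible_spiffe_id("spiffe://example.org/payments/api"))  # True
    print(is_plausible_spiffe_id("https://example.org/payments/api"))   # False
    ```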

    1h 20m
  7. 12/15/2025

    The Hyperadaptive Model for AI with Melissa Reeve

    Why do so many AI rollouts stall right after the tools ship? In this episode of Alexa’s Input (AI), Alexa talks with Melissa Reeve, author of the book Hyper Adaptive: Rewiring the Enterprise to Become AI Native, about what it actually takes to get AI adopted in large organizations. Melissa shares how her background in Lean, Agile, and DevOps transformation shaped her view that AI adoption is less about “buying the tool” and more about rewiring how work happens. Together, they break down why many AI initiatives fail (and why ROI is slow), the FOCUS framework, the “AI time paradox,” and how support structures like AI activation hubs, social learning, and better success metrics can raise quality and accelerate impact. A must-listen for engineering leaders, product teams, and executives trying to move beyond pilots and turn AI into real operational leverage. Learn more about Melissa and Hyper Adaptive below.

    Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    LinkedIn: https://www.linkedin.com/in/melissamreeve/
    Book: https://itrevolution.com/product/hyperadaptive/

    Keywords: AI adoption, enterprise transformation, Hyper Adaptive model, organizational change, DevOps, Lean, Agile, AI integration, customer-centricity, innovation accounting, social learning

    Chapters
    00:00 Introduction to AI Adoption in Enterprises
    03:00 Melissa's Journey and the Foundation of AI Thinking
    06:06 The Analogy of DevOps and AI Implementation
    08:47 Cultural Shifts vs. Tooling in AI Adoption
    11:49 The Hyper Adaptive Model for AI Integration
    14:48 Sociology of Workflows and Organizational Change
    17:49 Understanding AI Initiative Failures
    21:00 Customer Centricity in AI Solutions
    23:58 The AI Time Paradox and Learning
    26:58 AI Activation Hubs and Their Role
    30:54 The Role of Human Oversight in AI Automation
    34:03 Incentivizing AI Engagement in Organizations
    35:59 Social Learning and AI: The Power of Collaboration
    40:57 Practical Applications of AI in Daily Life
    44:44 Quality vs. Productivity: The AI Dilemma
    46:13 The Focus Framework: Prioritizing AI Use Cases
    48:23 Influencing AI Adoption in Organizations
    51:07 The Future of Hyper Adaptive Organizations
    55:08 Decision-Making in the Age of AI
    57:37 Key Takeaways for Leaders in the AI Revolution

    57 min
  8. 12/14/2025

    Making MLOps Marvelous with Maria Vechtomova

    What does it actually take to move machine learning from experiments into production reliably, responsibly, and at scale? In this episode of Alexa’s Input (AI), Alexa talks with Maria Vechtomova, co-founder of Marvelous MLOps and an O’Reilly author-in-progress on MLOps with Databricks. Maria shares how her background in data science led her into MLOps, and why most teams struggle not because of tools, but because of missing processes, traceability, and shared understanding across teams. Alexa and Maria dive into what separates good MLOps from fragile deployments, why shipping notebooks as “production” creates long-term pain, and how traceability across code, data, and environment forms the foundation for reliable ML systems (a minimal sketch of this idea follows the chapter list below). They also explore how LLM applications are reshaping MLOps tooling, and where the biggest skill gaps still exist between platform, data, and AI engineers. A must-listen for anyone building, operating, or scaling machine learning systems and for teams trying to make MLOps less magical and more marvelous. Learn more about Marvelous MLOps and Maria’s work below.

    Links
    Watch: https://www.youtube.com/@alexa_griffith
    Read: https://alexasinput.substack.com/
    Listen: https://creators.spotify.com/pod/profile/alexagriffith/
    More: https://linktr.ee/alexagriffith
    Website: https://alexagriffith.com/
    LinkedIn: https://www.linkedin.com/in/alexa-griffith/

    Find out more about the guest at:
    LinkedIn: https://www.linkedin.com/in/maria-vechtomova/

    Takeaways
    Maria started as a data analyst and transitioned into MLOps.
    She emphasizes the importance of tracking data, code, and environment in MLOps.
    MLOps is a practice to bring machine learning models to production reliably.
    Good deployment processes require modular code and proper tracking.
    MLOps differs from DevOps due to the complexities of data and model drift.
    Education is crucial for bridging gaps between teams in AI.
    Small steps can lead to better MLOps practices.
    Scaling MLOps requires understanding the unique data of different brands.
    The rise of LLMs is changing the MLOps landscape.
    Effective teaching methods involve step-by-step guidance.

    Chapters
    00:00 Introduction to MLOps and Maria's Journey
    02:11 Maria's Path to MLOps and Knowledge Sharing
    04:41 The Importance of MLOps in AI Deployments
    10:12 Defining MLOps and Its Challenges
    11:38 MLOps vs. DevOps: Key Differences
    13:00 Overcoming Stagnation in MLOps
    16:04 Small Steps Towards Better MLOps Practices
    19:29 Scaling MLOps in Large Organizations
    21:58 The Impact of LLMs on MLOps
    23:58 The Shift from Traditional ML to AI Applications
    26:51 Evolving Roles in AI Engineering
    28:33 Databricks: A Comprehensive AI Platform
    31:45 Future of AI Platforms and Regulations
    34:26 Bridging Skill Gaps in AI Teams
    38:42 The Importance of Context in AI Development
    40:40 Foundational Skills for MLOps Professionals
    45:43 Integrating Personal Passions with Professional Growth
    47:30 Building Impactful AI Communities
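
    A minimal sketch of the traceability idea Maria describes: record the exact code version, data fingerprint, and environment that produced a training run so the resulting model can be traced back later. The file names and metadata layout are illustrative, not a Databricks or Marvelous MLOps convention.

    ```python
    # Capture code, data, and environment provenance for one training run.
    import hashlib
    import json
    import platform
    import subprocess
    import sys
    from datetime import datetime, timezone
    from pathlib import Path

    def git_commit() -> str:
        """Return the current git commit hash, or 'unknown' outside a repo."""
        try:
            out = subprocess.run(["git", "rev-parse", "HEAD"],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()
        except (OSError, subprocess.CalledProcessError):
            return "unknown"

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest() if path.exists() else "missing"

    def run_metadata(data_file: Path) -> dict:
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "code": {"git_commit": git_commit()},
            "data": {"path": str(data_file), "sha256": sha256_of(data_file)},
            "environment": {"python": sys.version.split()[0],
                            "platform": platform.platform()},
        }

    if __name__ == "__main__":
        metadata = run_metadata(Path("training_data.parquet"))  # hypothetical dataset path
        Path("run_metadata.json").write_text(json.dumps(metadata, indent=2))
    ```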

    44 min

Ratings & Reviews

5 out of 5 (6 Ratings)
