Let's Talk Shop

Elias Khnaser

Coming to you from the Windy City and hosted by Elias Khnaser, welcome to Let's Talk Shop, a podcast about all things cloud and enterprise tech. Listen in for insights and guest interviews with IT thought leaders and professionals.

  1. DEC 18

    How AWS is Moving Beyond 'Bolt-On AI' to Full Autonomy

    How close are we to building Jarvis from Iron Man? Host Elias Khnaser sits down with Ali Maaz, AWS's leader for Go-to-Market Developer Services, to discuss how Amazon Q (Kiro) is evolving from a simple coding assistant into an autonomous agent and a peer on your team. Recorded live at AWS re:Invent 2025, this conversation dives deep into the future of enterprise software development and cloud operations.

    Key Takeaways:
    ► The Evolution of AI Agents: Why the biggest problem isn't coding but the planning cycle, and how Kiro solves it by arbitrating between product and engineering teams.
    ► Autonomous Agents: A first look at Kiro Autonomous Agents, designed to handle issues like bug fixes directly from Slack or Teams without a human opening a laptop.
    ► The Trust Factor: How AWS builds validation and trust with property-based testing, making Kiro a reliable, productive teammate.
    ► Cloud Ops Revolution: A major announcement of a new agent built specifically for cloud operations and DevOps to reduce Mean Time to Resolution (MTTR) and detect state and policy drift.
    ► AWS Differentiation: How AWS stays focused on customer-driven innovation, viewing internal teams (like Amazon.com) as just one of its largest and most important customers.
    ► The era of "bolt-on AI" is ending. The next step is AI-driven development and operations. Tune in to see how you can "get out of the way" and let AI manage your next big project.

    00:00:00 Intro & Guest Welcome: Ali Maaz, AWS Developer Services
    00:01:11 The "Jarvis" Question: How Close is AWS Q to Iron Man's AI?
    00:01:46 Beyond Coding: Kiro's Role in the Planning Cycle (PR-FAQ)
    00:03:43 Announcement 1: Kiro Autonomous Agent (From Assistant to Peer)
    00:05:21 Building Trust: Validation, Oversight, and the Human in the Loop
    00:06:46 Automated Reasoning & Property-Based Testing (AI Validating AI)
    00:07:34 Announcement 2: Kiro Powers & Personalized Context for ISVs
    00:10:40 Agent Core: Policy Management & Evaluation for Production-Grade Agents
    00:12:43 AWS Differentiation: Why We Are Customer-Obsessed, Not Competitor-Obsessed
    00:14:12 New Agent Announced: Focused on Cloud Operations & DevOps
    00:16:33 The AI Evolution: Moving to "AI Managed" and "Get Out of the Way"
    00:17:47 Conclusion & Wrap-up

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    18 min
  2. DEC 18

    Storage Architect to AI: Is Your Data Performance Fast Enough?

    Is storage the new bottleneck in the age of AI? Elias Khnaser and Asad Khan, Senior Director of Google Cloud Storage, discuss this topic in depth. While the spotlight is on fast, expensive GPUs and TPUs, Elias and Asad go back to basics. In the past, the CPU was rarely the bottleneck; slow storage was. Today, AI training and inferencing workloads require feeding high-cost GPUs and TPUs with data at unprecedented speed to prevent them from sitting idle and wasting millions of dollars.

    Key Takeaways:
    ► The shift: Why high-performance storage is now mission-critical for maximizing the ROI on massive GPU clusters.
    ► How Google Cloud is solving the data performance problem by moving beyond HDDs to intelligent SSD tiering.
    ► A deep dive into Google Cloud Storage solutions for AI, including Anywhere Cache and Rapid Store, designed to automatically handle caching, prefetching, and high-performance throughput across all zones without the customer having to worry about colocation.
    ► The importance of data APIs for researchers: object storage (GCS) vs. full POSIX compliance (Lustre).
    ► The truth: The best AI performance isn't just about the fastest chip; it's the correct configuration of GPUs, storage, and networking.

    00:00:00 Intro & Guest Welcome: Asad Khan, Google Cloud Storage
    00:01:19 GCS, Lustre, & the Full Google Cloud Storage Portfolio
    00:02:00 Is Storage Dead? The GPU vs. Storage Conversation
    00:03:12 The New AI Bottleneck: Why GPUs Sit Idle (Wasting Money)
    00:06:39 From Cheap Scale to High-Performance Cloud Storage
    00:08:22 The Two Dimensions of AI Storage: SSDs & APIs
    00:10:37 Anywhere Cache: Automatic High-Performance Caching
    00:13:15 How Storage Differs for AI Training vs. Inferencing
    00:15:35 Rapid Store and Full POSIX Compliance with Lustre
    00:18:26 The True Formula for AI Performance (It's Not Just the GPU)
    00:20:39 Sony Honda Mobility Case Study: Lustre in Action
    00:23:41 Traditional vs. AI Customers: Different Storage Priorities
    00:27:07 The Future: Unlocking Insights from Unstructured Enterprise Data
    00:33:40 Final Thoughts & Key Takeaways

    35 min
  3. DEC 13

    AI in the Enterprise: Real World Use Cases & Reskilling

    The honeymoon phase of AI is over. Elias Khnaser discusses with Pankaj Kumar, Executive Partner at IBM Consulting, the practical realities of deploying agentic AI in the enterprise, beyond simple chatbots. Recorded live from AWS re:Invent 2025, this episode answers the biggest question facing executives: How do we do AI? We dive into a real-world case study of a major gas utility company (powering Las Vegas) that is completely reimagining its workflow to address one of its biggest problems: high-bill customer calls. Discover how the solution moves far beyond automating the contact center by using a multi-pronged approach that analyzes customer usage, infrastructure data, and weather patterns.

    Key Takeaways:
    ► Why the "boring work" of data governance, cloud architecture, and security is the mandatory foundation for successful enterprise AI deployment.
    ► The strategic, phased approach: How the utility customer first executed a full cloud migration (data center exit to AWS) and SAP RISE before layering on agentic AI.
    ► The technology stack: How they integrated AWS Bedrock, LangChain, and LangGraph to create a comprehensive, agile solution.
    ► The Job Question: An honest conversation about the impact of agentic AI on jobs. Is it mass firing, or a necessary focus on workforce reskilling and filling hard-to-fill contact center roles?

    #IBMPartner

    9 min
  4. OCT 23

    From Cloud to On-Prem: Gemini, GPUs, and the AI Anywhere Vision

    The AI revolution is here, but what about enterprises dealing with sensitive data, regulatory compliance, and low-latency requirements? They can't always move to the public cloud, but now they don't have to choose between compliance and innovation. In this episode of Let's Talk Shop, host Elias Khnaser sits down with two technology giants: Justin Boitano, VP of Enterprise AI at NVIDIA, and Rohan Grover, Senior Director and Head of Product for Google Distributed Cloud. They break down the deep technical partnership that is making the "AI Anywhere" vision a reality, allowing customers to run Google's cutting-edge Gemini 2.5 models directly on premises on NVIDIA GPU servers. Discover how this collaboration uses confidential computing on both CPUs and NVIDIA Blackwell GPUs to secure sensitive customer data and proprietary model weights, turning previously inaccessible "dark data" into a source of competitive advantage. If you work in the public sector, finance, healthcare, oil and gas, or any enterprise with strict data sovereignty rules, this discussion of on-prem GenAI and Google Distributed Cloud's managed and customer-owned deployment models is a must-listen.

    55 min
  5. OCT 23

    Google Cloud’s AI Infrastructure Strategy | TPU, NVIDIA Blackwell & More

    In this episode, we dive deep into AI infrastructure at Google Cloud: what it means, why it matters, and how it's evolving. Our guest shares insights from over 8 years at Google and previous experience as a hardware engineer at IBM, bringing a unique perspective on the nuts and bolts that power today's AI revolution.

    We explore:
    ✅ The foundations of AI infrastructure: chips, networking, storage, and workload-optimized systems
    ✅ How Google's custom hardware (TPUs and Axion Arm processors) differentiates it from AWS, Microsoft, Oracle, and IBM
    ✅ The concept of the AI Hypercomputer: a reference architecture combining hardware, software, and flexible consumption models
    ✅ Key announcements from Google, including NVIDIA Blackwell, GB200, Ironwood TPUs, and Cluster Director
    ✅ Why inference (not just training) is now the hot topic, and how Google helps customers lower the cost per inference

    From hardware-assembly roots to leading AI infrastructure strategy, this conversation highlights how Google builds and scales the systems behind Gemini, Vertex AI, and beyond.

    📌 If you're curious about the future of AI infrastructure, supercomputing, and how enterprises can actually run large-scale AI workloads efficiently, this one's for you.

    40 min
  6. JUL 31

    Google Cloud WAN: A New Era for Enterprise Networking Powered by AI

    The world of enterprise technology is evolving, and networking is more critical than ever. In this episode of Let's Talk Shop, host Elias Khnaser sits down with Muninder Singh Sambi, Vice President and General Manager of Cloud Networking at Google Cloud. Forget everything you thought you knew about networking: Muninder explains why a robust, intelligent network is the secret sauce of a successful AI strategy. They discuss Google's massive global network, including its vast subsea cable infrastructure, and the innovative new products announced at Google Next.

    What You'll Learn:
    ► The Four Pillars of an AI Strategy: Understand the essential components, from AI infrastructure to data management and, most importantly, networking.
    ► The Power of Google Cloud WAN: Discover how this new, managed backbone service can simplify and secure enterprise networking, offering a potential 40% reduction in total cost of ownership.
    ► Cloud WAN in Action: Learn how companies like Nestle and Citadel Securities are leveraging Google's network to accelerate their business journeys.
    ► Openness in the Cloud: Muninder addresses the concept of multi-cloud and explains how Google Cloud WAN is designed to be an open ecosystem, allowing you to connect to applications and services wherever they are hosted.
    ► Why Google's Network is Different: Uncover the unique redundancy and reliability features, including Google's multi-shard architecture and proprietary subsea cables, that set its network apart from competitors.

    Whether you're an IT professional, a thought leader, or just curious about the future of enterprise networking, this episode will challenge your assumptions and provide valuable insights into how the cloud is shaping the future of connectivity.

    Additional Resources:
    ► Nestlé's network transformation with Cloud WAN: https://www.youtube.com/watch?v=mHLlU7mjuvY
    ► BRK2-133: Google's AI-powered next-gen global network: Built for the Gemini era: https://www.youtube.com/watch?v=oZN9kUIVLOU
    ► BRK3-043: Best practices for designing and deploying Cross-Cloud network security: https://www.youtube.com/watch?v=X0LQTHc1FOw

    41 min
  7. MAY 27

    The Digital Backbone: How Equinix Connects Everything for Modern Enterprises

    Think Equinix is just about colocation? Think again! In this insightful episode of Let's Talk Shop, host Elias Khnaser sits down with Arun Dev, VP of Interconnection Services at Equinix, to explore how Equinix is revolutionizing digital infrastructure far beyond its traditional roots. We dive deep into the power of interconnection and virtual networking, revealing how enterprises like IHG are modernizing their networks to achieve incredible scale and agility across global operations. Discover how Equinix helps solve real-world challenges, from simplifying complex legacy networks to enabling seamless hybrid and multi-cloud strategies.

    Arun sheds light on:
    ► The true value of Equinix's global ecosystem: Over 260 data centers, 75 metros, 35 countries, and an unparalleled network of 2,000+ network providers and 3,000+ cloud/IT companies.
    ► What "interconnection services" truly means at Equinix: Secure, private, low-latency connectivity to financial exchanges, hyperscalers, and beyond.
    ► The magic of Equinix Fabric: On-demand, virtual connectivity across regions, driven by APIs and SDKs, allowing you to spin up connections in seconds and scale bandwidth on the fly.
    ► Real-world enterprise transformation: The IHG success story, and how virtualized networking with Equinix helped them serve 115 million mobile app users with a flawless experience.
    ► Equinix's role in the Age of AI: How current network limitations are driving urgency for modernization, and how Equinix is uniquely positioned to handle demanding AI workloads at the edge.
    ► The interconnected edge: Why Equinix's global footprint makes them the ideal partner for delivering low-latency experiences, especially for use cases like in-store retail innovation.
    ► Complementary cloud strategies: Understanding how Equinix works with hyperscaler backbones and offers a neutral abstraction layer for seamless multi-cloud connectivity, even between competing cloud providers.
    ► Future of intelligent networking: Equinix's vision for AI-driven network optimization, predictive insights, and cost-saving recommendations for customers.

    47 min
  8. MAY 8

    Unlock Your AI Potential: NetApp's Features for Performance, Scalability, and Governance

    Welcome back to Let's Talk Shop! In this episode, we dive deep into the world of AI with Russell Fishman, Sr. Director of Solutions Product Management at NetApp, to explore the evolving storage landscape. Forget the GPU-centric hype: we uncover the critical role of intelligent data infrastructure in the AI pipeline, from ingestion to inference. Russell shares how NetApp's long history in data management for analytics and HPC uniquely positions it to tackle AI's biggest challenges, especially around data readiness. We discuss the specific NetApp features that boost performance and scalability across on-premises and multicloud environments, and address the crucial topics of data governance and reducing vendor lock-in in the age of AI. Discover how NetApp is helping customers make their data AI-ready through innovative metadata management and integrations with MLOps tools, ultimately simplifying complex workflows for data scientists and engineers.

    ► Bridging Customer Outcomes with AI
    ► The Impact of Intelligent Data Infrastructure on the AI Pipeline
    ► Beyond Training: The Growing Importance of AI Inference
    ► NetApp's Role: Fueling the AI Engine with Data Management
    ► Right Performance, Right Stage: NetApp's Feature Flexibility
    ► Achieving Extreme Performance: NetApp's Standards-Based Approach with pNFS
    ► NetApp's Evolution: From Analytics & Big Data to AI
    ► Making Data Management Accessible for Data Scientists
    ► Simplifying AI Workflows: Integration with MLOps Tools like Domino Data Lab
    ► Addressing Vendor Lock-in: NetApp's Open Standards Commitment and Multi-Cloud Strategy
    ► Getting Data AI-Ready: The Importance of Metadata
    ► Leveraging Metadata for Retrieval-Augmented Generation (RAG)
    ► Data Governance, Traceability, and Security Across Multi-Cloud with ONTAP

    59 min
