DevOps and Docker Talk: Cloud Native Interviews and Tooling

Bret Fisher

Interviews from Bret Fisher's live show with co-host Nirmal Mehta, covering container and cloud topics like Docker, Kubernetes, Swarm, Cloud Native development, DevOps, SRE, GitOps, DevSecOps, platform engineering, and the full software lifecycle. Full show notes and more info available at https://podcast.bretfisher.com

  1. 6 SEPT

    MLOps for DevOps People

    Bret and Nirmal are joined by Maria Vechtomova, an MLOps Tech Lead and co-founder of Marvelous MLOps, to discuss the obvious and not-so-obvious differences between an MLOps engineer role and a traditional DevOps job. Maria explains how DevOps engineers can adopt and operate machine learning workloads, also known as MLOps. With her expertise, we explore the challenges and best practices for implementing ML in a DevOps environment, including some hot takes on using Kubernetes. A small hands-on sketch follows the chapter list below. Be sure to check out the live recording of the complete show from June 20, 2024 on YouTube (Stream 271).

    ★Topics★
    - Marvelous MLOps on LinkedIn
    - Marvelous MLOps Substack
    - Marvelous MLOps YouTube Channel

    Creators & Guests
    - Cristi Cotovan - Editor
    - Beth Fisher - Producer
    - Bret Fisher - Host
    - Maria Vechtomova - Guest
    - Nirmal Mehta - Host

    Chapters
    - (00:00) - Intro
    - (02:04) - Maria's Content
    - (03:22) - Tools and Technologies in MLOps
    - (09:21) - DevOps vs MLOps: Key Differences
    - (19:22) - Transitioning from DevOps to MLOps
    - (22:52) - Model Accuracy vs Computational Efficiency
    - (24:46) - MLOps with Sensitive Data
    - (29:10) - MLOps Roadmap and Getting Started
    - (32:36) - Tools and Platforms for MLOps
    - (37:14) - Adapting MLOps Practices to Future Trends
    - (44:08) - Is Golang an Option for CI/CD Automation?

    You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news! Grab the best coupons for my Docker and Kubernetes courses. Join my cloud native DevOps community on Discord. Grab some merch at Bret's Loot Box. Homepage: bretfisher.com
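    If you're coming from DevOps and want one concrete MLOps touchpoint, serving a trained model as an HTTP endpoint is a good first step. Here's a minimal sketch using MLflow as an example tool (the episode covers tool choices more broadly); the model name "churn-model" and a configured tracking server are assumptions, not details from the show:

    ```shell
    # Serve a registered model over HTTP with MLflow (example tool; "churn-model"
    # is hypothetical and assumes MLFLOW_TRACKING_URI points at a tracking server
    # with that model registered).
    pip install mlflow
    mlflow models serve -m "models:/churn-model/1" --port 5000 --env-manager local

    # Score it like any other REST service:
    curl -X POST http://localhost:5000/invocations \
      -H 'Content-Type: application/json' \
      -d '{"inputs": [[0.2, 1.5, 3.1]]}'
    ```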

    48 min
  2. 9 AUG

    Debug Containers with Mintoolkit

    Bret is joined by Kyle Quest, founder of DockerSlim (now mintoolkit), to show off how to slim down your existing images with various options, including distroless images like Chainguard Images and Nix-built images. We also look at using the new "mint debug" feature to exec into existing images and containers on Kubernetes, Docker, Podman, and containerd. Kyle joined us for a two-hour livestream to discuss mint's evolution; a quick command sketch follows the chapter list below. Be sure to check out the live recording of the complete show from May 30, 2024 on YouTube (Stream 268). Includes demos.

    ★Topics★
    - Mint repository on GitHub

    Creators & Guests
    - Cristi Cotovan - Editor
    - Beth Fisher - Producer
    - Bret Fisher - Host
    - Kyle Quest (aka Q) - Guest

    Chapters
    - (00:00) - Intro
    - (02:26) - The Evolution of Docker Slim
    - (04:43) - Docker Slim's First Feature
    - (10:04) - Forcing Change is Not Always Possible
    - (13:29) - Docker Slim Name Change to Mintoolkit
    - (15:13) - Dive vs Mint
    - (18:45) - Mint and the Problem with Container Debugging
    - (28:25) - AI-Assisted Debugging
    - (34:46) - Hands-On Debugging Examples
    - (41:27) - Debugging a Podman Image
    - (49:00) - Kubernetes Debugging Example
    - (59:00) - What is KoolKits?
    - (01:05:48) - Future Plans for Mintoolkit
    - (01:06:44) - cdebug: Dedicated Debugging Tool for Containers
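    For a feel of the two workflows demoed, here's a minimal sketch assuming a recent mintoolkit release; the image and container names are hypothetical, and exact flags may differ by version, so check the GitHub repo:

    ```shell
    # Slim an existing image (the command formerly known as "docker-slim build"):
    mint slim my-app:latest

    # Attach a debug sidecar to a running container that has no shell or tools:
    mint debug --target my-app-container
    ```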

    1h 16m
  3. 26 JUL

    Observability Cost-Savings and eBPF Goodness with Groundcover

    Bret is joined by Shahar Azulay, Groundcover CEO and co-founder, to discuss their new approach to fully observing Kubernetes and its workloads with a "hybrid observability architecture." Groundcover is a new, cloud-native, eBPF-based platform that rethinks how observability solutions are architected and priced. It can drastically reduce your monitoring, logging, and tracing costs and complexity: it stores all its data in your clusters and needs only one agent per host for full observability and APM. We dig into the deployment, the architecture, and how it all works under the hood; a generic sketch of the one-agent-per-host pattern follows the chapter list below. Be sure to check out the live recording of the complete show from June 27, 2024 on YouTube (Stream 272). Includes demos.

    ★Topics★
    - Groundcover Discord Channel
    - Groundcover Repository on GitHub
    - Groundcover YouTube Channel
    - Join the Groundcover Slack

    Creators & Guests
    - Cristi Cotovan - Editor
    - Beth Fisher - Producer
    - Bret Fisher - Host
    - Shahar Azulay - Guest

    Chapters
    - (00:00) - Intro
    - (03:16) - Shahar's Background and Groundcover's Origin
    - (06:34) - Where Did the Hybrid Idea Come From?
    - (12:11) - Groundcover's Deployment Model
    - (18:21) - Monitoring More than Kubernetes
    - (20:32) - eBPF from the Ground Up
    - (23:58) - How Does Groundcover Read eBPF Logs?
    - (32:06) - Groundcover's Stack and Compatibility
    - (36:18) - The Importance of PromQL
    - (37:41) - Groundcover: Also On-Prem and Managed
    - (49:35) - Getting Started with Groundcover
    - (52:15) - Groundcover Caretta
    - (54:55) - What's Next for Groundcover?
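    To picture "one agent per host," here's a generic Kubernetes sketch of that pattern. This is not Groundcover's actual chart (they ship a Helm-based install); the DaemonSet name, namespace, and agent image below are hypothetical stand-ins:

    ```shell
    # Generic illustration: a DaemonSet runs exactly one agent pod per node,
    # which is how eBPF-based observability agents typically deploy.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: observability-agent
      namespace: monitoring
    spec:
      selector:
        matchLabels: {app: observability-agent}
      template:
        metadata:
          labels: {app: observability-agent}
        spec:
          hostPID: true                # see host processes for instrumentation
          containers:
          - name: agent
            image: example.com/ebpf-agent:latest   # hypothetical image
            securityContext:
              privileged: true         # eBPF programs need elevated privileges
    EOF
    ```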

    56 min
  4. 12 JUL

    Flow State with VS Code AI

    Bret and Nirmal are joined by Continue.dev co-founder Nate Sesti to walk through an open source alternative to GitHub Copilot. Continue lets you use a mix of open source and closed source LLMs in the JetBrains and VS Code IDEs, adding AI to your coding workflow without leaving the editor. You've probably heard of GitHub Copilot and other AI code assistants; the Continue team has created a completely open source alternative, or maybe a superset, of these existing tools. Along with being open source, it's highly configurable and lets you choose multiple models for code completion and chat in VS Code and JetBrains, with more editors coming soon. This show builds on our recent Ollama episode: if you'd rather not use hosted LLMs, Continue can use Ollama in the background to run a local model for you. A sample config sketch follows the chapter list below. Be sure to check out the live recording of the complete show from May 16, 2024 on YouTube (Ep. 266). Includes demos.

    ★Topics★
    - Continue.dev Website

    Creators & Guests
    - Cristi Cotovan - Editor
    - Beth Fisher - Producer
    - Bret Fisher - Host
    - Nirmal Mehta - Host
    - Nate Sesti - Guest

    Chapters
    - (00:00) - Introduction
    - (01:52) - Meet Nate Sesti, CTO of Continue
    - (02:40) - Birth and Evolution of Continue
    - (03:56) - Continue's Features and Benefits
    - (22:24) - Running Multiple Models in Parallel
    - (26:38) - Best Hardware for Continue
    - (32:45) - Other Advantages of Continue
    - (36:08) - Getting Started with Continue
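    As one example of that configurability, here's a sketch of pointing Continue at local Ollama models via its config file. The field names follow Continue's config.json schema around the time of this episode and may have changed since; the model choices are just examples:

    ```shell
    # Tell Continue to use local Ollama models for chat and tab-autocomplete
    # (sketch of ~/.continue/config.json; check continue.dev docs for the
    # current schema before copying):
    cat > ~/.continue/config.json <<'EOF'
    {
      "models": [
        { "title": "Llama 3 (local)", "provider": "ollama", "model": "llama3" }
      ],
      "tabAutocompleteModel": {
        "title": "Autocomplete", "provider": "ollama", "model": "starcoder2:3b"
      }
    }
    EOF
    ```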

    38 min
  5. 28 JUN

    AWS Graviton: The Great Arm Migration

    Bret and Nirmal are joined by Michael Fischer of AWS to discuss why we should use Graviton, AWS's arm64 compute with AWS-designed CPUs. Graviton is AWS's name for its custom ARM-based EC2 instances. All major clouds now offer an ARM-based option for their server instances, but AWS was first, back in 2018. Fast forward six years, and AWS is releasing its 4th-generation Graviton instances, which deliver all the CPU, networking, memory, and storage performance you'd expect from its x86 instances, and beyond. I'm a big fan of ARM-based servers and the price points AWS gives us. They have been my default EC2 instance type for years now, and I recommend them for all the projects I'm working on with companies. We get into the history of Graviton, how easy it is to build and deploy containers and Kubernetes clusters on Graviton (even mixing two different platform types in the same cluster), and how to build multi-platform images using Docker BuildKit; see the sketch after the chapter list below. Be sure to check out the live recording of the complete show from May 9, 2024 on YouTube (Ep. 265). Includes demos.

    ★Topics★
    - Graviton + GitLab + EKS
    - Porting Advisor for Graviton
    - Graviton Getting Started

    Creators & Guests
    - Cristi Cotovan - Editor
    - Beth Fisher - Producer
    - Bret Fisher - Host
    - Nirmal Mehta - Host
    - Michael Fischer - Guest

    Chapters
    - (00:00) - Intro
    - (06:19) - AWS and ARM64: Evolution to Graviton 4
    - (07:55) - AWS EC2 Nitro: Why and How?
    - (11:53) - Nitro and Graviton's Evolution
    - (18:35) - What Can't Run on Graviton?
    - (23:15) - Moving Your Workloads to Graviton
    - (27:19) - K8s Tooling and Multi-Platform Images
    - (37:07) - Tips for Getting Started with Graviton
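    Here's what the multi-platform build step looks like with Docker BuildKit's buildx; the registry and tag are hypothetical:

    ```shell
    # Build one image for both x86 and Graviton (arm64) nodes.
    # Requires a registry you can push to; the tag below is an example.
    docker buildx create --use   # one-time: create a multi-platform builder
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      -t registry.example.com/my-app:latest \
      --push .
    ```

    BuildKit builds both architectures in one pass and pushes a single multi-arch manifest, so the same image tag pulls correctly on x86 and Graviton nodes in a mixed cluster.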

    39 min
  6. 14 JUN

    Local GenAI LLMs with Ollama and Docker

    Bret and Nirmal are joined by friend of the show Matt Williams to learn how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama and Docker's "GenAI Stack," and to build apps on top of open source LLMs. We've designed this conversation for tech people who are no strangers to using LLMs in web products like ChatGPT, but are curious about running open source generative AI models locally and how they might set up their Docker environment to develop on top of these open source LLMs. Matt Williams walks us through all the parts of this solution and, with detailed explanations, shows us how Ollama makes it easier on Mac, Windows, and Linux to set up LLM stacks. A quick-start sketch follows the chapter list below. Be sure to check out the video version of this episode for the demos. This episode is from our YouTube Live show on April 18, 2024 (Stream 262).

    Creators & Guests
    - Cristi Cotovan - Editor
    - Beth Fisher - Producer
    - Bret Fisher - Host
    - Matt Williams - Guest
    - Nirmal Mehta - Host

    Chapters
    - (00:00) - Intro
    - (01:32) - Understanding LLMs and Ollama
    - (03:16) - Ollama's Elevator Pitch
    - (08:40) - Installing and Extending Ollama
    - (17:17) - HuggingFace and Other Libraries
    - (19:24) - Which Model Should You Use?
    - (26:28) - Ollama and Its Applications
    - (28:57) - Retrieval Augmented Generation (RAG)
    - (36:44) - Deploying Models and API Endpoints
    - (40:38) - DockerCon Keynote and LLM Demo
    - (47:44) - Getting Started with Ollama
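    If you want to try it before listening, here's the basic Ollama flow; the model name is just an example:

    ```shell
    # Pull and chat with a local model:
    ollama pull llama3
    ollama run llama3 "Explain Docker image layers in one paragraph"

    # Ollama also exposes a REST API on localhost:11434 for building apps on top:
    curl http://localhost:11434/api/generate \
      -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
    ```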

    50 min
