166 episodes

Interviews from Bret Fisher's live show with co-host Nirmal Mehta. Episodes cover container and cloud topics like Docker, Kubernetes, Swarm, Cloud Native development, DevOps, SRE, GitOps, DevSecOps, platform engineering, and the full software lifecycle. Full show notes and more info are available at https://podcast.bretfisher.com

DevOps and Docker Talk: Cloud Native Interviews and Tooling
Bret Fisher

    • Education
    • 4.5 • 51 Ratings

    Observability Cost-Savings and eBPF Goodness with Groundcover

    Bret is joined by Shahar Azulay, Groundcover CEO and Co-Founder, to discuss their new approach to fully observe K8s and its workloads with a "hybrid observability architecture."
    Groundcover is a new, cloud-native, eBPF-based platform built around a new model for how observability solutions are architected and priced. It can drastically reduce your monitoring, logging, and tracing costs and complexity: it stores all of its data in your clusters and needs only one agent per host for full observability and APM.
    We dig into the deployment, architecture, and how it all works under the hood.
    Be sure to check out the live recording of the complete show from June 27, 2024 on YouTube (Stream 272). Includes demos.
    ★Topics★
    Groundcover Discord Channel
    Groundcover Repository on GitHub
    Groundcover YouTube Channel
    Join the Groundcover Slack
    Creators & Guests

    Cristi Cotovan - Editor
    Beth Fisher - Producer
    Bret Fisher - Host
    Shahar Azulay - Guest

    (00:00) - Intro
    (03:16) - Shahar's Background and Groundcover's Origin
    (06:34) - Where Did the Hybrid Idea Come From?
    (12:11) - Groundcover's Deployment Model
    (18:21) - Monitoring More than Kubernetes
    (20:32) - eBPF from the Ground Up
    (23:58) - How Does Groundcover Read eBPF Logs?
    (32:06) - Groundcover's Stack and Compatibility
    (36:18) - The Importance of PromQL
    (37:41) - Groundcover: Also On-Prem and Managed
    (49:35) - Getting Started with Groundcover
    (52:15) - Groundcover Caretta
    (54:55) - What's Next for Groundcover?

    You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news!
    Grab the best coupons for my Docker and Kubernetes courses.
    Join my cloud native DevOps community on Discord.
    Grab some merch at Bret's Loot Box.
    Homepage: bretfisher.com

    • 55 min
    Flow State with VS Code AI

    Bret and Nirmal are joined by Continue.dev co-founder, Nate Sesti, to walk through an open source replacement for GitHub Copilot.
    Continue lets you use a set of open source and closed source LLMs in the JetBrains and VS Code IDEs, adding AI to your coding workflow without leaving the editor.
    You've probably heard about GitHub Copilot and other AI code assistants. The Continue team has created a completely open source alternative to, or maybe a superset of, these existing tools: beyond being open source, it's highly configurable and lets you choose multiple models for code completion and chat in VS Code and JetBrains, with more editors coming soon.
    This show builds on our recent Ollama episode: Continue can use Ollama in the background to run a local LLM for you, if you'd rather not rely on internet-hosted models.
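    As a rough sketch of how that local setup can look, here is a hypothetical Continue config.json entry pointing at a local Ollama model. The schema and file location vary by Continue version, and "llama3" is just an example model tag:

    ```json
    {
      "models": [
        {
          "title": "Local Llama 3 via Ollama",
          "provider": "ollama",
          "model": "llama3"
        }
      ]
    }
    ```

    With Ollama serving on its default port, Continue routes chat and completions to that local model instead of a hosted API.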
    Be sure to check out the live recording of the complete show from May 16, 2024 on YouTube (Ep. 266). Includes demos.
    ★Topics★
    Continue.dev Website
    Creators & Guests

    Cristi Cotovan - Editor
    Beth Fisher - Producer
    Bret Fisher - Host
    Nirmal Mehta - Host
    Nate Sesti - Guest

    (00:00) - Introduction
    (01:52) - Meet Nate Sesti, CTO of Continue
    (02:40) - Birth and Evolution of Continue
    (03:56) - Continue's Features and Benefits
    (22:24) - Running Multiple Models in Parallel
    (26:38) - Best Hardware for Continue
    (32:45) - Other Advantages of Continue
    (36:08) - Getting Started with Continue


    • 37 min
    AWS Graviton: The Great Arm Migration

    Bret and Nirmal are joined by Michael Fischer of AWS to discuss why we should use Graviton, their arm64 compute with AWS-designed CPUs.
    Graviton is AWS's term for its custom ARM-based EC2 instances. All the major clouds now offer an ARM-based option for their server instances, but AWS was first, way back in 2018. Fast forward six years, and AWS is releasing its fourth-generation Graviton instances, which deliver all the CPU, networking, memory, and storage performance you'd expect from its x86 instances, and then some.
    I'm a big fan of ARM-based servers and the price points AWS gives us. Graviton has been my default EC2 instance type for years now, and I recommend it for the projects I work on with companies.
    We get into the history of Graviton and how easy it is to build and deploy containers and Kubernetes clusters on Graviton, even mixing two platform types in the same cluster. We also cover how to build multi-platform images using Docker BuildKit.
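    As a sketch of that multi-platform build workflow, the commands below use Docker Buildx to produce a single image tag containing both amd64 and arm64 (Graviton) variants. The builder name and image name are placeholders:

    ```shell
    # One-time: create and select a builder that supports multi-platform builds
    docker buildx create --name multiarch --use

    # Build for x86 and Graviton (arm64) in one shot and push the multi-arch manifest
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      -t myrepo/myapp:latest \
      --push .
    ```

    Each node then pulls the variant matching its own architecture, so the same tag works across mixed x86/Graviton clusters.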
    Be sure to check out the live recording of the complete show from May 9, 2024 on YouTube (Ep. 265). Includes demos.
    ★Topics★
    Graviton + GitLab + EKS
    Porting Advisor for Graviton
    Graviton Getting Started
    Creators & Guests

    Cristi Cotovan - Editor
    Beth Fisher - Producer
    Bret Fisher - Host
    Nirmal Mehta - Host
    Michael Fischer - Guest

    (00:00) - Intro
    (06:19) - AWS and ARM64: Evolution to Graviton 4
    (07:55) - AWS EC2 Nitro: Why and How?
    (11:53) - Nitro and Graviton's Evolution
    (18:35) - What Can't Run on Graviton?
    (23:15) - Moving Your Workloads to Graviton
    (27:19) - K8s Tooling and Multi-Platform Images
    (37:07) - Tips for Getting Started with Graviton


    • 39 min
    Local GenAI LLMs with Ollama and Docker

    Bret and Nirmal are joined by friend of the show, Matt Williams, to learn how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama and Docker's "GenAI Stack," to build apps on top of open source LLMs.
    We've designed this conversation for tech people like myself, who are no strangers to using LLMs in web products like ChatGPT, but are curious about running open source generative AI models locally and how to set up a Docker environment to develop on top of these open source LLMs.
    Matt Williams walks us through all the parts of this solution and, with detailed explanations, shows how Ollama makes it easier to set up LLM stacks on Mac, Windows, and Linux.
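    For context, a minimal Docker Compose sketch for running Ollama in a container might look like this. The image name and port are Ollama's published defaults; persisting the model directory is optional but saves re-downloading:

    ```yaml
    services:
      ollama:
        image: ollama/ollama              # official Ollama image on Docker Hub
        ports:
          - "11434:11434"                 # Ollama's default REST API port
        volumes:
          - ollama-models:/root/.ollama   # keep downloaded models across restarts
    volumes:
      ollama-models:
    ```

    After `docker compose up -d`, something like `docker compose exec ollama ollama pull llama3` fetches an example model; see the episode and Ollama's docs for GPU passthrough options.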
    Be sure to check out the video version of this episode for any demos.
    This episode is from our YouTube Live show on April 18, 2024 (Stream 262).
    Creators & Guests

    Cristi Cotovan - Editor
    Beth Fisher - Producer
    Bret Fisher - Host
    Matt Williams - Guest
    Nirmal Mehta - Host

    (00:00) - Intro
    (01:32) - Understanding LLMs and Ollama
    (03:16) - Ollama's Elevator Pitch
    (08:40) - Installing and Extending Ollama
    (17:17) - HuggingFace and Other Libraries
    (19:24) - Which Model Should You Use?
    (26:28) - Ollama and Its Applications
    (28:57) - Retrieval Augmented Generation (RAG)
    (36:44) - Deploying Models and API Endpoints
    (40:38) - DockerCon Keynote and LLM Demo
    (47:44) - Getting Started with Ollama


    • 50 min
    Kubernetes Observability with Site24x7

    Bret is joined by Jasper Paul and Vinoth Kanagaraj, observability experts and Site24x7 Product Managers, to discuss achieving end-to-end visibility for applications on Kubernetes infrastructure. We answer questions on all things monitoring, OpenTelemetry, and KPIs for DevOps and SREs.
    We talk about the industry's evolution from monitoring to full observability platforms, as well as adjacent topics to help with your own Kubernetes and application monitoring, including some of the most useful Kubernetes metrics and AI's role in metric analysis and alerting humans.
    Be sure to check out the live recording of the complete show from April 25, 2024 on YouTube (Ep. 263). Includes demos.
    ★Topics★
    Site24x7 Full stack observability
    Site24x7 Kubernetes monitoring
    Voting App
    Creators & Guests

    Cristi Cotovan - Editor
    Beth Fisher - Producer
    Bret Fisher - Host
    Jasper Paul - Guest
    Vinoth Kanagaraj - Guest

    (00:00) - Intro
    (02:01) - Observability vs Monitoring
    (08:32) - The New App Health Layer
    (14:39) - Attributes Collected
    (17:47) - Unified Observability
    (19:00) - AI-Powered Insights: The Role of AIOps
    (21:51) - OpenTelemetry and Multi-Cluster Monitoring
    (25:45) - Windows Support
    (26:06) - Correlating Requests Between Microservices
    (28:14) - Synthetic vs Real-Time Monitoring
    (30:25) - Dashboards, Tracing and Metrics
    (37:17) - Getting Started


    • 40 min
    K2D by Portainer

    Bret and Nirmal are joined by Neil Cresswell and Steven Kang from Portainer to look at K2D, a new project that enables us to leverage Kubernetes tooling to manage Docker containers on tiny devices at the far edge.
    K2D stands for Kubernetes to Docker, which is a bit of a crazy idea: a partial Kubernetes API running on top of Docker Engine without needing a full Kubernetes control plane. If you work with very small devices, including older Raspberry Pis, 32-bit machines, and industrial sensors in the infrastructure we now call the 'edge', it's often hard to make the container setup simple, reliable, and automated all at the same time.
    This project uses fewer resources than a single-node K3s install and still lets you use Kubernetes tools to deploy and manage your containers, which are in fact just running on a Docker Engine, with no full-fledged Kubernetes distribution involved.
    We get into far more detail on the architecture, the Portainer team's motivations for this new open source project, and its limitations, because it's not real Kubernetes, so it can't do everything.
    Be sure to check out the video version of this episode for any demos.
    This episode is from our YouTube Live show on March 28, 2024 (Stream 260).
    ★Topics★
    K2D website
    K2D Docs
    Creators & Guests

    Cristi Cotovan - Editor
    Beth Fisher - Producer
    Bret Fisher - Host
    Neil Cresswell - Guest
    Nirmal Mehta - Host
    Steven Kang - Guest

    (00:00) - Intro
    (02:40) - Introducing the guests
    (03:56) - Why K2D? Architecture and Motivations
    (05:55) - How Efficient is K2D?
    (10:25) - K2D Architecture Explained: Components and Operations
    (20:42) - What Happens When Resources are Exhausted?
    (23:18) - K2D for Edge Deployment with Portainer or Argo CD
    (28:22) - K2D Future Roadmap
    (30:36) - Getting Started with K2D


    • 32 min

Customer Reviews

4.5 out of 5
51 Ratings

ASobering,

Entertaining, insightful, and actionable! 🔥

Bret and his incredibly knowledgeable guests deliver nothing but value in each and every episode. Comprised of all the traditional things that make a DevOps show fabulous (deconstruction, innovation, etc.) coupled with authenticity and insight you won’t find anywhere else. Thanks for putting out such a wonderful show, Bret - keep up the great work!

CarlosLizaola,

Great Podcast with really useful information

I really like this podcast and all the premium content Bret Fisher makes!! 👍🏼


You Might Also Like

Kubernetes Podcast from Google
Abdel Sghiouar, Kaslin Fields
The Cloudcast
Massive Studios
AWS Podcast
Amazon Web Services
Software Engineering Radio - the podcast for professional software developers
se-radio@computer.org
Go Time: Golang, Software Engineering
Changelog Media
Software Engineering Daily
Software Engineering Daily