Make it Work

Gerhard Lazu

Tech infrastructure that gets us excited. Conversations & screen sharing. 🔧 💻

  1. Pairing-up on a CDN PURGE with Elixir

    JUN 29

    Pairing-up on a CDN PURGE with Elixir

    Listen to the full pairing session for pull request #549. The focus is on replacing an existing Fastly implementation with Jerod's Pipedream, which is built on top of the open-source Varnish HTTP Cache. We cover the initial problem, the proposed solution, the implementation details, and the testing process.

    The process begins with a pull request that, for the sake of rapid feedback, is set up to automatically deploy to new production. This allows for real-time testing in a production setting without affecting the actual production traffic. The new production - changelog-2025-05-05 - serves as a production replica for testing the new PURGE functionality.

    To understand how the PURGE works, we first examine the cache headers of a request. The cache-status header reveals whether a request was a hit, a miss, or stale. A stale status indicates that the cached content has expired but is still being served while a fresh version is fetched in the background. The goal of the new system is to explicitly purge the cache, ensuring that users always get the latest content.

    A manual purge is performed using a PURGE request with curl, which demonstrates how a single instance can be cleared. The real challenge lies in purging all CDN instances globally, which requires a mechanism to discover all the instances and send a purge request to each one. The existing solution is a bash one-liner that uses dig to perform a DNS lookup, retrieves all the IP addresses of the CDN instances, and then loops through them, sending a curl purge request to each. The task is to replicate this logic in Elixir.

    The first step is to perform the DNS lookup in Elixir. A new module uses Erlang's :inet_res module to resolve the IPv6 addresses of the CDN instances, providing the list of all instances that need to be purged. Next, a new Pipedream module handles the purging logic. It is designed to be a drop-in replacement for the existing Fastly module, with the same interface, allowing for a seamless transition. The core of this module is a purge function that takes a URL, retrieves the list of CDN instances, and then sends a purge request to each instance.

    The Pipedream module is implemented using Test-Driven Development (TDD): a failing test is written first, then the code that makes it pass. The first test verifies that a purge request is sent to a single CDN instance, by mocking the DNS lookup to return a single IP address and asserting that an HTTP request is made to that address. The test is then extended to handle multiple instances, ensuring that the looping logic is correct. A key challenge in testing is the deconstruction of the URL: the purge/1 function receives a full URL, but the purge request needs to be sent to a specific IP address with the original host as a header, which requires parsing the URL to extract the host and the path.
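    To make those moving parts concrete, here is a minimal, hypothetical Elixir sketch of such a purge module. It is not the actual code from pull request #549: the instance hostname, the purge port, and the MyFinch client name are illustrative assumptions.

        # Hypothetical sketch only - hostname, port, and client name are assumptions.
        defmodule Pipedream do
          @instances ~c"cdn.example.com" # assumed DNS name shared by all CDN instances
          @purge_port 80                 # assumed port the instances listen on

          # Purge a URL from every CDN instance discovered via DNS.
          def purge(url) do
            %URI{host: host, path: path} = URI.parse(url)

            for ip <- instance_ips() do
              # Send PURGE to each instance directly, keeping the original
              # host as a header so Varnish can match the cached object.
              Finch.build("PURGE", "http://[#{ip}]:#{@purge_port}#{path}", [{"host", host}])
              |> Finch.request(MyFinch) # assumed, already-started Finch client
            end
          end

          # AAAA lookup: one IPv6 address per running CDN instance.
          defp instance_ips do
            :inet_res.lookup(@instances, :in, :aaaa)
            |> Enum.map(fn ip -> ip |> :inet.ntoa() |> to_string() end)
          end
        end

    Calling Pipedream.purge("https://changelog.com/podcast/123") would then be the Elixir equivalent of the dig + curl bash one-liner, which is exactly the behaviour the unit tests pin down.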
    Once the unit tests pass, the new purge functionality is deployed to the new production environment for real-world testing, verifying the entire workflow from triggering a purge to observing the cache status of subsequent requests. The testing process involves editing an episode, which triggers a purge, and then using curl to check the cache headers; a miss indicates that the purge was successful. The tests are performed on both the application and the static assets, ensuring that all backends are purged correctly.

    With the core functionality in place, the next steps involve refining the implementation and adding more features:

    Configuration: moving hardcoded values, such as the application name and port, to a configuration file.
    Error handling: implementing robust error handling for DNS lookups and HTTP requests.
    Security: adding a token to the purge request to prevent unauthorized purges.
    Observability: using tools like Honeycomb.io to monitor the purge requests and ensure that they are being processed correctly.

    By following a methodical approach that combines TDD, a staging environment, and careful consideration of the implementation details, it is possible to build a robust and reliable global CDN purge system with Elixir. This not only improves the performance and reliability of the CDN, but also provides a solid foundation for future enhancements.

    🍿 This entire conversation is available to Make it Work members as full videos served from the CDN, and also a Jellyfin media server: makeitwork.tv/cdn-purge-with-elixir 👈 Scroll to the bottom of the page for CDN & media server info

    LINKS

    🐙 github.com/thechangelog/changelog.com pull request #549
    🐙 github.com/thechangelog/pipely

    EPISODE CHAPTERS

    (00:00) - The Goal
    (03:54) - The Elixir Way
    (07:18) - Pipedream vs Pipely
    (09:26) - Copy, paste & start TDD-ing
    (13:36) - TDD talk
    (17:08) - Let's TDD!
    (24:45) - Does it work?
    (30:24) - It works!
    (33:15) - Should we test DNS failures?
    (35:02) - Let's test the HTTP part
    (37:15) - All tests passing
    (37:53) - Let's test this in production
    (40:29) - Let's check if it's working as expected
    (41:43) - Does purging the static backend work?
    (43:54) - Next steps
    (47:35) - Let's look at requests in Honeycomb.io
    (51:56) - How does it feel to be this close to finishing this?
    (52:45) - Remember how this started?

    56 min
  2. I LOVE TLS

    MAY 29

    I LOVE TLS

    In the world of web infrastructure, what starts as a simple goal can often lead you down a fascinating rabbit hole of history, philosophy, and clever engineering. This is the story of our journey to build a simple, single-purpose, open-source CDN for changelog.com, and the one major hurdle that stood in our way: Varnish, our HTTP caching layer of choice, doesn't support TLS backends.

    Enter Nabeel Sulieman, a shipit.show guest who had previously introduced us to KCert, a simpler alternative to cert-manager. We knew that if anyone could help us solve this TLS conundrum, it was him. After a couple of false starts, we finally recorded the final solution. As Nabeel aptly put it: "Third time is the charm."

    🍿 This entire conversation is available to Make it Work members as full videos served from the CDN, and also a Jellyfin media server: makeitwork.tv/i-love-tls 👈 Scroll to the bottom of the page for CDN & media server info

    LINKS

    🐙 github.com/thechangelog/pipely pull request #8
    🐙 github.com/nabsul/tls-exterminator
    👀 Varnish - Why no SSL?
    🚲 PHK's Bikeshed
    🏡 bikeshed.org

    EPISODE CHAPTERS

    (00:00) - How this started
    (02:05) - What makes TLS & SSL interesting for you?
    (05:58) - Disabling issues & pull requests
    (08:19) - What is Pipely?
    (14:03) - Why no SSL? (in Varnish)
    (15:36) - Who is Poul-Henning Kamp?
    (17:30) - The Bikeshed
    (19:46) - Pipely pull request #8
    (23:56) - Dagger instead of Docker
    (29:41) - pipely Dagger module
    (36:52) - What is saswqatch?
    (40:44) - ghcr.io/gerhard/sysadmin
    (43:45) - Let's benchmark!
    (51:52) - What happens next?
    (01:00:17) - Wrap-up

    1h 3m
  3. DevOps Sushi

    APR 29

    DevOps Sushi

    In this episode, we sit down for a deep-dive conversation with Mischa van den Burg, a former nurse who made the leap into the world of DevOps. We explore the practical realities, technical challenges, and hard-won wisdom gained from building and managing modern infrastructure. This isn't your typical high-level overview; we get into the weeds on everything from homelab setups to the nuances of GitOps tooling.

    We start by exploring the journey from nursing to DevOps - the why behind the career change (00:54) - focusing on the transferable skills and the mindset required to succeed in a field defined by continuous learning and complex problem-solving. What are the most engaging aspects of DevOps (04:49)? We discuss the satisfaction of automating complex workflows and building resilient systems. Conversely, we also tackle the hardest parts of the job (05:48), moving beyond the cliché "it's the people" to discuss the genuine technical and architectural hurdles faced in production environments.

    We move past the buzzword and into the practical application of "breaking down silos" (07:36). The conversation details concrete strategies for fostering collaboration between development and operations, emphasising shared ownership, transparent communication, and the cultural shift required to make it work. We discuss critical lessons learned from the field (13:07), including the importance of simplicity, the dangers of over-engineering, and the necessity of building systems that are as easy to decommission as they are to deploy.

    The heart of the conversation tackles an important perspective: why choose Kubernetes for a homelab? (23:06) We break down the decision-making process, comparing it to alternatives like Nomad and Docker Swarm. The discussion covers the benefits of using a consistent, API-driven environment for both personal projects and professional development. We also touch on the hardest Talos OS issue encountered (36:17), providing a specific, real-world example of troubleshooting in an immutable infrastructure environment. Two of Everything & No in-place upgrades are important pillars of this mindset, and we cover them both (41:14).

    We then pivot to a practical comparison of GitOps tools, detailing the migration from ArgoCD to Flux (46:50) and the specific technical reasons that motivated the change. We conclude (50:40) by reflecting on the core principles of DevOps and platform engineering, emphasising the human element and the ultimate goal of delivering value, not just managing technology.

    🍿 This entire conversation, as well as the screen sharing part, is available to Make it Work members as full videos served from the CDN, and also a Jellyfin media server:

    DevOps Sushi 1 - conversational part
    DevOps Sushi 2 - screen sharing part

    Scroll to the bottom of those pages 👆 for CDN & media server info

    LINKS

    🍣 Jiro Dreams of Sushi
    ✍️ I'm In Love with my Work: Lessons from a Japanese Sushi Master
    🎬 Why I Use Kubernetes For My Homelab
    🐙 Mischa's homelab GitHub repository
    🎁 Mischa's Free DevOps Community
    🎓 KubeCraft DevOps School

    EPISODE CHAPTERS

    (00:00) - Intro
    (00:54) - From Nurse to DevOps Engineer - Why?
    (04:49) - What are the fun DevOps things?
    (05:48) - Hardest part in DevOps
    (07:36) - What does breaking down silos mean to you?
    (13:07) - Hard earned lessons that are worth sharing
    (17:44) - The Bear that Dreams of DevOps
    (23:06) - Why I use Kubernetes for my Homelab?
    (29:04) - Your recommendation for someone starting today
    (36:17) - Hardest Talos issue that you've hit
    (41:14) - No in-place upgrades
    (46:50) - From ArgoCD to Flux
    (50:40) - Remembering what's important

    59 min
  4. Fast Infrastructure

    FEB 28

    Fast Infrastructure

    Hugo Santos, founder & CEO of Namespace Labs, joins us today to share his passion for fast infrastructure. From childhood stories & dial-up modem phone line wiring experiences, we move on to speed testing Hugo's current home internet connection: 25 gigabit FTTP.

    We then shift focus to Namespace, and talk about how it evolved from software-defined storage to building an application platform that starts Kubernetes clusters in seconds. The underlying infrastructure is fast, custom-built, and able to:

    Spin up thousands of isolated, virtual machine-based Kubernetes clusters
    Run millions of jobs concurrently
    Control everything from CPU/RAM allocation to networking setup
    Deliver exceptionally low latency at high concurrency

    A significant portion of the conversation centres on a major service degradation Namespace experienced in October 2024. Hugo shares the full story, including:

    How a hardware delivery delay combined with network issues from a third-party provider created problems
    The difficult decision to rebuild the network setup rather than depend on unreliable components
    The emotional toll of not meeting self-imposed high standards despite working around the clock
    The surprising customer loyalty, with no customers leaving despite an impact on their build system

    Hugo emphasizes taking full responsibility for this incident: "That's on us. We decide which companies we work with..."

    The episode concludes with Hugo sharing his philosophy on excellence: "I find that it's usually some kind of unrelenting curiosity that really propels people beyond just being good to being excellent... When we approach how we build our products, it's with that same level of unrelenting curiosity and willingness to break through and change things."

    🍿 This entire conversation, including all three YouTube videos, is available for members only as a 1h+ long movie at makeitwork.tv/fast-infrastructure

    LINKS

    Post mortem: Oct 22, 2024 outage
    🐙 namespacelabs/foundation
    Google's Boq (mention)
    🎬 Open-source application platform inspired by Google's Boq
    🎬 Why is this 25 gigabit home internet slow?
    🎬 Remote Docker build faster than local?

    EPISODE CHAPTERS

    (00:33) - Weekend projects
    (03:16) - Love for all things infrastructure
    (09:58) - Hugo's 25 gigabit home internet connection
    (13:33) - How does this love for infrastructure translate to Namespace.so?
    (15:28) - What does it mean for a Kubernetes cluster to spin up fast?
    (20:24) - What does a job mean in infrastructure terms?
    (23:12) - Let's talk about your last major outage
    (37:15) - What does Namespace.so look like in practice?
    (39:51) - Namespace Foundation - Open-source Kubernetes app platform
    (40:54) - Complex preview scenarios
    (42:37) - One last thought

    45 min
  5. Keep Alert Chaos in Check

    JAN 26

    Keep Alert Chaos in Check

    Today we talk with Matvey Kukuy and Tal Borenstein, co-founders of Keep, a startup focused on helping companies manage and make sense of their alert systems. The discussion comes three years after Matvey's previous appearance - https://shipit.show/36 - where he talked about Grafana Labs' acquisition of his previous startup Amixr (now Grafana OnCall).

    Keep tackles a significant challenge in modern tech infrastructure: managing the overwhelming volume of alerts that companies receive from their various monitoring systems. Some enterprises deal with up to 70,000 alerts daily, making it crucial to identify which ones represent actual incidents requiring attention.

    We explore real-world examples of major incidents, including the significant CrowdStrike outage in July 2024 that caused widespread system crashes and resulted in an estimated $10 billion in worldwide damages. This incident highlighted how critical it is to quickly identify and respond to serious issues among numerous alerts. Matvey also tells us about his most black swan experience.

    The episode concludes with a hint that some of Keep's AI features may eventually be released as open source once they're sufficiently polished.

    LINKS

    🎧 Keep on-call simple
    CrowdStrike - Wikipedia
    🎬 The Black Swan Theory
    Keep Playground
    Show HN: Keep - GitHub Actions for your monitoring tools

    EPISODE CHAPTERS

    (00:00) - What is new after three years?
    (02:58) - Take us through the last memorable incident
    (07:16) - My most black swan
    (08:50) - How would Keep have made the CrowdStrike experience different?
    (12:38) - How do companies end up in that place?
    (15:29) - Keep name origin
    (17:40) - Why would someone pick Keep?
    (23:22) - Let's think about our use case
    (25:03) - Demo ends
    (28:21) - Reporting capabilities?
    (30:25) - Deploying & running Keep
    (33:12) - 2025 for Keep
    (38:50) - Until next time

    41 min
  6. TalosCon 2024

    SEP 28

    TalosCon 2024

    We have 3 conversations from TalosCon 2024:

    1. Vincent Behar & Louis Fradin from Ubisoft tell us how they are building the next generation of game servers on Kubernetes. Recorded in a coffee shop.

    2. We catch up with David Flanagan on the AI stack that he had success with in the context of rawkode.academy. David also tells us the full story behind his office burning down earlier this year. Recorded in the hallway track.

    3. In the last conversation, Gerhard finally gets together with Justin Garrison in person. They talk about TalosCon, some of the reasons behind users migrating off the Cloud, and why Kubernetes & Talos hold a special place in their minds. Recorded in the workshop cinema room.

    LINKS

    🎬 25,000 servers at Ubisoft - Vincent Behar & Louis Fradin - TalosCon 2024
    Agones is a library for hosting, running and scaling dedicated game servers on Kubernetes
    🎬 Managing Talos with CUElang - David Flanagan - TalosCon 2024
    Xiu is a simple, high performance and secure live media server written in pure Rust
    🎬 From Homelab to Production - Gerhard Lazu - TalosCon 2024

    EPISODE CHAPTERS

    (00:00) - Intro
    (00:52) - Vincent + Louis: Cinema conference talk
    (02:09) - Vincent + Louis: What do you do?
    (03:06) - Vincent + Louis: How do you split work?
    (04:58) - Vincent + Louis: Game servers on Kubernetes
    (08:07) - Vincent + Louis: What made you choose Omni & Talos?
    (11:14) - Vincent + Louis: What could be better about them?
    (12:58) - Vincent + Louis: Tell us about your talk
    (16:50) - Vincent + Louis: What if Omni didn't exist?
    (18:11) - Vincent + Louis: Last takeaway for the listeners
    (18:53) - David: What is your AI stack for creating content?
    (20:31) - David: Can AI guide me through running OCR on a video?
    (21:18) - David: Which AI tools worked best for you?
    (23:09) - David: Any nice AI tools which are worth mentioning?
    (24:20) - David: My office went on fire in March
    (26:13) - David: Which Linux distro do you use?
    (27:18) - David: The extended version behind the office fire
    (30:37) - David: What are you looking forward to?
    (33:07) - David: What tech stack runs rawkode.academy?
    (38:44) - Justin: Finally meeting in person!
    (39:13) - Justin: What was your contribution to TalosCon 2024?
    (41:21) - Justin: What would you improve for next time?
    (43:59) - Justin: What did you love about this conference?
    (46:00) - Justin: Help us visualize the venue
    (47:16) - Justin: What are you thinking for the next TalosCon?
    (49:22) - Justin: What is most interesting for you in Talos & Omni?
    (55:25) - Justin: What is missing?
    (01:00:25) - Justin: How do you see the growing discontent with the Cloud & Kubernetes?
    (01:07:55) - Justin: What are your takeaways from TalosCon 2024?

    1h 12m

About

Tech infrastructure that gets us excited. Conversations & screen sharing. 🔧 💻