We talk about it, then we share our screen and show the practical side of CDNs, homelabs, CI/CD, DevOps & everything in between.

All Episodes
#15

Pairing-up on a CDN PURGE with Elixir

Listen to the full pairing session for pull request #549. The focus is on replacing an existing Fastly implementation with Jerod's Pipedream, which is built on top of the open-source Varnish HTTP Cache. We cover the initial problem, the proposed solution, the implementation details, and the testing process.

The process begins with a pull request that, for the sake of rapid feedback, is set up to automatically deploy to new production. This allows for real-time testing in a production setting without affecting the actual production traffic. The new production - changelog-2025-05-05 - serves as a production replica for testing the new PURGE functionality.

To understand how the PURGE works, we first examine the cache headers of a request. The cache-status header reveals whether a request was a hit, miss, or stale. A stale status indicates that the cached content has expired but is still being served while a fresh version is fetched in the background. The goal of the new system is to explicitly purge the cache, ensuring that users always get the latest content.

A manual purge is performed using a PURGE request with curl. This demonstrates how a single instance can be cleared. However, the real challenge lies in purging all CDN instances globally. This requires a mechanism to discover all the instances and send a purge request to each one.

The existing solution for purging all instances is a bash one-liner that uses dig to perform a DNS lookup, retrieves all the IP addresses of the CDN instances, and then loops through them, sending a curl purge request to each. The task is to replicate this logic in Elixir.

The first step is to perform the DNS lookup in Elixir. A new module is created that uses Erlang's :inet_res module to resolve the IPv6 addresses of the CDN instances. This provides the list of all instances that need to be purged.

Next, a new Pipedream module is created to handle the purging logic. This module is designed to be a drop-in replacement for the existing Fastly module. It will have the same interface, allowing for a seamless transition. The core of this module is a purge function that takes a URL, retrieves the list of CDN instances, and then sends a purge request to each instance.
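To make that shape concrete, here is a minimal sketch in Elixir. It is not the code from pull request #549: the module layout, the instances hostname (cdn.example.com), the port, and the choice of Req as the HTTP client are all illustrative assumptions — only :inet_res, the host-header trick, and the overall flow come from the session.

```elixir
defmodule Pipedream do
  # Sketch only — shape inferred from the episode, not from PR #549.

  # Assumptions: a DNS name whose AAAA records list every CDN instance,
  # and the port the instances listen on. Both are hardcoded here;
  # moving them to config is one of the episode's follow-up steps.
  @instances_host ~c"cdn.example.com"
  @port 80

  @doc "Sends a PURGE request for `url` to every CDN instance."
  def purge(url) do
    # Deconstruct the URL: the request goes to each instance's IP
    # directly, while the original host rides along as the Host header.
    %URI{host: host, path: path} = URI.parse(url)
    path = path || "/"

    for ip <- instance_ips() do
      # Assumption: Req as the HTTP client — Finch underneath upcases
      # the atom into the custom PURGE verb. IPv6 literals need
      # brackets inside URLs.
      Req.request!(
        method: :purge,
        url: "http://[#{ip}]:#{@port}#{path}",
        headers: [{"host", host}]
      )
    end
  end

  # The DNS lookup via Erlang's :inet_res, as mentioned in the session:
  # resolve the IPv6 address of every CDN instance.
  defp instance_ips do
    @instances_host
    |> :inet_res.lookup(:in, :aaaa)
    |> Enum.map(fn addr -> addr |> :inet.ntoa() |> to_string() end)
  end
end
```

With something like this in place, purging a URL everywhere is a one-liner such as Pipedream.purge("https://changelog.com/"), and the real implementation adds the configuration, error handling, and purge token discussed below.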
The implementation of the Pipedream module is done using Test-Driven Development (TDD). This involves writing a failing test first and then writing the code to make the test pass. This ensures that the code is correct and behaves as expected.

The first test is to verify that a purge request is sent to a single CDN instance. This involves mocking the DNS lookup to return a single IP address and then asserting that an HTTP request is made to that address. The test is then extended to handle multiple instances, ensuring that the looping logic is correct.

A key challenge in testing is handling the deconstruction of the URL. The purge/1 function receives a full URL, but the purge request needs to be sent to a specific IP address with the original host as a header. This requires parsing the URL to extract the host and the path.

Once the unit tests are passing, the new purge functionality is deployed to the new production environment for real-world testing. This allows for verification of the entire workflow, from triggering a purge to observing the cache status of subsequent requests.

The testing process involves editing an episode, which triggers a purge, and then using curl to check the cache headers. A miss indicates that the purge was successful. The tests are performed on both the application and the static assets, ensuring that all backends are purged correctly.

With the core functionality in place, the next steps involve refining the implementation and adding more features:

Configuration: Moving hardcoded values, such as the application name and port, to a configuration file.
Error Handling: Implementing robust error handling for DNS lookups and HTTP requests.
Security: Adding a token to the purge request to prevent unauthorized purges.
Observability: Using tools like Honeycomb.io to monitor the purge requests and ensure that they are being processed correctly.

By following a methodical approach that combines TDD, a staging environment, and careful consideration of the implementation details, it is possible to build a robust and reliable global CDN purge system with Elixir. This not only improves the performance and reliability of the CDN but also provides a solid foundation for future enhancements.

🍿 This entire conversation is available to Make it Work members as full videos served from the CDN, and also a Jellyfin media server: makeitwork.tv/cdn-purge-with-elixir 👈 Scroll to the bottom of the page for CDN & media server info

LINKS

🐙 github.com/thechangelog/changelog.com pull request #549
🐙 github.com/thechangelog/pipely
#13

DevOps Sushi

We sit down for a deep-dive conversation with Mischa van den Burg, a former nurse who made the leap into the world of DevOps. We explore the practical realities, technical challenges, and hard-won wisdom gained from building and managing modern infrastructure. This isn't your typical high-level overview; we get into the weeds on everything from homelab setups to the nuances of GitOps tooling.

We start by exploring the journey from nursing to DevOps - the why behind the career change (00:54) - focusing on the transferable skills and the mindset required to succeed in a field defined by continuous learning and complex problem-solving.

What are the most engaging aspects of DevOps (04:49)? We discuss the satisfaction of automating complex workflows and building resilient systems. Conversely, we also tackle the hardest parts of the job (05:48), moving beyond the cliché "it's the people" to discuss the genuine technical and architectural hurdles faced in production environments.

We move past the buzzword and into the practical application of "breaking down silos" (07:36). The conversation details concrete strategies for fostering collaboration between development and operations, emphasising shared ownership, transparent communication, and the cultural shift required to make it work.

We discuss critical lessons learned from the field (13:07), including the importance of simplicity, the dangers of over-engineering, and the necessity of building systems that are as easy to decommission as they are to deploy.

The heart of the conversation tackles an important perspective: why choose Kubernetes for a homelab? (23:06) We break down the decision-making process, comparing it to alternatives like Nomad and Docker Swarm. The discussion covers the benefits of using a consistent, API-driven environment for both personal projects and professional development. We also touch on the hardest Talos OS issue encountered (36:17), providing a specific, real-world example of troubleshooting in an immutable infrastructure environment. Two of Everything & No in-place upgrades are important pillars of this mindset, and we cover them both (41:14). We then pivot to a practical comparison of GitOps tools, detailing the migration from ArgoCD to Flux (46:50) and the specific technical reasons that motivated the change.

We conclude (50:40) by reflecting on the core principles of DevOps and platform engineering, emphasising the human element and the ultimate goal of delivering value, not just managing technology.

🍿 This entire conversation, as well as the screen sharing part, is available to Make it Work members as full videos served from the CDN, and also a Jellyfin media server:

DevOps Sushi 1 - conversational part
DevOps Sushi 2 - screen sharing part

Scroll to the bottom of those pages 👆 for CDN & media server info

LINKS

🍣 Jiro Dreams of Sushi
✍️ I'm In Love with my Work: Lessons from a Japanese Sushi Master
🎬 Why I Use Kubernetes For My Homelab
🐙 Mischa's homelab GitHub repository
🎁 Mischa's Free DevOps Community
🎓 KubeCraft DevOps School
#12

Fast Infrastructure

Hugo Santos, founder & CEO of Namespace Labs joins us today to share his passion for fast infrastructure. From sharing childhood stories & dial-up modem phone line wiring experiences, we get to speed testing Hugo's current home internet connection: 25 gigabit FTTP.

We shift focus to Namespace, and talk about how it evolved from software-defined storage to building an application platform that starts Kubernetes clusters in seconds. The underlying infrastructure is fast, custom built and is able to:

Spin up thousands of isolated, virtual machine-based Kubernetes clusters
Run millions of jobs concurrently
Control everything from CPU/RAM allocation to networking setup
Deliver exceptionally low latency at high concurrency

A significant portion of the conversation centres on a major service degradation Namespace experienced in October 2024. Hugo shares the full story, including:

How a hardware delivery delay combined with network issues from a third-party provider created problems
The difficult decision to rebuild the network setup rather than depend on unreliable components
The emotional toll of not meeting self-imposed high standards despite working around the clock
The surprising customer loyalty, with no customers leaving despite an impact on their build systems

Hugo emphasizes taking full responsibility for this incident: "That's on us. We decide which companies we work with..."

The episode concludes with Hugo sharing his philosophy on excellence: "I find that it's usually some kind of unrelenting curiosity that really propels people beyond just being good to being excellent... When we approach how we build our products, it's with that same level of unrelenting curiosity and willingness to break through and change things."

🍿 This entire conversation, including all three YouTube videos, is available for members only as a 1h+ long movie at makeitwork.tv/fast-infrastructure

LINKS

Post mortem: Oct 22, 2024 outage
🐙 namespacelabs/foundation
Google's Boq (mention)
🎬 Open-source application platform inspired by Google's Boq
🎬 Why is this 25 gigabit home internet slow?
🎬 Remote Docker build faster than local?