Pairing-up on a CDN PURGE with Elixir
Download MP3
Listen to the full pairing session for pull request #549.
The focus is on replacing an existing Fastly implementation with Jerod's Pipedream, which is built on top of the open-source Varnish HTTP Cache. We cover the initial problem, the proposed solution, the implementation details, and the testing process.
The process begins with a pull request that, for the sake of rapid feedback, is set up to deploy automatically to a new production environment. This allows for real-time testing in a production setting without affecting actual production traffic. The new environment, changelog-2025-05-05, serves as a production replica for testing the new PURGE functionality.
To understand how the PURGE works, we first examine the cache headers of a request. The cache-status header reveals whether a request was a hit, miss, or stale. A stale status indicates that the cached content has expired but is still being served while a fresh version is fetched in the background. The goal of the new system is to explicitly purge the cache, ensuring that users always get the latest content.
A manual purge is performed using a PURGE request with curl. This demonstrates how a single instance can be cleared. However, the real challenge lies in purging all CDN instances globally. This requires a mechanism to discover all the instances and send a purge request to each one.
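As a rough Elixir-flavoured equivalent of those two curl steps, here is a sketch using the Req HTTP client purely for illustration; the URL, instance address, and Host header are placeholders rather than the exact values used in the session:

```elixir
# Inspect the cache-status header of a cached page: hit, miss, or stale.
resp = Req.get!("https://changelog.com/podcast")
Req.Response.get_header(resp, "cache-status")

# Manually purge that path on a single CDN instance (placeholder IPv6
# address), keeping the original site as the Host header.
Req.request!(
  method: "PURGE",
  url: "http://[2001:db8::1]/podcast",
  headers: [{"host", "changelog.com"}]
)
```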
The existing solution for purging all instances is a bash one-liner that uses dig to perform a DNS lookup, retrieves all the IP addresses of the CDN instances, and then loops through them, sending a curl purge request to each. The task is to replicate this logic in Elixir.
The first step is to perform the DNS lookup in Elixir. A new module is created that uses Erlang's :inet_res module to resolve the IPv6 addresses of the CDN instances. This provides the list of all instances that need to be purged.
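A minimal sketch of such a module, assuming the instances publish AAAA records under a single CDN hostname (the module name and hostname are placeholders):

```elixir
defmodule Changelog.Pipedream.DNS do
  @moduledoc """
  Resolves the IPv6 (AAAA) addresses behind the CDN hostname, i.e. the
  list of instances that need to be purged.
  """

  # e.g. instances("cdn.example.com") #=> ["2001:db8::1", "2001:db8::2"]
  def instances(hostname) do
    hostname
    |> String.to_charlist()
    |> :inet_res.lookup(:in, :aaaa)
    |> Enum.map(fn ip -> ip |> :inet.ntoa() |> to_string() end)
  end
end
```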
Next, a new Pipedream module is created to handle the purging logic. This module is designed to be a drop-in replacement for the existing Fastly module. It will have the same interface, allowing for a seamless transition. The core of this module is a purge function that takes a URL, retrieves the list of CDN instances, and then sends a purge request to each instance.
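A hedged sketch of how that purge function could be structured; the module names, config-injected collaborators, and CDN hostname below are assumptions rather than the PR's actual code:

```elixir
defmodule Changelog.Pipedream do
  # Swappable collaborators so the DNS lookup and HTTP call can be
  # replaced by mocks in tests (see the test sketch below).
  @dns Application.compile_env(:changelog, [__MODULE__, :dns], Changelog.Pipedream.DNS)
  @http Application.compile_env(:changelog, [__MODULE__, :http], Changelog.Pipedream.HTTP)

  # Placeholder; in the real module this would come from configuration.
  @cdn_hostname "cdn.example.com"

  @doc "Purges the given URL from every CDN instance."
  def purge(url) do
    # The request goes to each instance's IP address, while the original
    # host travels along as the Host header.
    %URI{host: host, path: path} = URI.parse(url)

    for ip <- @dns.instances(@cdn_hostname) do
      @http.purge(ip, host, path || "/")
    end

    :ok
  end
end
```

Here `@http.purge/3` would issue the same PURGE request shown earlier, parameterised by instance address.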
The implementation of the Pipedream module is done using Test-Driven Development (TDD). This involves writing a failing test first and then writing the code to make the test pass. This ensures that the code is correct and behaves as expected.
The first test is to verify that a purge request is sent to a single CDN instance. This involves mocking the DNS lookup to return a single IP address and then asserting that an HTTP request is made to that address. The test is then extended to handle multiple instances, ensuring that the looping logic is correct.
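A minimal ExUnit sketch of the extended, multi-instance form of that test, assuming the DNS lookup and HTTP call sit behind Mox mocks as in the module sketch above (all names here are hypothetical; the single-instance version is the same test with one address):

```elixir
# Assumes test_helper.exs defines the mocks, e.g.
#   Mox.defmock(DNSMock, for: Changelog.Pipedream.DNS.Behaviour)
#   Mox.defmock(HTTPMock, for: Changelog.Pipedream.HTTP.Behaviour)
# and that the test config points Changelog.Pipedream at them.
defmodule Changelog.PipedreamTest do
  use ExUnit.Case, async: true
  import Mox

  setup :verify_on_exit!

  test "sends a PURGE to every resolved CDN instance" do
    expect(DNSMock, :instances, fn _hostname -> ["2001:db8::1", "2001:db8::2"] end)

    expect(HTTPMock, :purge, 2, fn ip, "changelog.com", "/podcast" ->
      assert ip in ["2001:db8::1", "2001:db8::2"]
      {:ok, 200}
    end)

    assert :ok = Changelog.Pipedream.purge("https://changelog.com/podcast")
  end
end
```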
A key challenge in testing is handling the deconstruction of the URL. The purge/1 function receives a full URL, but the purge request needs to be sent to a specific IP address with the original host as a header. This requires parsing the URL to extract the host and the path.
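Elixir's built-in URI module handles this split; for example (values are illustrative):

```elixir
iex> %URI{host: host, path: path} = URI.parse("https://changelog.com/podcast/episode")
iex> {host, path}
{"changelog.com", "/podcast/episode"}
```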
Once the unit tests are passing, the new purge functionality is deployed to the new production environment for real-world testing. This allows for verification of the entire workflow, from triggering a purge to observing the cache status of subsequent requests.
The testing process involves editing an episode, which triggers a purge, and then using curl to check the cache headers. A miss indicates that the purge was successful. The tests are performed on both the application and the static assets, ensuring that all backends are purged correctly.
With the core functionality in place, the next steps involve refining the implementation and adding more features. This includes:
- Configuration: Moving hardcoded values, such as the application name and port, to a configuration file (see the sketch after this list).
- Error Handling: Implementing robust error handling for DNS lookups and HTTP requests.
- Security: Adding a token to the purge request to prevent unauthorized purges.
- Observability: Using tools like Honeycomb.io to monitor the purge requests and ensure that they are being processed correctly.
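For the configuration and security items, one hedged possibility is moving the hardcoded values and a purge token into runtime configuration; all key and environment variable names here are hypothetical:

```elixir
# config/runtime.exs -- hypothetical keys and environment variable names
import Config

config :changelog, Changelog.Pipedream,
  cdn_hostname: System.get_env("PIPEDREAM_HOSTNAME", "cdn.example.com"),
  port: String.to_integer(System.get_env("PIPEDREAM_PORT", "80")),
  purge_token: System.get_env("PIPEDREAM_PURGE_TOKEN")
```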
By following a methodical approach that combines TDD, a production replica for real-world verification, and careful attention to implementation details, it is possible to build a robust and reliable global CDN purge system with Elixir. This not only improves the performance and reliability of the CDN but also provides a solid foundation for future enhancements.
🍿 This entire conversation is available to Make it Work members as full videos served from the CDN, and also from a Jellyfin media server: makeitwork.tv/cdn-purge-with-elixir 👈 Scroll to the bottom of the page for CDN & media server info
EPISODE CHAPTERS
- (00:00) - The Goal
- (03:54) - The Elixir Way
- (07:18) - Pipedream vs Pipely
- (09:26) - Copy, paste & start TDD-ing
- (13:36) - TDD talk
- (17:08) - Let's TDD!
- (24:45) - Does it work?
- (30:24) - It works!
- (33:15) - Should we test DNS failures?
- (35:02) - Let's test the HTTP part
- (37:15) - All tests passing
- (37:53) - Let's test this in production
- (40:29) - Let's check if it's working as expected
- (41:43) - Does purging the static backend work?
- (43:54) - Next steps
- (47:35) - Let's look at requests in Honeycomb.io
- (51:56) - How does it feel to be this close to finishing this?
- (52:45) - Remember how this started?
Creators and Guests

Guest: Jerod Santo
Hosts Changelog News, co-hosts The Changelog & takes out the trash (his old code) once in a while
