Building an Anycast CDN for fun and profit

In simple terms, anycast is just a route with multiple next-hops. More generally, it's the routing method that allows a single IP address to be routed to multiple endpoints. While seemingly basic, it enables some really interesting network use cases. It's also hard to experiment with in a lab, because internet routing has so many unknowns that only surface when you actually deploy. Recently I was interested in trying it out for myself, so I built an Anycast CDN for DNS and HTTP traffic. Here's how I did it.

Anycast Networking

The problem with anycast that makes it unapproachable for most people is that you need a network capable of making BGP announcements to the internet, in addition to having IP address blocks of at least the minimum routable prefix length (/24 for IPv4 and /48 for IPv6). Luckily I already had both of those requirements checked off, so I was able to get up and running fairly quickly. I have a few other posts about BGP and the internet if you're interested in that side of the network.

The network configuration for an anycast setup is actually much simpler than the traditional BGP next-hop-self or OSPF IGP infrastructure that you might see in a typical unicast AS. With anycast, you just announce the same prefixes in multiple locations. No IGP needed.
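As a rough sketch, here's what that can look like in BIRD 2 syntax, assuming BIRD as the BGP daemon. The prefixes and ASNs below are documentation placeholders, not the project's actual numbers; the same config (modulo peer details) would run at every PoP:

```
# Originate the anycast prefix locally at this PoP.
# 192.0.2.0/24 is a placeholder documentation prefix.
protocol static anycast4 {
    ipv4;
    route 192.0.2.0/24 reject;
}

protocol bgp upstream {
    local as 64500;                  # placeholder local ASN
    neighbor 203.0.113.1 as 64496;   # placeholder upstream peer
    ipv4 {
        export where proto = "anycast4";  # announce only the anycast route
        import all;
    };
}
```

Since every PoP exports the same prefix, the internet's normal BGP path selection does the "load balancing" for free.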


Anycast networking has quite a few advantages over DNS + GeoIP style CDNs, the primary ones being availability and load balancing. With a properly configured anycast network, traffic ends up where it should without "intelligent" DNS redirection based on IP geolocation or other means. This comes with the added advantage of not relying on a hostname to power the CDN; services can be pointed directly at the IP address. Anycast also helps by localizing certain types of DoS attacks: if a node is overloaded, it's as simple as dropping that node and letting the others automatically take over. There's no need to even tell the other nodes about an outage; anycast takes care of the routing for you.

Edge Nodes and Caching

The term "Edge Computing" has been thrown around a lot lately, most often when referring to 5G and serving content directly from cell sites. I'm not running anything that close to the edge, but I do refer to the PoPs as edge nodes due to their (ideally) close proximity to end users.

How the actual edge nodes (servers) are provisioned depends entirely on how much content you want to store on the CDN. For this project I'm only storing a few DNS zones and some small static websites; the entire cluster holds about 5 GB of data in total. As such, I decided to just serve everything from everywhere: every edge node stores all of the CDN's data. Other CDNs use caching software like Varnish to store only parts of the CDN content at the edge, relying on a smaller number of more powerful origin servers that hold the bulk of the content. Efficient caching is a whole field of study in itself, so I won't get into that here.
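Since every node serves the full content set, the web server config can be identical everywhere. As a minimal sketch, assuming Caddy (which the Ansible repo task below hints at), a static site in a Caddyfile might look like this; the site name and path are placeholders:

```
# Identical on every edge node; Caddy binds the anycast address like any other IP.
example.com {
    root * /var/www/example.com
    file_server
    encode gzip
}
```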


It's super important to automate when provisioning nearly identical configs to many machines. I decided to go with Ansible for this project due to its simplicity and widespread use. There are tons of other automation platforms that would work too, but Ansible is easy to use and worked great for this. If you haven't used Ansible, it's essentially a platform for scripting and automating common configuration tasks with YAML.

  - name: Add debian sid repo
    apt_repository:
      repo: deb http://deb.debian.org/debian sid main
      state: present

  - name: Add caddy repo
    apt_repository:
      # Caddy's apt repo; [trusted=yes] skips GPG signature verification
      repo: deb [trusted=yes] https://apt.fury.io/caddy/ /
      state: present

Scripts in Ansible are referred to as "playbooks". I have a few playbooks for this project: an installer that provisions a new node, a refresh playbook that pushes new configs (DNS/HTTP) out to the cluster, and a status playbook that checks service status for BGP, DNS, and HTTP, as well as running DNS and HTTP queries against the nodes to make sure they're actually serving content.


Due to the distributed nature of the project, as well as its geographic scope, a central monitoring server wouldn't be all that helpful. Instead I opted to have each node monitor itself, in addition to using the aforementioned Ansible playbook for manual checks. Each edge node runs a script on a cron job to check for high local response times, which would signify excessive load or some other problem. If the times exceed the allowable threshold (currently 1 second), the node triggers an administrative shutdown of the BGP daemon and withdraws its routes from its peers.
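My actual check script isn't shown here, but the logic is simple enough to sketch. Something like the following, run from cron, would time a local DNS query and decide whether to shut down the BGP daemon. The `birdc` command assumes BIRD as the BGP daemon, and the threshold and query name are placeholders:

```python
import subprocess
import time

THRESHOLD_SECONDS = 1.0  # allowable local response time


def should_withdraw(response_time, threshold=THRESHOLD_SECONDS):
    """Decide whether to withdraw anycast routes based on a measured time."""
    return response_time > threshold


def measure_dns(server="127.0.0.1", name="example.com"):
    """Time a query against the local DNS server using dig; returns seconds."""
    start = time.monotonic()
    subprocess.run(
        ["dig", "@" + server, name, "+short", "+time=2"],
        check=True, capture_output=True,
    )
    return time.monotonic() - start


# From cron, the check would look something like:
#   if should_withdraw(measure_dns()):
#       # Administratively shut down BGP; peers then withdraw our routes.
#       subprocess.run(["birdc", "disable", "all"], check=True)
```

Because anycast handles the rerouting, withdrawing the routes is the entire failover procedure; no other node needs to be notified.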

There are more advanced ways this could be done, but a service response time check is simple and effective for an educational project. If I were to deploy a production CDN, there would have to be much more monitoring in place to keep track of global response times from the other edge nodes.


The idea with CDNs is to geographically distribute the content as close as possible to the consumers (or "eyeballs" as they're referred to in the networking space). I have a variety of PoPs for this project, some connected to both IXPs and transit providers, and some with only one or the other. In either case, every edge node announces the same /24 and /48 to its BGP peers.

Concluding thoughts

One thing to keep in mind is that anycast isn't a magic guarantee that the path will always be the best one. LinkedIn Engineering has a great blog post that discusses some of the shortcomings of using anycast in a CDN environment. That being said, building an anycast CDN was a lot of fun and a great learning exercise, which were the main objectives of the project in the first place.
