Comparison with external-dns
external-dns is an established project in the Kubernetes ecosystem. It's widely used in production environments to publish DNS records from all kinds of Kubernetes sources. The project was started in 2017 and is owned by the Kubernetes Network Special Interest Group.

Meanwhile, kubernetes-dns-sync is a personal project started in 2020 to solve several problems that external-dns had trouble handling. Because the codebase is smaller and younger, and supports fewer providers, kubernetes-dns-sync can be refactored more easily.
As discussed in #1923 External-dns project scope, the primary focus of external-dns is managing A and CNAME records. The project focuses on handling as many Kubernetes use-cases as possible, as long as they involve A or CNAME records.
octodns, a Python project, is referenced as an example of a tool better suited to managing more of a DNS zone. It doesn't do anything Kubernetes-specific, and the OSS distribution focuses on static records such as redirect domains, email hosting, and TXT verifications/SPF. The project also supports copying records between providers.
kubernetes-dns-sync tries to stand in between the two projects. We support several common Kubernetes -> DNS use-cases, including a CRD source that allows for arbitrary records. So you can have Ingresses furnishing CNAME or A/AAAA records alongside CRDs furnishing MX records for email and TXT records for verifications. With this scope compromise, a Kubernetes cluster becomes capable of managing all of the records in a given DNS zone.
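To illustrate, here is a sketch of a record CRD covering the email use-case, written against the DNSEndpoint schema that external-dns's CRD source defines. Whether kubernetes-dns-sync accepts this exact schema, and all names and values shown (the metadata name, the example domain, the SPF string), are assumptions for illustration only:

```yaml
# Hypothetical DNSEndpoint resource: MX + TXT records for a zone's email setup.
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: mail-records   # illustrative name
spec:
  endpoints:
    - dnsName: example.com
      recordType: MX
      recordTTL: 3600
      targets:
        - "10 mail.example.com"
    - dnsName: example.com
      recordType: TXT
      targets:
        - "v=spf1 mx -all"
```

A resource like this would sit alongside Ingress-derived A/AAAA/CNAME records in the same zone.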
The record registries for external-dns and kubernetes-dns-sync have a lot in common: both projects create very similar records, though the actual implementations are quite different nowadays. Once external-dns creates a registry TXT record, it basically leaves the record alone, so if the record becomes outdated (e.g. because the source Kubernetes resource changes), it stays that way. kubernetes-dns-sync instead runs registry records through the same comparison engine that all other records go through, so it maintains its own TXT records as needed.
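For context, external-dns's registry stores ownership in a TXT record alongside the managed name. A sketch of what such a record typically looks like (the FQDN, TTL, and owner id here are illustrative):

```
; ownership TXT record in the external-dns registry style
app.example.com.  300  IN  TXT  "heritage=external-dns,external-dns/owner=my-cluster"
```

It is this kind of record that kubernetes-dns-sync also diffs and updates like any other managed record, rather than writing once and forgetting.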
This project is mostly compatible with inheriting DNS records in a zone previously managed by external-dns. The primary difference is that each record type is now explicitly registered/owned. This means that if a managed subdomain (FQDN) also has extra records such as MX in the provider, kubernetes-dns-sync will initially assume it is supposed to manage those extra records. This record-type ownership is only a concern when inheriting external-dns registry records. It's highly recommended to run kubernetes-dns-sync without the --yes parameter during first setup.
I tried using external-dns for more serious DNS management (records for a whole zone, such as managing apex records pointing to dual-stack CDNs) in late 2020 and ran into numerous issues:
- Lack of AAAA support from most sources
  - For example: If a node has an IPv6 ExternalIP, external-dns tries adding it as an A record anyway (which fails)
  - Open issue: Tracking for ingress
  - Open issue: Tracking for service
- Lack of AAAA, TXT, or MX 'planning' support overall
  - external-dns can't be used to manage email even with the CRD source :(
  - Open PR: adding AAAA specifically
  - Merged in 2021: Support for NS specifically
- Lack of partial ownership - won't add A to the apex record if any TXTs already exist there
- CRD source lacks event stream support
- CRD source doesn't provide strong feedback in the Status key
- Need to run multiple external-dns instances for multi-provider, differing annotation filters, etc.
  - I was up to 5 for a split-horizon setup... Should only be 2 at most
  - Open issue: Support multiple providers via a CRD operator
- Individual providers like Vultr can be quite behind
  - Vultr provider entirely lacks multiple-target support (DNS round-robin)
    - Can't find an issue for this as of Feb 2022
  - Vultr provider continuously updates on invalid TTL
    - Can't find an issue for this as of Feb 2022
  - Vultr provider was using v1 of their API and making repetitive API calls; v2 is the latest API version
After trying to refactor enough to support several of these needs, I decided to try my hand at a from-scratch replacement. I ended up learning a fair bit about the related issues in the process!