docs/contributing/dev-guide.md
external-dns is the work of thousands of contributors and is maintained by a small team within kubernetes-sigs. This document covers the basics of working with the external-dns codebase: it contains instructions to build, run, and test external-dns.
Building and/or testing external-dns requires additional tooling.
Additional Go-based tools are managed in go.tool.mod and used for code generation:
| Tool | Purpose |
|---|---|
| controller-gen | Generates CRD manifests and deepcopy methods |
| yq | YAML processing (splitting, filtering CRD outputs) |
| yamlfmt | YAML formatting |
List all installed Go tools:

```shell
make go-tools
```

Update Go tools to their latest versions:

```shell
make update-tools-deps
```
Note: Updates are done manually because Dependabot does not yet support go.tool.mod (dependabot-core#12050).
## Configure Development Environment
You need a working Go environment to build and test external-dns. Clone the repository:

```shell
git clone https://github.com/kubernetes-sigs/external-dns.git && cd external-dns
```
The project uses the make build system, which runs code generators, tests, and static code analysis.
Build, run tests, and lint the code:

```shell
make go-lint
make test
make cover-html
```
If you added any flags or metrics, re-generate the documentation:

```shell
make generate-flags-documentation
make generate-metrics-documentation
```
We require all changes to be covered by acceptance tests and/or unit tests, depending on the situation.
In the context of external-dns, acceptance tests exercise interactions with providers, such as creating, reading information about, and destroying DNS resources. In contrast, unit tests exercise functionality wholly within the codebase itself, such as individual functions.
Testing log messages within the codebase provides significant advantages, especially for debugging, monitoring, and gaining a deeper understanding of system behavior. The log library ships with built-in testing functionality for this.
To illustrate how to unit test log output within functions, consider the following example:
```go
import (
    "testing"

    log "github.com/sirupsen/logrus"

    "sigs.k8s.io/external-dns/internal/testutils"
)

func TestMe(t *testing.T) {
    hook := testutils.LogsUnderTestWithLogLevel(log.WarnLevel, t)

    // ... function under test ...

    testutils.TestHelperLogContains("example warning message", hook, t)
    // provide a negative assertion
    testutils.TestHelperLogNotContains("this message should not be shown", hook, t)
}
```
The DNSEndpoint CRD manifest is generated from Go types using controller-gen and must be regenerated whenever the types in endpoint/ or apis/ change.
```shell
make crd
```
This runs scripts/generate-crd.sh, which generates:

- DeepCopy methods for types in endpoint/ and apis/
- CRD manifests in config/crd/standard/ and charts/external-dns/crds/

The controller-gen.kubebuilder.io/version annotation in the generated YAML reflects the version of controller-gen from go.tool.mod at generation time and is updated automatically.
Integration tests live in tests/integration/ and verify behavior that spans multiple sources or wrappers together, using a fake Kubernetes client — no real cluster is required.
```mermaid
flowchart TD
    E2E["E2E Tests
    Real cluster + real DNS provider
    Slow · requires cloud credentials"]
    IT["Integration Tests ← tests/integration/
    Fake Kubernetes API · no cluster needed
    Tests source + wrapper combinations · fast
    Declarative YAML scenarios"]
    UT["Unit Tests
    One source or wrapper in isolation
    Mocked or minimal Kubernetes client"]
    E2E --> IT --> UT
    style IT fill:#bbf7d0,stroke:#15803d,stroke-width:2px
```
```mermaid
flowchart LR
    subgraph yaml["tests/integration/scenarios/tests.yaml"]
        RES["resources
        Service · Ingress · Pod"]
        CFG["config
        sources · filters · wrappers"]
        EXP["expected
        endpoints"]
    end
    subgraph toolkit["toolkit — fake Kubernetes"]
        PARSE["ParseResources()"]
        FAKE["fake.Clientset"]
        WRAP["CreateWrappedSource()"]
    end
    subgraph pipeline["ExternalDNS pipeline under test"]
        SRC["Source(s)
        service · ingress · ..."]
        WRP["Wrapper(s)
        dedup · targetFilter · NAT64"]
        OUT["Endpoints"]
    end
    ASSERT["ValidateEndpoints()
    DNSName · Targets
    RecordType · TTL"]
    RES --> PARSE --> FAKE --> WRAP
    CFG --> WRAP
    WRAP --> SRC --> WRP --> OUT --> ASSERT
    EXP --> ASSERT
```
When to add an integration test:
- You added a new source (e.g. service, ingress) and want to verify it produces the correct endpoints end-to-end.
- You want to test a combination of sources (e.g. service and ingress both pointing to the same hostname) and their combined output.

How to add a scenario:
Add an entry to tests/integration/scenarios/tests.yaml. Each scenario declares Kubernetes resources (Service, Ingress, etc.), the ExternalDNS source configuration, and the expected endpoints:
```yaml
- name: my-new-scenario
  description: >
    Brief explanation of what behavior this scenario validates.
  config:
    sources: ["service"]
  resources:
    - resource:
        apiVersion: v1
        kind: Service
        metadata:
          name: my-svc
          namespace: default
          annotations:
            external-dns.alpha.kubernetes.io/hostname: my.example.com
        spec:
          type: LoadBalancer
        status:
          loadBalancer:
            ingress:
              - ip: 1.2.3.4
  expected:
    - dnsName: my.example.com
      targets: ["1.2.3.4"]
      recordType: A
```
How to run:

```shell
go test ./tests/integration/...
```
It's possible to run ExternalDNS locally. CoreDNS can be used for easier testing. See the related tutorials for full instructions.
When submitting a pull request, you'll notice that we run several automated processes on your proposed change. Some of these processes are tests to ensure your contribution aligns with our standards. While we strive for accuracy, some users may find these tests confusing.
external-dns does not require make build. You can compile and run the Go program directly:

```shell
go run main.go \
  --provider=aws \
  --registry=txt \
  --source=fake \
  --log-level=info
```

For this command to run successfully, it requires AWS credentials and access to a local or remote cluster.
To run a local cluster, please refer to running local cluster below.

After building local images, it is often useful to deploy those images in a local cluster. We use minikube, but Kind or any other solution would work as well. For simplicity, minikube can be used to create a single-node cluster.

You can set a specific Kubernetes version by setting the node's container image. See basic controls within the configuration documentation for more details.
Once you have a configuration in place, create the cluster with that configuration:
```shell
minikube start \
  --profile=external-dns \
  --memory=2000 \
  --cpus=2 \
  --disk-size=5g \
  --kubernetes-version=v1.31 \
  --driver=docker
minikube profile external-dns
```
After the new Kubernetes cluster is ready, verify that it is running as a single-node cluster:
```shell
❯❯ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
external-dns   Ready    control-plane   16s   v1.31.4
```
When building local images with ko you can't specify the registry used to create the image names. It will always be ko.local.
Note: You can skip this step if you build and push the image to your private registry or use an official external-dns image.
```shell
❯❯ export KO_DOCKER_REPO=ko.local
❯❯ export VERSION=v1
❯❯ docker context use rancher-desktop  ## (optional) only required when using rancher-desktop
❯❯ ls -al /var/run/docker.sock         ## (optional) validate that the docker runtime is configured correctly and the symlink exists
❯❯ ko build --tags ${VERSION}
❯❯ docker images
$$ ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63   local-v1
```
Push image to minikube
Refer to load image
```shell
❯❯ minikube image load ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63:local-v1
❯❯ minikube image ls
$$ registry.k8s.io/pause:3.10
$$ ...
$$ ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63:local-v1
$$ ...
❯❯ kubectl run external-dns --image=ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63:local-v1 --image-pull-policy=Never
```
Build and push directly in minikube
Any docker command you run in the current terminal will run against the docker daemon inside the minikube cluster.
Refer to push directly
```shell
❯❯ eval $(minikube -p external-dns docker-env)
❯❯ echo $MINIKUBE_ACTIVE_DOCKERD
$$ external-dns
❯❯ export VERSION=v1
❯❯ ko build --local --tags ${VERSION}
❯❯ docker images
$$ REPOSITORY                                                TAG
$$ registry.k8s.io/kube-apiserver                            v1.31.4
$$ ....
$$ ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63    minikube-v1
$$ ...
❯❯ eval $(minikube docker-env -u)  ## unset minikube
```
Pushing to an in-cluster registry using the Registry addon
Refer to pushing images for a full configuration
```shell
❯❯ export KO_DOCKER_REPO=$(minikube ip):5000
❯❯ export VERSION=registry-v1
❯❯ minikube addons enable registry
❯❯ ko build --tags ${VERSION}
```
Build container image and push to a specific registry
```shell
make build.push IMAGE=your-registry/external-dns
```
To build local images if required, load them into a local cluster, and deploy the helm chart, run the following.
Render chart templates locally and display the output
```shell
❯❯ helm lint --debug charts/external-dns
❯❯ helm template external-dns charts/external-dns --output-dir _scratch
```
Deploy manifests to a cluster with required values
```shell
❯❯ kubectl apply -f _scratch --recursive=true
```
Modify chart or values and validate the diff
```shell
❯❯ helm template external-dns charts/external-dns --output-dir _scratch
❯❯ kubectl diff -f _scratch/external-dns --recursive=true --show-managed-fields=false
```
This helm chart comes with a JSON schema generated from the values with the helm schema plugin.
```shell
❯❯ scripts/helm-tools.sh --install
❯❯ scripts/helm-tools.sh --diff
❯❯ scripts/helm-tools.sh --schema
❯❯ scripts/helm-tools.sh --lint
❯❯ scripts/helm-tools.sh --docs
❯❯ make helm-test
```
Note: the kubernetes manifests are not up to date. Consider creating an examples folder.

```shell
kubectl apply -f kustomize --recursive=true --dry-run=client
```
All documentation lives in the docs folder. If a new page is added or removed, make sure mkdocs.yml is also updated.
Install the required dependencies. In order not to break system packages, use a virtual environment managed with pipenv.
```shell
❯❯ pipenv shell
❯❯ pip install -r docs/scripts/requirements.txt
❯❯ mkdocs serve
$$ ...
$$ Serving on http://127.0.0.1:8000/
```
Let's say we are improving the tutorial located in docs/tutorials/aws.md. Create a snippet in docs/snippets/aws/<snippet-name>.<snippet-extension> and include it from docs/tutorials/aws.md:

[[% raw %]]
```extension
[[% include 'snippets/aws/<snippet-name>.<snippet-extension>' %]]
```
[[% endraw %]]