docs/local-cre/environment/advanced.md
This page groups the advanced Local CRE topics in one place.
Local CRE can bootstrap the billing service with `--with-billing`, and the smoke package also includes billing coverage. Use this when validating workflow billing integrations locally rather than only running the base DON stack.
New Local CRE extension work should be expressed as a feature, not through the older installable-capability path.
At a high level, adding a feature means:
- `system-tests/lib/cre/features/...`
- `PreEnvStartup` and `PostEnvStartup`

In the current implementation:

- `InstallableCapability` is deprecated for new work
- `Feature` is the primary interface for new capabilities and related setup

This is contributor-facing material. If you are only choosing or running topologies, use Topologies and Capabilities instead.
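As a rough illustration of the lifecycle split, a feature hooks in before and after environment startup. The interface below is a hypothetical sketch, not the real definition from `system-tests/lib/cre/features`; signatures there differ.

```go
package main

import "fmt"

// Feature is a simplified stand-in for the real interface under
// system-tests/lib/cre/features; only the lifecycle split is the point here.
type Feature interface {
	// PreEnvStartup runs before the environment containers come up.
	PreEnvStartup() error
	// PostEnvStartup runs once the environment is reachable.
	PostEnvStartup() error
}

// billingFeature is a toy implementation used only to show the flow.
type billingFeature struct{ started bool }

func (f *billingFeature) PreEnvStartup() error  { return nil }
func (f *billingFeature) PostEnvStartup() error { f.started = true; return nil }

// runFeature drives the two hooks around the (elided) environment startup.
func runFeature(f Feature) error {
	if err := f.PreEnvStartup(); err != nil {
		return err
	}
	// ... environment starts here ...
	return f.PostEnvStartup()
}

func main() {
	f := &billingFeature{}
	fmt.Println(runFeature(f) == nil && f.started)
}
```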
Local CRE supports swapping:
Use hot swapping when you want to refresh part of the running environment without doing a full `env stop` and `env start`.
To recreate the Chainlink node containers with updated images or rebuilt local code:
```bash
go run . env swap nodes
```
This command:
Useful flags:
- `--force` to force container removal
- `--wait-time` to control how long Local CRE waits before retrying after removal/startup issues

Use `env swap nodes` when:
The capability swap flow is still the fastest local loop when you need to rebuild a plugin without recreating the whole environment.
Typical command:
```bash
go run . env swap capability -n cron -b /path/to/cron
```
`env swap capability` supports:
- `--name`
- `--binary`
- `--force`

This command:
Use `env swap capability` when:
Only capabilities supported by the swappable capability provider can be hot-reloaded this way.
Use this part of the stack when you need to answer questions like:
- `--with-beholder`: starts the Beholder stack used by the CRE tests for workflow-related messages and heartbeat validation.
- `--with-observability`: starts the observability stack.
- `--with-dashboards`: starts observability and provisions the Grafana dashboards used for inspection.
- `go run . obs up`: manages the observability stack directly from the Local CRE CLI.

Use plain container logs first when:
Use Beholder when:
Use observability and dashboards when:
In Local CRE, DX refers to usage tracking for the Local CRE tooling itself, not workflow/node telemetry.
The CLI records events such as:
The tracker configuration in the Local CRE code uses:
- `API_TOKEN_LOCAL_CRE`
- `local_cre`

This is separate from observability, Loki, Grafana, Beholder logs, or any workflow-level tracing.
If you are debugging Local CRE usage instrumentation, look at the tracking hooks in the environment commands. If you are debugging workflow execution, logs, or message flow, use the observability and Beholder paths described above instead.
- `--with-beholder` if the scenario depends on workflow messages
- `--with-observability` or `--with-dashboards` when you need Grafana/Loki

Use this path when you need reproducible node images or stable node identity instead of whatever Local CRE generates from the working tree.
By default, most Docker-based topologies build the Chainlink node image from the local checkout:
```toml
[nodesets.node_specs.node]
docker_ctx = "../../../.."
docker_file = "core/chainlink.Dockerfile"
```
To pin a prebuilt image instead, replace the build settings with an explicit image:
```toml
[nodesets.node_specs.node]
image = "chainlink-tmp:latest"
```
Apply that change to every node spec in the nodeset that should use the pinned image.
Use explicit images when:
The example override file at `core/scripts/cre/environment/configs/examples/workflow-don-overrides.toml` shows this pattern in practice.
Local CRE normally generates fresh node keys. The lower-level CRE types also support importing an existing node-secrets payload instead of generating new keys.
That path is useful when you need:
The implementation detail to be aware of is:
- `NodeKeyInput.ImportedSecrets` bypasses key generation and imports existing encrypted node secrets

This is a contributor or integrator workflow rather than a normal Local CRE quickstart path. If you need stable keys, treat that as a deliberate topology/configuration change and validate the resulting peer IDs and on-chain addresses after startup.
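The generate-versus-import decision can be sketched as follows. `nodeKeyInput` here is a simplified stand-in; the real `NodeKeyInput` type in the lower-level CRE packages carries encrypted payloads and passwords, so treat this purely as an illustration of the branching.

```go
package main

import "fmt"

// nodeKeyInput is a simplified stand-in for the real NodeKeyInput type.
// A non-empty ImportedSecrets payload bypasses key generation.
type nodeKeyInput struct {
	ImportedSecrets string // encrypted node-secrets payload; empty means "generate fresh keys"
}

// resolveNodeKeys shows which path a node's keys take at startup.
func resolveNodeKeys(in nodeKeyInput) string {
	if in.ImportedSecrets != "" {
		// Import path: decrypt and load the existing secrets (elided).
		return "imported"
	}
	// Default path: generate fresh node keys.
	return "generated"
}

func main() {
	fmt.Println(resolveNodeKeys(nodeKeyInput{}))                              // generated
	fmt.Println(resolveNodeKeys(nodeKeyInput{ImportedSecrets: "ciphertext"})) // imported
}
```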
The default Local CRE flow uses local Anvil chains. Switch to external or public blockchains only when the workflow or capability truly depends on a non-local RPC.
When you stop using only local Anvil chains, you need all of the following to line up:
- `[[blockchains]]` entries must use the correct chain type and chain ID

For a non-local chain, the practical pattern is to provide a blockchain entry whose output URLs are already known, instead of asking Local CRE to spin up a local chain container.
Example:
```toml
[[blockchains]]
chain_id = "11155111"
type = "anvil"

[blockchains.out]
type = "anvil"
use_cache = true

[[blockchains.out.nodes]]
ws_url = "wss://0xrpc.io/sep"
http_url = "https://0xrpc.io/sep"
internal_ws_url = "wss://0xrpc.io/sep"
internal_http_url = "https://0xrpc.io/sep"
```
Then make sure the DON that needs that chain supports it. For example:
```toml
[[nodesets]]
name = "bootstrap-gateway"
supported_evm_chains = [1337, 11155111]
```
And if a workflow DON needs chain-specific capabilities on that chain, its capability list must include the matching flag, for example:
```toml
capabilities = ["read-contract-11155111", "write-evm-11155111"]
```
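The chain-suffix convention in that capability list can be expressed as a small helper; `chainCapability` is an illustrative name, not a function from the Local CRE codebase.

```go
package main

import "fmt"

// chainCapability composes a chain-specific capability flag from a base
// capability name and an EVM chain ID, following the
// "<capability>-<chain-id>" convention used in topology configs.
func chainCapability(base string, chainID uint64) string {
	return fmt.Sprintf("%s-%d", base, chainID)
}

func main() {
	// Build the Sepolia-specific flags from the example above.
	for _, base := range []string{"read-contract", "write-evm"} {
		fmt.Println(chainCapability(base, 11155111))
	}
	// Output:
	// read-contract-11155111
	// write-evm-11155111
}
```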
Use it when:
Before you run `env start`, verify:
Compared with the default Anvil flow, this setup is much more sensitive to RPC health, endpoint latency, and mismatches between topology config and the actual remote chain.
Kubernetes is the alternative infra mode to the default Docker setup.
Switch the topology to Kubernetes by setting:
```toml
[infra]
type = "kubernetes"
```
Use Kubernetes when:
Unlike Docker mode, Kubernetes mode assumes the nodes are already running in the cluster and Local CRE connects to them by generating the expected service URLs.
Before using Kubernetes mode, make sure you have:
- `kubectl` access to the cluster

The Kubernetes fields live under `infra.kubernetes`:
```toml
[infra]
type = "kubernetes"

[infra.kubernetes]
namespace = "my-namespace"
external_domain = "example.com"
external_port = 80
label_selector = "app=chainlink"
node_api_user = "[email protected]"
node_api_password = "secure-password-here"
```
What these fields are for:

- `namespace`: the namespace where the DON nodes are already running.
- `external_domain`: the domain used to derive externally reachable service URLs.
- `external_port`: the ingress port, usually 80.
- `label_selector`: the selector used to discover the relevant Chainlink pods.
- `node_api_user` and `node_api_password`: the credentials Local CRE uses to talk to the nodes.

In Docker mode, many topologies use:
```toml
docker_ctx = "../../../.."
docker_file = "core/chainlink.Dockerfile"
```
In Kubernetes mode, prefer explicit images instead:
```toml
[nodesets.node_specs.node]
image = "chainlink:your-tag"

[jd]
image = "job-distributor:your-tag"
```
Kubernetes is therefore the wrong choice for fast local code iteration and the right choice for image-based validation.
Kubernetes mode is designed to work with deployments that accept node-specific config overlays.
The expected model is:
The original Local CRE guidance called out the expected objects:
- `<node-name>-config-override`
- `<node-name>-secrets-override`

If your chart or deployment setup does not support that overlay pattern, Kubernetes mode will not behave like the standard Local CRE flow.
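The per-node overlay naming convention can be captured as trivial helpers (function names are illustrative, not taken from the Local CRE code):

```go
package main

import "fmt"

// Expected per-node overlay object names, following the
// <node-name>-config-override / <node-name>-secrets-override convention.
func configOverrideName(node string) string  { return node + "-config-override" }
func secretsOverrideName(node string) string { return node + "-secrets-override" }

func main() {
	fmt.Println(configOverrideName("workflow-0"))  // workflow-0-config-override
	fmt.Println(secretsOverrideName("workflow-0")) // workflow-0-secrets-override
}
```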
Representative Kubernetes-connected topology:
```toml
[[blockchains]]
chain_id = "1337"
type = "anvil"

[blockchains.out]
use_cache = true
type = "anvil"
family = "evm"
chain_id = "1337"

[[blockchains.out.nodes]]
ws_url = "wss://anvil-service-rpc.example.com"
http_url = "https://anvil-service-rpc.example.com"
internal_ws_url = "ws://anvil-service:8545"
internal_http_url = "http://anvil-service:8545"

[infra]
type = "kubernetes"

[infra.kubernetes]
namespace = "my-namespace"
external_domain = "example.com"
external_port = 80
label_selector = "app=chainlink"
node_api_user = "[email protected]"
node_api_password = "secure-password-here"

[jd]
csa_encryption_key = "d1093c0060d50a3c89c189b2e485da5a3ce57f3dcb38ab7e2c0d5f0bb2314a44"
image = "job-distributor:your-tag"
```
Local CRE derives service URLs in Kubernetes from naming conventions. Using `my-namespace` as an example:

- internal `http://workflow-bt-0:6688` becomes external `https://my-namespace-workflow-bt-0.example.com`
- internal `http://workflow-0:6688` becomes external `https://my-namespace-workflow-0.example.com`

This is why the namespace, external domain, and label selector matter so much in Kubernetes mode.
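A minimal sketch of that derivation, assuming the convention shown above (helper names are illustrative, not taken from the Local CRE code):

```go
package main

import "fmt"

// internalURL builds the in-cluster node API URL from the service name
// and the standard Chainlink API port.
func internalURL(node string) string {
	return fmt.Sprintf("http://%s:6688", node)
}

// externalURL builds the externally reachable URL from the namespace,
// the node/service name, and infra.kubernetes.external_domain.
func externalURL(namespace, node, domain string) string {
	return fmt.Sprintf("https://%s-%s.%s", namespace, node, domain)
}

func main() {
	fmt.Println(internalURL("workflow-bt-0"))
	fmt.Println(externalURL("my-namespace", "workflow-bt-0", "example.com"))
	// http://workflow-bt-0:6688
	// https://my-namespace-workflow-bt-0.example.com
}
```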
Before using Kubernetes:
- `topology show` and generated topology docs