# Istio CNI Node Agent
The Istio CNI Node Agent is responsible for several things:

- Installing an Istio CNI plugin binary on each node's filesystem, updating that node's CNI config (e.g. in `/etc/cni/net.d`), and watching the config and binary paths to reinstall if things are modified.
- In sidecar mode, configuring sidecar networking for pods, removing the need for the `NET_ADMIN` privileged `initContainers` container, `istio-init`, injected into pods along with `istio-proxy` sidecars. This removes the need for a privileged, `NET_ADMIN` container in the Istio users' application pods.

## Building

The Istio `cni-plugin` has a hard dependency on Linux. Some efforts have been made to allow non-functional builds on non-Linux OSes, but these are not universal; for all practical purposes, only building on Linux is supported. If you are on a non-Linux development environment, use `make shell`.

Almost any Linux architecture supported by Go should work, but Istio is only tested on AMD64 and ARM64.
## Privileges required

Regardless of mode, the Istio CNI Node Agent requires privileged node permissions, and will require allow-listing in constrained environments that block privileged workloads by default. When using sidecar repair mode or ambient mode, the node agent additionally needs permission to enter pod network namespaces and perform networking configuration in them. If either sidecar repair or ambient mode is enabled, on startup the container will drop all Linux capabilities (`drop: ALL`) and re-add only the ones that sidecar repair/ambient explicitly require to function.
## Ambient mode details

See the architecture doc.
Broadly, `istio-cni` accomplishes ambient redirection by instructing ztunnel to set up sockets within the application pod network namespace, so that ztunnel can send and receive redirected traffic from inside the pod, and by setting up iptables rules that funnel traffic through that socket "tube" to ztunnel and back.
This effectively behaves like ztunnel is an in-pod sidecar, without actually requiring the injection of ztunnel as a sidecar into the pod manifest, or mutating the application pod in any way.
Additionally, it does not require any network rules/routing/config in the host network namespace, which greatly increases ambient mode compatibility with 3rd-party CNIs. In virtually all cases, this "in-pod" ambient CNI is exactly as compatible with 3rd-party CNIs as sidecars are/were.
### Environment variables

| Env Var | Default | Purpose |
|---|---|---|
| `HOST_PROBE_SNAT_IP` | `169.254.7.127` | Applied to SNAT host probe packets, so they can be identified/skipped pod-side. To override the default SNAT IP, use any address from the `169.254.0.0/16` block. |
| `HOST_PROBE_SNAT_IPV6` | `fd16:9254:7127:1337:ffff:ffff:ffff:ffff` | IPv6 link-local ranges are designed to be collision-resistant by default, so this probably never needs to be overridden. |
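As the table notes, any override for `HOST_PROBE_SNAT_IP` must come from the `169.254.0.0/16` link-local block. A minimal sketch of that constraint (illustrative only; `validSNATOverride` is a made-up helper, not istio-cni code):

```go
package main

import (
	"fmt"
	"net/netip"
)

// snatRange is the IPv4 link-local block that HOST_PROBE_SNAT_IP overrides
// must be drawn from (see the table above).
var snatRange = netip.MustParsePrefix("169.254.0.0/16")

// validSNATOverride reports whether ip is an acceptable HOST_PROBE_SNAT_IP
// value. (Hypothetical helper for illustration.)
func validSNATOverride(ip string) bool {
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return false
	}
	return snatRange.Contains(addr)
}

func main() {
	fmt.Println(validSNATOverride("169.254.7.127")) // default value: true
	fmt.Println(validSNATOverride("10.0.0.1"))      // not link-local: false
}
```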
Istio CNI injection is currently based on the same Pod annotations used in init-container/inject mode.
The annotation-based control is currently only supported in 'sidecar' mode. See `plugin/redirect.go` for details.
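The effect of those annotations can be pictured with a small sketch (a simplification for illustration, not the actual `plugin/redirect.go` logic): redirection is skipped when injection is explicitly disabled or the pod carries no injection status.

```go
package main

import "fmt"

// shouldRedirect sketches the annotation checks: redirection is skipped when
// sidecar.istio.io/inject is "false", or when the pod has no
// sidecar.istio.io/status annotation (i.e. it was never injected).
// (Simplified illustration, not the real plugin code.)
func shouldRedirect(annotations map[string]string) bool {
	if annotations["sidecar.istio.io/inject"] == "false" {
		return false
	}
	if _, injected := annotations["sidecar.istio.io/status"]; !injected {
		return false
	}
	return true
}

func main() {
	fmt.Println(shouldRedirect(map[string]string{
		"sidecar.istio.io/status": "{}",
	})) // true
	fmt.Println(shouldRedirect(map[string]string{
		"sidecar.istio.io/inject": "false",
	})) // false
}
```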
The code automatically detects the `proxyUID` and `proxyGID` from `RunAsUser`/`RunAsGroup` and excludes them from interception, defaulting to `1337` when unset.
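A minimal sketch of that defaulting behavior (hypothetical helper; the actual detection reads the pod's securityContext):

```go
package main

import "fmt"

// defaultProxyUID mirrors the 1337 default described above.
const defaultProxyUID = int64(1337)

// resolveProxyUID picks the UID to exclude from traffic interception:
// the pod's RunAsUser if set, otherwise the 1337 default.
// (Illustrative sketch, not the actual plugin code.)
func resolveProxyUID(runAsUser *int64) int64 {
	if runAsUser != nil {
		return *runAsUser
	}
	return defaultProxyUID
}

func main() {
	uid := int64(4242)
	fmt.Println(resolveProxyUID(&uid)) // 4242
	fmt.Println(resolveProxyUID(nil))  // 1337
}
```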
## Installation

Components:

- `install-cni` DaemonSet - its main function is to install and maintain the node CNI, but it is also a proper server that interacts with Kubernetes, watching Pods for recovery.
- `istio-cni-config` ConfigMap with the CNI plugin config to add to the CNI plugin chained config.
- `istio-cni` ClusterRoleBinding to allow gets on pods' info and delete/modifications for recovery.

The `install-cni` container:

- copies `istio-cni` and `istio-iptables` to `/opt/cni/bin`
- detects the node's CNI config file (`.conf`, `.conflist`), which can be set explicitly via the `CNI_CONF_NAME` env var
- inserts the plugin config given by `CNI_NETWORK_CONFIG` into the plugins list in `/etc/cni/net.d/${CNI_CONF_NAME}`

The `istio-cni` plugin binary, installed in `/opt/cni/bin`, invokes `istio-iptables` with params to set up the pod netns.
`CmdAdd` is triggered when a new pod is created. This runs on the node, in a chain of CNI plugins - Istio is
run after the main CNI plugin sets up the pod IP and networking.
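Conceptually, the plugin then re-enters the pod's network namespace to program redirection; a sketch of building that invocation follows (`buildRedirectCmd` is a hypothetical helper; the real plugin uses the containernetworking `skel` library and richer arguments):

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildRedirectCmd builds the command the CNI plugin conceptually runs to
// configure redirection inside the pod's network namespace. netnsPath is the
// pod netns path handed to the chained plugin by the container runtime.
// (Hypothetical helper for illustration only.)
func buildRedirectCmd(netnsPath string, extraArgs ...string) *exec.Cmd {
	args := append([]string{"--net=" + netnsPath, "/opt/cni/bin/istio-iptables"}, extraArgs...)
	return exec.Command("nsenter", args...)
}

func main() {
	cmd := buildRedirectCmd("/var/run/netns/cni-1234")
	fmt.Println(cmd.Args) // [nsenter --net=/var/run/netns/cni-1234 /opt/cni/bin/istio-iptables]
}
```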
The plugin enters the pod network namespace and runs `istio-iptables`:

```console
nsenter --net=<k8s pod netns> /opt/cni/bin/istio-iptables ...
```

The following conditions will prevent the redirect rules from being set up in a pod:
- the pod has `sidecar.istio.io/inject` set to `false`, or has no `sidecar.istio.io/status` key in its annotations
- the pod has an `istio-init` initContainer - this indicates a pod running its own injection setup

### Logging

Debug logging can be enabled via istioctl/helm with `values.global.logging.level="cni:debug,ambient:debug"`, then inspecting the logs of the `istio-cni` DaemonSet pod on the specific node.

The CNI plugins are executed by threads in the kubelet process, so the CNI plugin logs end up in the syslog
under the kubelet process. On systems with journalctl, the following is an example command line
to view the last 1000 kubelet logs via the `less` utility to allow for vi-style searching:
```console
$ journalctl -t kubelet -n 1000 | less
```
Each GKE cluster will have many categories of logs collected by Stackdriver. Logs can be monitored via
the project's log viewer and/or the `gcloud logging read`
capability.
The following example grabs the last 10 kubelet logs containing the string "cmdAdd" in the log message.
```console
$ gcloud logging read "resource.type=k8s_node AND jsonPayload.SYSLOG_IDENTIFIER=kubelet AND jsonPayload.MESSAGE:cmdAdd" --limit 10 --format json
```
The framework for this implementation of the CNI plugin is based on the containernetworking sample plugin.

The details for the deployment & installation of this plugin were pretty much lifted directly from the Calico CNI plugin. Specifically:

- the `calico-node` DaemonSet and its `install-cni` container deployment