{% answer %} GoFr services run unchanged on Istio or Linkerd because the framework speaks plain HTTP/gRPC. The mesh adds mTLS, traffic policy, and L7 telemetry through a sidecar — but you should pick one owner for retries and circuit breaking, since GoFr's HTTP client already provides both. {% /answer %}
GoFr already ships several patterns commonly cited as reasons to adopt a mesh:
- Retries and circuit breaking, configured per downstream service through `AddHTTPService` options (see `pkg/gofr/service/new.go`).
- `/.well-known/health` and `/.well-known/alive` endpoints for readiness/liveness probes.

A mesh becomes worth its sidecar overhead when you need:

- mTLS between workloads without touching application code.
- L7 traffic policy and telemetry applied uniformly across a polyglot fleet, not just GoFr services.
If your fleet is GoFr-only and you mainly want resilience, GoFr's built-in features may be enough.
In Istio, apply a PeerAuthentication policy in STRICT mode and a DestinationRule with tls.mode: ISTIO_MUTUAL. GoFr requires no change — the sidecar transparently terminates and re-encrypts traffic.
In Linkerd, mTLS is automatic between meshed pods. Annotate the namespace with linkerd.io/inject: enabled and redeploy.
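As a sketch of the Istio side — resource names, the namespace, and the host are placeholders, and the CRD versions should be checked against the Istio release you run:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: gofr-apps            # placeholder namespace
spec:
  mtls:
    mode: STRICT                  # reject plaintext from non-meshed clients
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: gofr-service
  namespace: gofr-apps
spec:
  host: gofr-service.gofr-apps.svc.cluster.local   # placeholder host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL          # sidecar originates mTLS to the destination
```

The GoFr pod keeps listening on plain HTTP; only the sidecars see TLS.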
For the exact CRD syntax, follow the canonical docs:
- https://istio.io/latest/docs/tasks/security/authentication/
- https://linkerd.io/2/features/automatic-mtls/

GoFr emits W3C TraceContext (`traceparent`, `tracestate`) on inbound requests and propagates them on outbound HTTP service calls. When you add a mesh:
- GoFr registers the `propagation.TraceContext` + `Baggage` propagators (see `pkg/gofr/otel.go`), which match the W3C standard Istio and Linkerd use, so mesh spans and GoFr spans stitch into one trace.
- Export GoFr's spans to the same collector the mesh reports to, configured via `TRACE_EXPORTER` and `TRACER_URL`.

This is where teams burn themselves. If both GoFr and the mesh retry, a 503 on a downstream service can multiply into 9+ requests (3 from GoFr times 3 from the mesh).
Recommendation: own resilience in one layer, not both.
- Own resilience in GoFr: register downstream services via `AddHTTPService` with `CircuitBreakerConfig` and `RetryConfig`, and turn off mesh-level retries and outlier detection for those routes.
- Own resilience in the mesh: call `AddHTTPService` without retry/circuit-breaker options.

GoFr's circuit breaker uses `/.well-known/alive` to probe recovery. If you delegate to the mesh, the mesh's outlier detection plays the same role.
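If GoFr owns resilience, the registration looks roughly like this. This is a sketch against GoFr's service options — the service name and address are placeholders, and field values such as `Threshold`, `Interval`, and `MaxRetries` should be checked against `pkg/gofr/service` for the version you run:

```go
package main

import (
	"time"

	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/service"
)

func main() {
	app := gofr.New()

	// GoFr owns retries and circuit breaking for this dependency, so
	// disable mesh-level retries and outlier detection on its routes
	// (e.g. an Istio VirtualService with retries.attempts: 0).
	app.AddHTTPService("payments", "http://payments:8000",
		&service.CircuitBreakerConfig{
			Threshold: 4,               // consecutive failures before the breaker opens
			Interval:  1 * time.Second, // recovery-probe interval against /.well-known/alive
		},
		&service.RetryConfig{
			MaxRetries: 3, // retries per logical call
		},
	)

	app.Run()
}
```

With this in place, a mesh route that still retries would reintroduce the multiplication described above, which is why the two configurations have to be kept in sync.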
A sidecar adds CPU, memory, and ~1–3ms of latency per hop. For a low-QPS internal service the overhead is usually fine; for a hot path with strict latency budgets, benchmark before adopting. GoFr's library-level resilience has no sidecar cost.
Set Kubernetes probes on the GoFr ports, not the sidecar:
```yaml
livenessProbe:
  httpGet:
    path: /.well-known/alive
    port: 8000
readinessProbe:
  httpGet:
    path: /.well-known/health
    port: 8000
```
/.well-known/alive is the liveness signal; /.well-known/health includes dependency status and may be slower.
{% faq %}
{% faq-item question="Do I need to change GoFr code to enable mTLS via Istio or Linkerd?" %}
No. The sidecar handles TLS at the network layer, so a plain HTTP listener inside the pod is fine. You only change Kubernetes manifests.
{% /faq-item %}
{% faq-item question="Should the mesh or GoFr own retries?" %}
Pick one. Running both layers at default settings can multiply request volume on a struggling downstream. If you keep GoFr's RetryConfig, disable mesh retries for those routes.
{% /faq-item %}
{% faq-item question="Will mesh-injected spans break GoFr tracing?" %}
No. GoFr uses W3C TraceContext, the same standard Istio and Linkerd use, so spans stitch together if both export to the same collector.
{% /faq-item %}
{% /faq %}