go-micro is designed for developer productivity and ease of use while maintaining good performance for most use cases. This document explains the performance characteristics and trade-offs.
go-micro uses Go's reflection package to enable its core feature: registering any Go struct as a service handler without code generation or boilerplate.
```go
// Simple handler registration - no proto files, no code generation
type GreeterService struct{}

func (g *GreeterService) SayHello(ctx context.Context, req *Request, rsp *Response) error {
	rsp.Message = "Hello " + req.Name
	return nil
}

server.Handle(server.NewHandler(&GreeterService{}))
```
This simplicity is only possible with reflection. Alternative approaches (like gRPC or psrpc) require .proto files and generated code.

Reflection adds approximately 40-60 microseconds (0.04-0.06ms) of overhead per RPC call, about ~50μs on average. The exact cost depends on the complexity of the handler signature and the request/response types.
Context: In typical RPC scenarios:
| Component | Typical Time |
|---|---|
| Network I/O | 1-10ms |
| Protobuf serialization | 0.1-0.5ms |
| Business logic | Variable (often 1-100ms+) |
| Reflection + framework overhead | ~0.06ms (0.6-6% of total) |
Reflection overhead is only significant when everything else in the request path is already cheap. For 99% of applications, database queries, external services, and business logic dominate performance, and reflection is negligible.
Always measure before assuming reflection is your bottleneck:

```go
// Enable pprof in your service. Importing the package registers the
// /debug/pprof/* handlers; you also need an HTTP listener for them.
import _ "net/http/pprof"

go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()
```

```sh
# Profile CPU usage
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
```
If reflection shows up as <5% of CPU time, optimizing elsewhere will have more impact.
Common optimization opportunities, such as transport choice, connection reuse, and codec selection (covered below), typically have 10-100x more impact than removing reflection.
go-micro supports multiple transports; choose based on your deployment:

```go
import (
	micro "go-micro.dev/v5"
	"go-micro.dev/v5/server/grpc"
)

// Use gRPC for better performance
service := micro.NewService(
	micro.Server(grpc.NewServer()),
)
```
Reuse connections to avoid handshake overhead:

```go
// Client-side connection pooling (enabled by default)
client := service.Client()
```
go-micro supports multiple codecs:

```go
// Protobuf (fastest, binary)
import "go-micro.dev/v5/codec/proto"

// JSON (human-readable, slower)
import "go-micro.dev/v5/codec/json"

// MessagePack (compact, fast)
import "go-micro.dev/v5/codec/msgpack"
```
Protobuf is 2-5x faster than JSON for most payloads.
If you've profiled and determined reflection is genuinely a bottleneck (rare), consider:
**gRPC with generated code**

Pros: lowest per-call overhead; no reflection in the hot path.

Cons: requires .proto files and a code-generation step.

Use when: you need absolute maximum performance and can invest in proto definitions.
**psrpc**

Pros: pub/sub primitives built in alongside RPC.

Cons: also requires .proto definitions and code generation.

Use when: you're building LiveKit-style distributed systems and need pub/sub primitives.
**Staying with go-micro**

Pros: no code generation; any Go struct can be registered as a handler.

Cons: ~50μs of reflection overhead per call.

Use when: developer productivity and code simplicity matter more than squeezing every microsecond.
Synthetic benchmarks (single request/response, no business logic):
| Framework | Latency (p50) | Throughput | Notes |
|---|---|---|---|
| Direct function call | ~1μs | 1M+ RPS | No serialization, no networking |
| go-micro (reflection) | ~60μs | ~16k RPS | ~50μs reflection + ~10μs framework |
| gRPC (generated code) | ~40μs | ~25k RPS | ~10μs codegen + ~30μs framework |
Real-world (with database, business logic):
| Scenario | go-micro | gRPC | Difference |
|---|---|---|---|
| REST API + DB | 15ms | 14.95ms | 0.3% |
| Microservice call | 5ms | 4.95ms | 1% |
| Batch processing | 100ms | 100ms | 0% |
Reflection overhead is lost in the noise for realistic workloads.
Possible future improvements (without removing reflection) include caching reflective method lookups and pooling the per-call reflect.Value slices. These could reduce reflection overhead by 50-70% while maintaining the simple API.
For most applications, go-micro's productivity benefits far outweigh the minimal reflection overhead.