docs/book/src/migration/namespace-scoped.md
This guide covers converting existing cluster-scoped projects to namespace-scoped deployment.
<aside class="note">
<h1>Creating New Namespace-Scoped Projects</h1>

If you're creating a new project, simply use:

```shell
kubebuilder init --domain example.com --namespaced
```

All files, including `cmd/main.go` and the RBAC configuration, will be scaffolded correctly. All controllers created with `kubebuilder create api` will automatically have the `namespace=` parameter in their RBAC markers. No manual changes or migration steps are needed.
</aside>
By default, Kubebuilder scaffolds cluster-scoped managers that watch and manage resources across all namespaces. This guide shows how to convert an existing cluster-scoped project to a namespace-scoped deployment, limiting the manager to watching only specific namespace(s).

- Use namespace-scoped when the manager should only watch and manage resources in specific namespace(s), for example for tenant isolation or least-privilege RBAC.
- Use cluster-scoped (the default) when the manager must watch resources across all namespaces.
<aside class="note">
<h1>Using an AI Assistant?</h1>

This migration involves updating RBAC markers across multiple controller files. If you're using an AI coding assistant, see the AI-Assisted Migration section for ready-to-use instructions.
</aside>

**Quick Summary:**

1. `kubebuilder edit --namespaced --force` - scaffolds Role/RoleBinding and updates `manager.yaml`
2. Manually add the `namespace=` parameter to the RBAC markers in existing controller files
3. `make manifests` - regenerate RBAC from the updated markers

The `edit` command scaffolds the RBAC files and updates `manager.yaml` automatically (with `--force`), but it cannot update existing controller files or `cmd/main.go`. You must manually update:

- `cmd/main.go` - add `WATCH_NAMESPACE` handling and the namespace-scoped cache configuration
- Existing controller files - add the `namespace=` parameter to their RBAC markers

Note: New controllers created after enabling namespaced mode will have correct RBAC markers automatically.
## Step 1: Run the edit Command

```shell
kubebuilder edit --namespaced --force
```
This command automatically:

- Sets `namespaced: true` in your PROJECT file
- Scaffolds `config/rbac/role.yaml` with `kind: Role` (namespace-scoped)
- Scaffolds `config/rbac/role_binding.yaml` with `kind: RoleBinding`
- Updates `config/manager/manager.yaml` with the `WATCH_NAMESPACE` environment variable
- Regenerates the admin/editor/viewer roles with `kind: Role` (namespace-scoped) for all existing APIs

Note: The `--force` flag regenerates `config/manager/manager.yaml`. Without `--force`, you must manually add `WATCH_NAMESPACE` (see below).
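For reference, this is roughly how the PROJECT file records the new scope after the command runs. A sketch only: the `domain` and `projectName` values are placeholders, and the surrounding fields of your PROJECT file are elided.

```yaml
# PROJECT (excerpt, sketch) - scope recorded by `kubebuilder edit --namespaced`
domain: example.com        # placeholder
projectName: myproject     # placeholder
namespaced: true
```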
## Step 2: Update cmd/main.go

The `edit` command cannot update `cmd/main.go` automatically. You must manually add the namespace-scoped configuration.
a. Add the import (the helpers below also use `fmt`, `os`, and `strings`, so make sure those are imported as well):

```go
import (
	// ... existing imports ...
	"sigs.k8s.io/controller-runtime/pkg/cache"
)
```
b. Add helper functions (after init() and before main()):
// getWatchNamespace returns the namespace(s) the manager should watch for changes.
// It reads the value from the WATCH_NAMESPACE environment variable.
func getWatchNamespace() (string, error) {
watchNamespaceEnvVar := "WATCH_NAMESPACE"
ns, found := os.LookupEnv(watchNamespaceEnvVar)
if !found {
return "", fmt.Errorf("%s must be set", watchNamespaceEnvVar)
}
return ns, nil
}
```go
// setupCacheNamespaces configures the cache to watch specific namespace(s),
// given as a comma-separated list.
func setupCacheNamespaces(namespaces string) cache.Options {
	defaultNamespaces := make(map[string]cache.Config)
	for _, ns := range strings.Split(namespaces, ",") {
		defaultNamespaces[strings.TrimSpace(ns)] = cache.Config{}
	}
	return cache.Options{
		DefaultNamespaces: defaultNamespaces,
	}
}
```
c. In the `main()` function, before `ctrl.NewManager()`, add:

```go
// Get the namespace(s) for namespace-scoped mode from the WATCH_NAMESPACE environment variable.
watchNamespace, err := getWatchNamespace()
if err != nil {
	setupLog.Error(err, "Unable to get WATCH_NAMESPACE")
	os.Exit(1)
}
```
d. Update the manager creation to use the namespace-scoped cache:

```go
mgrOptions := ctrl.Options{
	Scheme:                 scheme,
	Metrics:                metricsServerOptions,
	WebhookServer:          webhookServer,
	HealthProbeBindAddress: probeAddr,
	LeaderElection:         enableLeaderElection,
	LeaderElectionID:       "your-leader-election-id",
	// ... other existing options ...
}

// Configure the cache to watch the namespace(s) specified in WATCH_NAMESPACE.
mgrOptions.Cache = setupCacheNamespaces(watchNamespace)
setupLog.Info("Watching namespace(s)", "namespaces", watchNamespace)

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), mgrOptions)
if err != nil {
	setupLog.Error(err, "Failed to start manager")
	os.Exit(1)
}
```
If you ran `kubebuilder edit --namespaced` without `--force`, manually add `WATCH_NAMESPACE` to `config/manager/manager.yaml`:

```yaml
spec:
  template:
    spec:
      containers:
      - name: manager
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```

This uses the Kubernetes Downward API, so the manager watches the namespace it is deployed in. With `--force`, this is done automatically; skip this step if you used `--force`.
## Step 3: Update RBAC Markers in Existing Controllers

For each existing controller file, add the `namespace=` parameter to its RBAC markers.

Find the controller files by searching for the `Reconcile` method (`func (r *SomeReconciler) Reconcile(`) in `internal/controller/*_controller.go`.

For example, in `internal/controller/cronjob_controller.go`:
Before (cluster-scoped):

```go
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
```
After (namespace-scoped):

```go
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
```

Replace `<project-name>-system` with your actual namespace, found in `config/default/kustomization.yaml` under the `namespace:` field.
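If you have many controllers, a bulk search-and-replace can apply the change. The sketch below is a hypothetical helper, not part of the scaffold: the `NS` value and the `sed` pattern are assumptions, and the pattern naively inserts `namespace=` after every `groups=` parameter, so review the diff afterwards and skip files whose markers already carry a namespace.

```shell
NS="myproject-system"  # hypothetical; use the namespace from config/default/kustomization.yaml

# Demonstrate the substitution on a sample marker line:
echo '// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch' \
  | sed "s/groups=\([^,]*\),/groups=\1,namespace=${NS},/"
# -> // +kubebuilder:rbac:groups=apps,namespace=myproject-system,resources=deployments,verbs=get;list;watch

# Apply in place to every controller file (GNU sed), then review with `git diff`:
# sed -i "s/groups=\([^,]*\),/groups=\1,namespace=${NS},/" internal/controller/*_controller.go
```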
<aside class="note">
<h1>New Controllers Are Scaffolded Correctly</h1>

After running `kubebuilder edit --namespaced --force`, any new controllers you create will automatically have the `namespace=` parameter:

```shell
kubebuilder create api --group myapp --version v1 --kind MyNewKind --controller=true --resource=true
```

The generated controller will include:

```go
// +kubebuilder:rbac:groups=myapp.example.com,namespace=<project-name>-system,resources=mynewkinds,verbs=...
```

Only existing controllers need manual updates!
</aside>

## Step 4: Regenerate Manifests

After updating the RBAC markers in Step 3, regenerate the RBAC manifests:

```shell
make manifests  # Regenerate RBAC from the updated controller markers
```
Verify that the generated files show `kind: Role` instead of `kind: ClusterRole`.

`config/rbac/role.yaml`:

```yaml
kind: Role
metadata:
  name: manager-role
  # Note: the namespace is added by kustomize during build, not in the source
```

`config/rbac/*_editor_role.yaml`, `*_viewer_role.yaml`, `*_admin_role.yaml`:

```yaml
kind: Role
metadata:
  name: cronjob-editor-role
  # Note: the namespace is added by kustomize during build, not in the source
```
The config/rbac/metrics_auth_role.yaml will remain kind: ClusterRole - this is correct. The metrics authentication uses cluster-scoped APIs (TokenReview, SubjectAccessReview) and must stay cluster-scoped even in namespace-scoped projects.
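For context, the cluster-scoped rules in question look roughly like this. A sketch only: the exact scaffold in your project may differ.

```yaml
# config/rbac/metrics_auth_role.yaml (sketch) - stays a ClusterRole because
# TokenReview and SubjectAccessReview are cluster-scoped APIs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-auth-role
rules:
- apiGroups: ["authentication.k8s.io"]
  resources: ["tokenreviews"]
  verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
  resources: ["subjectaccessreviews"]
  verbs: ["create"]
```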
## Step 5: Test and Verify

Run the tests to verify everything works:

```shell
make generate  # Regenerate code
make test      # Run tests
```
Deploy and verify:

```shell
make deploy IMG=<your-image>

# Verify the RBAC is namespace-scoped (not cluster-scoped)
kubectl get role,rolebinding -n <manager-namespace>

# Test: create a resource in the manager's namespace - it should be reconciled
kubectl apply -f config/samples/ -n <manager-namespace>

# Test: create a resource in a different namespace - it should NOT be reconciled
kubectl apply -f config/samples/ -n other-namespace
```
<aside class="note">
<h1>Webhooks and Namespace Scope</h1>

If your project has webhooks, the manager cache is restricted to `WATCH_NAMESPACE`, but webhooks receive requests from all namespaces by default.

**The problem:** the webhook server receives admission requests from all namespaces, while the cache only holds data from `WATCH_NAMESPACE`. If a webhook handler queries the cache for an object outside the watched namespaces, the lookup fails.

**The solution:** configure a `namespaceSelector` or `objectSelector` on your webhooks to align the webhook scope with the cache. controller-gen currently has no markers for this, so you must add the selectors manually using kustomize patches.

See the Webhook Bootstrap Problem guide for detailed steps on creating and applying namespace selector patches.
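As an illustration, a patch restricting a validating webhook to the watched namespace might look like the following. A sketch only: the webhook configuration name, the webhook entry name, and the namespace value are assumptions to adapt to your project.

```yaml
# Hypothetical patch: scope the webhook to the watched namespace.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validating-webhook-configuration
webhooks:
- name: vcronjob.kb.io  # must match the webhook name in your manifests
  namespaceSelector:
    matchLabels:
      # Kubernetes sets this immutable label on every namespace (GA since v1.22).
      kubernetes.io/metadata.name: myproject-system  # hypothetical watched namespace
```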
</aside>

## AI-Assisted Migration

If you're using an AI coding assistant (Cursor, GitHub Copilot, etc.), you can automate the manual migration steps.
<aside class="note">
<h1>AI Migration Instructions</h1>

Instructions to provide to your AI assistant:

```text
I need to migrate this Kubebuilder project from cluster-scoped to namespace-scoped.

First, get the namespace value:
- Read config/default/kustomization.yaml and find the "namespace:" field
- Use that value for all namespace= parameters in RBAC markers

Context:
By default, Kubebuilder projects are cluster-scoped. Namespace-scoped projects watch only
specific namespace(s) via the WATCH_NAMESPACE environment variable.

References:
- Kubebuilder Book: https://book.kubebuilder.io/reference/manager-scope.html

Steps to execute:

1. Enable namespace-scoped mode:

   Run: kubebuilder edit --namespaced

   This automatically:
   - Updates the PROJECT file with namespaced: true
   - Scaffolds Role/RoleBinding (instead of ClusterRole/ClusterRoleBinding)
   - Regenerates admin/editor/viewer roles with kind: Role

2. Add WATCH_NAMESPACE to config/manager/manager.yaml:

   Find the manager container under spec.template.spec.containers (name: manager)
   and add the env section:

   spec:
     template:
       spec:
         containers:
         - name: manager
           env:
           - name: WATCH_NAMESPACE
             valueFrom:
               fieldRef:
                 fieldPath: metadata.namespace

3. Update cmd/main.go:

   a. Add the import:

      import (
          // ... existing imports ...
          "sigs.k8s.io/controller-runtime/pkg/cache"
      )

   b. Add these two helper functions after init() and before main():

      // getWatchNamespace returns the namespace(s) the manager should watch for changes.
      // It reads the value from the WATCH_NAMESPACE environment variable.
      func getWatchNamespace() (string, error) {
          watchNamespaceEnvVar := "WATCH_NAMESPACE"
          ns, found := os.LookupEnv(watchNamespaceEnvVar)
          if !found {
              return "", fmt.Errorf("%s must be set", watchNamespaceEnvVar)
          }
          return ns, nil
      }

      // setupCacheNamespaces configures the cache to watch specific namespace(s).
      func setupCacheNamespaces(namespaces string) cache.Options {
          defaultNamespaces := make(map[string]cache.Config)
          for _, ns := range strings.Split(namespaces, ",") {
              defaultNamespaces[strings.TrimSpace(ns)] = cache.Config{}
          }
          return cache.Options{
              DefaultNamespaces: defaultNamespaces,
          }
      }

   c. In main(), find ctrl.SetLogger() and add right after it:

      // Get the namespace(s) for namespace-scoped mode from the WATCH_NAMESPACE environment variable.
      watchNamespace, err := getWatchNamespace()
      if err != nil {
          setupLog.Error(err, "Unable to get WATCH_NAMESPACE")
          os.Exit(1)
      }

   d. Find the ctrl.NewManager() call and replace it with:

      mgrOptions := ctrl.Options{
          Scheme:                 scheme,
          Metrics:                metricsServerOptions,
          WebhookServer:          webhookServer,
          HealthProbeBindAddress: probeAddr,
          LeaderElection:         enableLeaderElection,
          LeaderElectionID:       "your-leader-election-id",
          // ... keep all other existing options from the original ctrl.NewManager call ...
      }

      // Configure the cache to watch the namespace(s) specified in WATCH_NAMESPACE
      mgrOptions.Cache = setupCacheNamespaces(watchNamespace)
      setupLog.Info("Watching namespace(s)", "namespaces", watchNamespace)

      mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), mgrOptions)
      if err != nil {
          setupLog.Error(err, "Failed to start manager")
          os.Exit(1)
      }

4. Update RBAC markers in existing controller files:

   Important: Only update RBAC markers in controller files (files containing a "Reconcile" function).
   Do not modify webhook files (files in internal/webhook/ or api/*/webhook.go).

   How to find controller files in this project:
   - Search for all Go files containing "func (r *" and "Reconcile("
   - Common locations: internal/controller/, internal/controller/*/, controllers/
   - File pattern: *_controller.go (but verify by checking for a Reconcile function)

   For EACH controller file found:
   - Locate ALL +kubebuilder:rbac markers in that file
   - Add the namespace=<value-from-kustomization> parameter to each marker

   Example transformation:

   Before:
   // +kubebuilder:rbac:groups=myapp.example.com,resources=mykinds,verbs=get;list;watch;create;update;patch;delete
   // +kubebuilder:rbac:groups=myapp.example.com,resources=mykinds/status,verbs=get;update;patch
   // +kubebuilder:rbac:groups=myapp.example.com,resources=mykinds/finalizers,verbs=update
   // +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
   // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete

   After:
   // +kubebuilder:rbac:groups=myapp.example.com,namespace=<value-from-kustomization>,resources=mykinds,verbs=get;list;watch;create;update;patch;delete
   // +kubebuilder:rbac:groups=myapp.example.com,namespace=<value-from-kustomization>,resources=mykinds/status,verbs=get;update;patch
   // +kubebuilder:rbac:groups=myapp.example.com,namespace=<value-from-kustomization>,resources=mykinds/finalizers,verbs=update
   // +kubebuilder:rbac:groups=core,namespace=<value-from-kustomization>,resources=events,verbs=create;patch
   // +kubebuilder:rbac:groups=apps,namespace=<value-from-kustomization>,resources=deployments,verbs=get;list;watch;create;update;patch;delete

   Important rules:
   - Add namespace= after the groups= parameter
   - Use the namespace value from config/default/kustomization.yaml
   - Update all +kubebuilder:rbac markers in each controller file
   - Do not modify webhook files - webhooks use certificate-based auth, not RBAC
   - Do not add namespace= to metrics-auth-role markers (those stay cluster-scoped)

5. Regenerate RBAC manifests:

   Run: make manifests

   This regenerates config/rbac/role.yaml from the updated controller markers.
   Verify it shows kind: Role (not ClusterRole).

6. Verify the migration:

   Run: make generate

   Verify the files were updated correctly:
   - config/rbac/role.yaml - should be kind: Role
   - config/manager/manager.yaml - should have the WATCH_NAMESPACE env var
   - cmd/main.go - should have the getWatchNamespace() and setupCacheNamespaces() functions
   - All controller files - should have namespace= in their RBAC markers

Done! After this migration:
- The project is now namespace-scoped
- Existing controllers have been updated with namespace= RBAC markers
- Future controllers created with `kubebuilder create api` will automatically include
  namespace= in their RBAC markers - no manual updates needed!
```
</aside>
## Watching Multiple Namespaces

The `WATCH_NAMESPACE` environment variable supports comma-separated values to watch multiple specific namespaces:

```yaml
env:
- name: WATCH_NAMESPACE
  value: "namespace-1,namespace-2,namespace-3"
```

Note: You'll need to create a Role/RoleBinding in each watched namespace for proper RBAC.
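For example, granting the manager's ServiceAccount its Role in one additional watched namespace might look like the following sketch. The resource names and namespaces are assumptions for your project, and a Role with the same rules must also exist in the target namespace.

```yaml
# Hypothetical: bind the manager's ServiceAccount to a Role in namespace-2.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manager-rolebinding
  namespace: namespace-2               # one of the watched namespaces
subjects:
- kind: ServiceAccount
  name: myproject-controller-manager   # hypothetical ServiceAccount name
  namespace: myproject-system          # namespace the manager is deployed in
roleRef:
  kind: Role
  name: manager-role                   # a matching Role must exist in namespace-2
  apiGroup: rbac.authorization.k8s.io
```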
## Reverting to Cluster-Scoped

To revert back to cluster-scoped mode:

```shell
kubebuilder edit --namespaced=false --force
```

This command automatically:

- Sets `namespaced: false` in your PROJECT file
- Regenerates `config/rbac/role.yaml` with `kind: ClusterRole`
- Regenerates `config/rbac/role_binding.yaml` with `kind: ClusterRoleBinding`
- With `--force`: regenerates `config/manager/manager.yaml` without the `WATCH_NAMESPACE` env var

Manual steps required:

- Remove the `namespace=` parameter from the RBAC markers in all controller files
- Run `make manifests` to regenerate the cluster-scoped RBAC
- In `cmd/main.go`:
  - Remove the `getWatchNamespace()` function
  - Remove the `setupCacheNamespaces()` function
  - Remove the now-unused imports (`fmt`, `strings`, `cache`) if not used elsewhere
- Without `--force`, manually remove `WATCH_NAMESPACE` from `config/manager/manager.yaml`

## Key Points

- Only `+kubebuilder:rbac` markers in controller files (files with a `Reconcile` function) need updating. Webhook files do NOT use RBAC markers - webhooks use certificate-based authentication with the API server.
- The `namespace=` parameter in controller RBAC markers determines whether controller-gen generates a `Role` (namespace-scoped) or a `ClusterRole` (cluster-scoped). Without the `namespace=` parameter, controller-gen always generates a `ClusterRole`.
- When you run `make manifests`, controller-gen regenerates `config/rbac/role.yaml` based on your controller RBAC markers. The initial Role scaffold from `kubebuilder edit --namespaced=true` serves as a template, but controller-gen manages the actual content.
- Use `namespace=<your-namespace>` in controller RBAC markers, typically `namespace=<project-name>-system` to match your deployment namespace.
- The `metrics-auth-role` uses cluster-scoped APIs (TokenReview, SubjectAccessReview) and correctly remains a `ClusterRole` without a namespace parameter.
- controller-gen does not support `namespaceSelector` or `objectSelector` markers for webhooks. See the webhook section above for details.