# CRD Scope
This document explains CustomResourceDefinition (CRD) scope in Kubernetes: how CRDs can be defined as namespace-scoped or cluster-scoped resources.
<aside class="note">
<h1>CRD Scope vs Manager Scope</h1>

CRD scope is independent from manager scope. See Understanding Scopes for an explanation of how these two concepts differ.
</aside>

CRD scope determines the visibility and availability of custom resources:
| Scope | Description | Example Resources |
|---|---|---|
| Namespace-scoped (default) | Resources exist within a specific namespace | Deployments, Services, ConfigMaps, Pods |
| Cluster-scoped | Resources are global across the entire cluster | Nodes, ClusterRoles, Namespaces, PersistentVolumes |
## Namespace-Scoped CRDs (Default)

By default, Kubebuilder creates namespace-scoped CRDs:

```shell
kubebuilder create api --group cache --version v1alpha1 --kind Memcached
```
Generated CRD manifest:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: memcacheds.cache.example.com
spec:
  scope: Namespaced # Default
  group: cache.example.com
  names:
    kind: Memcached
    plural: memcacheds
  versions:
    - name: v1alpha1
      # ...
```
Custom resources are created in specific namespaces:
```shell
kubectl apply -f memcached.yaml -n my-namespace
kubectl get memcacheds -n my-namespace
```
When to use:

- Resources that belong to a specific application or team
- Multi-tenant clusters where isolation between namespaces is required
- Resources whose lifecycle should be tied to their namespace (deleting the namespace deletes the resources)

Considerations:

- Namespace-scoped resources can be governed by namespace-level RBAC (Roles and RoleBindings)
- The same resource name can exist in different namespaces without conflict
## Cluster-Scoped CRDs

Cluster-scoped CRDs create resources that are global across the entire cluster. When creating the API, use the `--namespaced=false` flag:

```shell
kubebuilder create api --group infrastructure --version v1 --kind Database --namespaced=false
```
Generated CRD manifest:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.infrastructure.example.com
spec:
  scope: Cluster # Cluster-scoped
  group: infrastructure.example.com
  names:
    kind: Database
    plural: databases
  versions:
    - name: v1
      # ...
```
Custom resources are cluster-wide (no namespace):
```shell
kubectl apply -f database.yaml
kubectl get databases # No namespace needed
```
When to use:

- Resources that represent cluster-wide infrastructure or configuration
- Resources that are not naturally owned by any single namespace

Examples:

- Cluster-wide policies, node configuration, or storage and networking infrastructure, analogous to built-in cluster-scoped resources such as Nodes, ClusterRoles, and PersistentVolumes
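The practical difference shows up when a controller looks up resources: a namespaced lookup needs a namespace plus a name, while a cluster-scoped lookup needs only a name. The following stdlib-only sketch mirrors the convention of controller-runtime's `client.ObjectKey` (the `ObjectKey` type below is an illustrative stand-in, not the real API):

```go
package main

import "fmt"

// ObjectKey is a stand-in for controller-runtime's client.ObjectKey
// (types.NamespacedName): cluster-scoped resources leave Namespace empty.
type ObjectKey struct {
	Namespace string
	Name      string
}

// String renders the key the way kubectl and client-go commonly do:
// "namespace/name" for namespaced resources, bare "name" otherwise.
func (k ObjectKey) String() string {
	if k.Namespace == "" {
		return k.Name // cluster-scoped: no namespace component
	}
	return k.Namespace + "/" + k.Name
}

func main() {
	namespaced := ObjectKey{Namespace: "my-namespace", Name: "memcached-sample"}
	clusterScoped := ObjectKey{Name: "prod-database"} // no namespace

	fmt.Println(namespaced)    // my-namespace/memcached-sample
	fmt.Println(clusterScoped) // prod-database
}
```

With the real client, the same distinction means `Get` for a cluster-scoped kind is called with an empty namespace in its key.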
## Changing the Scope of an Existing API

After creating an API, you can change its scope using the `+kubebuilder:resource:scope` marker.

For cluster-scoped:
```go
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:resource:scope=Cluster

// Database is the Schema for the databases API
type Database struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   DatabaseSpec   `json:"spec,omitempty"`
	Status DatabaseStatus `json:"status,omitempty"`
}
```
For namespace-scoped:
```go
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:resource:scope=Namespaced

// Memcached is the Schema for the memcacheds API
type Memcached struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MemcachedSpec   `json:"spec,omitempty"`
	Status MemcachedStatus `json:"status,omitempty"`
}
```
After updating markers, regenerate manifests:

```shell
make manifests
```
<aside class="warning">
<h1>Scope Changes Are Breaking</h1>

Changing CRD scope from Namespaced to Cluster (or vice versa) is a breaking change: the API server rejects updates to an existing CRD's `spec.scope`, so the CRD must be deleted and recreated, which deletes all existing custom resources of that kind. Only change scope during initial development, before any production usage.
</aside>

## RBAC Considerations

Controllers watching namespace-scoped CRDs declare the same RBAC markers regardless of manager scope:
```go
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
```
Generated RBAC (cluster-scoped manager):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
  - apiGroups: ["cache.example.com"]
    resources: ["memcacheds"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Generated RBAC (namespace-scoped manager):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: manager-namespace
rules:
  - apiGroups: ["cache.example.com"]
    resources: ["memcacheds"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Controllers watching cluster-scoped CRDs must use cluster-wide RBAC:
```go
//+kubebuilder:rbac:groups=infrastructure.example.com,resources=databases,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=infrastructure.example.com,resources=databases/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=infrastructure.example.com,resources=databases/finalizers,verbs=update
```
Generated RBAC (always ClusterRole for cluster-scoped CRDs):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
  - apiGroups: ["infrastructure.example.com"]
    resources: ["databases"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
Even if your manager is namespace-scoped (watches only one namespace), if it manages cluster-scoped CRDs, it still needs ClusterRole permissions for those resources.
Manager scope and CRD scope are independent:

- Manager scope controls which namespaces the manager's caches and controllers watch
- CRD scope is set by the CRD's `scope` field and controls resource visibility

## Conversion Webhooks and Scope

For namespace-scoped CRDs with multiple versions, conversion webhooks must account for namespace scope:
```go
//+kubebuilder:webhook:path=/convert,mutating=false,failurePolicy=fail,groups=cache.example.com,resources=memcacheds,verbs=create;update,versions=v1;v1beta1,name=cmemcached.kb.io,sideEffects=None,admissionReviewVersions=v1
```
The webhook must handle conversion for resources in any namespace. See the multi-version tutorial for details.
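Conversion itself can stay scope-agnostic because `ObjectMeta`, including `metadata.namespace`, is carried through unchanged; only the spec fields are translated between versions. A stdlib-only sketch of that idea, where the types and field names are illustrative stand-ins rather than the generated API types:

```go
package main

import "fmt"

// ObjectMeta is a minimal stand-in for metav1.ObjectMeta.
type ObjectMeta struct {
	Name      string
	Namespace string
}

// MemcachedV1Beta1 is a stand-in spoke version with a hypothetical Size field.
type MemcachedV1Beta1 struct {
	ObjectMeta
	Size int32
}

// MemcachedV1 is a stand-in hub version with a hypothetical Replicas field.
type MemcachedV1 struct {
	ObjectMeta
	Replicas int32
}

// ConvertTo sketches a spoke-to-hub conversion: ObjectMeta (including
// Namespace) is copied verbatim, so the converted object remains in the
// same namespace; only spec-level fields are mapped.
func (src *MemcachedV1Beta1) ConvertTo(dst *MemcachedV1) {
	dst.ObjectMeta = src.ObjectMeta
	dst.Replicas = src.Size
}

func main() {
	src := &MemcachedV1Beta1{
		ObjectMeta: ObjectMeta{Name: "memcached-sample", Namespace: "test-namespace"},
		Size:       3,
	}
	dst := &MemcachedV1{}
	src.ConvertTo(dst)
	fmt.Println(dst.Namespace, dst.Replicas) // test-namespace 3
}
```

The real Kubebuilder scaffolding expresses this through the Hub and Convertible interfaces; the point here is only that namespace information survives conversion untouched.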
## Testing Scope Behavior

Namespace-scoped:

```shell
# Create resource in namespace
kubectl apply -f config/samples/cache_v1alpha1_memcached.yaml -n test-namespace

# Verify it exists in that namespace only
kubectl get memcacheds -n test-namespace
kubectl get memcacheds -n other-namespace # Should not find it
```
Cluster-scoped:

```shell
# Create cluster-scoped resource (no namespace)
kubectl apply -f config/samples/infrastructure_v1_database.yaml

# Verify it's cluster-wide
kubectl get databases # No namespace needed
```
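The same checks the kubectl commands above perform can be modeled in a few lines: the API server keys namespaced kinds per namespace and cluster-scoped kinds globally. A toy stdlib model under that assumption (none of these names are real Kubernetes APIs):

```go
package main

import "fmt"

// fakeAPIServer is a toy model of resource storage by scope:
// namespaced kinds are stored per namespace, cluster-scoped kinds globally.
type fakeAPIServer struct {
	namespaced map[string]map[string]bool // namespace -> name -> exists
	cluster    map[string]bool            // name -> exists
}

func newFake() *fakeAPIServer {
	return &fakeAPIServer{
		namespaced: map[string]map[string]bool{},
		cluster:    map[string]bool{},
	}
}

func (s *fakeAPIServer) createNamespaced(ns, name string) {
	if s.namespaced[ns] == nil {
		s.namespaced[ns] = map[string]bool{}
	}
	s.namespaced[ns][name] = true
}

func (s *fakeAPIServer) getNamespaced(ns, name string) bool { return s.namespaced[ns][name] }

func (s *fakeAPIServer) createCluster(name string) { s.cluster[name] = true }

func (s *fakeAPIServer) getCluster(name string) bool { return s.cluster[name] }

func main() {
	s := newFake()
	s.createNamespaced("test-namespace", "memcached-sample")
	s.createCluster("prod-database")

	// A namespaced resource is visible only in its own namespace.
	fmt.Println(s.getNamespaced("test-namespace", "memcached-sample"))  // true
	fmt.Println(s.getNamespaced("other-namespace", "memcached-sample")) // false

	// A cluster-scoped resource is looked up without any namespace.
	fmt.Println(s.getCluster("prod-database")) // true
}
```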