API Reference

infra/feast-operator/docs/api/markdown/ref.md

Feast 0.63.0

Packages

feast.dev/v1

Package v1 contains API Schema definitions for the v1 API group

Resource Types

AuthzConfig

AuthzConfig defines the authorization settings for the deployed Feast services.

Appears in:

FieldDescription
kubernetes KubernetesAuthz
oidc OidcAuthz

AutoscalingConfig

AutoscalingConfig defines HPA settings for the FeatureStore deployment.

Appears in:

FieldDescription
minReplicas integerMinReplicas is the lower limit for the number of replicas. Defaults to 1.
maxReplicas integerMaxReplicas is the upper limit for the number of replicas. Required.
metrics MetricSpec arrayMetrics contains the specifications used to calculate the desired replica count.
If not set, defaults to 80% CPU utilization.
behavior HorizontalPodAutoscalerBehaviorBehavior configures the scaling behavior of the target.
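Putting these fields together, an autoscaling block might look like the following sketch (the FeatureStore name, project id, and thresholds are hypothetical; `metrics` entries use the standard Kubernetes MetricSpec shape):

```yaml
apiVersion: feast.dev/v1
kind: FeatureStore
metadata:
  name: sample-autoscaled        # hypothetical name
spec:
  feastProject: my_project
  services:
    scaling:
      autoscaling:
        minReplicas: 2           # lower bound (defaults to 1 if omitted)
        maxReplicas: 10          # required upper bound
        metrics:                 # omit to default to 80% CPU utilization
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70
```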

BatchEngineConfig

BatchEngineConfig defines the batch compute engine configuration.

Appears in:

FieldDescription
configMapRef LocalObjectReferenceReference to a ConfigMap containing the batch engine configuration.
The ConfigMap should contain YAML-formatted config with 'type' and engine-specific fields.
configMapKey stringKey name in the ConfigMap. Defaults to "config" if not specified.
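As a sketch, the ConfigMap wiring for a batch engine could look like this (the ConfigMap name and engine type are illustrative, not taken from this reference):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: batch-engine-config            # hypothetical name
data:
  config: |                            # default key when configMapKey is unset
    type: spark.engine                 # illustrative engine type; add
                                       # engine-specific fields below it
---
apiVersion: feast.dev/v1
kind: FeatureStore
metadata:
  name: sample
spec:
  feastProject: my_project
  batchEngine:
    configMapRef:
      name: batch-engine-config
    configMapKey: config
```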

ContainerConfigs

ContainerConfigs defines k8s container settings for the server

Appears in:

FieldDescription
image string
env EnvVar
envFrom EnvFromSource
imagePullPolicy PullPolicy
resources ResourceRequirements
nodeSelector map[string]string

CronJobContainerConfigs

CronJobContainerConfigs defines k8s container settings for the CronJob

Appears in:

FieldDescription
image string
env EnvVar
envFrom EnvFromSource
imagePullPolicy PullPolicy
resources ResourceRequirements
nodeSelector map[string]string
commands string arrayArray of commands to be executed (in order) against a Feature Store deployment.
Defaults to "feast apply" & "feast materialize-incremental $(date -u +'%Y-%m-%dT%H:%M:%S')"

DefaultCtrConfigs

DefaultCtrConfigs defines k8s container settings that are applied by default

Appears in:

FieldDescription
image string

FeastCronJob

FeastCronJob defines a CronJob to execute against a Feature Store deployment.

Appears in:

FieldDescription
annotations object (keys:string, values:string)Annotations to be added to the CronJob metadata.
jobSpec JobSpecSpecification of the desired behavior of a job.
containerConfigs CronJobContainerConfigs
schedule stringThe schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
timeZone stringThe time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
If not specified, this will default to the time zone of the kube-controller-manager process.
The set of valid time zone names and the time zone offset is loaded from the system-wide time zone
database by the API server during CronJob validation and the controller manager during execution.
If no system-wide time zone database can be found a bundled version of the database is used instead.
If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host
configuration, the controller will stop creating new Jobs and will create a system event with the
reason UnknownTimeZone.
More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones
startingDeadlineSeconds integerOptional deadline in seconds for starting the job if it misses scheduled
time for any reason. Missed jobs executions will be counted as failed ones.
concurrencyPolicy ConcurrencyPolicySpecifies how to treat concurrent executions of a Job.
Valid values are:
  • "Allow" (default): allows CronJobs to run concurrently;
  • "Forbid": forbids concurrent runs, skipping next run if previous run hasn't finished yet;
  • "Replace": cancels currently running job and replaces it with a new one
suspend booleanThis flag tells the controller to suspend subsequent executions; it does not apply to already started executions.
successfulJobsHistoryLimit integerThe number of successful finished jobs to retain. Value must be a non-negative integer.
failedJobsHistoryLimit integerThe number of failed finished jobs to retain. Value must be a non-negative integer.
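Combining the fields above, a cronJob block might be sketched as follows (the schedule and commands are illustrative; setting commands overrides the documented defaults):

```yaml
spec:
  cronJob:
    schedule: "0 2 * * *"              # nightly at 02:00, Cron format
    timeZone: "Etc/UTC"
    concurrencyPolicy: Forbid          # skip a run if the previous one is still active
    startingDeadlineSeconds: 300
    successfulJobsHistoryLimit: 3
    failedJobsHistoryLimit: 1
    containerConfigs:
      commands:
        - feast apply
        - feast materialize-incremental $(date -u +'%Y-%m-%dT%H:%M:%S')
```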

FeastInitOptions

FeastInitOptions defines how to run a feast init.

Appears in:

FieldDescription
minimal boolean
template stringTemplate for the created project

FeastProjectDir

FeastProjectDir defines how to create the feast project directory.

Appears in:

FieldDescription
git GitCloneOptions
init FeastInitOptions

FeatureStore

FeatureStore is the Schema for the featurestores API

FieldDescription
apiVersion stringfeast.dev/v1
kind stringFeatureStore
metadata ObjectMetaRefer to Kubernetes API documentation for fields of metadata.
spec FeatureStoreSpec
status FeatureStoreStatus
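Per the fields above, a minimal FeatureStore manifest might look like this sketch (the name and project id are hypothetical; everything else falls back to defaults such as the ephemeral online store):

```yaml
apiVersion: feast.dev/v1
kind: FeatureStore
metadata:
  name: example                # hypothetical name
spec:
  feastProject: my_project     # required; alphanumeric plus _ and -, and must
                               # not start with an underscore or hyphen
```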

FeatureStoreRef

FeatureStoreRef defines which existing FeatureStore's registry should be used

Appears in:

FieldDescription
name stringName of the FeatureStore
namespace stringNamespace of the FeatureStore

FeatureStoreServices

FeatureStoreServices defines the desired feast services. An ephemeral onlineStore feature server is deployed by default.

Appears in:

FieldDescription
offlineStore OfflineStore
onlineStore OnlineStore
registry Registry
ui ServerConfigsCreates a UI server container
deploymentStrategy DeploymentStrategy
securityContext PodSecurityContext
podAnnotations object (keys:string, values:string)PodAnnotations are annotations to be applied to the Deployment's PodTemplate metadata.
This enables annotation-driven integrations like OpenTelemetry auto-instrumentation,
Istio sidecar injection, Vault agent injection, etc.
disableInitContainers booleanDisable the 'feast repo initialization' initContainer
runFeastApplyOnInit booleanRuns feast apply on pod start to populate the registry. Defaults to true. Ignored when DisableInitContainers is true.
volumes Volume arrayVolumes specifies the volumes to mount in the FeatureStore deployment. A corresponding VolumeMount should be added to whichever feast service(s) require access to said volume(s).
scaling ScalingConfigScaling configures horizontal scaling for the FeatureStore deployment (e.g. HPA autoscaling).
For static replicas, use spec.replicas instead.
podDisruptionBudgets PDBConfigPodDisruptionBudgets configures a PodDisruptionBudget for the FeatureStore deployment.
Only created when scaling is enabled (replicas > 1 or autoscaling).
topologySpreadConstraints TopologySpreadConstraint arrayTopologySpreadConstraints defines how pods are spread across topology domains.
When scaling is enabled and this is not set, the operator auto-injects a soft
zone-spread constraint (whenUnsatisfiable: ScheduleAnyway).
Set to an empty array to disable auto-injection.
affinity AffinityAffinity defines the pod scheduling constraints for the FeatureStore deployment.
When scaling is enabled and this is not set, the operator auto-injects a soft
pod anti-affinity rule to prefer spreading pods across nodes.

FeatureStoreSpec

FeatureStoreSpec defines the desired state of FeatureStore

Appears in:

FieldDescription
feastProject stringFeastProject is the Feast project id. This can be any alphanumeric string with underscores and hyphens, but it cannot start with an underscore or hyphen. Required.
feastProjectDir FeastProjectDir
services FeatureStoreServices
authz AuthzConfig
cronJob FeastCronJob
batchEngine BatchEngineConfig
replicas integerReplicas is the desired number of pod replicas. Used by the scale sub-resource.
Mutually exclusive with services.scaling.autoscaling.
materialization MaterializationConfigMaterialization controls feature materialization behavior (batch size, pull strategy).
Written into feature_store.yaml for all service pods.
openlineage OpenLineageConfigOpenLineage enables OpenLineage data lineage tracking for Feast operations.
Written into feature_store.yaml for all service pods.

FeatureStoreStatus

FeatureStoreStatus defines the observed state of FeatureStore

Appears in:

FieldDescription
applied FeatureStoreSpecShows the currently applied feast configuration, including any pertinent defaults
clientConfigMap stringConfigMap in this namespace containing a client feature_store.yaml for this feast deployment
cronJob stringCronJob in this namespace for this feast deployment
conditions Condition array
feastVersion string
phase string
serviceHostnames ServiceHostnames
replicas integerReplicas is the current number of ready pod replicas (used by the scale sub-resource).
selector stringSelector is the label selector for pods managed by the FeatureStore deployment (used by the scale sub-resource).
scalingStatus ScalingStatusScalingStatus reports the current scaling state of the FeatureStore deployment.

GitCloneOptions

GitCloneOptions describes how a clone should be performed.

Appears in:

FieldDescription
url stringThe repository URL to clone from.
ref stringReference to a branch / tag / commit
configs object (keys:string, values:string)Configs passed to git via -c
e.g. http.sslVerify: 'false'
OR 'url."https://api:\${TOKEN}@github.com/".insteadOf': 'https://github.com/'
featureRepoPath stringFeatureRepoPath is the relative path to the feature repo subdirectory. Default is 'feature_repo'.
env EnvVar
envFrom EnvFromSource
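A git-based project directory might be sketched as follows (the repository URL and credential Secret are hypothetical):

```yaml
spec:
  feastProjectDir:
    git:
      url: https://github.com/example/feature-repo.git   # hypothetical repo
      ref: main
      featureRepoPath: feature_repo                      # the documented default
      configs:
        http.sslVerify: "false"                          # passed to git via -c
      envFrom:
        - secretRef:
            name: git-credentials                        # hypothetical Secret
```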

JobSpec

JobSpec describes how the job execution will look.

Appears in:

FieldDescription
podTemplateAnnotations object (keys:string, values:string)PodTemplateAnnotations are annotations to be applied to the CronJob's PodTemplate
metadata. This is separate from the CronJob-level annotations and must be
set explicitly by users if they want annotations on the PodTemplate.
parallelism integerSpecifies the maximum desired number of pods the job should
run at any given time. The actual number of pods running in steady state will
be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism),
i.e. when the work left to do is less than max parallelism.
More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
completions integerSpecifies the desired number of successfully finished pods the
job should be run with. Setting to null means that the success of any
pod signals the success of all pods, and allows parallelism to have any positive
value. Setting to 1 means that parallelism is limited to 1 and the success of that
pod signals the success of the job.
More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
activeDeadlineSeconds integerSpecifies the duration in seconds relative to the startTime that the job
may be continuously active before the system tries to terminate it; value
must be positive integer. If a Job is suspended (at creation or through an
update), this timer will effectively be stopped and reset when the Job is
resumed again.
podFailurePolicy PodFailurePolicySpecifies the policy of handling failed pods. In particular, it allows to
specify the set of actions and conditions which need to be
satisfied to take the associated action.
If empty, the default behaviour applies - the counter of failed pods,
represented by the job's .status.failed field, is incremented and it is
checked against the backoffLimit. This field cannot be used in combination
with restartPolicy=OnFailure.

This field is beta-level. It can be used when the JobPodFailurePolicy feature gate is enabled (enabled by default).
backoffLimit integerSpecifies the number of retries before marking this job failed.
backoffLimitPerIndex integerSpecifies the limit for the number of retries within an index before marking this index as failed. When enabled, the number of failures per index is kept in the pod's batch.kubernetes.io/job-index-failure-count annotation. It can only be set when the Job's completionMode=Indexed and the Pod's restart policy is Never. The field is immutable. This field is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default).
maxFailedIndexes integerSpecifies the maximal number of failed indexes before marking the Job as failed, when backoffLimitPerIndex is set. Once the number of failed indexes exceeds this number the entire Job is marked as Failed and its execution is terminated. When left as null the Job continues execution of all of its indexes and is marked with the Complete Job condition. It can only be specified when backoffLimitPerIndex is set. It can be null or up to completions. It is required and must be less than or equal to 10^4 when completions is greater than 10^5. This field is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default).
ttlSecondsAfterFinished integerttlSecondsAfterFinished limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, ttlSecondsAfterFinished after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the Job won't be automatically deleted. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes.
completionMode CompletionModecompletionMode specifies how Pod completions are tracked. It can be NonIndexed (default) or Indexed.
NonIndexed means that the Job is considered complete when there have been .spec.completions successfully completed Pods. Each Pod completion is homologous to each other.
Indexed means that the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1), available in the annotation batch.kubernetes.io/job-completion-index. The Job is considered complete when there is one successfully completed Pod for each index. When value is Indexed, .spec.completions must be specified and .spec.parallelism must be less than or equal to 10^5. In addition, the Pod name takes the form $(job-name)-$(index)-$(random-string), and the Pod hostname takes the form $(job-name)-$(index).
More completion modes can be added in the future. If the Job controller observes a mode that it doesn't recognize, which is possible during upgrades due to version skew, the controller skips updates for the Job.
suspend booleansuspend specifies whether the Job controller should create Pods or not. If a Job is created with suspend set to true, no Pods are created by the Job controller. If a Job is suspended after creation (i.e. the flag goes from false to true), the Job controller will delete all active Pods associated with this Job. Users must design their workload to gracefully handle this. Suspending a Job will reset the StartTime field of the Job, effectively resetting the ActiveDeadlineSeconds timer too.
podReplacementPolicy PodReplacementPolicypodReplacementPolicy specifies when to create replacement Pods. Possible values are:
  • TerminatingOrFailed means that we recreate pods when they are terminating (have a metadata.deletionTimestamp) or failed.
  • Failed means to wait until a previously created Pod is fully terminated (has phase Failed or Succeeded) before creating a replacement Pod.
When using podFailurePolicy, Failed is the only allowed value. TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. This is a beta field. To use this, enable the JobPodReplacementPolicy feature gate. This is on by default.

KubernetesAuthz

KubernetesAuthz provides a way to define the authorization settings using Kubernetes RBAC resources. https://kubernetes.io/docs/reference/access-authn-authz/rbac/

Appears in:

FieldDescription
roles string arrayThe Kubernetes RBAC roles to be deployed in the same namespace of the FeatureStore.
Roles are managed by the operator and created with an empty list of rules.
See the Feast permission model at https://docs.feast.dev/getting-started/concepts/permission
The feature store admin is not obligated to manage roles using the Feast operator, roles can be managed independently.
This configuration option is only providing a way to automate this procedure.
Important note: the operator cannot ensure that these roles will match the ones used in the configured Feast permissions.
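As a sketch, delegating role creation to the operator might look like this (the role names are hypothetical; per the note above, they should match the roles referenced by the configured Feast permissions):

```yaml
spec:
  authz:
    kubernetes:
      roles:                  # the operator creates these Roles in the same
        - feast-reader        # namespace, each with an empty list of rules
        - feast-writer        # hypothetical role names
```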

LocalRegistryConfig

LocalRegistryConfig configures the registry service

Appears in:

FieldDescription
server RegistryServerConfigsCreates a registry server container
persistence RegistryPersistence

MaterializationConfig

MaterializationConfig controls feature materialization behavior written into feature_store.yaml.

Appears in:

FieldDescription
onlineWriteBatchSize integerNumber of rows per batch when writing to the online store during materialization.
Prevents OOM for large feature views. Supported engines: local, spark, ray.
If unset, all rows are written in a single batch.
extraConfig object (keys:string, values:string)ExtraConfig passes additional materialization key-value settings inline into
feature_store.yaml.
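A materialization block might be sketched as follows (the batch size is illustrative, and the extraConfig key is hypothetical, shown only to illustrate the inline pass-through):

```yaml
spec:
  materialization:
    onlineWriteBatchSize: 10000        # illustrative; if unset, all rows are
                                       # written in a single batch
    extraConfig:
      some_engine_setting: "value"     # hypothetical key, passed inline into
                                       # feature_store.yaml
```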

McpConfig

McpConfig enables MCP (Model Context Protocol) server support in the feature server. When this field is set on ServingConfig, the feature server type is switched to "mcp".

Appears in:

FieldDescription
enabled booleanEnable the MCP server.
serverName stringMCP server name for identification. Defaults to "feast-mcp-server".
serverVersion stringMCP server version string. Defaults to "1.0.0".
transport stringMCP transport protocol.

OfflinePushBatchingConfig

OfflinePushBatchingConfig controls batching of writes to the offline store via the /push endpoint. Recommended for high-throughput push workloads (streaming pipelines, IoT) to prevent OOM.

Appears in:

FieldDescription
enabled booleanEnable offline push batching.
batchSize integerMaximum number of rows per offline write batch.
batchIntervalSeconds integerSeconds between batch flushes to the offline store.

OfflineStore

OfflineStore configures the offline store service

Appears in:

FieldDescription
server ServerConfigsCreates a remote offline server container
persistence OfflineStorePersistence

OfflineStoreDBStorePersistence

OfflineStoreDBStorePersistence configures the DB store persistence for the offline store service

Appears in:

FieldDescription
type stringType of persistence to use.
secretRef LocalObjectReferenceData store parameters should be placed as-is from the "feature_store.yaml" under the secret key. "registry_type" & "type" fields should be removed.
secretKeyName stringBy default, the selected store "type" is used as the SecretKeyName

OfflineStoreFilePersistence

OfflineStoreFilePersistence configures the file-based persistence for the offline store service

Appears in:

FieldDescription
type string
pvc PvcConfig

OfflineStorePersistence

OfflineStorePersistence configures the persistence settings for the offline store service

Appears in:

FieldDescription
file OfflineStoreFilePersistence
store OfflineStoreDBStorePersistence

OidcAuthz

OidcAuthz defines the authorization settings for deployments using an Open ID Connect identity provider. https://auth0.com/docs/authenticate/protocols/openid-connect-protocol

Appears in:

FieldDescription
issuerUrl stringOIDC issuer URL. The operator appends /.well-known/openid-configuration to derive the discovery endpoint.
secretRef LocalObjectReferenceSecret with OIDC properties (auth_discovery_url, client_id, client_secret). issuerUrl takes precedence.
secretKeyName stringKey in the Secret containing all OIDC properties as a YAML value. If unset, each key is a property.
tokenEnvVar stringEnv var name for client pods to read an OIDC token from. Sets token_env_var in client config.
verifySSL booleanVerify SSL certificates for the OIDC provider. Defaults to true.
caCertConfigMap OidcCACertConfigMapConfigMap with the CA certificate for self-signed OIDC providers. Auto-detected on RHOAI/ODH.
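An OIDC authorization block might be sketched as follows (the issuer URL and Secret/ConfigMap names are hypothetical):

```yaml
spec:
  authz:
    oidc:
      issuerUrl: https://keycloak.example.com/realms/feast   # hypothetical issuer;
                                                             # takes precedence over
                                                             # auth_discovery_url
      secretRef:
        name: oidc-client              # hypothetical Secret holding client_id,
                                       # client_secret (and auth_discovery_url)
      verifySSL: true                  # the documented default
      caCertConfigMap:
        name: oidc-ca                  # hypothetical ConfigMap for a self-signed provider
        key: ca-bundle.crt             # the documented default key
```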

OidcCACertConfigMap

OidcCACertConfigMap references a ConfigMap containing a CA certificate for OIDC provider TLS.

Appears in:

FieldDescription
name stringConfigMap name.
key stringKey in the ConfigMap holding the PEM certificate. Defaults to "ca-bundle.crt".

OnlineStore

OnlineStore configures the online store service

Appears in:

FieldDescription
server ServerConfigsCreates a feature server container
persistence OnlineStorePersistence
serving ServingConfigServing configures the Feast feature_server section written into feature_store.yaml for the online serve pod.
Controls metrics granularity, offline push batching, and MCP.

OnlineStoreDBStorePersistence

OnlineStoreDBStorePersistence configures the DB store persistence for the online store service

Appears in:

FieldDescription
type stringType of persistence to use.
secretRef LocalObjectReferenceData store parameters should be placed as-is from the "feature_store.yaml" under the secret key. "registry_type" & "type" fields should be removed.
secretKeyName stringBy default, the selected store "type" is used as the SecretKeyName

OnlineStoreFilePersistence

OnlineStoreFilePersistence configures the file-based persistence for the online store service

Appears in:

FieldDescription
path string
pvc PvcConfig

OnlineStorePersistence

OnlineStorePersistence configures the persistence settings for the online store service

Appears in:

FieldDescription
file OnlineStoreFilePersistence
store OnlineStoreDBStorePersistence
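A DB-backed online store might be wired up as in this sketch (the store type, Secret name, and the parameter inside the Secret are illustrative; per the field docs, the data under the key is the feature_store.yaml store config with the "type" field removed):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redis-online-store             # hypothetical name
stringData:
  redis: |                             # key defaults to the store "type"
    connection_string: redis.feast.svc.cluster.local:6379
---
# ...and in the FeatureStore spec:
spec:
  services:
    onlineStore:
      persistence:
        store:
          type: redis                  # illustrative store type
          secretRef:
            name: redis-online-store
```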

OpenLineageConfig

OpenLineageConfig enables OpenLineage data lineage tracking for Feast operations. Lineage events are emitted during feast apply and materialization when enabled.

Appears in:

FieldDescription
enabled booleanEnable OpenLineage integration.
transportType stringTransport type for lineage events.
transportUrl stringURL for HTTP transport (e.g. http://marquez:5000). Required when transportType is "http".
transportEndpoint stringAPI endpoint path appended to transportUrl. Defaults to "api/v1/lineage".
apiKeySecretRef LocalObjectReferenceReference to a Secret containing the key "api_key" for lineage server authentication.
extraConfig object (keys:string, values:string)ExtraConfig holds additional OpenLineage key-value settings written inline into
the openlineage block of feature_store.yaml alongside the typed fields above.
Use this for non-core settings (e.g. namespace, producer, emit_on_apply,
emit_on_materialize) and transport-specific options (e.g. kafka
bootstrap_servers, topic; file path). Boolean values ("true"/"false") and
integer values are automatically coerced to their native YAML types.
Keys must be valid Feast OpenLineageConfig YAML field names.
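An openlineage block might be sketched as follows (the Secret name is hypothetical; the transport URL reuses the example from the field docs):

```yaml
spec:
  openlineage:
    enabled: true
    transportType: http
    transportUrl: http://marquez:5000    # example URL from the field docs
    transportEndpoint: api/v1/lineage    # the documented default
    apiKeySecretRef:
      name: lineage-api-key              # hypothetical Secret with key "api_key"
    extraConfig:
      emit_on_materialize: "true"        # coerced to a native boolean in
                                         # feature_store.yaml
```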

OptionalCtrConfigs

OptionalCtrConfigs defines optional k8s container settings

Appears in:

FieldDescription
env EnvVar
envFrom EnvFromSource
imagePullPolicy PullPolicy
resources ResourceRequirements
nodeSelector map[string]string

PDBConfig

PDBConfig configures a PodDisruptionBudget for the FeatureStore deployment. Exactly one of minAvailable or maxUnavailable must be set.

Appears in:

FieldDescription
minAvailable IntOrStringMinAvailable specifies the minimum number/percentage of pods that must remain available.
Mutually exclusive with maxUnavailable.
maxUnavailable IntOrStringMaxUnavailable specifies the maximum number/percentage of pods that can be unavailable.
Mutually exclusive with minAvailable.
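For example, a budget keeping at least half the pods available might look like this sketch (remember that only one of the two fields may be set, and the budget is only created when scaling is enabled):

```yaml
spec:
  services:
    podDisruptionBudgets:
      minAvailable: "50%"       # number or percentage; mutually exclusive
                                # with maxUnavailable
```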

PvcConfig

PvcConfig defines the settings for a persistent file store based on PVCs. We can refer to an existing PVC using the Ref field, or create a new one using the Create field.

Appears in:

FieldDescription
ref LocalObjectReferenceReference to an existing PVC
create PvcCreateSettings for creating a new PVC
mountPath stringMountPath within the container at which the volume should be mounted.
Must start with "/" and cannot contain ':'.

PvcCreate

PvcCreate defines the immutable settings to create a new PVC mounted at the given path. The PVC name is the same as the associated deployment & feast service name.

Appears in:

FieldDescription
accessModes PersistentVolumeAccessMode arrayAccessModes k8s persistent volume access modes. Defaults to ["ReadWriteOnce"].
storageClassName stringStorageClassName is the name of an existing StorageClass to which this persistent volume belongs. Empty value
means that this volume does not belong to any StorageClass and the cluster default will be used.
resources VolumeResourceRequirementsResources describes the storage resource requirements for a volume.
Default requested storage size depends on the associated service:
  • 10Gi for offline store
  • 5Gi for online store
  • 5Gi for registry
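Creating a PVC for the offline store might look like this sketch (the StorageClass name and sizes are hypothetical):

```yaml
spec:
  services:
    offlineStore:
      persistence:
        file:
          pvc:
            create:
              storageClassName: gp3-csi          # hypothetical StorageClass;
                                                 # omit to use the cluster default
              accessModes: ["ReadWriteOnce"]     # the documented default
              resources:
                requests:
                  storage: 20Gi                  # overrides the 10Gi offline default
            mountPath: /data/offline             # must start with "/"
```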

Registry

Registry configures the registry service. One selection is required. Local is the default setting.

Appears in:

FieldDescription
local LocalRegistryConfig
remote RemoteRegistryConfig

RegistryDBStorePersistence

RegistryDBStorePersistence configures the DB store persistence for the registry service

Appears in:

FieldDescription
type stringType of persistence to use.
secretRef LocalObjectReferenceData store parameters should be placed as-is from the "feature_store.yaml" under the secret key. "registry_type" & "type" fields should be removed.
secretKeyName stringBy default, the selected store "type" is used as the SecretKeyName

RegistryFilePersistence

RegistryFilePersistence configures the file-based persistence for the registry service

Appears in:

FieldDescription
path string
pvc PvcConfig
s3_additional_kwargs map[string]string
cache_ttl_seconds integerCacheTTLSeconds defines the TTL (in seconds) for the registry cache.
cache_mode stringCacheMode defines the registry cache update strategy.
Allowed values are "sync" and "thread".

RegistryPersistence

RegistryPersistence configures the persistence settings for the registry service

Appears in:

FieldDescription
file RegistryFilePersistence
store RegistryDBStorePersistence

RegistryServerConfigs

RegistryServerConfigs creates a registry server for the feast service, with specified container configurations.

Appears in:

FieldDescription
image string
env EnvVar
envFrom EnvFromSource
imagePullPolicy PullPolicy
resources ResourceRequirements
nodeSelector map[string]string
tls TlsConfigs
logLevel stringLogLevel sets the logging level for the server
Allowed values: "debug", "info", "warning", "error", "critical".
metrics booleanMetrics exposes Prometheus-compatible metrics for the Feast server when enabled.
volumeMounts VolumeMount arrayVolumeMounts defines the list of volumes that should be mounted into the feast container.
This allows attaching persistent storage, config files, secrets, or other resources
required by the Feast components. Ensure that each volume mount has a corresponding
volume definition in the Volumes field.
workerConfigs WorkerConfigsWorkerConfigs defines the worker configuration for the Feast server.
These options are primarily used for production deployments to optimize performance.
restAPI booleanEnable REST API registry server.
grpc booleanEnable gRPC registry server. Defaults to true if unset.

RemoteRegistryConfig

RemoteRegistryConfig points to a remote feast registry server. When set, the operator will not deploy a registry for this FeatureStore CR. Instead, this FeatureStore CR's online/offline services will use a remote registry. One selection is required.

Appears in:

FieldDescription
hostname stringHost address of the remote registry service - <domain>:<port>, e.g. registry.<namespace>.svc.cluster.local:80
feastRef FeatureStoreRefReference to an existing FeatureStore CR in the same k8s cluster.
tls TlsRemoteRegistryConfigs
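Pointing one FeatureStore at another's registry might be sketched as follows (the referenced CR name and namespace are hypothetical; set either feastRef or hostname, not both):

```yaml
spec:
  services:
    registry:
      remote:
        feastRef:
          name: central-registry           # hypothetical FeatureStore CR
          namespace: feast-system
        # alternatively, address the service directly:
        # hostname: registry.feast-system.svc.cluster.local:80
```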

ScalingConfig

ScalingConfig configures horizontal scaling for the FeatureStore deployment.

Appears in:

FieldDescription
autoscaling AutoscalingConfigAutoscaling configures a HorizontalPodAutoscaler for the FeatureStore deployment.
Mutually exclusive with spec.replicas.

ScalingStatus

ScalingStatus reports the observed scaling state.

Appears in:

FieldDescription
currentReplicas integerCurrentReplicas is the current number of pod replicas.
desiredReplicas integerDesiredReplicas is the desired number of pod replicas.

SecretKeyNames

SecretKeyNames defines the secret key names for the TLS key and cert.

Appears in:

FieldDescription
tlsCrt stringdefaults to "tls.crt"
tlsKey stringdefaults to "tls.key"

ServerConfigs

ServerConfigs creates a server for the feast service, with specified container configurations.

Appears in:

FieldDescription
image string
env EnvVar
envFrom EnvFromSource
imagePullPolicy PullPolicy
resources ResourceRequirements
nodeSelector map[string]string
tls TlsConfigs
logLevel stringLogLevel sets the logging level for the server
Allowed values: "debug", "info", "warning", "error", "critical".
metrics booleanMetrics exposes Prometheus-compatible metrics for the Feast server when enabled.
volumeMounts VolumeMount arrayVolumeMounts defines the list of volumes that should be mounted into the feast container.
This allows attaching persistent storage, config files, secrets, or other resources
required by the Feast components. Ensure that each volume mount has a corresponding
volume definition in the Volumes field.
workerConfigs WorkerConfigsWorkerConfigs defines the worker configuration for the Feast server.
These options are primarily used for production deployments to optimize performance.

ServiceHostnames

ServiceHostnames defines the service hostnames in the format of <domain>:<port>, e.g. example.svc.cluster.local:80

Appears in:

FieldDescription
offlineStore string
onlineStore string
registry string
registryRest string
ui string

ServingConfig

ServingConfig configures the feature_server section of the generated feature_store.yaml. When Mcp is set, the feature server type is switched to "mcp"; otherwise "local" is used.

Appears in:

FieldDescription
metrics ServingMetricsConfigMetrics configures per-category Prometheus metrics for the feature server.
Coexists with the server.metrics bool flag — both can be set simultaneously.
offlinePushBatching OfflinePushBatchingConfigOfflinePushBatching batches writes to the offline store via the /push endpoint.
mcp McpConfigMcp enables MCP (Model Context Protocol) server support. When set, feature server type is "mcp".
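Tying these together, a serving block might be sketched as follows (batch sizes are illustrative; per the field docs, setting mcp switches the feature server type to "mcp"):

```yaml
spec:
  services:
    onlineStore:
      serving:
        metrics:
          enabled: true                 # metrics HTTP server on port 8000
          categories:
            freshness: false            # disable one category; omitted keys stay on
        offlinePushBatching:
          enabled: true
          batchSize: 5000               # illustrative values
          batchIntervalSeconds: 10
        mcp:
          enabled: true                 # switches feature server type to "mcp"
```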

ServingMetricsConfig

ServingMetricsConfig controls per-category Prometheus metrics for the feature server. Setting Enabled to true activates the metrics HTTP server on port 8000. All metric categories default to true when enabled; use Categories to selectively disable them.

Appears in:

FieldDescription
enabled booleanEnable the Prometheus metrics endpoint on port 8000.
categories object (keys:string, values:boolean)Categories selectively enables or disables individual Feast metric categories.
Keys are Feast MetricsConfig field names (e.g. "resource", "request",
"online_features", "push", "materialization", "freshness"). Omitted keys
default to true when metrics is enabled.

TlsConfigs

TlsConfigs configures server TLS for a feast service. In an OpenShift cluster, this is configured by default using service serving certificates.

Appears in:

FieldDescription
secretRef LocalObjectReferencereferences the local k8s secret where the TLS key and cert reside
secretKeyNames SecretKeyNames
disable booleanwill disable TLS for the feast service. useful in an openshift cluster, for example, where TLS is configured by default
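A server TLS block might be sketched as follows (the Secret name is hypothetical; the key names shown are the documented defaults):

```yaml
spec:
  services:
    onlineStore:
      server:
        tls:
          secretRef:
            name: feast-online-tls      # hypothetical k8s Secret holding key and cert
          secretKeyNames:
            tlsCrt: tls.crt             # the documented defaults
            tlsKey: tls.key
```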

TlsRemoteRegistryConfigs

TlsRemoteRegistryConfigs configures client TLS for a remote feast registry. In an OpenShift cluster, this is configured by default when the remote feast registry is using service serving certificates.

Appears in:

FieldDescription
configMapRef LocalObjectReferencereferences the local k8s configmap where the TLS cert resides
certName stringdefines the configmap key name for the client TLS cert.

WorkerConfigs

WorkerConfigs defines the worker configuration for Feast servers. These settings control gunicorn worker processes for production deployments.

Appears in:

FieldDescription
workers integerWorkers is the number of worker processes. Use -1 to auto-calculate based on CPU cores (2 * CPU + 1).
Defaults to 1 if not specified.
workerConnections integerWorkerConnections is the maximum number of simultaneous clients per worker process.
Defaults to 1000.
maxRequests integerMaxRequests is the maximum number of requests a worker will process before restarting.
This helps prevent memory leaks. Defaults to 1000.
maxRequestsJitter integerMaxRequestsJitter is the maximum jitter to add to max-requests to prevent
thundering herd effect on worker restart. Defaults to 50.
keepAliveTimeout integerKeepAliveTimeout is the timeout for keep-alive connections in seconds.
Defaults to 30.
registryTTLSeconds integerRegistryTTLSeconds is the number of seconds after which the registry is refreshed.
Higher values reduce refresh overhead but increase staleness. Defaults to 60.
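A production-oriented worker block might be sketched as follows (the values other than the documented defaults are illustrative):

```yaml
spec:
  services:
    onlineStore:
      server:
        workerConfigs:
          workers: -1                   # auto-calculate: 2 * CPU + 1
          workerConnections: 2000       # illustrative; default is 1000
          maxRequests: 1000             # the documented default
          maxRequestsJitter: 50         # the documented default
          keepAliveTimeout: 30          # the documented default
          registryTTLSeconds: 120       # trade refresh overhead for staleness
```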