changelogs/CHANGELOG-1.16.md
https://github.com/vmware-tanzu/velero/releases/tag/v1.16.0
velero/velero:v1.16.0
https://velero.io/docs/v1.16/upgrade-to-1.16/
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, whether stateful or stateless.
Check the epic issue https://github.com/vmware-tanzu/velero/issues/8289 for more information.
v1.16 adds support for backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped into item blocks, and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves backup throughput, especially when there is a large number of resources.
Pre/post hooks also belong to item blocks, so they run in parallel along with the item blocks.
Users can configure the parallelism through the --item-block-worker-count Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8334.
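As a hedged sketch, the worker count above can be set by adding the server flag to the Velero deployment's container args; the deployment layout shown is the standard one produced by velero install, and only the flag name comes from the note above:

```yaml
# Fragment of the velero Deployment spec (standard layout assumed);
# only the --item-block-worker-count flag is the new addition.
spec:
  template:
    spec:
      containers:
        - name: velero
          args:
            - server
            # Back up to 4 item blocks concurrently (default is 1)
            - --item-block-worker-count=4
```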
In previous releases, for each volume with the WaitForFirstConsumer binding mode, data mover restore was only allowed to happen on the node to which the volume is attached. This severely degrades the parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restores (https://github.com/vmware-tanzu/velero/issues/8044).
In v1.16, users can configure data mover restores to run and spread evenly across all nodes in the cluster. The configuration is done through a new flag, ignoreDelayBinding, in the node-agent configuration (https://github.com/vmware-tanzu/velero/issues/8242).
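A minimal sketch of that node-agent configuration, assuming the flag lives under the restorePVC section of the node-agent ConfigMap (the ConfigMap name, data key, and exact nesting are assumptions; check the v1.16 data-movement docs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config   # assumed name, passed via --node-agent-configmap
  namespace: velero
data:
  node-agent-config.json: |
    {
      "restorePVC": {
        "ignoreDelayBinding": true
      }
    }
```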
In v1.16, some observability enhancements are added. The outputs are written to the same node-agent log and are enabled automatically.
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location. During restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8725.
In v1.16, several enhancements to backup repository maintenance are added to improve observability and resiliency:
A new field, RecentMaintenance, is added to the BackupRepository CR. Specifically, for each BackupRepository, it records recent maintenance jobs' start/completion time, completion status, and error message. (https://github.com/vmware-tanzu/velero/issues/7810)
Three maintenance modes, normalGC, fastGC, and eagerGC, are selectable through the fullMaintenanceInterval parameter in the backupRepository configuration. (https://github.com/vmware-tanzu/velero/issues/8364)
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (https://github.com/vmware-tanzu/velero/issues/8256).
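As an illustrative sketch, the maintenance mode could be selected in the backup repository configuration ConfigMap; the ConfigMap name and the per-repository-type key layout shown here are assumptions, while the fullMaintenanceInterval parameter and the fastGC value come from the notes above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-repository-config   # assumed name, passed via --backup-repository-configmap
  namespace: velero
data:
  # Per-repository-type configuration (layout assumed); fastGC shortens the
  # full maintenance interval so deleted backups are pruned sooner.
  kopia: |
    {
      "fullMaintenanceInterval": "fastGC"
    }
```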
In v1.16, users can define whether to restore resource status per object through an annotation, velero.io/restore-status, set on the object. (https://github.com/vmware-tanzu/velero/issues/8204).
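A minimal sketch of the per-object opt-in; the annotation key comes from the note above, while the "true" string value format and the workload shown are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical workload
  annotations:
    # Ask Velero to restore this object's status field (value format assumed)
    velero.io/restore-status: "true"
```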
In v1.16, the Velero binaries, i.e., velero, velero-helper and velero-restore-helper, are all included in the single Velero image. (https://github.com/vmware-tanzu/velero/issues/8484).
Golang runtime: 1.23.7
kopia: 0.19.0
Validate the --from-schedule flag in the create backup command to prevent empty or whitespace-only values. (#8665, @aj-2000)
Add fullMaintenanceInterval options fastGC (12 hours) and eagerGC (6 hours), allowing faster removal of deleted Velero backups from the kopia repo. (#8581, @kaovilai)