Kubernetes Volume Group Snapshots Go GA in v1.36: What You Need to Know
Volume group snapshots have reached General Availability (GA) in Kubernetes v1.36, marking a significant milestone for stateful workloads that rely on multiple volumes. This feature, first introduced as alpha in v1.27, enables crash-consistent snapshots of multiple PersistentVolumeClaims at the exact same point in time, eliminating the need for application quiescence. Below, we answer the most common questions about this powerful capability.
What are volume group snapshots in Kubernetes?
Volume group snapshots are a Kubernetes abstraction for creating crash-consistent snapshots across a set of volumes simultaneously. When an application spans multiple volumes—for example, one for data and another for logs—individual snapshots taken at different times can result in an inconsistent state if restored. A group snapshot captures all volumes at the same instant, ensuring write-order consistency across the group. Kubernetes uses a label selector to group PersistentVolumeClaim objects that should be snapshotted together. The resulting group snapshot can be used to either provision new volumes pre-populated with the snapshot data or restore existing volumes to that precise recovery point.
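As a sketch of how the label selector works, a minimal VolumeGroupSnapshot manifest might look like the following. The resource names, the `app: my-app` label, and the class name are illustrative, and the `groupsnapshot.storage.k8s.io/v1` API version assumes the GA schema; check your cluster's installed CRDs for the exact version.

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1   # assumed GA API version
kind: VolumeGroupSnapshot
metadata:
  name: my-app-group-snapshot   # illustrative name
  namespace: default
spec:
  volumeGroupSnapshotClassName: csi-group-snap-class  # hypothetical class
  source:
    selector:
      matchLabels:
        app: my-app   # every PVC in the namespace with this label is snapshotted together
```

Creating this one object asks the storage backend to capture all matching PersistentVolumeClaims at the same point in time, rather than looping over them with individual VolumeSnapshot requests.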
How did the feature evolve from alpha to GA?
The journey began with Kubernetes v1.27, where volume group snapshots were introduced as an alpha feature. After community feedback and enhancements, it moved to beta in v1.32, and then to a second beta in v1.34 to address remaining stability concerns. With the v1.36 release, the feature has achieved General Availability (GA), meaning it is now stable, enabled by default, and ready for production use. The extension APIs for group snapshots—VolumeGroupSnapshot, VolumeGroupSnapshotContent, and VolumeGroupSnapshotClass—are now considered mature and backward-compatible.
What are the key benefits over individual volume snapshots?
The primary advantage is crash consistency across multiple volumes. Applications such as databases or CMS platforms often store data across several volumes (e.g., data, logs, configuration). Individual snapshots taken sequentially can be out of sync, leading to corruption upon restore. Group snapshots guarantee that all volumes are captured at the same logical time. Additionally, you no longer need to quiesce (pause) the application before snapshotting each volume individually—a process that can be time-consuming or even impossible for high‑availability services. The feature also simplifies automation: a single API call creates a consistent snapshot set for the entire group, reducing complexity and operational overhead.
Which Kubernetes APIs are used for volume group snapshots?
Three custom resource definitions form the foundation:
- VolumeGroupSnapshot – Created by a user or automation to request a group snapshot for a set of PersistentVolumeClaims.
- VolumeGroupSnapshotContent – Represents the provisioned cluster resource; it is created by the snapshot controller and binds to a VolumeGroupSnapshot.
- VolumeGroupSnapshotClass – Defines how group snapshots are created: the CSI driver to use, the deletion policy, and any driver-specific parameters.
These APIs work together with the existing CSI driver integration to orchestrate the snapshot lifecycle seamlessly.
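For illustration, a cluster administrator might define a class like the one below. The class name is hypothetical, the driver shown is the CSI hostpath example driver (substitute your own), and the API version assumes the GA schema:

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1   # assumed GA API version
kind: VolumeGroupSnapshotClass
metadata:
  name: csi-group-snap-class     # hypothetical name
driver: hostpath.csi.k8s.io      # replace with your CSI driver
deletionPolicy: Delete           # or Retain, to keep the backend snapshot after the API object is deleted
parameters: {}                   # driver-specific options, if any
```

As with VolumeSnapshotClass, users then reference this class by name from their VolumeGroupSnapshot objects, keeping driver details out of application manifests.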
What prerequisites are needed to use volume group snapshots?
This feature is only supported for CSI (Container Storage Interface) volume drivers. The underlying storage system must also provide the ability to create crash-consistent snapshots across multiple volumes simultaneously. Not all CSI drivers support this capability—check your driver’s documentation. Additionally, you need the VolumeGroupSnapshot API enabled (default in v1.36) and the snapshot controller running in your cluster. Proper RBAC rules must be in place to allow users to create and manage these resources.
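The RBAC side can be handled with an ordinary Role. The following is a minimal sketch, assuming a namespaced role named `group-snapshot-user`; real policies will vary by organization:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: group-snapshot-user   # hypothetical role name
  namespace: default
rules:
  - apiGroups: ["groupsnapshot.storage.k8s.io"]
    resources: ["volumegroupsnapshots"]
    verbs: ["get", "list", "watch", "create", "delete"]
```

Bind this Role to the users or service accounts that need to take group snapshots; managing VolumeGroupSnapshotContent and VolumeGroupSnapshotClass objects is typically reserved for cluster administrators.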
Can I restore volumes from a group snapshot?
Yes. A group snapshot can be used in two ways:
- Restore to new volumes – Provision new PersistentVolumeClaims pre-populated with the snapshot data, effectively rehydrating the workload from a consistent recovery point.
- Restore existing volumes – Roll back an existing set of volumes to the state captured by the group snapshot, as long as the storage system supports in-place restore.
This flexibility makes group snapshots ideal for disaster recovery, backup/restore workflows, and test environment cloning.
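For the new-volume path, the snapshot controller creates an individual VolumeSnapshot for each PVC in the group, and each new PersistentVolumeClaim points its `dataSource` at one of those members. A hedged sketch, with hypothetical snapshot and StorageClass names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data   # illustrative name
  namespace: default
spec:
  storageClassName: csi-sc            # hypothetical StorageClass backed by the same CSI driver
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: data-snapshot-member        # a per-PVC member snapshot created by the group snapshot
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi                   # must be at least the snapshot's restore size
```

Repeating this for each member snapshot rehydrates the full volume set from one consistent recovery point.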
How do group snapshots improve workload portability?
Kubernetes’ goal of workload portability is strengthened by group snapshots because they provide a storage‑agnostic abstraction for consistent multi‑volume snapshots. Administrators can define snapshot policies that work across different CSI drivers, and application developers can rely on a uniform API to protect complex stateful applications. The feature reduces vendor lock‑in: if a storage system supports group snapshots, the Kubernetes layer handles the orchestration uniformly. Combined with the existing VolumeSnapshot API, group snapshots give teams a comprehensive toolkit to manage data protection without scripting driver‑specific commands.