OpenEBS Local PV ZFS is a CSI plugin that implements ZFS-backed persistent volumes for Kubernetes. It is a local storage solution, meaning the device, the volume, and the application all live on the same host. It contains no data plane; it is simply a control plane for the kernel's ZFS volumes. It comprises two main components, implemented in accordance with the CSI spec:
- CSI Controller - receives the incoming requests and initiates the operation.
- CSI Node Plugin - serves the requests by performing the operations and making the volume available to the initiator.
- Lightweight, easy-to-set-up storage provisioner for host-local volumes in the K8s ecosystem.
- Makes the ZFS stack available to K8s, allowing end users to use ZFS functionalities like snapshot, restore, clone, thin provisioning, resize, encryption, compression, dedup, etc. for their Persistent Volumes.
- Cloud native, i.e. based on the CSI spec and developed to run on K8s.
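Because the plugin is control plane only, a ZFS pool must already exist on each node; the Node Plugin then carves zvols/datasets out of it. A minimal sketch, where the disk `/dev/sdb` and the pool name `zfspv-pool` are placeholders:

```sh
# On each node that should serve Local PV ZFS volumes:
# create a pool on a spare disk (disk and pool name are placeholders).
zpool create zfspv-pool /dev/sdb

# Confirm the pool is ONLINE before pointing the provisioner at it.
zpool status zfspv-pool
```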
LocalPV refers to storage that is directly attached to a specific node in the Kubernetes cluster. It uses locally available disks (e.g., SSDs, HDDs) on the node.
Use Case: Ideal for workloads that require low-latency access to storage or when data locality is critical (e.g., databases, caching systems).
- Node-bound: The volume is tied to the node where the disk is physically located.
- No replication: Data is not replicated across nodes, so if the node fails, the data may become inaccessible.
- High performance: Since the storage is local, it typically offers lower latency compared to network-attached storage.
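Because the volume is node-bound, the StorageClass typically uses `volumeBindingMode: WaitForFirstConsumer` so provisioning waits until the pod is scheduled and the volume is created on that pod's node. A minimal sketch, assuming the `zfspv-pool` pool from above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
# Delay binding until a pod is scheduled, so the volume
# is carved out of the ZFS pool on that pod's node.
volumeBindingMode: WaitForFirstConsumer
parameters:
  poolname: "zfspv-pool"   # pool must already exist on the node
  fstype: "zfs"
```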
The diagram below depicts the mapping of host disks, the ZFS stack on top of those disks, and the Kubernetes Persistent Volumes consumed by the workload. When a Persistent Volume Claim is created, the Local PV ZFS CSI Controller creates a ZFSVolume CR, which emits an event for the Local PV ZFS CSI Node Plugin to create the zvol/dataset. When the workload is scheduled, the Local PV ZFS CSI Node Plugin makes this zvol/dataset available via a mount point on the host; a concrete example follows the diagram.
```mermaid
graph TD;
    subgraph Node2["Node 2"]
        subgraph K8S_NODE1[" "]
            N1_PV1["PV"] --> N1_APP1["APP"]
            N1_PV2["PV"] --> N1_APP2["APP"]
        end
        subgraph ZFS_Stack2["ZFS Stack"]
            V1_1 --> L1_1["ZVOL"]
            V1_1 --> L3_1["ZVOL"]
            L1_1 --> N1_PV1
            L3_1 --> N1_PV2
        end
        subgraph Blockdevices1[" "]
            D1["/dev/sdc"] --> V1_1["ZPOOL"]
            D2["/dev/sdb"] --> V1_1["ZPOOL"]
        end
    end
    subgraph Node1["Node 1"]
        subgraph K8S_NODE2[" "]
            N2_PV1["PV"] --> N2_APP1["APP"]
        end
        subgraph ZFS_Stack1["ZFS Stack"]
            V2_2 --> Z2_2["ZVOL"]
            Z2_2 --> N2_PV1
        end
        subgraph Blockdevices2[" "]
            D3["/dev/sdb"] --> V2_2["ZPOOL"]
        end
    end
    classDef pv fill:#FFCC00,stroke:#FF9900,color:#000;
    classDef app fill:#99CC00,stroke:#66CC00,color:#000;
    classDef disk fill:#FF6666,stroke:#FF3333,color:#000;
    classDef zfs fill:#99CCFF,stroke:#6699FF,color:#000;
    class N1_PV1,N1_PV2,N2_PV1 pv;
    class N1_APP1,N1_APP2,N2_APP1 app;
    class D1,D2,D3 disk;
    class V1_1,V2_2 zfs;
    class L1_1,L3_1,Z2_2 zfs;
```
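To make the flow concrete: a PVC against the StorageClass above, and the approximate shape of the ZFSVolume CR the controller creates for it (names, sizes, and the node ID are illustrative; exact CR fields may vary by release):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-pvc
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
# Created by the CSI controller in response to the PVC; the node
# plugin watches these and creates the backing dataset/zvol.
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: pvc-<uuid>            # named after the PV; placeholder here
  namespace: openebs
spec:
  capacity: "4294967296"      # bytes, matching the 4Gi request
  fsType: zfs
  ownerNodeID: node1          # node hosting the pool (illustrative)
  poolName: zfspv-pool
  volumeType: DATASET         # DATASET for fstype=zfs, ZVOL otherwise
```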
| Name | Version |
| :--- | :--- |
| K8S | 1.23+ |
| Distro | Alpine, Arch, CentOS, Debian, Fedora, NixOS, SUSE, RHEL, Ubuntu |
| Kernel | oldest supported kernel is 2.6.32 |
| ZFS | 0.7, 0.8, 2.2.3 |
| Memory | ECC memory is highly recommended |
| RAM | 8GiB for best performance with dedup enabled (works with 2GiB or less without dedup) |
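A quick way to check a node against these requirements (the sysfs path assumes the ZFS kernel module is loaded):

```sh
# Kubernetes server version (must be 1.23+)
kubectl version

# Kernel version on the node
uname -r

# Version of the loaded ZFS module
cat /sys/module/zfs/version
```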
Check the features supported for each k8s version.
- Prerequisites
- Quickstart
- Developer Setup
- Testing
- Contributing Guidelines
- Governance
- Changelog
- Release Process
- Access Modes
    - ReadWriteOnce
    - ~~ReadOnlyMany~~
    - ~~ReadWriteMany~~
- Volume modes
    - Filesystem mode
    - Block mode
- Supports fsTypes: ext4, btrfs, xfs, zfs
- Volume metrics
- Snapshot (see example below)
- Clone
- Volume Resize
- Raw Block Volume
- Backup/Restore
- Ephemeral inline volume
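As one example of the features above, snapshots go through the standard Kubernetes snapshot API. A minimal sketch, assuming a VolumeSnapshotClass named `zfspv-snapclass` exists and using the `zfs-pvc` claim from the earlier example:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: zfs-pvc-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: zfs-pvc       # PVC from the example above
```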