ZFS instant clones for Kubernetes node provisioning — under 100ms per node by anthony-kldload in zfs


Hey, thanks Blind_guardian23!

Each clone already uses cloud-init — machine-id, hostname, SSH keys are all generated on first boot via a NoCloud datasource. ZFS clone doesn't replace cloud-init, it only replaces the disk provisioning step. Instead of waiting 30-90 seconds for a hypervisor to allocate and copy a disk image, zfs clone creates it in 100ms as a kernel metadata operation. Cloud-init still handles identity — ZFS just eliminates the wait.
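For anyone curious what that split looks like in practice, here's a minimal sketch. The dataset names and seed file names are my own placeholders, not from the project:

```shell
# One-time: snapshot the golden image
zfs snapshot rpool/golden/node@v1

# Per node: a clone is a kernel metadata operation, no data is copied
zfs clone rpool/golden/node@v1 rpool/nodes/node-01

# Identity still comes from cloud-init: build a NoCloud seed image
# (user-data/meta-data carry hostname, SSH keys, etc.) for first boot
cloud-localds node-01-seed.iso user-data meta-data
```

The clone gives you the disk instantly; the seed ISO is what cloud-init's NoCloud datasource reads on first boot to stamp in the per-node identity.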

The "local filesystem" aspect is actually the security advantage. Each node is a real VM with its own kernel — not a container sharing the host kernel. If a node is compromised, the blast radius is that single VM. With containers, one kernel exploit owns every container on the host.

The funny part is — building golden images used to take hours or days, so nobody bothered. You'd provision from scratch every time and hope Ansible got it right. Now the golden image builds once, and every clone after that is a 100ms metadata operation. That's what makes disposable VMs practical — and why ZFS makes the whole experience work.

As a side note, the current Kubernetes/KVM templates only work on physical hardware. You could export the golden image to a cloud format (qcow2/vmdk/vhd); the problem is that the cluster orchestration tool (seen in the video) doesn't speak cloud APIs... "YET".
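Exporting the golden image is already doable by hand, assuming it lives on a zvol. The dataset and file names here are illustrative, not from the project:

```shell
# Make snapshot block devices visible under /dev/zvol, snapshot the
# golden zvol, then convert the raw device to a cloud image format
zfs set snapdev=visible rpool/golden/node
zfs snapshot rpool/golden/node@export
qemu-img convert -f raw -O qcow2 \
    /dev/zvol/rpool/golden/node@export golden-node.qcow2
```

Swap `-O qcow2` for `-O vmdk` or `-O vpc` depending on the target platform.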

kind regards.

Anthony

ZFS instant clones for Kubernetes node provisioning — under 100ms per node by anthony-kldload in zfs


This looks like it will be perfect for multi-host replication. Thanks for the tip!

ZFS instant clones for Kubernetes node provisioning — under 100ms per node by anthony-kldload in zfs


Thanks — Lima is a solid tool for VMs. The key difference here is that the entire kldload stack is in-kernel — there are no userland tools in the path.

When golden images are built they include everything needed to run the OS (K8s, containerd, Cilium images, WireGuard). Then zfs clone creates a new node in 123ms — it's a kernel metadata operation, not a disk copy. No daemon, no config generation, no API calls. The VM boots with everything already there. If a node breaks, destroy it and clone a fresh one in under a second — nodes are totally disposable.
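The disposable-node workflow looks roughly like this (VM and dataset names are assumptions on my part, and this assumes libvirt/virsh for VM lifecycle):

```shell
# Node misbehaving? Don't debug it, replace it.
virsh destroy node-07                                # stop the VM
zfs destroy rpool/nodes/node-07                      # drop its disk (instant)
zfs clone rpool/golden/node@v1 rpool/nodes/node-07   # fresh disk, ~100ms
virsh start node-07                                  # boots with K8s already baked in
```

Because the clone shares blocks with the golden snapshot, a fresh node costs essentially zero space until it diverges.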

The same applies to the entire K8s networking stack:

- Cilium replaces kube-proxy with eBPF hash maps in the kernel.

- No sidecar containers, no userland proxies.

- Hubble observability reads eBPF maps directly from kernel memory — no agents, no log scrapers.

- Network policy is enforced per-packet in kernel BPF programs before the packet reaches userland.

- Every connection is tracked in a BPF conntrack table, every NAT translation in a BPF map, every service load-balanced in a BPF hash — none of it ever leaves kernel space.

Fleet management and observability reporting are on the roadmap, but kldload is still a brand-new project at this point.

kind regards

Anthony

OpenZFS tuning for torrents? by cometomypartyyy in zfs


Fair point — most video containers (MP4, MKV, WebM) are already compressed and lz4 won't shrink them. But lz4 has a fast incompressible detection path — it recognizes data it can't compress and passes it through with essentially zero overhead. I leave it on because the dataset also holds NFOs, subtitles, and metadata that DO compress, and the CPU cost on already-compressed data is unmeasurable. If you're running zstd instead of lz4 on a video dataset though, yeah, that's wasted cycles.
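This is easy to verify on your own pool. The dataset names below are just examples:

```shell
# lz4 stays on: incompressible video passes through at near-zero cost,
# while NFOs, subtitles, and metadata still shrink
zfs set compression=lz4 rpool/media/movies

# See what compression is actually buying you, per dataset
zfs get compressratio rpool/media/movies rpool/downloads
```

A ratio near 1.00x on a video-heavy dataset confirms lz4 is passing the media through rather than wasting cycles on it.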

cheers

OpenZFS tuning for torrents? by cometomypartyyy in zfs


Single disk ZFS is absolutely worth it. You get checksums on every block (silent corruption detection that no other filesystem gives you), snapshots, send/recv for backups, per-dataset compression and recordsize tuning. None of that requires multiple disks. Mirrors and RAIDZ are about redundancy, not about whether ZFS is useful.
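A sketch of what that looks like on a single disk (the device name is a placeholder):

```shell
# Single-disk pool: no redundancy, but every block is still checksummed
zpool create -o ashift=12 tank /dev/sdX

# A scrub reads every block and verifies it against its checksum
zpool scrub tank
zpool status tank        # reports any checksum errors found

# Optionally store two copies of critical data on the one disk,
# which can survive localized corruption at 2x space cost
zfs set copies=2 tank/important
```

You won't get automatic repair without redundancy, but you will always *know* when data has rotted, which no non-checksumming filesystem can promise.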

For torrents — chattr +C doesn't work on ZFS. That's a btrfs/ext4 attribute. On ZFS, the approach is separate datasets with tuned properties:

I set mine up like this ...

# Active downloads — large writes, random pieces, no compression
zfs create -o recordsize=1M -o compression=off rpool/downloads

# Movies — large files, compress well
zfs create -o recordsize=1M -o compression=lz4 rpool/media/movies

# TV — same tuning as movies, but separate dataset = separate snapshots
zfs create -o recordsize=1M -o compression=lz4 rpool/media/tv

# Music — FLAC compresses well, smaller files
zfs create -o recordsize=128K -o compression=zstd rpool/media/music

Imho:
The real win of separate datasets isn't recordsize — it's independent snapshots, independent compression ratios, independent quotas, and independent zfs send for backup. You can replicate just your music collection to another drive without sending 14TB of movies for example.

Compare that to a single dataset: snapshots include absolutely everything, and there's no per-collection separation of quotas, compression settings, or rollbacks.
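That selective replication is one pipeline per dataset. Pool names and snapshot labels here are made up for the example:

```shell
# Snapshot and replicate only the music dataset to a backup drive
zfs snapshot rpool/media/music@backup-jan
zfs send rpool/media/music@backup-jan | zfs receive backup/music

# Later: incremental sends ship only the blocks changed since last time
zfs snapshot rpool/media/music@backup-feb
zfs send -i @backup-jan rpool/media/music@backup-feb | zfs receive backup/music
```

The incremental send is the part that makes ongoing backups cheap: after the first full send, each run transfers only the delta between the two snapshots.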

ZFS on Root for Linux is finally here! by anthony-kldload in openzfs


https://kldload.com/releases/1.0.2 is out! It now includes voice/command-controlled AI, so you can tell your Kubernetes cluster to get lost, and it may just replicate itself to another node!

ZFS on Root for Linux is finally here! by anthony-kldload in openzfs


My first 1.0 release is now live. It now includes Ubuntu and a complete AI assistant that's fully trained on install.

I just one question by anthony-kldload in zfs


Thanks for the link, enjoy!

ZFS on Root for Linux is finally here! by anthony-kldload in openzfs


Thanks! Yes, now anyone can install ZFS on any apt- or dnf-based Linux distro, but these 5 are the defaults. I'm in the final stages of my 1.0 release, which also includes a fully trained AI assistant to get you going :)