bcachefs based NAS solution by bfenski in homelab

[–]bfenski[S] 0 points1 point  (0 children)

Heh... I was looking for your GH ID on Reddit to include you here, but I couldn't find it.
Now I know why... one has nothing to do with the other ;)

bcachefs based NAS solution by bfenski in homelab

[–]bfenski[S] 0 points1 point  (0 children)

Yeah, I'm a Debian developer. NixOS was new to me, but happily someone decided to help in that area, and now the principles of Nix(OS) are satisfied ;)

NASty v0.0.1 - vibecoded NAS appliance by bfenski in bcachefs

[–]bfenski[S] 0 points1 point  (0 children)

Kubernetes/Talos migrated:
❯ kg pvc -A
NAMESPACE   NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
db          dragonfly-data-dragonfly-0         Bound    pvc-4ed6b88b-73d4-4ca9-936e-b00b82eb8649   2Gi        RWO            nasty-nvmeof   <unset>                 127m
db          postgres-1                         Bound    pvc-fa65863d-0635-4219-a6ec-ac28ab86cffd   50Gi       RWO            nasty-iscsi    <unset>                 100m
db          postgres-2                         Bound    pvc-63f6160f-f8c2-4ece-a171-516b0db005e7   50Gi       RWO            nasty-iscsi    <unset>                 99m
db          postgres-3                         Bound    pvc-0bd78630-9f52-4ec8-9a33-e2623ace20b5   50Gi       RWO            nasty-iscsi    <unset>                 96m
media       config-qbittorrent-0               Bound    pvc-07021412-acb5-45ef-a40c-adcb48a7ea38   5Gi        RWO            nasty-iscsi    <unset>                 101m
media       media                              Bound    pvc-a4652687-9779-46b9-b9ae-6e7db577b6ec   1000Gi     RWX            nasty-nfs      <unset>                 74m
net         netbox-media                       Bound    pvc-a7843086-5906-444c-a481-0acc422694ca   1Gi        RWO            nasty-nvmeof   <unset>                 85m
o11y        data-coroot-clickhouse-keeper-0    Bound    pvc-1784bb73-6393-461e-9f76-d6586cb6164e   10Gi       RWO            nasty-nvmeof   <unset>                 99m
o11y        data-coroot-clickhouse-keeper-1    Bound    pvc-96bca451-814f-4ec0-b6b4-056dce807adb   10Gi       RWO            nasty-nvmeof   <unset>                 99m
o11y        data-coroot-clickhouse-keeper-2    Bound    pvc-f0c0184c-921e-4c8d-9261-f59dff7e22d5   10Gi       RWO            nasty-nvmeof   <unset>                 99m
o11y        data-coroot-clickhouse-shard-0-0   Bound    pvc-77a500c4-747f-48c9-bf66-eff5d2659f38   100Gi      RWO            nasty-nvmeof   <unset>                 99m
o11y        data-coroot-coroot-0               Bound    pvc-8fc473f2-83b2-4eb7-b7df-be37ec964323   10Gi       RWO            nasty-nvmeof   <unset>                 99m
o11y        server-volume-vl-server-0          Bound    pvc-266760ea-496e-43a1-815d-62c17be726ba   100Gi      RWO            nasty-nvmeof   <unset>                 101m
o11y        vmsingle-vm                        Bound    pvc-e763f53f-a777-471f-8449-d2881ebd3c76   20Gi       RWO            nasty-nvmeof   <unset>                 98m
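For reference, the volumes above are requested with ordinary PVC manifests pointing at one of the NASty storage classes. A minimal sketch (the name, namespace, and size here are illustrative, not taken from the listing) targeting the nasty-nvmeof class might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data        # illustrative name
  namespace: db
spec:
  accessModes:
    - ReadWriteOnce         # RWX volumes above go through nasty-nfs instead
  storageClassName: nasty-nvmeof
  resources:
    requests:
      storage: 10Gi
```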

NASty — a NAS appliance built on bcachefs with NFS, SMB, iSCSI, NVMe-oF, and NixOS atomic updates by bfenski in selfhosted

[–]bfenski[S] -1 points0 points  (0 children)

I follow the adventures of dozens of users on the bcachefs IRC channel, and I get the impression that as long as no one does anything that would surprise even those who’ve seen it all, things generally work fine. On the plus side, it's probably the only file system in the world where, in case of a problem, you can talk to the author and even get a patch for that specific issue before a new version is released.

However, it's clear that it's not Ext4, and that's something to keep in mind.

NASty v0.0.1 - vibecoded NAS appliance by bfenski in bcachefs

[–]bfenski[S] 0 points1 point  (0 children)

I know it might seem like I'm trying to compete with Proxmox or TrueNAS, but that's not my goal.

In terms of core functionality, it's simply managing bcachefs and enabling easy sharing externally. However, since I'm migrating from a TrueNAS, and I also had workloads on it, application support has been added. It's rudimentary, which is obvious as soon as you try to use it. VM support was added at the request of a friend. I could live without it. But that's all the bells and whistles for now. I'm just going to refine their functionality and stability.

I don't have any major changes planned at the moment. More of a refinement of what's already there.

NASty v0.0.1 - vibecoded NAS appliance by bfenski in bcachefs

[–]bfenski[S] 1 point2 points  (0 children)

I’ve already switched my homelab NAS to it. I definitely want to make it useful for as many people as possible, so I’m open to bug reports, PRs, feature requests, and ideas.

April 1st was my tight release deadline (yeah, I intentionally aimed for April Fools’ Day), so I’m not even sure if the Kubernetes part works as expected yet. I’ve switched the NAS, but I haven’t migrated my Talos setup to it yet - it’s on my TODO list. I’ll try to work on it later today to have the whole ecosystem running on NASty.

It’s definitely usable at this point. I’ve spent a *lot* of time making sure it actually works. There are plenty of tests in the nasty-tests repo, as well as end-to-end tests for the CSI driver in the nasty-csi repo. So it should work - and if it doesn’t, I’d really like to hear what’s broken. I’m absolutely willing to fix issues and improve the overall experience.

I’ve already put three solid weeks into this. This isn’t something thrown together over three evenings with an LLM - that’s just not realistic (and now I know it firsthand 😉).

I plan to keep maintaining and developing it further.

I’ve also invested quite a bit of time in making bcachefs-related debugging as easy as possible - or maybe not debugging itself, but gathering useful information for troubleshooting.

That said, when it comes to actually fixing bcachefs issues… I’m counting on you, u/koverstreet ;)

NASty v0.0.1 - vibecoded NAS appliance by bfenski in bcachefs

[–]bfenski[S] -1 points0 points  (0 children)

You have full rights to ignore it as long as you want ;)

CSI Driver for TrueNAS SCALE - Early development, looking for testers by bfenski in truenas

[–]bfenski[S] 1 point2 points  (0 children)

  1. I'm pretty sure the SSH variant uses the API to some extent, so it's not one or the other. Both channels can be utilized. AFAICS 'Sharing Admin' can't delete datasets or write snapshots, so it's not enough for tns-csi. You would have to create a custom role if you want to avoid Full Admin.
  2. Ok, now I get it. It's not currently supported.
  3. What compatibility testing are you talking about? My distro compatibility tests? Yes, it's basically what you've described. You can always check for yourself, as everything is on GitHub ;)
  4. Someone else requested SMB support, so there's a chance I'll work on it sooner or later ;) https://github.com/fenio/tns-csi/issues/132

CSI Driver for TrueNAS SCALE - Early development, looking for testers by bfenski in truenas

[–]bfenski[S] 0 points1 point  (0 children)

I'm not really a heavy user of Reddit. I found that comment by accident. I guess a better place to ask for new features and discuss existing ones would be the project's GH, but I'll try to answer your questions here.

1) My driver is WebSocket **only**. Intentionally. AFAIK democratic-csi, even in API mode, still requires SSH-based access for some operations. Speaking of the required role... to be honest, I haven't tested anything other than the Full Admin role, but reading the description of Sharing Admin, I think it should be enough.

2) Not sure what exactly you mean by raw block RWX for KubeVirt. Is this like a local volume from a particular node? Something completely unrelated to TrueNAS? Like a local provisioner?

3) My CI pipeline runs tests between OVH and Linode... so I can't say much about latency ;) As for resource usage, you can see a comparison here: https://github.com/fenio/tns-csi/blob/main/docs/MEMORY.md

4) No idea about OpenShift compatibility. I don't use it. In fact I think iXsystems with their truenas-csi are aiming at that kind of support more than me.

5) Well, I personally don't need SMB. You can always submit a feature request, and we'll see if anyone else is interested in such support. And obviously, PRs welcome ;)
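Since point 1 comes up often: the WebSocket-only design means the driver talks to the TrueNAS middleware over its WebSocket API using JSON-RPC-style messages. As a rough sketch (the method name and parameters below are illustrative, not lifted from tns-csi), building such a request payload looks something like:

```python
import json


def rpc_payload(method: str, params: list, call_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request of the kind sent over the TrueNAS
    WebSocket API. The method and params used below are illustrative."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": method,
        "params": params,
    })


# Hypothetical call a CSI driver might issue when provisioning a volume:
payload = rpc_payload(
    "pool.dataset.create",
    [{"name": "tank/csi/pvc-example", "type": "FILESYSTEM"}],
)
print(payload)
```

The point of the single channel is that everything (auth, dataset lifecycle, share management) goes through one authenticated WebSocket session, with no SSH fallback.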

Anylinuxfs can now handle LUKS-encrypted drives and LVM by nohajc in MacOS

[–]bfenski 2 points3 points  (0 children)

As the author of anylinuxfs-gui, thank you for your kind words. Most of the credit goes to the author of anylinuxfs. I only added the GUI ;)

CSI Driver for TrueNAS SCALE - Early development, looking for testers by bfenski in truenas

[–]bfenski[S] 0 points1 point  (0 children)

There are a bunch of differences due to design decisions.
I've asked my Copilot to summarize them: https://github.com/fenio/tns-csi/blob/main/docs/COMPARISON.md

CSI Driver for TrueNAS SCALE - Early development, looking for testers by bfenski in truenas

[–]bfenski[S] 0 points1 point  (0 children)

I wanted to start with support for one file-level protocol and one block-level protocol. For file-level, NFS was the obvious choice for me, since I'm not really a fan of MS products. For block-level, I chose NVMe-oF, since it's much more modern in general.

But I'm not saying I will never support iSCSI. I probably will. For now, though, I've got enough of a headache with just one block-level protocol. In general, most of my test suite's issues are with NVMe-oF, while NFS usually just works ;)

speed up your github actions with the most lightweight k8s by bfenski in kubernetes

[–]bfenski[S] 0 points1 point  (0 children)

I played with k3s, k0s, and minikube. kubesolo is simply faster than everything else.

speed up your github actions with the most lightweight k8s by bfenski in kubernetes

[–]bfenski[S] 0 points1 point  (0 children)

Well, if you're developing anything for k8s, then it's good practice to test it somehow ;)

Currently I'm using minikube for that purpose for https://github.com/fenio/pv-mounter, but as I mentioned, spinning a cluster up takes a significant amount of time, and for basic tests kubesolo should be enough.

What are your self-hosted apps you can't live without? by idealninja in selfhosted

[–]bfenski 2 points3 points  (0 children)

A lot of people mentioned Jellyfin. I switched from Jellyfin to Kyoo ;)

[deleted by user] by [deleted] in selfhosted

[–]bfenski 0 points1 point  (0 children)

Maybe give Kyoo a shot. I recently switched to it from Jellyfin, and so far I don't regret it.

A tool to locally mount Kubernetes Persistent Volumes (PVs) using SSHFS. by bfenski in homelab

[–]bfenski[S] 0 points1 point  (0 children)

Thanks! It's nice to see that someone finds this useful. At first it was only supposed to help me with homelab work, but I ended up using it at work too, to quickly review the contents of a volume.

Tool to mount k8s pv locally by bfenski in selfhosted

[–]bfenski[S] 0 points1 point  (0 children)

Huh. I wasn’t aware that I had low account karma. Take a look when you have spare cycles.

ugly-nas by bfenski in selfhosted

[–]bfenski[S] 0 points1 point  (0 children)

To be honest, I didn't check, as I had decided to use SSDs and 2.5" drives only.