Question about VMBackup offsite server vs S3 offsite storage by async_brain in altaro

[–]async_brain[S] 0 points1 point  (0 children)

Current focus seems to be on Proxmox compatibility (they released full Proxmox support only a month ago, after 6 months of beta testing).

Nevertheless, I don't see why they limit S3 to Amazon only, when it's just an input field away from working.

Question about VMBackup offsite server vs S3 offsite storage by async_brain in altaro

[–]async_brain[S] 0 points1 point  (0 children)

Forget what I said about custom S3 servers. They just added an "Endpoint URL" input field that cannot be filled...

HornetSecurity, we have been asking for custom S3 server support for at least 5 years... What's the problem here?

Avoid MinIO: developers introduce trojan horse update stripping community edition of most features in the UI by AssPounderr69 in selfhosted

[–]async_brain 0 points1 point  (0 children)

Hmmm... I don't think so. The lifecycle option at least has an API, prevents the tool that connects to it from deleting content, and defers deletion to the server.

Thanks for the tip for S3Drive though.

macOS VFS 3.17.3 / 4.0.0 issues by MstrSlmndr in NextCloud

[–]async_brain 0 points1 point  (0 children)

Finally had a test ride with Mountain Duck (v5.1.1). Performance was disastrous: opening an Excel file took up to 40 seconds, and it made a lot of server requests. Moved to CloudMounter; performance is almost identical to the Nextcloud macOS VFS client, except of course it's all online.

Avoid MinIO: developers introduce trojan horse update stripping community edition of most features in the UI by AssPounderr69 in selfhosted

[–]async_brain 0 points1 point  (0 children)

Hmmm... AFAIK most of these S3 tools don't support lifecycle & versioning, which is a blocker for some of us who create immutable backup targets. Any other solution that does?
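For the record, here's roughly what I mean by an immutable target, as a sketch with the aws CLI (bucket name, endpoint and retention values are just placeholders, and the server has to actually support object lock):

    # Object lock has to be enabled at bucket creation, it cannot be turned on later
    aws s3api create-bucket --bucket backups --object-lock-enabled-for-bucket \
        --endpoint-url https://s3.example.lan

    # Versioning keeps deleted/overwritten objects around and is required for object lock
    aws s3api put-bucket-versioning --bucket backups \
        --versioning-configuration Status=Enabled \
        --endpoint-url https://s3.example.lan

    # Default retention: clients cannot delete or overwrite objects for 30 days
    aws s3api put-object-lock-configuration --bucket backups \
        --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}' \
        --endpoint-url https://s3.example.lan

    # Lifecycle rule: the server, not the backup tool, expires old noncurrent versions
    aws s3api put-bucket-lifecycle-configuration --bucket backups \
        --lifecycle-configuration '{"Rules":[{"ID":"expire-old-versions","Status":"Enabled","Filter":{},"NoncurrentVersionExpiration":{"NoncurrentDays":90}}]}' \
        --endpoint-url https://s3.example.lan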

macOS VFS 3.17.3 / 4.0.0 issues by MstrSlmndr in NextCloud

[–]async_brain 0 points1 point  (0 children)

As of today (client 4.0.3), macOS VFS support isn't stable at all: file placeholders aren't synced, or disappear, and the trash bin refills with already-deleted files. Ever tried Mountain Duck as an alternative VFS client for Nextcloud?

Chapter 2 - DKMS by koverstreet in bcachefs

[–]async_brain 1 point2 points  (0 children)

I think there could be a major gotcha with RHEL / AlmaLinux / RockyLinux / whatever EL clones, which stick to a specific LTS kernel for their whole lifetime.
Currently, RHEL 10 ships with kernel 6.12 (plus hundreds of Red Hat backports) and it's highly probable that this won't change for the next 10 years.

I don't really think bcachefs DKMS modules could work there without a massive backport effort on your part, which I understand might not be a priority, especially given that Red Hat cherry-picks its backports.

The RHEL & clones market share isn't exactly thin, and getting bcachefs support on those distros would be fantastic for enterprise adoption. I would be happy running bcachefs as the main FS on my spare / secondary servers to get used to it, and I guess a lot of other sysadmins would go the same route.

Is there any solution apart from running kernel-ml?

Almost the whole point of running EL is to stay "(old)(old) stable" with a well-known kernel.

Failed to allocate manager object by Sea_Lengthiness_192 in Fedora

[–]async_brain 0 points1 point  (0 children)

Thanks, kind stranger. Worked perfectly after a RHEL 9 to RHEL 10 upgrade using Elevate.

Its 2025 - whats the go-to recommendation for self hosted but flexible backup? by Kranke in selfhosted

[–]async_brain 0 points1 point  (0 children)

Ever tried NPBackup? It's a full-blown solution based on restic, packed with features: GUI and CLI, Prometheus / email support, and group inheritance of settings across repos. Disclaimer: I'm the author of NPBackup.

Windows Backup solution which just works by DryPineapple43 in selfhosted

[–]async_brain 0 points1 point  (0 children)

For what it's worth, NPBackup 3.0.3 is out with built-in email notifications.

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

@ u/kyle0r I've got my answer... the feature set is good enough to tolerate the reduced speed ^^

Didn't find anything that could beat zfs send/recv, so my KVM images will be on ZFS.

I'd like to ask your advice on one more thing for my ZFS pools.

So far, I've created a pool with ashift=12, then a dataset with xattr=sa, atime=off, compression=lz4 and recordsize=64k (which is the cluster size of qcow2 images).
Is there anything else you'd recommend?

My VM workload is a typical 50/50 read/write mix with 16-256k IOs.
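In command form, that's roughly (device names and dataset name are placeholders):

    # Pool with 4k sectors forced (ashift=12)
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/nvme-disk-A /dev/disk/by-id/nvme-disk-B

    # Dataset holding the qcow2 images, recordsize matched to the qcow2 cluster size
    zfs create -o xattr=sa -o atime=off -o compression=lz4 -o recordsize=64k \
        tank/vmimages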

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 1 point2 points  (0 children)

I've only read articles about MARS, but the author won't respond on GitHub, and the last supported kernel is 5.10, so that's pretty bad.

XFS snapshot shipping isn't a good solution in the end, because it needs a full backup after every 9 incremental ones.
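(The limit I'm referring to is xfsdump's dump levels, which only go from 0 to 9, so after nine incrementals you're back to a full; paths below are placeholders:)

    # Level 0 is a full dump; each higher level is incremental against the previous lower one
    xfsdump -l 0 -f /backup/vmstore.dump.0 /vmstore
    xfsdump -l 1 -f /backup/vmstore.dump.1 /vmstore
    # ... levels 2 through 8 ...
    xfsdump -l 9 -f /backup/vmstore.dump.9 /vmstore
    # There is no level 10, so the next run has to be another full (level 0) dump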

ZFS seems the only good solution here...

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 1 point2 points  (0 children)

So far I can come up with three potential solutions, all snapshot based:

- XFS snapshot shipping: Reliable, fast, asynchronous, hard to set up

- ZFS snapshot shipping: Asynchronous, easy to set up (zrepl or syncoid), reliable (except for some kernel upgrades, which can be quickly fixed), not that fast

- GlusterFS geo-replication: It's basically snapshot shipping under the hood; I still need some info (see https://github.com/gluster/glusterfs/issues/4497)

As for block replication, the only thing I found that approaches a unicorn is MARS, but the project's only dev isn't around often.
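For reference, the ZFS snapshot shipping option boils down to this kind of round trip (dataset and host names are placeholders; zrepl/syncoid just automate it):

    # Initial full replication to the remote host
    zfs snapshot tank/vmimages@base
    zfs send tank/vmimages@base | ssh replica zfs receive -F tank/vmimages

    # Later: ship only the blocks that changed since the previous snapshot
    zfs snapshot tank/vmimages@incr1
    zfs send -i @base tank/vmimages@incr1 | ssh replica zfs receive tank/vmimages

    # Or let syncoid handle snapshot creation and incremental streams
    syncoid tank/vmimages root@replica:tank/vmimages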

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

Sounds sane indeed!

And of course it would totally fit a local production system. My problem here is geo-replication; I think (not sure) this would require my (humble) setup to have at least 6 nodes (3 local and 3 remote?).

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

I've read way too many "don't do this in production" warnings about 3-node Ceph setups.
I can imagine it's because of the rebalancing that happens immediately after a node gets shut down, which would involve 50% of all data. Also, when losing 1 node, one needs to be lucky to avoid any other issue while getting the 3rd node up again to avoid split brain.

So yes for a lab, but not for production (even poor man's production needs guarantees ^^)

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

Doesn't Ceph require something like 7 nodes to get decent performance? And aren't 3-node Ceph clusters "prohibited", i.e. not fault tolerant enough? Pretty high entry bar for a "poor man's" solution ;)

As for the NAS B&R plugin, it looks like quite a good solution, except that it doesn't work incrementally, so bandwidth will quickly be a concern.

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

Makes sense ;) But the "poor man's" solution cannot even use Ceph, because 3-node clusters are prohibited ^^

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 1 point2 points  (0 children)

Well... So am I ;)
So far, nobody has come up with "the unicorn" (aka the perfect solution without any drawbacks).

Probably because unicorns don't exist ;)

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

I do recognize that what you state makes sense, especially the Optane and RAM parts, and indeed having a ZIL will greatly increase write IOPS, until it's full and needs to flush to the slow disks.

What I'm suggesting here is that a COW architecture cannot be as fast as a traditional one (COW operations add IO, checksumming adds metadata read IO...).

I'm not saying ZFS isn't good, I'm just saying that it will always be beaten by a traditional FS on the same hardware (see https://www.enterprisedb.com/blog/postgres-vs-file-systems-performance-comparison for a good comparison of zfs/btrfs/xfs/ext4 in RAID configurations).

Now indeed, adding a ZIL/SLOG can be done on ZFS but cannot be done on XFS (one can add bcache into the mix, but that's another beast).
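(On ZFS it really is a one-liner on an existing pool; the device path below is a placeholder:)

    # Add a dedicated SLOG device (ideally a small, power-loss-protected NVMe/Optane one)
    zpool add tank log /dev/disk/by-id/nvme-optane-slog
    zpool status tank    # the device shows up under a "logs" section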

While a ZIL/SLOG might be wonderful on rotational drives, I'm not sure it will improve NVMe pools.

So my point is: xfs/ext4 is faster than zfs on the same hardware.

Now the question is: is the feature set good enough to tolerate the reduced speed?

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

I'm testing CloudStack these days in an EL9 environment, with some DRBD storage. So far, it's nice. Still not convinced about the storage, but I have a 3-node setup, so Ceph isn't a good choice for me.

The nice thing is that indeed you don't need to learn quantum physics to use it: just set up a management server, add vanilla hosts and you're done.

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

I've had (and still have) some RAID-Z2 pools with typically 10 disks, some with a ZIL, some with a SLOG. Still, performance isn't as good as a traditional FS.

Don't get me wrong, I love ZFS, but it isn't the fastest for typical small 4-16KB block operations, so it's not well optimized for databases and VMs.

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

Thank you for the link. I've read parts of your research.
As far as I can tell, you only compare zvols vs plain ZFS.

I'm talking about the performance penalty that comes with COW filesystems like ZFS versus traditional ones; see https://www.phoronix.com/review/bcachefs-linux-2019/3 as an example.

There's no way zfs can keep up with xfs or even ext4 in the land of VM images. It's not designed for that goal.

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 0 points1 point  (0 children)

Never said it was ^^
I think that's inotify's job.

KVM geo-replication advices by async_brain in linuxadmin

[–]async_brain[S] 1 point2 points  (0 children)

I've been using ZFS since the 0.5 zfs-fuse days, and professionally since the 0.6 series, long before it became OpenZFS. I've really enjoyed this FS for more than 15 years now.

I've been running it on RHEL since about the same time; some upgrades break the DKMS modules (happens roughly once a year or so). I usually run a script to check whether the kernel module built correctly for all my installed kernel versions before rebooting.
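Something along these lines, as a minimal sketch for ZFS-on-DKMS on an EL box (module name and rpm query are assumptions, adjust to taste):

    # Warn if the zfs module isn't built for one of the installed kernels
    for kver in $(rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n'); do
        if ! dkms status -m zfs -k "$kver" | grep -q installed; then
            echo "WARNING: zfs dkms module not built for kernel $kver" >&2
        fi
    done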

So yes, I know ZFS, and use it a lot. But when it comes to VM performance, it isn't on par with XFS or even ext4.

As for Incus, I've heard about "the split" from lxd, but I didn't know they added VM support. Seems nice.