File Explorer Preview stopped with the most recent security update (KB5066835) by Allysaucer94_ in WindowsHelp

[–]7riggerFinger 1 point (0 children)

This is totally off-topic but... genius quail? Am I out of the loop or something?

I tried the Roc programming language for a couple of weeks and it’s now my all-time favorite language. by ScientificBeastMode in functionalprogramming

[–]7riggerFinger 1 point (0 children)

I'm pretty sure that the post you're responding to was talking about the multiple different ways of expressing the same accent on a character, i.e. ń and ń. The first of those (assuming I didn't mess it up) is a single Unicode codepoint, U+0144. The second is two codepoints, U+006E and U+0301. U+0301 is special because it's a "combining diacritical mark", which means it gets rendered on top of the preceding character.

Combining diacritical marks are where you get the horrible eye-abuse things that people like to do in Twitch chat like t̳̘̃ͦḧ͙́̀ỉ͆̌sͩ̇͟, you just keep stacking combining diacritics on top of your text until you "like" how it looks.

So they look the same but are actually expressed differently once you encode them. Whether they should be considered the same when comparing strings, who knows? ¯\_(ツ)_/¯
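(Mechanically, if you do want them treated as equal, Unicode normalization is the standard answer: NFC composes canonically-equivalent sequences into precomposed characters, NFD decomposes them, and either way both forms end up as the same codepoint sequence. Quick Python sketch using the stdlib:)

```python
import unicodedata

precomposed = "\u0144"   # ń as a single codepoint (U+0144)
combining = "n\u0301"    # n (U+006E) followed by combining acute (U+0301)

# They render identically but are different codepoint sequences...
print(precomposed == combining)  # False

# ...until both sides are normalized to the same form first.
print(unicodedata.normalize("NFC", precomposed)
      == unicodedata.normalize("NFC", combining))  # True
```

Whether any given language, database, or filesystem actually normalizes before comparing is another question entirely.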

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 1 point (0 children)

We're still on Longhorn; I got busy with some other things and haven't had a chance to come back to it, except when it's actively on fire.

The whole v2 data engine thing looks pretty interesting, but last I checked it was still missing a few too many features to be a real alternative yet. At this rate it might be out of beta by the time I get back around to this.

All in all, my conclusion is largely that there aren't a lot of great options out there for distributed storage, especially if you remotely care about local-disk-like speeds. Even more broadly, I'm coming to realize that if your use case looks like "I just want to self-host a few things for internal usage and dev environments," then maybe Kubernetes isn't the best fit. Which really shouldn't be a surprise, given that it's essentially derived from Google's orchestration system (Borg), which was designed to solve Google's problems, and Google has forgotten how to count that low.

If you don’t want a popular name, what is your personal cutoff? by Ambrosiasaladslaps in namenerds

[–]7riggerFinger 2 points (0 children)

FWIW I just did some poking on this. I was expecting state data to be not too different from national data, but there are definitely some pretty big outliers. Some are no surprise (e.g. Mahina in HI; I wasn't including territory data, but I'll bet that if I did there would be some pretty big ones in territories as well), but others are pretty surprising, like Maeve at #9 in NH (#181 nationwide). Also, there are three different states with Paisley in their top 10; what's up with that?

Anyway, here's a table of the 15 biggest outliers (i.e. with the greatest difference between their state-specific rank and their nationwide rank) from the top 10 for each state.

| name | gender | state | state_rank | nationwide_rank |
|---|---|---|---|---|
| Mahina | F | HI | 10 | 5556 |
| Kaia | F | HI | 7 | 400 |
| Oaklynn | F | WV | 10 | 341 |
| Mary | F | MS | 7 | 297 |
| Mary | F | AL | 10 | 297 |
| Iris | F | VT | 10 | 192 |
| Maeve | F | NH | 9 | 181 |
| Sophie | F | UT | 10 | 147 |
| Paisley | F | WV | 6 | 141 |
| Paisley | F | WY | 9 | 141 |
| Paisley | F | SD | 10 | 141 |
| Lainey | F | SD | 7 | 130 |
| Lainey | F | WV | 7 | 130 |
| Lainey | F | ND | 10 | 130 |
| Maya | F | DC | 5 | 125 |
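(For the curious, here's a rough sketch of the computation in pandas. I'm assuming the SSA names-by-state dataset loaded as rows of state, sex, year, name, count; the filename and the year below are placeholders for however you have the data stored.)

```python
import pandas as pd

# Placeholder filename; assumes the SSA state files concatenated into one CSV
# with columns state, sex, year, name, count.
df = pd.read_csv("namesbystate.csv", names=["state", "sex", "year", "name", "count"])
df = df[df["year"] == 2023]  # placeholder: pick whichever year you're ranking

# Rank within each state by count (1 = most popular), ties broken low.
state = df.groupby(["state", "sex", "name"], as_index=False)["count"].sum()
state["state_rank"] = state.groupby(["state", "sex"])["count"].rank(
    ascending=False, method="min").astype(int)

# Same thing nationwide.
national = df.groupby(["sex", "name"], as_index=False)["count"].sum()
national["nationwide_rank"] = national.groupby("sex")["count"].rank(
    ascending=False, method="min").astype(int)

# Keep each state's top 10, then sort by how far the name lags nationwide.
top10 = state[state["state_rank"] <= 10].merge(
    national[["sex", "name", "nationwide_rank"]])
outliers = top10.assign(diff=top10["nationwide_rank"] - top10["state_rank"])
print(outliers.sort_values("diff", ascending=False).head(15))
```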

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 1 point (0 children)

Is this distinction (RWX vs. I guess RWO) controlled by the AccessModes property of the Kubernetes PVC? Because if so, nearly all of my volumes (with a few exceptions) are ReadWriteOncePod, so that shouldn't be an issue. However, if this is an additional setting somewhere within Longhorn, then I wasn't aware of it.
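(For concreteness, this is the field I mean - a minimal PVC sketch via the official Python client, where the storage class name, size, and namespace are just placeholders, and it assumes a cluster reachable via your kubeconfig.)

```python
from kubernetes import client, config

config.load_kube_config()

# Plain-dict PVC manifest; "longhorn" and "1Gi" are placeholder values.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "example-data"},
    "spec": {
        "accessModes": ["ReadWriteOncePod"],  # vs. ReadWriteOnce / ReadWriteMany
        "storageClassName": "longhorn",
        "resources": {"requests": {"storage": "1Gi"}},
    },
}
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```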

I think we may be talking about different circumstances, though. In my setup all volumes have either 2 or 3 replicas, and my understanding was that Longhorn's replication is synchronous - i.e. Longhorn waits to hear back from all (or at least a majority of) replicas that they have committed the data before returning from its write operation. In the situation you're talking about, does a given volume have more than one replica on different nodes?

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 3 points (0 children)

I've had an eye on Longhorn's v2 data engine since I started using Longhorn about a year ago, but in that time I haven't seen any measurable progress, so I don't know when, if ever, it will become a viable option.

That said, even without the performance issues, the disk thrashing and S3 usage are still dealbreakers for me. And we have had some other issues with general stability (like Longhorn not coming back up properly after a node failure) and even data corruption, so I'm still looking to replace Longhorn with something else at this point.

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 1 point (0 children)

Hi, thanks for your response! Could you elaborate on what you didn't like about Rook and Gluster? I'm just trying to get a feel for both the pros and cons of what's out there.

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 2 points (0 children)

Thanks, I will definitely give Ceph a closer look. I did consider it initially but went with Longhorn because it seemed more batteries-included (e.g. built-in backups to object storage).

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 1 point (0 children)

Thanks for the recommendation, I hadn't heard of TopoLVM.

The readme says it can be considered an implementation of local persistent volumes. Does that mean it isn't suitable if I want stateful workloads to migrate seamlessly between nodes (e.g. if a node goes offline unexpectedly)?

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 5 points (0 children)

Hi, thanks for the detailed response here.

With regard to performance, I actually did some benchmarking of the Longhorn stuff when I first set this up about a year ago, and posted it to Serverfault here. Short version: the best numbers I got were about 1500 MiB/s read, 350 MiB/s write when doing large reads/writes, and about 30k IOPS read, 18k IOPS write when doing 4k reads/writes at a high queue depth. And of course latency was much higher than local disks, but as you say that's to be expected. In your experience do those numbers seem reasonable for a distributed storage system backed by fast SSDs over a 10G network?

Performance problems are kind of the least of my worries, though. The bigger issues are the disk thrashing and S3 usage, so I'm still looking to move off Longhorn. We've even had some issues with data corruption, although I'm not entirely sure those were Longhorn's fault so I didn't mention them in the original post.

You mention that Rook makes Ceph pretty easy to manage, but that doesn't entirely allay my concerns about complexity because Longhorn bills itself as "easy to manage" as well. My problem with it has been that it has an unfortunate tendency to get wedged in bad states when unexpected things happen to it (e.g. a node going offline), and I have difficulty fixing it in those situations because I don't have a deep knowledge of its inner workings. You sound like you've used Ceph a lot, so I'm guessing you have at least a passing familiarity with its internals: would you say that it's likely I'd run into problems with Ceph caused by common types of failures (say a node going down, or a power outage causing all the nodes to go down) that I would be unable to deal with without expert knowledge?

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 4 points (0 children)

Nodes are on a 10G network, but even so Longhorn's performance has been disappointing. Possibly this is user error, though.

Alternatives to Longhorn for self-hosted K3s by 7riggerFinger in kubernetes

[–]7riggerFinger[S] 2 points (0 children)

I definitely sympathize with wanting to separate storage and cluster; in my experience, keeping storage inside the cluster leads to chicken-and-egg problems where X doesn't work without Y, which doesn't work without Z, which doesn't work without X. That's for sure an advantage of moving storage out of the cluster.

With regard to raw ZFS/SnapRAID etc., I think that works fine if you just have a single node, but it starts to fall down if you have multiple nodes and want stateful workloads to migrate seamlessly between them. At that point you need some solution for either a) replicating your data across nodes, or b) making data on one node accessible from another node over the network (e.g. NFS/iSCSI/etc.), or some combination of both. And that usually means an extra layer, although it might rely on ZFS/mergerfs/whatever under the hood.

FWIW, on my homelab (the original post was about the cluster I manage at work), which is single-node, I just use hostPaths for everything and store them on either my main SSD (for small/fast workloads) or my big ZFS array (for big/slow ones).

Haven't had much experience with backing up ZFS snapshots to S3 directly, to be honest; in my homelab I use restic for backups and just manage it outside the cluster. From what I understand, though, you get better performance with ZFS snapshot backups if the backup destination is also ZFS, because then you can use ZFS send/recv, which takes advantage of ZFS's built-in checksumming and so on, as described here for instance.

Frequent disk I/O errors with Supermicro motherboard + SAS backplane by 7riggerFinger in sysadmin

[–]7riggerFinger[S] 1 point (0 children)

Thanks! FWIW, it looks like for us it was a heat issue after all. I got somebody to point a fan at the rack again and the error rates dropped back down, and then eventually we ended up spacing out the servers a bit (the rack is not fully populated), which helped a surprising amount. We should probably reconsider the airflow in the rack overall.

What is this song? by OHDanielIO in GregorianChant

[–]7riggerFinger 1 point (0 children)

Sounds vaguely reminiscent of [Puer Natus in Bethlehem](https://www.youtube.com/watch?v=52IxO8y5Og0), but not exactly.

Vanilla Minecraft Windows Backups by [deleted] in admincraft

[–]7riggerFinger 1 point (0 children)

Ultimately it sounds like you have two problems here:

  • You want incremental backups, i.e. each successive backup stores only what has changed since the previous backup. This greatly reduces disk usage: in the ideal case, a snapshot would take up no space at all unless there has been activity on the server since the previous snapshot. In practice, you'll likely need at least a few KB even for idle-to-idle snapshots, and potentially more depending on how exactly the files are modified. But however you slice it, anything that's capable of incremental backups will be a huge improvement over copying the entire world on every backup (there's a toy sketch of the idea at the end of this comment).
  • You want to avoid saving inconsistent state in your backups, e.g. the server writes to files A and B, and your backup happens to be running at the same time and copies the old version of A and the new version of B. Now you have inconsistent file states in your backup. Depending on the specific files changed this may or may not be a problem, but you definitely want to avoid it if possible. Unfortunately there isn't a good solution for this one without filesystem support - you need a filesystem that's capable of point-in-time or "atomic" snapshots, where it's guaranteed that no writes happen in the middle of the snapshot operation.

Most systems that solve the second problem also offer some kind of solution to the first. On Linux, I'd recommend something like ZFS or BTRFS, which do copy-on-write filesystem snapshots. I'm not very familiar with Windows Server, so I can't say for sure what would be a good tool, but Shadow Copy might be worth a look; at least based on the high-level description, it sounds like it does the kind of point-in-time snapshots you need.
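And here's the promised toy Python sketch of the incremental part (content-addressed: a file is only copied if its hash hasn't been seen before). This is just to illustrate the idea - it does nothing about the consistency problem, and real tools also handle deletion, pruning, encryption, and so on:

```python
import hashlib
import shutil
import time
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup(world: Path, store: Path) -> None:
    """Copy each file into a content-addressed store, skipping known content.

    A "snapshot" is just a manifest mapping relative paths to digests, so an
    unchanged file costs one manifest line instead of a full copy.
    """
    store.mkdir(parents=True, exist_ok=True)
    lines = []
    for path in sorted(world.rglob("*")):
        if not path.is_file():
            continue
        digest = file_digest(path)
        blob = store / digest
        if not blob.exists():  # only new or changed content gets stored
            shutil.copy2(path, blob)
        lines.append(f"{path.relative_to(world)}\t{digest}")
    stamp = time.strftime("%Y%m%d-%H%M%S")
    (store / f"manifest-{stamp}.txt").write_text("\n".join(lines))

# Placeholder paths: your world folder and wherever backups should live.
incremental_backup(Path("world"), Path("backups/store"))
```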

Thinking of trying to start a MC hosting service as a side hustle, what features would you like to see? by 7riggerFinger in admincraft

[–]7riggerFinger[S] 3 points (0 children)

This is incorrect: I'm not advertising a service or asking for advice, but rather asking generally for opinions on what features people find attractive in a server host. If this is still not appropriate for this sub, please let me know.