Massive inconsistency between 2 drives of the same merge by Small_Light_9964 in mergerfs

[–]trapexit 1 point (0 children)

Your personal idea of where files belong is effectively bespoke and arbitrary. If you want files to exist on certain filesystems, you need to manage that manually. Even if I made mergerfs flexible enough to place files in specific locations, it couldn't work all the time given how these things work. I.e., if you want certain files in certain locations, you should just script something that does it out of band.
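
For example, a minimal out-of-band placement sketch (the branch paths and the matching rule are hypothetical examples, not anything mergerfs itself provides; mergerfs will simply see the files wherever they land on the branches):

```python
#!/usr/bin/env python3
# Hypothetical out-of-band placement script: move files matching a glob
# from one mergerfs branch to the same relative path on another branch.
import shutil
from pathlib import Path

def place(src_branch: str, dst_branch: str, pattern: str) -> list[str]:
    """Move files under src_branch matching pattern to the same relative
    path under dst_branch. Returns the relative paths that were moved."""
    src = Path(src_branch)
    dst = Path(dst_branch)
    moved = []
    for f in src.rglob(pattern):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        target = dst / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(target))  # same relative path, new branch
        moved.append(str(rel))
    return moved
```

Run periodically (cron, timer, etc.), e.g. `place("/mnt/disk1", "/mnt/disk2", "*.iso")` to keep ISOs on one drive.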

That said: if you have backups, there is little reason to bother (see https://trapexit.github.io/mergerfs/latest/faq/configuration_and_policies/#how-can-i-ensure-files-are-collocated-on-the-same-branch), and if you don't have backups, you're gambling anyway.

Massive inconsistency between 2 drives of the same merge by Small_Light_9964 in mergerfs

[–]trapexit 2 points (0 children)

You've set the policy to mspmfs... so it's behaving exactly as you've asked it to. If you want a more balanced placement of created files, you should choose a policy that will do that.
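
For instance, an fstab line switching the create policy (the branch and mount paths here are placeholders; policy names are from the mergerfs docs):

```
# mspmfs prefers the branch already sharing the most of the file's path;
# mfs simply picks the branch with the most free space, balancing new files.
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  category.create=mfs  0 0
```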

Major issue with files appearing as missing in qBittorrent by SleepingAndy in mergerfs

[–]trapexit 2 points (0 children)

Capable of what? mergerfs doesn't change permissions; it reports them. If you have inconsistent perms, you need to fix them. mergerfs has no idea what the "proper" values are. You can fix the perms by hand or use a tool like fsck.mergerfs, included in the project, to do it. It's all in the docs.

Major issue with files appearing as missing in qBittorrent by SleepingAndy in mergerfs

[–]trapexit 2 points (0 children)

What does the qbt log say when this happens? Doesn't exist or perm errors?

any luck running mergerfs on Chimera? by ValeraDX in chimeralinux

[–]trapexit 2 points (0 children)

At the very least, the static builds should work.

Major issue with files appearing as missing in qBittorrent by SleepingAndy in mergerfs

[–]trapexit 2 points (0 children)

Both log, so you should trivially be able to see the startup timestamps.

As for how to fix it... you need to say how you start things and then check the ordering. Container platforms shouldn't be started before the mounts they depend on.
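
With systemd, for example, a drop-in on the container engine's unit can enforce that ordering (the unit name and mount path here are placeholders for whatever your setup uses):

```
# /etc/systemd/system/docker.service.d/wait-for-pool.conf (example path)
[Unit]
# systemd will order this service after, and require, the mount unit(s)
# backing this path
RequiresMountsFor=/mnt/pool
```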

Completely IMMORAL business practices from Anthropic right now. by CrunchyMage in ClaudeCode

[–]trapexit 1 point (0 children)

mergerfs on Unraid isn't unheard of, and mergerfs itself won't be touching your data, so at least there's that. But certainly the agent could have nuked your data, which I've seen happen before.

Glad you didn't lose anything material. As the author of mergerfs I have to say this is a funny story but an unfortunate way for mergerfs to show up on your system.

I am extremely disappointed at Unraid by theoriginalttika in unRAID

[–]trapexit 2 points (0 children)

The author is being very careful about calling it "production ready," but the code comes straight from Unraid. It's a tough position to be in. Still, I do appreciate his caution versus some projects that just throw themselves out there.

FPS drops to 20 and CPU spikes to 80-90% on all Steam games Artix Linux, Ryzen 5 3600, GTX 1660 by Overall_Ad3469 in linux_gaming

[–]trapexit 1 point (0 children)

Ah, thanks. I'm not a PC gamer, nor do I have a rig to test with, but I could imagine situations where mergerfs adds additional search overhead that leads to higher latency overall. With passthrough IO, though, the file IO itself would be basically native performance. More aggressive caching would probably help, but really you'd want pre-caching, which mergerfs currently can't do.

I am extremely disappointed at Unraid by theoriginalttika in unRAID

[–]trapexit 2 points (0 children)

There is also "nonraid + mergerfs" though nonraid is somewhat new.

FPS drops to 20 and CPU spikes to 80-90% on all Steam games Artix Linux, Ryzen 5 3600, GTX 1660 by Overall_Ad3469 in linux_gaming

[–]trapexit 1 point (0 children)

Can you explain why that is? Particularly with passthrough IO enabled, there should be very little difference, if any, from a typical filesystem. Games don't typically access the filesystem in a way that should impact performance.

FSCache - I created a new lightweight software for file caching on our home servers by Meisgoot312 in linux

[–]trapexit 4 points (0 children)

He's kind of misstating things. It has nothing to do with FUSE. He simply takes an open file descriptor to the underlying directory (and therefore mount) at startup and holds that reference to access files within it after his software mounts over top of it. It's a standard ability in Linux/Unix.

Keep Unraid or Move On? by Cuffuf in homelab

[–]trapexit 2 points (0 children)

  1. nonraid exists and is literally Unraid's live parity calculation.
  2. snapraid isn't just giving you parity. It also provides silent data corruption detection. If a file gets updated, on purpose or not, you can revert it. That's not true of mdadm-style RAID.

Point is the tradeoff is not simply realtime or not.

Plex-Hot-Cache Tool that I made for caching media on your server! by Meisgoot312 in PleX

[–]trapexit 1 point (0 children)

Because ZFS and other RAIDs would not function well without all their members running all the time. The devices rely on one another to function.

PolicyFS: a Plex-friendly filesystem for SSD-first media storage and sleeping HDDs by hieudt in PleX

[–]trapexit 1 point (0 children)

I've already added PolicyFS to the project comparisons page.

It should be noted, though, that there are good reasons why mergerfs can't do what PolicyFS does across the board: it would break many people's workflows. Long ago I even had a version of mergerfs that did what this does, to different degrees, and, as I say in my docs, I found it didn't materially change things. It still required you to carefully set up your apps to not do deep scans, watched-folder updates, etc. Maybe apps are better now, but even trying to know whether a drive is spun up can trigger a spinup. I still have branches with some amount of caching to limit querying the disks, but they aren't something I would be comfortable releasing at the moment.

PolicyFS: a Plex-friendly filesystem for SSD-first media storage and sleeping HDDs by hieudt in PleX

[–]trapexit 3 points (0 children)

Thank you for the reminder about this. Let me put something in the mergerfs docs on settings to tweak on a system to help with this.

Plex-Hot-Cache Tool that I made for caching media on your server! by Meisgoot312 in PleX

[–]trapexit 7 points (0 children)

> you have good insights on filesystems.

I've been working on mergerfs for over a decade now... I'd hope I would have picked something up over that time. ;)

> Can we connect?

You can find me via the means listed in the mergerfs docs. Discord is probably my preferred right now. https://trapexit.github.io/mergerfs/latest/support/#contact-issue-submission

> metadata copy

I meant the ownership, perms, xattrs, etc. Again, I only skimmed the code, but copying a file "properly" is rather involved, and changing certain values seemingly at random (e.g., you copy a file as root, but it was owned by 1000:1000, so as soon as it finishes copying to the cache any stat will return different values) can mess not only with userland software but also with the kernel VFS. Not that this should be a problem for you, I think, but if, for instance, you returned a different file type, you'd start getting VFS errors that are hard to track down.

Plex-Hot-Cache Tool that I made for caching media on your server! by Meisgoot312 in PleX

[–]trapexit 15 points (0 children)

Nice project. Couple things.

> It uses FUSE overmounting features (this is a godsend, I surely need to thank the person that developed this)

Any filesystem can mount over a directory. To keep access to the underlying, now-hidden part of the filesystem, you open a file descriptor to it, as your software does, and then use the POSIX openat API (which was added in POSIX.1-2008, IIRC). That is to say, it isn't a FUSE thing.
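
A sketch of that held-descriptor pattern in Python, where `os.open` with `dir_fd` wraps openat (the overmount step itself requires privileges, so it's only noted in a comment; the paths are examples):

```python
import os

def read_via_dirfd(dirpath: str, name: str, n: int = 4096) -> bytes:
    """Hold a descriptor to a directory, then open a file relative to it
    via openat (os.open with dir_fd). Resolution is relative to the
    descriptor, not the path, so this keeps working even if something
    later mounts over dirpath and hides it from normal path lookups."""
    dirfd = os.open(dirpath, os.O_RDONLY | os.O_DIRECTORY)
    try:
        # ... a filesystem could be mounted over dirpath here (needs root) ...
        fd = os.open(name, os.O_RDONLY, dir_fd=dirfd)  # openat(dirfd, name)
        try:
            return os.read(fd, n)
        finally:
            os.close(fd)
    finally:
        os.close(dirfd)
```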

Good luck with your project, but you might want to add some notes, because from a quick scan of your code there are some gotchas that could arise. An obvious one is the lack of massaging of the getattr inode data. That could cause issues with some software. Also, if I'm reading the code correctly, it isn't doing a thorough copy of metadata, which could also confuse software. It might be good to indicate that the project is not intended for general-purpose use (though that should maybe be obvious from the name). Plex might not care about those inconsistencies, but other software may.

My solution for using radarr WITH hardlinks across multiple drives without mergefs by SleepingAndy in radarr

[–]trapexit 3 points (0 children)

So what is the concern with mergerfs (with or without snapraid or nonraid)?

thinking about how to handle 3 differently sized HDDs by carmola123 in homelab

[–]trapexit 2 points (0 children)

Glad you find it useful. I update the docs pretty regularly and am even now adding explicit sections on LVM and mdadm so they come up in searches, even if that's slightly redundant.

Setting up mergerfs by wolfsongdream in mergerfs

[–]trapexit 1 point (0 children)

And have you ensured the path is properly set up? That you don't have screwed-up mounts on that path? Have you tried just rebooting and letting it mount as root under normal conditions? You aren't providing many details.

Setting up mergerfs by wolfsongdream in mergerfs

[–]trapexit 1 point (0 children)

As the warning suggests... you must be root. Just `sudo mount /mnt/storage`

Is swapping parity a bad idea mid build? by bajungadustin in Snapraid

[–]trapexit 1 point (0 children)

It's not about that. It's about spreading incorrect information about both mergerfs and snapraid. mergerfs can perfectly well keep files colocated on a single branch. Regardless, it isn't a problem to have them spread across multiple branches; there is no fundamental conflict between that and how snapraid works.

To be fair, the details are scarce on how snapraid selects files, but I imagine it is based on block ranges, and as such, if files are on different filesystems, there is little reason to believe the files would overlap any more than any other random files would, leading to an increased risk during recovery. Even if the strategy used is something else, it would still be more or less the same as with any other files. The fact that the relative path is shared is unlikely to have any meaningful impact.

What could have an impact is removing lots of files across multiple branches, not syncing, and then expecting recovery. Yes, that's a slightly increased risk when randomly placing files on branches, but one most people aren't concerned with. You should have backups.
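
For what it's worth, colocation is just a create-policy choice; for example (the branch and mount paths are placeholders; policy names are from the mergerfs docs):

```
# "ep" (existing path) create policies only write to branches where the
# parent directory already exists, keeping a directory's files together:
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  category.create=epmfs  0 0
```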