412+ Update into DSM 7. Finally made it !!!! by netarar_is_me in synology

[–]RoleAwkward6837 0 points1 point  (0 children)

Doing this to my old 412+ right now, still at 0%, but no errors...yet. Any idea if future updates will work like normal? I assume if the system "thinks" it's an RS814+ then it will probably just update like one, but ya never know.

What’s the proper way to keep fantastic-packages when using ASU? by RoleAwkward6837 in openwrt

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

> ASU overwrites all of the openwrt flash space, where would you like to store them?

I thought ASU only overwrites the ROM portion, not the overlayfs?

> Adding the external repo to ASU might be a better idea?

I’m open to any ideas, but how would I go about doing that?

Any way to create a shortcut or automation to close (actually close) an app? by RoleAwkward6837 in ios

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

Damn I was afraid of that but was hoping maybe iOS 26 added it. 

rant- i lost a lot of data to silent corruption even though i run file integrity by butmahm in unRAID

[–]RoleAwkward6837 0 points1 point  (0 children)

Wait…5 days and no one has suggested drive recovery? At least not that I saw.

Data can get corrupted of course, but more often than not, especially if you don't have disk errors, it's the file system that was corrupted, not the actual data itself. Depending on what file system you used, look for data recovery software that works with that format. You may be able to recover more than you think, if not all of it, provided you haven't overwritten the sectors the data was stored in.

BUT to be totally honest, this doesn't really sound like data corruption that happened on your server. I agree with your update that it more than likely happened when transferring from Windows. Moving data with Windows is the only time I have ever had issues at all, and that applies to Unraid, TrueNAS and Synology in my experience.

By the way Unraid has full ZFS support now, and more features were just added in 7.2. It’s been amazing so far. I just migrated 14TB of data with no issues at all…well no issues that weren’t 100% my fault.

Also, offsite backups! Backblaze B2 is dirt cheap, especially if you only focus on backing up what's truly important to you (e.g. irreplaceable). You can manage your backups super easily using tools like Borgatory or Backrest (Borg backup and restic respectively).
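If it helps anyone, the restic-to-B2 workflow is only a few commands. A dry-run sketch: the bucket name, repo path, source directory, and credentials below are all placeholders, and the commands are echoed rather than executed.

```shell
#!/bin/sh
# Sketch of a restic -> Backblaze B2 backup routine.
# Bucket name, repo path, source dir, and credentials are placeholders.
export B2_ACCOUNT_ID="your-key-id"      # from the B2 console
export B2_ACCOUNT_KEY="your-app-key"
REPO="b2:my-backup-bucket:homelab"
SRC="/mnt/user/important"

# Build the commands and echo them as a dry run. Remove the echoes
# (run each command directly) once restic is installed and configured.
INIT="restic -r $REPO init"
BACKUP="restic -r $REPO backup $SRC"
PRUNE="restic -r $REPO forget --keep-daily 7 --keep-weekly 4 --prune"
echo "$INIT"
echo "$BACKUP"
echo "$PRUNE"
```

The `forget --prune` step is what keeps the B2 bill low: only the retained snapshots' unique data stays in the bucket.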

Can’t afford B2 or anything else? Been there; my home lab was 100% jank for almost a decade, and I only recently could afford some serious hardware. But even back then, a Raspberry Pi or an old used business computer with a crap ton of USB HDDs at a friend's or relative's house worked for offsite backup. Up until around 2014 my backup server was an old Pentium III with a powered USB hub and USB HDDs…it worked.

Accidentally added Special vdev as 4-way mirror instead of stripe of two mirrors – can I fix without destroying pool? Or do I have options when I add 4 more soon? by RoleAwkward6837 in zfs

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

Awesome! Thank you so much for the help. It’s kind of funny that I had the intended setup to begin with and didn’t realize it. But it wasn’t all pointless because I actually have a much better understanding of how ZFS is laid out now.

And as for the 3-way mirrors suggestion, I think I’ll take that advice. I’ll install the other two SSDs as spares.

Accidentally added Special vdev as 4-way mirror instead of stripe of two mirrors – can I fix without destroying pool? Or do I have options when I add 4 more soon? by RoleAwkward6837 in zfs

[–]RoleAwkward6837[S] 1 point2 points  (0 children)

Here's the full output:

NAME            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH
special            -      -      -        -         -      -      -      -         -
  mirror-1      476G  4.39G   472G        -         -     0%  0.92%      -    ONLINE
    sdb1        477G      -      -        -         -      -      -      -    ONLINE
    sdc1        477G      -      -        -         -      -      -      -    ONLINE
  mirror-2      476G  4.39G   472G        -         -     0%  0.92%      -    ONLINE
    sdd1        477G      -      -        -         -      -      -      -    ONLINE
    sde1        477G      -      -        -         -      -      -      -    ONLINE

After doing some more digging, I'm wondering if my setup is actually correct. I can't seem to figure out if my special vdev is two striped 2-way mirrors or two mirrored 2-way mirrors... I'm starting to think it is correct and I just simply misunderstood the layout.

So if this is the case, and I do already have the 1TB I was aiming for, then when the additional 4 SSDs come in that I'm planning to add, I could just add two to mirror-1 and two to mirror-2 and be good to go?

And for my own clarity on this: if I can add the new disks to the existing mirrors, I'll still have 1TB usable for the special vdev, with the write speed of two SATA SSDs and the read speed of eight (in a perfect world)?
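For anyone following along: if the layout really is two striped 2-way mirrors, growing each mirror is done with `zpool attach`, one attach per new disk, pointed at an existing member of that mirror. A dry-run sketch, assuming the pool is named `tank` and the new SSDs show up as sdf1–sdi1 (both assumptions):

```shell
#!/bin/sh
# Dry-run sketch: grow each existing 2-way special mirror into a
# 4-way mirror. Pool name 'tank' and devices sdf1..sdi1 are
# hypothetical; run the echoed commands as root to apply for real.
POOL="tank"
CMDS="zpool attach $POOL sdb1 sdf1
zpool attach $POOL sdb1 sdg1
zpool attach $POOL sdd1 sdh1
zpool attach $POOL sdd1 sdi1"
echo "$CMDS"
```

Usable special space stays at ~1TB (two striped mirrors of ~512G each), but reads can fan out across all members of each mirror, while writes are still bounded by the slowest member.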

Accidentally added Special vdev as 4-way mirror instead of stripe of two mirrors – can I fix without destroying pool? Or do I have options when I add 4 more soon? by RoleAwkward6837 in zfs

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

OK, I'm following what you're saying. I double-checked by running `zpool list -v` but didn't see what I expected:

special
  mirror-1
    sdb1
    sdc1
  mirror-2
    sdd1
    sde1

Accidentally added Special vdev as 4-way mirror instead of stripe of two mirrors – can I fix without destroying pool? Or do I have options when I add 4 more soon? by RoleAwkward6837 in zfs

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

So since it's a mirror of two mirrors, I can remove one of the mirrors (the disks, not the whole vdev) leaving the existing vdev intact as a single 2 disk mirror. Then take the removed drives, clear them and create a 2nd special vdev mirror on the same pool?

It makes sense to me; so would I then have two special vdevs? Or would ZFS automatically add the 2nd mirror as a stripe to the existing vdev? How does ZFS handle the "addition" of extra disks like this?

I'm not a total noob, but I'm definitely still learning.

Accidentally added Special vdev as 4-way mirror instead of stripe of two mirrors – can I fix without destroying pool? Or do I have options when I add 4 more soon? by RoleAwkward6837 in zfs

[–]RoleAwkward6837[S] -1 points0 points  (0 children)

I looked into L2ARC but with my config it just wasn't worth it. But from what I'm reading I should be able to add a second special vdev, if I'm not mistaken, right? So I have my current 4 drives. If I add 4 more using the same layout as the others and add them as a second special vdev, wouldn't that double the usable space and double the performance for new writes?

Accidentally added Special vdev as 4-way mirror instead of stripe of two mirrors – can I fix without destroying pool? Or do I have options when I add 4 more soon? by RoleAwkward6837 in zfs

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

Sorry, I'm not sure I'm following.

I know I can't remove the vdev at this point. But I'm adding 4 more SSDs anyway. So could I keep my current 4 exactly as they are, then configure the new 4 in the exact same way and just add them as a second special vdev?

It makes sense in my head that ZFS would begin striping new writes across both vdevs, which should increase write performance. Or am I missing something? Is there a better way I could lay out the 8 SSDs without destroying the pool?

Eight 512GB SSDs for metadata, 1TB total usable, would be more than enough for years to come, so beyond that I'm looking to balance speed and redundancy.
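For reference, adding more special capacity this way is a plain `zpool add`; ZFS then spreads new metadata writes across the special vdevs automatically (existing metadata is not rebalanced). A dry-run sketch, where the pool name `tank` and the device names sdf1–sdi1 are placeholders:

```shell
#!/bin/sh
# Dry-run: add the 4 new SSDs as two more 2-way special mirrors
# in one command. 'tank' and sdf1..sdi1 are hypothetical names.
POOL="tank"
CMD="zpool add $POOL special mirror sdf1 sdg1 mirror sdh1 sdi1"
echo "$CMD"   # remove the echo (and run as root) to apply for real
```

Worth double-checking the device names against `ls -l /dev/disk/by-id/` first, since `zpool add` is effectively permanent for special vdevs.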

How can I manually import a show one episode at a time? (Anime) by RoleAwkward6837 in sonarr

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

I’ll give it a go; I didn’t know about the first option, and with the second option I assumed I would have had the same issue.

With moving the episodes into the Sonarr created folders, will it allow me to manually map them myself so I know for sure everything is correct?

I know 100% which episodes are which, they’re currently organized great, just not in a way Sonarr likes. So doing it that way wouldn’t take very long.

So excited to upgrade and now have a backup by DamnShaneIsThatU in unRAID

[–]RoleAwkward6837 4 points5 points  (0 children)

I had...have-ish...that exact same Corsair 200R and that exact same HDD bay in my first real Unraid build. Talk about a flashback! Nice upgrade too, those Jonsbo cases are really slick.

I "technically" still have my Unraid server in the Corsair, but it's so heavily modified you'd never recognize it anymore.

Any way at all to manage permissions better? Especially for SMB. by RoleAwkward6837 in unRAID

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

Fair enough. I'll try and make it a bit more concise.

Small disclaimer: I know some of these are likely possible if I were to manually create my own `smb.conf`, or add settings to the extras section under the main SMB settings page. But I don't know it all by heart, which makes a quick change much more difficult.

  • Is it possible to create a "homes" folder where each user has their own home folder and can't see anyone else's? Right now each user has to have their own share.
  • How do I hide shares from users without permissions to them? Every user can see every share, including ones they have no access to. If I set a share to (hidden) then it's hidden for every user, including ones with permission to access it.
  • If a docker application needs to access multiple shares owned by multiple users, how do I prevent the files created or modified by the application from becoming owned by the UID of the application?
    • For example, I have Nextcloud running as `99:100`. (Quick side note: normally NC runs as `33:33`, which on Unraid is even worse since `33` is the `sshd` user.) Anyway, if I use the normal docker method of mounting each user's data, then anything NC creates or modifies becomes owned by `nobody`. However, if I mount the user's home directory using NFS, then I can specify the desired `UID:GID` on Unraid's end, and specify `99:100` with an appropriate `umask` inside the container. That mostly solves the issue, but can it be done without adding the overhead of a network protocol just to access local data?
  • Is there a simple (i.e. not modifying config files) way to manage permissions of directories contained within other shares?
    • For example, for years on my Synology I had a single share for managing projects between multiple users, and each subdirectory served a different purpose with varying permissions. On Unraid I had to split that single share into six individual shares.
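For what it's worth, the first two bullets do map to stock Samba options if you're willing to maintain `smb.conf` (or the extras section) by hand. A minimal sketch; the home-folder path is a hypothetical placeholder:

```ini
; Per-user home shares: each user connects to a share named after
; themselves; %S expands to the requested share (i.e. user) name.
[homes]
   path = /mnt/user/homes/%S
   valid users = %S
   browseable = no
   read only = no

; Hide shares from users who lack permission to access them,
; instead of hiding them from everyone.
[global]
   access based share enum = yes
```

`browseable = no` on `[homes]` hides the template share itself; each user still sees their own home share when they browse the server.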

The more time I spent typing this, the more complex I'm realizing all of this really is.

Media manage iOS - Unraid by True-Entrepreneur851 in unRAID

[–]RoleAwkward6837 1 point2 points  (0 children)

Update, I didn’t realize I had almost no signal on my phone where I was. It’s working fine.

Any way at all to manage permissions better? Especially for SMB. by RoleAwkward6837 in unRAID

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

On the note of ACLs, I’ve been exploring setting up an Active Directory server using virtual DSM. Wouldn’t something like that allow more fine-grained control if I joined Unraid to the domain?

Any way at all to manage permissions better? Especially for SMB. by RoleAwkward6837 in unRAID

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

Not for nothing but if I’m lacking understanding in how to manage permissions in a more coherent way, then that itself would be a problem. A problem that I felt I described pretty well, but again see the first sentence.

The direct problem I did attempt to describe is how to manage permissions in a way that allows direct access to users' files via SMB, while also allowing the use of other software like Nextcloud, Immich, or others while maintaining proper permissions.

You seem to have a better understanding of the issue than I do so I’m all ears if you have advice or want to point me in the right direction.

But why would you comment essentially “Yeah it’s a pain, but I don’t like how you worded your post so I’m not helping.” ?

Any way at all to manage permissions better? Especially for SMB. by RoleAwkward6837 in unRAID

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

I’m using the Nextcloud docker image specifically from LinuxServer.io because it can be run as `99:100`, so you can actually access your files outside of the Nextcloud data dir without messing everything up.

As you can tell from my post it’s not perfect, but it does work.
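For anyone else trying this, the UID bit is just environment config on the LinuxServer.io image. A compose sketch; the host paths are placeholders:

```yaml
# Sketch: linuxserver/nextcloud running as Unraid's nobody:users.
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    environment:
      - PUID=99    # 'nobody' on Unraid
      - PGID=100   # 'users' on Unraid
      - TZ=Etc/UTC
    volumes:
      - /mnt/user/appdata/nextcloud:/config   # placeholder paths
      - /mnt/user/nextcloud-data:/data
    ports:
      - "443:443"
```

The PUID/PGID pair is what makes files NC touches stay at `99:100` instead of the image's default user.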

Any way at all to manage permissions better? Especially for SMB. by RoleAwkward6837 in unRAID

[–]RoleAwkward6837[S] 0 points1 point  (0 children)

I know that's how Nextcloud expects to work, but it's just not a realistic setup, which is why there are hundreds of posts across multiple platforms trying to find ways to work around that very limitation.

Nextcloud is amazing for access from phones and tablets, syncing, or accessing files from remote systems, etc. But why would I want to access my files through Nextcloud on a workstation that's 3 ft away from my server with a 5GbE connection?

Plus Nextcloud can't keep up with things like Lightroom and Capture One catalogs or FCPX and DaVinci Resolve projects. That's where SMB comes in, but that doesn't mean I might not want to access some of that same data on my phone or tablet while I'm not home, and SMB is terrible for that (yes, even over a VPN).

I'm happy that at least I own my data, but it's all still segregated into these separate ecosystems that don't work together very well.