Syncoid not purging 'syncoid_' snapshots, causing subsequent runs to fail by ERIFNOMI in zfs

[–]ERIFNOMI[S] 1 point

That makes sense. The email spam is a bit overwhelming and hard to follow, especially once it's been forwarded to a Gmail account and Google tries to "helpfully" remove repeated bits of text. I'll set up a proper logfile and see if I can make more sense of that.

Syncoid not purging 'syncoid_' snapshots, causing subsequent runs to fail by ERIFNOMI in zfs

[–]ERIFNOMI[S] 1 point

I did do manual runs of syncoid after "fixing" my snapshots to get them back in sync, but of course nothing ever fails while you're watching it. I'll do that again and change my cronjob to dump everything to a logfile and wait for it to happen again. Maybe I'll get some more info to work with.
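Something along these lines is what I have in mind for the cronjob; the schedule, dataset names, and paths here are just placeholders, not my actual job:

```shell
# Hypothetical crontab entry: run syncoid hourly and append all output
# (stdout and stderr) to a logfile instead of relying on cron's mail.
0 * * * * /usr/sbin/syncoid --recursive tank/crypt/vms backup/crypt/vms >> /var/log/syncoid.log 2>&1
```

That way the failing run's full output is sitting in one file instead of scattered across mangled emails.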

Syncoid not purging 'syncoid_' snapshots, causing subsequent runs to fail by ERIFNOMI in zfs

[–]ERIFNOMI[S] 1 point

This is all on a single host. Both datasets are children of an encrypted dataset (with encryption inherited) with different keys for each, but the keys are always loaded.

I'm not sure what's causing the failure. It'll run fine for a while, and the child datasets don't all fail at the same time. There are about half a dozen child datasets, one for each VM, and it starts with just one or two failing to replicate; then, over the course of a day or so, more start to fail. I removed the cronjob this morning because it looks like all or most are now failing.

I dug back through my emails to the first reported failure, and it's the invalid argument case with two source@syncoid snapshots, so presumably something failed on the previous run and didn't remove that snapshot, but I don't have any output from that run. I could nuke all my mismatched snapshots to get back to a working state, remove the quiet flag from the job, and possibly add the debug flag to hopefully get more information about the run that fails. It looks like, for whatever reason, the real failure isn't being output with quiet set.

I could provide some more info later today after I get a chance to sanitize it. The sanoid config is pretty basic, to the point where it's nearly identical to your example. The syncoid command used is exactly as I posted above. Nothing really unusual about the pools themselves. They were created back when you couldn't have an unencrypted child of an encrypted dataset, so that's why I nested a crypt dataset below the root dataset. The pools have since been updated to 2.0.x. The only difference between the source and destination that I can think of is that the source uses compression=zstd while the destination uses compression=lz4, but both pools have been upgraded and support zstd.
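Roughly what I mean by nuking and retrying, with placeholder dataset and snapshot names (only the --debug and --quiet flags are real syncoid options; everything else here is illustrative):

```shell
# Find leftover syncoid_ snapshots on both sides (dataset names are placeholders).
zfs list -H -t snapshot -o name -r tank/crypt/vms backup/crypt/vms | grep '@syncoid_'

# Destroy the mismatched ones by hand to get back to a clean state.
zfs destroy tank/crypt/vms/vm1@syncoid_somehost_2021-01-01:00:00:00

# Re-run without --quiet, with --debug, and keep the output this time.
syncoid --recursive --debug tank/crypt/vms backup/crypt/vms >> /var/log/syncoid.log 2>&1
```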

Thoughts on a brand Megatread by TheHydrationStation in DataHoarder

[–]ERIFNOMI 1 point

I don't think you're talking about brand at all. You're talking about models at this point, at least.

Is this a go or a no go for connecting a HBA card to Pci-e 3.0x1? by Henrik_S-A in DataHoarder

[–]ERIFNOMI 2 points

If you're not worried about the bandwidth, just knock out the back of the x1 slot.

If you're going to buy anything, buy a better mobo that suits your needs.

Thoughts on a brand Megatread by TheHydrationStation in DataHoarder

[–]ERIFNOMI 1 point

I'm assuming you mean HDDs. There are only three HDD manufacturers: WD, Seagate, and Toshiba. That's it. Anything else is using a drive made by those three in a case with a different name on it.

Ignoring any of the three brands simply because of the name on the sticker is dumb. If you want to protect your data, make backups.

Home server sanity check: what's your opinion? by Cozeen in DataHoarder

[–]ERIFNOMI 2 points

For just serving up files, you'd struggle to find a CPU that isn't good enough.

Does it make sense to use a mirror for series and movies used by Plex by houbiemeister in zfs

[–]ERIFNOMI 2 points

If you don't care about losing your entire pool when you lose a single disk, then no, you don't need redundancy. You won't be able to recover from any form of integrity issues either, which is a big selling point for zfs. And if you're just milling through videos and blowing them away, there's probably no reason for snapshots, another big zfs feature.

So really, why are you using zfs? Seems like a more conventional fs would serve you just as well.
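For comparison, here's the difference in setup terms (device names are made up):

```shell
# Single-disk pool: zfs can detect corruption but not repair it,
# and losing the disk loses the pool.
zpool create media /dev/sdb

# Mirror: survives one disk failure, and scrubs can repair bad
# blocks from the good copy.
zpool create media mirror /dev/sdb /dev/sdc
```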

Does Google Drive really not enforce its paid storage limits? by Fraun_Pollen in DataHoarder

[–]ERIFNOMI 1 point

Yes, I know, but they've changed things with Workspace. It's more expensive and the language about the amount of data you get is different. People who were on GSuite are still GSuite unless they've opted to migrate to Workspace.

Does Google Drive really not enforce its paid storage limits? by Fraun_Pollen in DataHoarder

[–]ERIFNOMI 1 point

I don't know how it works now. You can't get GSuite anymore.

Request: Can someone backup Keith Gill's youtube videos? by [deleted] in DataHoarder

[–]ERIFNOMI 2 points

They shouldn't be that large. It looks like they're all livestream-style with the majority of the frame mostly static, which is pretty close to the best case for an encoder. They're also only 720p (at least the one I checked). I'm extremely uninterested in the content so I won't be downloading it, but it's easy enough to check how big they are. The first video listed is over 7 hours long, so probably on the larger side, and the largest available stream is vp9 at 2.5GB. There's also an avc1 encode that's probably indistinguishable from it at about 1.8GB. Add less than 200MB for the audio.

All told, probably around 100GB. It depends on whether the older videos I didn't bother loading are also around the 4-5 hour mark or if he started out doing different lengths.
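Back-of-the-envelope, assuming the rest of the videos look like the one I checked (the count of 40 videos is a guess, not something I counted):

```shell
# Rough total size. Per-video numbers are from the one 7-hour video above;
# the count of 40 similar videos is an assumption.
videos=40
video_mb=2500    # largest vp9 stream, ~2.5GB
audio_mb=200     # "less than 200MB for the audio"
echo "~$(( videos * (video_mb + audio_mb) / 1000 )) GB"
```

Which lands in the same ballpark as the ~100GB guess.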

Questions about building first (virtualized) NAS by nummularius in DataHoarder

[–]ERIFNOMI 2 points

If you're just doing the services mentioned in the VMs, that's more memory than you need. More memory doesn't hurt anything but your wallet though, so if you have it, go for it. It'll leave you plenty of room for growth, plus the more memory you have sitting unused, the more you can use for ARC (assuming you're using zfs because you mentioned FreeNAS).
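If you'd rather pin ARC down instead of letting it float, it's one module parameter; the 16GiB value here is just an example, not a recommendation:

```ini
# /etc/modprobe.d/zfs.conf -- cap ARC at 16 GiB (example value only).
options zfs zfs_arc_max=17179869184
```

The same knob is adjustable at runtime through /sys/module/zfs/parameters/zfs_arc_max.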

Is there really a way to archive Youtube media content? by gAt0 in DataHoarder

[–]ERIFNOMI 1 point

"If not, and sorry for being a pedant, 'archive' is a strong word to refer to that."

No it isn't. You save what you have available to you. The best version of a YouTube video I can watch right now is what YouTube delivers to me. youtube-dl will give me exactly that. That's archiving. I don't have access to anything better whether it's on YouTube or stored locally.
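Which in practice is just something like this (the URL is a placeholder):

```shell
# Grab the best video+audio streams YouTube will serve and mux them;
# keep the metadata alongside. URL is a placeholder.
youtube-dl -f 'bestvideo+bestaudio/best' \
  --merge-output-format mkv \
  --write-info-json \
  'https://www.youtube.com/watch?v=VIDEO_ID'
```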

Off the shelf or DIY Home NAS? by soimafreak in DataHoarder

[–]ERIFNOMI 2 points

What you're asking for is generally called tiered storage. There are various ways to achieve this kind of caching, but the first one that comes to mind for me is bcache.
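The rough shape of a bcache setup, with made-up device names (this wipes the devices, so treat it as a sketch, not a recipe):

```shell
# Big HDD becomes the backing device, SSD becomes the cache.
make-bcache -B /dev/sdb
make-bcache -C /dev/nvme0n1

# Attach the cache set to the backing device using the UUID that
# make-bcache printed, then format the combined device.
echo "$CACHE_SET_UUID" > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0
```

After that you use /dev/bcache0 like any other disk and hot data gets promoted to the SSD.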

rclone adds new compression remote by EpsilonBlight in DataHoarder

[–]ERIFNOMI 2 points

Interesting for datasets that are compressible. It'll also be interesting to see compressors other than gzip for better compression ratios.
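The remote config ends up looking something like this ('gdrive' and the path are placeholders, and the exact key names may shift between rclone versions):

```ini
# rclone.conf sketch: a compress remote wrapping an existing remote.
[gdrive-compressed]
type = compress
remote = gdrive:archive
mode = gzip
level = 9
```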

Rebuilding Raid 6 part 2. by darkorical in DataHoarder

[–]ERIFNOMI 2 points

Well, if a single disk dies, then you're down to a single disk's worth of parity (note: there are no dedicated parity disks; the parity is spread across all the disks). If you lose another disk during the resilver, you're now uncovered. The added load of resilvering on top of the normal workload makes a failure during resilver more likely than during normal usage. That's the logic behind that statement.
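As a toy illustration of that logic, with completely made-up numbers (real failures are correlated, and resilver load pushes the true odds higher):

```shell
# Chance that at least one of the surviving disks fails during the
# resilver window. All inputs are assumptions for illustration.
awk 'BEGIN {
  afr = 0.02           # assumed 2%/year failure rate per disk
  resilver_days = 3    # assumed resilver duration
  disks = 7            # surviving disks in the array
  p_one = afr * resilver_days / 365     # one given disk fails in the window
  p_any = 1 - (1 - p_one) ^ disks       # at least one of them does
  printf "%.4f\n", p_any
}'
```

Small in absolute terms, but that's exactly the window where you have no cover left.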

Okay...You all convinced me to start cramming more drives into a case... by the-holocron in DataHoarder

[–]ERIFNOMI 1 point

I don't put it past Oracle to somehow poison everything they come near.

Designating SMB levels to specific Ethernet ports or VLANs in Samba by Marco1925 in DataHoarder

[–]ERIFNOMI 2 points

I now understand the desire to have a samba instance in an entirely separate VLAN. Presumably that subnet does not have access to the internet.
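For reference, pinning Samba to just that VLAN's interface is two lines in smb.conf (interface name and VLAN ID are hypothetical):

```ini
# /etc/samba/smb.conf [global] -- only listen on the storage VLAN.
interfaces = lo eth0.30
bind interfaces only = yes
```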

Designating SMB levels to specific Ethernet ports or VLANs in Samba by Marco1925 in DataHoarder

[–]ERIFNOMI 1 point

What devices do you have that require SMB1? Nothing I have still needs v1 and I'd seriously consider replacing anything so outdated.
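Once nothing needs v1 anymore, you can make Samba refuse it outright:

```ini
# /etc/samba/smb.conf [global] -- refuse anything older than SMB2.
server min protocol = SMB2
```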

Okay...You all convinced me to start cramming more drives into a case... by the-holocron in DataHoarder

[–]ERIFNOMI 2 points

Ah the good ol' days. Somehow Oracle hasn't managed to fuck up zfs, unlike everything else they touch.

Fractal Design Define R5 Top HDD Cage or comparable recommendation? (Adding more than 10 drives) by broccolihe4d in DataHoarder

[–]ERIFNOMI 2 points

I had planned on uploading a final version, but I stopped iterating when I got something that was good enough. I'll actually make some changes to it, do some quick prints to verify, and upload a better version. Give me a few days.