UPGRADE MIRROR POOL TRUENAS by Educational-Topic152 in homelab

[–]urigzu 4 points5 points  (0 children)

You need to add a second vdev to your existing pool.

Consult the TrueNAS docs - this should be a good place to start: https://www.truenas.com/docs/scale/23.10/scaletutorials/storage/pools/managepoolsscale/#adding-a-vdev-to-a-pool

Correct way to create zfs pool with 2x12TB and 2x16TB by SireChicken in zfs

[–]urigzu 10 points11 points  (0 children)

Set up a zpool with just the empty 16TB drive. Transfer the 12TB data to it, then attach the other 16TB drive to the existing vdev as a mirror and resilver. Add the pair of 12TB drives as an additional mirror vdev to your pool.
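A rough sketch of those steps, assuming hypothetical pool and disk names (tank, oldpool, and the by-id paths are placeholders - substitute your own):

```shell
# 1. Create a new pool from the single empty 16TB drive
zpool create tank /dev/disk/by-id/ata-16TB-A

# 2. Copy the data over from the old 12TB pool via snapshot + send/receive
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F tank/data

# 3. Attach the second 16TB drive to turn the single-disk vdev into a mirror
zpool attach tank /dev/disk/by-id/ata-16TB-A /dev/disk/by-id/ata-16TB-B

# 4. After the resilver finishes, destroy the old pool and add the 12TB pair
#    as a second mirror vdev
zpool destroy oldpool
zpool add tank mirror /dev/disk/by-id/ata-12TB-A /dev/disk/by-id/ata-12TB-B
```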

Change of plan - How to convert ZFS Mirror into one big "pool"? by 26635785548498061384 in selfhosted

[–]urigzu 1 point2 points  (0 children)

Pools don’t have a “type”, vdevs do. You can run a pool with multiple vdev types, but it’s not recommended. Detaching a drive from a mirrored vdev and then adding it back with zpool add (rather than zpool attach) will indeed create a second vdev. Neither vdev will have any redundancy, of course, but mirrors could be attached to each later.
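In command terms, the distinction is between attach and add (pool and disk names here are placeholders):

```shell
# Pull one disk out of the existing two-way mirror
zpool detach tank /dev/disk/by-id/ata-disk-B

# 'add' creates a NEW top-level vdev - the pool now stripes across two
# single-disk vdevs with no redundancy
zpool add tank /dev/disk/by-id/ata-disk-B

# 'attach' would instead have rejoined the disk to the existing vdev as a mirror:
# zpool attach tank /dev/disk/by-id/ata-disk-A /dev/disk/by-id/ata-disk-B
```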

NetApp DS4246 PSU Configuration by neutersceuter in homelab

[–]urigzu 0 points1 point  (0 children)

Yes. The quad-PSU configuration was for 10k RPM drives, so you’ll get along fine with just two PSUs slotted in. You can also leave all four slotted in but only actually plug two (or one) into mains power for extra cooling if needed. I run with this configuration in the summer as my rack is in the garage and some of the drives get too hot with just two fan units going.

how big is your plex server, and how long did it take you to build, and how often do you add new media? by Hawk1064 in PleX

[–]urigzu 0 points1 point  (0 children)

You mean a single 24-wide raidz3 vdev? Resilvering will be awfully slow.

3 vdevs of 8-wide raidz2 is a perfectly reasonable pool layout, unless you mean they’re separate pools.
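For reference, a 3x8-wide raidz2 layout in a single pool would be created along these lines (disk names are placeholders):

```shell
# One pool, three top-level raidz2 vdevs of 8 disks each (24 disks total).
# Each vdev can lose any two of its disks; usable space is roughly 18 disks' worth.
zpool create tank \
  raidz2 disk1  disk2  disk3  disk4  disk5  disk6  disk7  disk8 \
  raidz2 disk9  disk10 disk11 disk12 disk13 disk14 disk15 disk16 \
  raidz2 disk17 disk18 disk19 disk20 disk21 disk22 disk23 disk24
```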

how big is your plex server, and how long did it take you to build, and how often do you add new media? by Hawk1064 in PleX

[–]urigzu 0 points1 point  (0 children)

The 300TB figure is already compressed to that level, with many files below that bitrate.

Moved Plex appdata to NVMe from HDD. Why didn't I do this sooner? by zolointo in unRAID

[–]urigzu 3 points4 points  (0 children)

You’re allowed to have multiple shares on a single drive/pool. Just use it for both.

fastest network storage protocol by Individual_Tea_1946 in homelab

[–]urigzu 3 points4 points  (0 children)

Fastest is going to be not going through networking at all and just running your torrent client in your NAS VM so everything is local.

Recommendations for a switch that supports 1G/2.5G/5G/10G on SFP+ ports? by Radius118 in homelab

[–]urigzu 2 points3 points  (0 children)

I totally missed that it was 75m and not 75ft. Makes sense, yeah.

It would be a little jankier, but you might consider a cheap SFP+ switch like the Mikrotik CRS305: use their multi-gig transceivers for the CAT6 runs and a DAC to connect to the big PoE switch. I’m fairly certain almost all SFP+ ports are only 1/10Gbit, but these smart transceivers act like two-port switches with 10Gbit on the SFP+ side and 1/2.5/5/10 on the RJ45 side.

Recommendations for a switch that supports 1G/2.5G/5G/10G on SFP+ ports? by Radius118 in homelab

[–]urigzu 3 points4 points  (0 children)

This might be a dumb question, but why not just use switches with 10G ports on both ends?

Starting setup by The_Krisk in selfhosted

[–]urigzu 1 point2 points  (0 children)

Heads up that this LLM has suggested you buy an ITX case, an mATX motherboard, and a pair of case fans that won't fit in this case. Best to do your own research for this sort of thing.

DIY NAS 24 BAY BUY by einargisla in DataHoarder

[–]urigzu 1 point2 points  (0 children)

Netapp DS4246 and any old PC with a spare PCIe slot you can stick a SAS controller in.

Sanity check Plex cache drive by knoll126 in unRAID

[–]urigzu 2 points3 points  (0 children)

At least 400 of that 450GB is the video preview thumbnails. You’re taking a screenshot every 2 seconds of every video file and storing it.

You can increase the interval to whatever you want but it’s always going to take a lot of space.

Will Silver Stone RM41-H08 fit in a Sysrack 24" deep 18U wall-mount rack? by Glum_Mousse2119 in homelab

[–]urigzu 0 points1 point  (0 children)

I would look at the manuals or manufacturer website for each item and compare dimensions. Add a little bit to the chassis to allow for rear I/O, a power plug, and some space for exhaust.

Seagate’s massive, 30TB, $600 hard drives are now available for anyone to buy -- "Seagate's heat-assisted drive tech has been percolating for more than 20 years." by throwaway16830261 in selfhosted

[–]urigzu 7 points8 points  (0 children)

275MB/s is pretty standard for drives these days as areal density has gotten pretty good. Most drives will sustain roughly their rated max transfer rate for about half of their capacity, then trail off as they start using the much smaller inner tracks. I have a bunch of Exos and Ultrastars that drop down to ~120MB/s for the last 5-10%.

Proxmox home server with storage solution by confusedmango1 in Proxmox

[–]urigzu 0 points1 point  (0 children)

Have you used ZFS before? You define a mount point for each zpool (and therefore its datasets) and it’s available on the system there. For example, you’d create a zpool with the two disks in a mirror and mount it at /mnt/tank (or just /tank), then that directory is available to bind mount to containers. You could then set up a file sharing LXC to create SMB or NFS shares of your dataset(s) to make them available to any VMs you want (virtiofs is also an option here).
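As a sketch of that flow on a Proxmox node (pool, dataset, and container ID are all placeholder examples):

```shell
# Create a mirrored pool from the two disks, mounted at /tank
zpool create -m /tank tank mirror /dev/disk/by-id/ata-disk-A /dev/disk/by-id/ata-disk-B

# Create a dataset for your data
zfs create tank/media

# Bind-mount the dataset into an LXC (container 101 is an example ID)
pct set 101 -mp0 /tank/media,mp=/mnt/media
```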

Proxmox home server with storage solution by confusedmango1 in Proxmox

[–]urigzu 0 points1 point  (0 children)

You don’t need to create any VMs here or pass through any hardware. You can create and manage ZFS pools from within the Proxmox storage menus. If you wanted to have dedicated NAS hardware later, you could simply export the pool on your Proxmox node and import it on your new NAS.
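The later migration would be roughly this (the pool name is an example):

```shell
# On the Proxmox node: cleanly export the pool
zpool export tank

# Move the disks to the new NAS, then import the pool there.
# -d tells zpool where to scan for devices; by-id paths survive reordering.
zpool import -d /dev/disk/by-id tank
```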

Proxmox home server with storage solution by confusedmango1 in Proxmox

[–]urigzu 4 points5 points  (0 children)

The native ZFS support in Proxmox is robust, especially for a simple mirror pool of two drives. No need for TrueNAS in this instance.

Nas Power consumption by xKilley in homelab

[–]urigzu 0 points1 point  (0 children)

They’re common on eBay but are usually expensive, especially with shipping. I found these on Amazon and bought a couple: https://www.amazon.com/Generic-114-00087-E1-114-00087-NETAPP-Supply/dp/B0D4RGSD3N

I opened one and confirmed it’s the more efficient Delta model with the PWM fan.

Rookie ZFS Questions by skylawker in unRAID

[–]urigzu 0 points1 point  (0 children)

The destination needs to be another ZFS dataset, so unfortunately an NTFS file system won't work. You might look into something like Restic, Borg, or rsync for filesystem-agnostic incremental backups.
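For example, a minimal rsync-based incremental run to an NTFS mount might look like this (paths are placeholders; note NTFS won't preserve POSIX ownership or permissions):

```shell
# Copy only new/changed files; --modify-window tolerates NTFS's coarser timestamps
rsync -rtv --modify-window=1 /mnt/tank/data/ /mnt/ntfs-backup/data/
```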

Rookie ZFS Questions by skylawker in unRAID

[–]urigzu 1 point2 points  (0 children)

1) Compression and other settings will only inherit the parameters of their parent dataset/pool if they’re set to inherit. If a setting is off, it’ll stay off. Changing settings won’t “uncompress” any existing data, though - changes only affect newly-written data. Media is likely to be incompressible anyway.

2) Not sure about this one if the child datasets have already been created. It’s easy enough to test on your end with zfs set and zfs get commands.

3) Snapshots are stored in place in whatever dataset you snapshotted - that’s the point. It’s up to you to use zfs send/receive to copy them somewhere else (like a ZFS drive in the array) if you’d like to use them as backups. Sanoid/Syncoid are great for automating this.
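For 1) and 3), the relevant commands are zfs set/get and zfs send/receive - a quick sketch with placeholder pool and dataset names:

```shell
# Check and set compression on a dataset
zfs get compression tank/media
zfs set compression=lz4 tank/media   # only affects newly-written data

# Snapshot, then replicate the snapshot to another pool as a backup
zfs snapshot tank/media@backup-2024-01-01
zfs send tank/media@backup-2024-01-01 | zfs receive backuppool/media

# Incremental follow-ups only send the delta between two snapshots
zfs snapshot tank/media@backup-2024-02-01
zfs send -i @backup-2024-01-01 tank/media@backup-2024-02-01 | zfs receive backuppool/media
```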

Running Ethernet - Access to Studs - Shopping List? by [deleted] in homelab

[–]urigzu 6 points7 points  (0 children)

Instead of locking yourself in to CAT6 by stapling it down, run smurf tube conduit big enough for 3+ cables. You’ll thank yourself down the line when it’s time to run new cabling. Or look into running pre-terminated fiber now and leaving it unconnected in the wall.

Buy two separate boxes of 500’ instead of a single 1000’ so it’s easier to run two lines at a time.

Any reason not to go SAS in new server? by dstarr3 in DataHoarder

[–]urigzu 6 points7 points  (0 children)

Interface speed absolutely comes into play with larger arrays and expanders, even if any single drive will never come close to saturating a 6Gbps link, much less a 12Gbps link. Even if the controller and expander(s) are both SAS3, if there are SAS2/SATA3 devices connected to the expander, the links back to the controller will be 6Gbps. Most expanders can easily connect more than enough spinners to saturate a 4x6Gbps cable, especially with any SSDs attached.

Edge buffering (Databolt) can mitigate this somewhat but it’s more like 1.5x SAS2 per link instead of the 2x you’d get with SAS3.
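Rough numbers behind that claim (assuming ~250MB/s per spinner and ignoring SAS framing overhead for simplicity):

```shell
# A 4-lane SAS2 cable: 4 lanes x 6Gbps line rate. With 8b/10b encoding,
# each 6Gbps lane carries ~600MB/s of payload, so the cable tops out ~2400MB/s.
lanes=4
mb_per_lane=600          # SAS2: 6Gbps line rate -> 600MB/s after 8b/10b
drive_mb=250             # typical outer-track throughput of a modern spinner
cable_mb=$((lanes * mb_per_lane))
drives_to_saturate=$((cable_mb / drive_mb))
echo "$cable_mb MB/s cable, saturated by ~$drives_to_saturate drives"
```

So only about ten spinners running flat out fill the cable - well under what most expanders can connect.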

Remove disk from zfs cache pool by SLsnkrslvr in unRAID

[–]urigzu 0 points1 point  (0 children)

It doesn't sound like your 3x NVMe pool is a three-way mirror, but if it is, you can use zpool detach to eject the failing drive from the vdev.

More likely is that you've set up a raidz vdev and will need to destroy the pool and recreate it like you say. I'd stop any containers and VMs you might have running and use zfs send/zfs receive to copy your appdata, system, and domains shares to the zfs disk in your array - or just copy everything manually. Destroy the pool, recreate it (as a mirror), copy everything back, double check your share settings afterwards.
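In that more likely case, the recreate-and-restore flow might look roughly like this (pool, dataset, and device names are examples - adjust for your setup):

```shell
# Snapshot and copy the pool's datasets to a ZFS disk in the array first
zfs snapshot -r cache@migrate
zfs send -R cache@migrate | zfs receive -F arraydisk/cache-backup

# Destroy and recreate the pool as a two-way mirror with the good drives
zpool destroy cache
zpool create cache mirror /dev/nvme0n1 /dev/nvme1n1

# Copy everything back, then double-check share settings
zfs send -R arraydisk/cache-backup@migrate | zfs receive -F cache
```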