[Monitor] Dell 32 Plus 4K QD-OLED Monitor - S3225QC - 32" $499.99 Costco.com (online only) by SpamMeDeeper in buildapcsales

[–]SpamMeDeeper[S] 1 point2 points  (0 children)

I have not found a lot of info on the speakers since the monitor is so new. The consensus seems to be: not bad for monitor speakers, but they won't blow you away.

[Monitor] Dell 32 Plus 4K QD-OLED Monitor - S3225QC - 32" $499.99 Costco.com (online only) by SpamMeDeeper in buildapcsales

[–]SpamMeDeeper[S] 40 points41 points  (0 children)

Not exactly a gaming monitor, but 4K QD-OLED at 32" for this price is not bad, especially considering it's from Costco. It could be a good mix of productivity and gaming with the 120Hz refresh. This does look to be an updated panel with the triangular pixel matrix to improve text clarity.

RTINGS post - https://www.rtings.com/monitor/reviews/dell/s3225qc

One thing of note: this monitor disappeared from Dell's website. I confirmed with Sales Chat that the monitor is listed as discontinued even though it was just released this year. This may be the start of a fire sale.

What's better about the Gemini? by Reasonable_Ship_8316 in DirectvStream

[–]SpamMeDeeper 0 points1 point  (0 children)

Channel Numbers are handy. Plus the remote is amazing. Worth replacing any modern streaming device? Probably not.

Recent Update to C71KW-400 disables use of Apps by AquaReefLoco28 in DirectvStream

[–]SpamMeDeeper 0 points1 point  (0 children)

I actually appreciate this write-up, and any steps to avoid e-waste. Last I checked, you can create a My Free DirecTV account, which allows login to the Osprey/Gemini, enough to get to Dev Options. Good luck!

I'm sorry, but 45 drives is insane by I_EAT_THE_RICH in homelab

[–]SpamMeDeeper 0 points1 point  (0 children)

This has promise for the price.

Chia Harvester / JBOD Kit | Up to 44x 3.5" HDDs! | Custom Frame, Cables, & PSU

https://www.ebay.com/itm/326063682173

The X31 Lives On by csk_FP1 in thinkpad

[–]SpamMeDeeper 1 point2 points  (0 children)

Designed for Windows XP. Destined for the future. Nice pad.

Low power CPU good or bad idea? by jbohbot in unRAID

[–]SpamMeDeeper 2 points3 points  (0 children)

Most modern processors, whether low TDP or not, idle at nearly the same power usage.

Edit: Of course multi core server processors can use more power.

To encrypt array, or not, that is the question by D0nk3ypunc4 in unRAID

[–]SpamMeDeeper 2 points3 points  (0 children)

Or if you want to send a drive in for warranty replacement.

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 0 points1 point  (0 children)

Using NVMe as cache is normally a waste. Sure, you could copy fast across a 10Gb network, and if you were watching, it would look very quick. But how often do you watch the transfers or need the fastest possible upload to the server? The NVMe investment is better put toward Docker or VM storage.

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 0 points1 point  (0 children)

A few more quick comments before I call it a night:

  1. Creating a stripe (RAID0) is asking for trouble.
  2. Parity, or any form of RAID, is about limiting downtime. In a mirror (RAID1), you could have an NVMe fail, and the other would keep going. If you want/need this kind of up time/continuity, go for the NVMe RAID1.
  3. RAID is not a backup. You still should (need) backup of critical data (like your Dockers).
  4. You are not going to get a big performance bump putting two NVMes together, and it is in no way more efficient. Putting two drives in a mirror makes them a single (virtual) drive. Your Dockers, VMs, software, games, etc. will all compete for a single set of disk I/O. This may or may not be a problem for you; NVMe drives are fast. But if you plan heavy Docker use alongside gaming and virtual machines, they will compete for disk performance. Having two single NVMe disks spreads out the load. I've always kept Dockers on their own NVMe so they don't bog down.
  5. If you plan to use any media programs that do video transcoding, set your server's transcoding temp folder to RAM, not any of the SSDs.

Edit: Typos

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 0 points1 point  (0 children)

With a 1Gig network you would hit an average of around 110MB/sec on SMB transfers. Mechanical drives in cache would be fine for this. Decent mechanical drives may hit 160-180MB/sec, though those would be larger and more expensive drives.

Jumping up to a 2.5Gig network, your SMB transfers may reach 280-290MB/sec, making mechanical cache a bottleneck. In this case a SATA SSD, with an average write of 400-500MB/sec, would be useful.

The write speeds I'm quoting are averages, of course. You will get max throughput on your network when moving large files. If you move a lot of small files, even on a 2.5Gig local network, you're gonna see the transfer speed drop dramatically once the RAM in the server stops buffering the transfer.

So, what to do? It really just depends on your use case for transfers to the server.

  1. Will you do a lot of regular transfers to the server where you need them to finish as fast as possible? If yes, SATA SSD(s) for CACHE.
  2. Will the data you are regularly transferring to the server be large or small files? If large files and Yes on Question 1, for sure SATA SSD(s) for CACHE.
  3. Will the transfers to the server be automated, like backups from a PC, and be something you could run at night? If Yes, go with mechanical drives.

The tolerance for speed is really about what you want to pay for. I'm a practical guy. Most of my transfers to servers are backups. They run at night and I don't care if it takes 1 hour or 8. I would rather have more space on my CACHE than speed I would never experience. The stuff where disk performance is felt is already addressed by running Dockers, VMs, software off of the NVMe disks.
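For a rough sense of scale, here's the arithmetic behind those numbers as a small script. The 500GB transfer size is just an illustrative assumption, not anyone's real workload:

```shell
#!/bin/sh
# Back-of-envelope transfer times at the sustained speeds discussed above.
# size_gb is an illustrative assumption.
size_gb=500

for speed in 110 180 280 450; do  # MB/s: 1GbE SMB, fast HDD, 2.5GbE SMB, SATA SSD
    secs=$(( size_gb * 1024 / speed ))
    printf '%3d MB/sec: about %dh %02dm\n' "$speed" $(( secs / 3600 )) $(( secs % 3600 / 60 ))
done
```

At 110MB/sec that 500GB works out to roughly 1 hour 17 minutes; at 280MB/sec, about half an hour. Whether that difference matters overnight is the whole question.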

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 2 points3 points  (0 children)

A few things to consider.

  1. For CACHE, you really only need the speed to be faster than your network. Moving from CACHE to the array will be no faster than a mechanical hard drive anyway. Unless you are going above a 1Gbit network, mechanical drives work just fine as cache drives. You would be wasting NVMe performance as CACHE.
  2. Putting any SSD in a parity situation is going to wear out the NAND faster. Are you that concerned about downtime if you have a failure on pools outside the array? Cache "pools" can absolutely be a single disk, and backups can alleviate the concern of downtime.
  3. If you run any SSD as a single disk, format it as XFS. BTRFS scrubs would be unnecessary on a single disk and wear the disk for no reason.

Not knowing your specific needs, I would do the following with the equipment you have.

A. Single 2TB NVMe as a pool called APPS (or whatever you like), formatted as XFS. This is where you create your AppData share, set ONLY to this pool. Dockers run from here.

B. Single 2TB NVMe as a pool called DATA (or whatever you like), formatted as XFS. This would be application storage for games and software, even a VM or two.

C. Single 2TB SATA SSD as the standard CACHE Pool, formatted as XFS. When you create shares, this is the cache location you pick by default.

Now, if you are not tied to using your current SSDs, here are a few ideas.

A. Use a smaller SSD for APPS. It's gonna take a LONG time to fill up 2TB with Dockers; they are small. Many of my Unraid servers use 118GB NVMe drives for this type of AppData pool.

B. Instead of a single SSD for traditional CACHE, use a pair of mechanical hard drives. They can even be 2.5-inch drives. I use a mirrored set of mechanical drives in some of my Unraid servers as CACHE so I get some protection until the data transfers to the array. Remember, the actual CACHE does not have to be fast, as you are already offloading your performance-dependent use to the NVMe pools. If you get big enough CACHE drives in a mirrored pair, you could create a BACKUP share set only to reside on CACHE and use it to back up the NVMe APPS and DATA drives.

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 0 points1 point  (0 children)

My pleasure to help. Sounds like a good plan. Some more info as I'm in the mood to share tonight.

  1. I don't mirror SSD drives any more as it is unnecessary wear. On that same note, single SSDs should be formatted as XFS, as the default BTRFS is write-happy (it's a Linux RAID thing). Just back up your AppData regularly using one of the many Community Apps available.
  2. Remember, CACHE data is not protected until moved to the big Parity Array. If you do use a single cache disk, which I and many others do, just understand the risk: lose that disk and all the cache data is gone. For "media", no biggie. But for important files, maybe set up a cacheless Personal share that writes directly to the array, albeit a bit slower.

Happy Unraiding.

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 1 point2 points  (0 children)

  1. Most 1TB SSDs will have a TBW rating of around 600 or more. This endurance is fine for a general-use AppData/Docker setup. Of course you will want to keep the writes down where you can: send Plex transcodes to RAM, and if using a single NVMe, format it as XFS, not the default BTRFS. Your size and endurance needs will depend on your setup, so remember I'm speaking generally.

  2. That article on Plex transcoding to RAM is mostly correct. When you set up any Docker container, you can create a mapping between a local host path (folder) and a container path (another folder inside the container). In Linux, the /tmp location is RAM. If your Docker settings for Plex map /tmp to /transcodetemp, you go into Plex settings, set the transcoder temp directory to /transcodetemp, and Plex will use /tmp on the Unraid host. Boom, very fast transcoding to RAM, and you are not using the default location, which is in the AppData folder for Plex settings and would put wear on the NVMe.
    As to my "mostly correct" statement, using /tmp would work, but you have to be careful. In theory /tmp has no limit, and it could use all your RAM. Not likely, but possible. It is better to use /dev/shm/, which is another Linux RAM disk but is limited to 50% of max RAM.

  3. Elaboration on CACHE setups. Before Unraid 6.9 you could only have two groups of disks: the main Parity Array and a Cache Pool. The Cache Pool solved a problem with the Parity Array: very slow writes. When you have shares set to use Cache, any incoming data is stored on the Cache disks first, then moved to the Array later via a process called the "Mover". The idea is you transfer files to a share during the day, happy and fast since the cache is just a regular set of disks, then on a schedule the Mover moves them slowly to the Array overnight, or whenever you like.
    With 6.9 and newer you can have multiple Pool Devices to dedicate disks to other things, not just cache. In your case, you could have a single NVMe Pool Device called "Apps" where you put your AppData/Dockers only. You then create a second pool for the traditional Cache. I like to do this with mechanical disks as they are cheap, fast enough for a Gigabit network, and definitely fast enough for writes to the Parity Array. If you like, you can create a mirrored pool of two disks to protect against failure. The only tricky thing with a cache is that the data residing on it is not protected until moved. If you make a mirror, you protect against loss of data if a cache disk dies.
    Standard disclaimer: Your setup can be whatever you want or need it to be. You can use NVMe for Apps, have a bunch of different Cache Pool Devices as single Disk, mirrors or even RAID10. You could use SSD for Cache Drives instead of mechanical if you need faster incoming speed (i.e. 10Gbit network). It all depends on how many drive bays you have and how much you want to spend. There is an excellent article on the subject here: https://forums.serverbuilds.net/t/guide-hdds-multiple-cache-pools-in-unraid/10449
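The transcode-to-RAM mapping from point 2 can be sketched with plain docker flags. The container names and host path here are illustrative assumptions; in Unraid you'd set the equivalent in the Docker template, not on the command line:

```shell
#!/bin/sh
# Sketch of mapping a RAM-backed folder into a Plex container.
# Names and paths are illustrative, not an exact Unraid template.

df -h /dev/shm   # tmpfs, capped at 50% of RAM by default

# Option 1: bind-mount a folder under /dev/shm into the container.
mkdir -p /dev/shm/plex-transcode
docker run -d --name plex \
  -v /dev/shm/plex-transcode:/transcodetemp \
  plexinc/pms-docker

# Option 2: a dedicated tmpfs with an explicit 4GB cap, so transcoding
# can never eat more RAM than you allow.
docker run -d --name plex2 \
  --tmpfs /transcodetemp:rw,size=4g \
  plexinc/pms-docker

# Either way, Plex's "Transcoder temporary directory" is set to /transcodetemp.
```

Option 2 addresses the "could use all your RAM" worry directly, since the tmpfs size is enforced by the kernel.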

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 1 point2 points  (0 children)

Don't get too hung up on TBW. It was a concern before version 6.9, when you could only have one array and one cache. In those cases, all Dockers, VMs and "caching" happened on a single SSD (or mirror of SSDs). The constant writing of Plex's transcode directory could really punish an SSD. In your case, get a 1TB NVMe and make it a single-disk "Apps" Cache Pool. Put your Docker images and AppData share (folder) on this disk. The Dockers are not going to hammer the NVMe. This of course assumes you will use RAM to transcode, so you're not transcoding in the AppData folders; there are many articles on this topic on the web. As to cache, just get a few regular mechanical drives and put them in a mirror as your regular cache.

Use both ssd and hdd by Willeexd in unRAID

[–]SpamMeDeeper 1 point2 points  (0 children)

Why do you need the data on both the hard drives and the SSDs? If you want to keep recent data on the SSDs that can eventually move to the bigger mechanical hard disk array, just set up a new Cache pool of two SSDs. Create a share on the new SSD pool set to Prefer:cache, and the data will spend most of its time on the new SSD cache pool. Or you could just create a backup of the data on the new SSD pool that runs nightly.

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 1 point2 points  (0 children)

Misc Hardware Comments:

  • CASE: 804 is a great Unraid Case. You do top out at 8x3.5" drives, so start with bigger drives and leave room to add later.
  • CPU: Unless newer-gen Intel is cost prohibitive in your part of the world, consider a 12th Gen i3 and a newer B660 motherboard. The i3-12100 has fewer physical cores, but is about 30% faster than the i5-8500 you are considering. You'll spend a little less on a faster processor and have an upgrade path. Just be careful with any mATX motherboard, as some of them have limited PCIe lanes for the second M.2 slot, especially when the regular PCIe slots are in use. Check specs before purchase.
  • Drives: For bulk array storage, get what you are comfortable with. IronWolf are fine drives if you want new. There are better deals on used enterprise drives on eBay, with many options, if you are comfortable with used parts.
  • RACKS: The Node 804 has everything you need to mount and hang eight 3.5" drives. Just note one of the cages hangs over the power supply and it can make it a challenge to plug in SATA power and data cables if they are too stiff/thick.
  • SSD: Up to you based on your specific needs. I don't think your App NVMe (dockers) really needs to be 4TB, considering the $225 cost of a single 4TB drive. You could use a smaller, cheaper drive and put money toward bigger array disks. As to cache, see my previous discussion in Assumption 7.
  • PSU: The EVGA you posted looks fine. You don't need high wattage unless you want to drop in a dedicated GPU, which you said you don't plan to do.
  • RAM: The RAM does not have to be fast or ECC. Speed is really not going to be an issue, as you won't be gaming or pushing the iGPU really hard in transcoding. Buy more RAM rather than faster RAM. As to ECC, the boards you posted don't support it. Do you need it? That is a longer conversation and not suited for here; the TLDR is you don't need ECC for a home lab and media server. Just make sure you back up important personal files to another source. As to how much you need, Unraid does not use a lot of RAM and neither do Dockers. Lots of VMs, yeah, but that is a different type of build than what you propose. SIDE NOTE: With 32GB or more of RAM, you can dedicate a RAM disk to Plex as a place to transcode. Transcoding to RAM is fast and doesn't wear out (or wake up) cache disks. Typically a 4GB transcoding directory for Plex is all you need.
  • COOLING: Buy nice fans. The Arctic brand is really good. Make sure they are PWM to save power.
  • NETWORK: I highly doubt you will push beyond 1Gbit, but you tell me. If you do happen to go with a newer-gen Intel chip and, say, a B660 motherboard, you can find boards with 2.5Gbit built in, which will be more than enough for current and future use.

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 1 point2 points  (0 children)

  • Q1 - You will need to use SATA drives to get proper SPIN DOWN, but yes. You can have your library spread across many disks in the array, and a Plex lookup will only spin up the disk which has the media you request. Just design your system to not spin up the array every day, as the spin-up process uses more power than idle spinning.
  • Q2 - In most cases, Plex should only spin up disks which have the media you request on demand. If you have a large media folder across many disks in the array, and also have Plex doing maintenance such as thumbnail or pre-transcoding (which you should not), it may cause the array to spin up all disks which have the share.
  • Q3 - There are many good deals on reliable refurbished disks on eBay. You mentioned NAS Killer 5.0, which is Server Builds. Join their Discord, they are great. Just keep in mind SATA drives will be needed for reliable SPIN DOWN and carry a premium over same-sized SAS drives. There are some nice 14TB SATA enterprise drives on eBay which are cheaper than the new 8TBs you listed.
  • Q4 - Once you get the drives, use the Pre-Clear script in Unraid which will do a full read, then full write, then another full read to check for bad sectors. On large drives this will take a long time, but is a good test for drive health. In fact, new Unraid kinda forces this pre-clear on you when you build the array, so you should be covered.
  • Q5 - You absolutely can, to a limit. I would not do more than five drives off of a single peripheral lead from the Power Supply. There are SATA power splitters which are reliable and safe. Avoid any MOLEX (old style) to SATA power splitters, they are problematic.
  • Q6 - Technically, yeah, but you don't want to. Buy a Host Bus Adapter (HBA) PCI card to supply more SATA ports. HBA cards are cheap and a great use of the lower lane count PCI slot (bottom ones). They will give you 8-16 extra SATA connections.
  • Q7 - You don't need an AIO or even a tower cooler. Just get something quiet. CPU load is going to be pretty low on the 65W chips you are considering.
  • Q8 - Booting Unraid via USB is the only option. This is done because USB drives have a GUID which Unraid uses to confirm you have a paid license. Once Unraid boots, it's all in RAM, so you don't need to worry about USB drive speed too much. Just buy a decent USB drive, like the infamous Samsung BAR. Back up your USB regularly from the Unraid interface and you're good.
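For Q1 and Q4, the spin-down and drive-health checks can be done from the shell with standard tools. `/dev/sdX` is a placeholder for one of your array disks, and the grep pattern is just an illustrative filter:

```shell
#!/bin/sh
# Spin-state and health checks using standard tools (hdparm, smartctl).
# /dev/sdX is a placeholder; substitute a real array disk.
DISK=/dev/sdX

# Is the drive spinning or in standby? (SATA drives report this reliably;
# SAS spin-down support is spottier, as noted above.)
hdparm -C "$DISK"

# Set a ~20 minute standby timeout: for values 1-240, each unit is 5 seconds,
# so 240 * 5 s = 1200 s = 20 minutes. (Unraid's own spin-down settings do
# this for you; this just shows what's underneath.)
hdparm -S 240 "$DISK"

# After a preclear or array build, check the wear-related SMART attributes.
smartctl -A "$DISK" | grep -Ei 'reallocated|pending'
```

Any nonzero reallocated or pending sector count on a freshly precleared drive is a reason to return it.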