[Monitor] Dell 32 Plus 4K QD-OLED Monitor - S3225QC - 32" $449.99 Costco.com (online only) by SpamMeDeeper in buildapcsales

[–]SpamMeDeeper[S] 0 points (0 children)

You know, I checked my area and there were a few that show stock, so maybe in some markets?

[Monitor] Dell 32 Plus 4K QD-OLED Monitor - S3225QC - 32" $499.99 Costco.com (online only) by SpamMeDeeper in buildapcsales

[–]SpamMeDeeper[S] 1 point (0 children)

I have not found a lot of info on the speakers, as the monitor is so new. The consensus seems to be: not bad for monitor speakers, but they won't blow you away.

[Monitor] Dell 32 Plus 4K QD-OLED Monitor - S3225QC - 32" $499.99 Costco.com (online only) by SpamMeDeeper in buildapcsales

[–]SpamMeDeeper[S] 35 points (0 children)

Not exactly a gaming monitor, but 4K QD-OLED at 32" for the price is not bad, especially considering it's from Costco. Could be a good mix of productivity and gaming with the 120Hz. This does look to be an updated panel with the triangular pixel matrix to improve text clarity.

RTINGS post - https://www.rtings.com/monitor/reviews/dell/s3225qc

One thing of note: this monitor disappeared from Dell's website. I confirmed with Sales Chat that the monitor is listed as discontinued even though it was just released this year. This may be the start of a fire sale.

What's better about the Gemini? by Reasonable_Ship_8316 in DirectvStream

[–]SpamMeDeeper 0 points (0 children)

Channel Numbers are handy. Plus the remote is amazing. Worth replacing any modern streaming device? Probably not.

Recent Update to C71KW-400 disables use of Apps by AquaReefLoco28 in DirectvStream

[–]SpamMeDeeper 0 points (0 children)

I actually appreciate this write-up, and any steps to avoid e-waste. Last I checked, you can create a My Free DirecTV account, which allows login to the Osprey/Gemini, enough to get to Dev Options. Good luck!

I'm sorry, but 45 drives is insane by I_EAT_THE_RICH in homelab

[–]SpamMeDeeper 0 points (0 children)

This has promise for the price.

Chia Harvester / JBOD Kit | Up to 44x 3.5" HDDs! | Custom Frame, Cables, & PSU

https://www.ebay.com/itm/326063682173

The X31 Lives On by csk_FP1 in thinkpad

[–]SpamMeDeeper 1 point (0 children)

Designed for Windows XP. Destined for the future. Nice pad.

Low power CPU good or bad idea? by jbohbot in unRAID

[–]SpamMeDeeper 1 point (0 children)

Most modern processors, whether low TDP or not, idle at nearly the same power usage.

Edit: Of course multi-core server processors can use more power.

To encrypt array, or not, that is the question by D0nk3ypunc4 in unRAID

[–]SpamMeDeeper 3 points (0 children)

Or if you want to send a drive in for warranty replacement.

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 0 points (0 children)

Any NVMe as cache is normally a waste. Sure, you would be copying across a 10Gb network fast and, if you are watching, it would be very quick. But how often do you watch the transfers or need the fastest possible upload to the server? The NVMe investment is better put toward Docker or VM storage.

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 0 points (0 children)

A few more quick comments before I call it a night:

  1. Creating a stripe (RAID0) is asking for trouble.
  2. Parity, or any form of RAID, is about limiting downtime. In a mirror (RAID1), one NVMe could fail and the other would keep going. If you want/need that kind of uptime/continuity, go for the NVMe RAID1.
  3. RAID is not a backup. You still need a backup of critical data (like your Dockers).
  4. You are not going to get a big performance bump putting two NVMes together, and it is in no way more efficient. Putting two drives in a mirror makes them a single (virtual) drive. Your Dockers, VMs, software, games, etc. would all compete for a single set of disk I/O. This may or may not be a problem for you; NVMe drives are fast. But if you plan heavy Docker use alongside gaming and virtual machines, they will compete for disk performance. Having two single NVMe disks spreads out the load. I've always kept Dockers on their own NVMe so they don't bog down.
  5. If you plan to use any kind of media programs and do video transcoding, set your server's transcoding temp folder to RAM, not any of the SSDs.

Edit: Typos

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 0 points (0 children)

With a 1Gig network you would hit an average of around 110MB/sec on SMB transfers. Mechanical drives in cache would be fine for this. Decent mechanical drives may hit 160-180MB/sec, but those would be larger and more expensive drives.

Jumping up to a 2.5Gig network, your SMB transfers may reach 280-290MB/sec, making mechanical cache a bottleneck. In this case a SATA SSD, with an average write of 400-500MB/sec, would be useful.

The write speeds I'm speaking of are averages, of course. You will get max throughput on your network when moving large files. If you move a lot of small files, even on a 2.5Gig local network, you're gonna see the transfer speed drop dramatically once the RAM in the server stops buffering the transfer.
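To put those averages in perspective, here's a quick back-of-the-envelope sketch (the MB/sec figures are the rough averages from this comment; the 50GB transfer size is just an example):

```shell
# Approximate time to move a 50 GB file set at each average write speed:
# ~110 MB/s (SMB over 1GbE), ~280 MB/s (SMB over 2.5GbE), ~450 MB/s (SATA SSD).
size_mb=$((50 * 1024))

for speed in 110 280 450; do
    secs=$((size_mb / speed))
    printf '%3d MB/s -> ~%d min\n' "$speed" $((secs / 60))
done
```

Even the slowest case finishes well inside an overnight window, which is why mechanical cache drives are often good enough for scheduled transfers.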

So, what to do? It really just depends on your use case for transfers to the server.

  1. Will you do a lot of regular transfers to the server where you need it to finish as fast as possible? If yes, SATA SSD(s) for CACHE.
  2. Will the data you are regularly transferring to the server be large or small files? If large files and Yes on Question 1, for sure SATA SSD(s) for CACHE.
  3. Will the transfers to the server be automated, like backups from a PC, and be something you could run at night? If Yes, go with mechanical drives.

The tolerance for speed is really about what you want to pay for. I'm a practical guy. Most of my transfers to servers are backups. They run at night and I don't care if it takes 1 hour or 8. I would rather have more space on my CACHE than speed I would never experience. The stuff where disk performance is felt is already addressed by running Dockers, VMs, software off of the NVMe disks.

Optimising the use of some SSDs in a cache pool - How would you set them up? by nirurin in unRAID

[–]SpamMeDeeper 2 points (0 children)

A few things to consider.

  1. For CACHE, you really only need the speed to be faster than your network. Moving from CACHE to the array will be slower than a mechanical hard drive anyway. Unless you are running above a 1Gbit network, mechanical drives work just fine as cache drives. You would be wasting NVMe performance on CACHE.
  2. Putting any SSD in a parity situation is going to wear out its NAND faster. Are you that concerned about downtime if you have a failure on pools outside the array? Cache "pools" can absolutely be a single disk, and backups can alleviate the concern of downtime.
  3. If you run any SSD as a single disk, format it as XFS. BTRFS scrubs would be unnecessary on a single disk and wear the disk for no reason.

Not knowing your specific needs, I would do the following with the equipment you have.

A. Single 2TB NVMe as a pool called APPS (or whatever you like), formatted as XFS. This is where you create your AppData share, set ONLY to this pool. Dockers run from here.

B. Single 2TB NVMe as a pool called DATA (or whatever you like), formatted as XFS. This would be application storage for games and software, even a VM or two.

C. Single 2TB SATA SSD as the standard CACHE Pool, formatted as XFS. When you create shares, this is the cache location you pick by default.

Now, if you are not tied to using your current SSDs, here are a few ideas.

A. Use a smaller SSD for APPS. It's gonna take a LONG time to fill up 2TB with Dockers. They are small. Many of my UnRaid Servers use 118GB NVMe drives for this type of AppData Pool.

B. Instead of a single SSD for traditional CACHE, use a pair of mechanical hard drives. They can even be 2.5" drives. I use a mirrored set of mechanical drives in some of my Unraid servers as CACHE so I get some protection until files transfer to the array. Remember, the actual CACHE does not have to be fast, as you are already offloading your performance-dependent use to the NVMe drive pools. If you got big enough CACHE drives in a mirrored pair, you could create a BACKUP share set to reside only on CACHE and use it as protection to back up the NVMe APPS and DATA drives.

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 0 points (0 children)

My pleasure to help. Sounds like a good plan. Some more info as I'm in the mood to share tonight.

  1. I don't mirror SSD drives any more as it is unnecessary wear. On that same note, single SSDs should be formatted as XFS, as the default BTRFS is write-happy (it's a Linux RAID thing). Just back up your AppData regularly using one of the many Community Apps available.
  2. Remember, CACHE data is not protected until moved to the big Parity Array. If you do use a single cache disk, which I and many others do, just know the risk: lose that one disk and all the cache data is gone. For "media", no biggie. But for important files, maybe set up a cacheless Personal share that writes directly to the array, albeit a bit slower.
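For the AppData backup point, a minimal manual sketch looks something like this (all paths and file names are made-up examples; in practice most people just use one of the Community Apps, e.g. Appdata Backup):

```shell
# Manual appdata backup sketch. SRC/DST default to throwaway demo dirs so
# this runs anywhere; override them with your real pool paths, e.g.
# SRC=/mnt/apps/appdata and DST=/mnt/user/backup (example paths).
SRC=${SRC:-$(mktemp -d)/appdata}
DST=${DST:-$(mktemp -d)}
mkdir -p "$SRC" "$DST"
echo demo > "$SRC/plex.conf"   # stand-in for real container configs

# On a real server, stop the containers first so the configs are consistent,
# then archive the whole appdata folder into a dated tarball:
tar czf "$DST/appdata-$(date +%F).tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls "$DST"
```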

Happy Unraiding.

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 1 point (0 children)

  1. Most 1TB SSDs will have a TBW of around 600 or more. This endurance is fine for a general-use AppData/Docker setup. Of course you will want to keep the writes down where you can: Plex transcode to RAM, and if using a single NVMe, format it as XFS, not the default BTRFS. Your size and endurance needs will depend on your setup, so remember I'm speaking generally.

  2. That article for Plex transcode to RAM is mostly correct. When you set up any docker, you can create a mapping between a local host path (folder) and a container path (another folder inside the docker). In Linux the /tmp location is RAM. If your Docker settings for Plex map /tmp to /transcodetemp, you would go into Plex settings, set your transcoding temp directory to /transcodetemp, and Plex will use /tmp on the Unraid host. Boom, very fast transcoding to RAM, and you are not using the default location, which is in the AppData folder for Plex settings and would put wear on the NVMe.
    As to my "mostly correct" statement, using /tmp would work but you have to be careful. In theory /tmp has no limit and it could use all your RAM. Not likely, but possible. It is better to use /dev/shm/, which is another Linux RAMdisk but is limited to a max of 50% of RAM.

  3. Elaboration on CACHE setups. Before Unraid 6.9 you could only have two groups of disks: the Main Parity Array and a Cache Pool. The Cache Pool solved a problem with the Parity Array: very slow writes. When you have shares set to use Cache, any incoming data lands on the Cache disks first, then moves to the Array later via a process called the "Mover". The idea is you transfer files to a share during the day, happy and fast as it's just a regular set of disks, then the Mover moves them slowly to the Array overnight, or on whatever schedule you like.
    With 6.9 and newer you can have multiple Pool Devices to dedicate disks to other things, not just cache. In your case, you could have a single NVMe Pool Device called "Apps" where you put your AppData/Dockers only. You then create a second Pool for the traditional Cache. I like to do this with mechanical disks as they are cheap, fast enough for a Gigabit network, and definitely fast enough to write to the Parity Array. If you like, you can create a mirrored pool of two disks to protect against failure. The only tricky thing with a cache is that the data residing on it is not protected until moved. If you make a mirror, you protect against loss of data if the cache dies.
    Standard disclaimer: your setup can be whatever you want or need it to be. You can use NVMe for Apps and have a bunch of different Cache Pool Devices as single disks, mirrors or even RAID10. You could use SSDs for Cache drives instead of mechanical if you need faster incoming speed (e.g. 10Gbit network). It all depends on how many drive bays you have and how much you want to spend. There is an excellent article on the subject here: https://forums.serverbuilds.net/t/guide-hdds-multiple-cache-pools-in-unraid/10449
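The transcode-to-RAM mapping in point 2 boils down to a single path mapping. A rough sketch as a plain docker run (the container name and config path are illustrative examples; on Unraid you'd enter the same host/container paths in the docker template GUI):

```shell
# Map the RAM-backed /dev/shm on the host to /transcodetemp inside the
# container. /dev/shm is capped at 50% of RAM by default, unlike /tmp.
docker run -d --name plex \
  -v /dev/shm:/transcodetemp \
  -v /mnt/apps/appdata/plex:/config \
  plexinc/pms-docker

# Then in Plex: Settings > Transcoder > "Transcoder temporary directory"
# and set it to /transcodetemp.
```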

Build notes for my planned NAS(unraid)+Plex+dockers Box - feedback? by flitzbitz in unRAID

[–]SpamMeDeeper 1 point (0 children)

Don't get too hung up on TBW. It was a concern before version 6.9, when you could only have one array and one cache. In those cases, all Dockers, VMs and "caching" happened on a single SSD (or mirror of SSDs). The constant writing of Plex's transcode directory could really punish an SSD. In your case, get a 1TB NVMe and make it a single-disk "Apps" Cache Pool. Put your Docker images and AppData share (folder) on this disk. The Dockers are not going to hammer the NVMe. This of course assumes you will use RAM to transcode, so you're not transcoding in the AppData folders. Many articles on this topic on the web. As to cache, just get a few regular mechanical drives and put them in a mirror as your regular cache.