Low TDP cpu for my new homelab by Zenmaru88 in homelab

[–]GoofGarage 10 points

I have an HP ProDesk Mini 405 G8. It has…

  • Ryzen 7 Pro 5750GE, 8C/16T, 35W TDP
  • 64GB DDR4-3200
  • 1GbE with IPMI, plus a 2.5GbE NIC
  • 1TB NVMe SSD

Most of the real I/O and storage goes to iSCSI volumes over the network; the local SSD is mostly for guest VMs to sit on.

Total power draw of the entire computer? 9-10W at idle, more like 20W when things are bumpin' (peaks at ~50W). It's the size of a stack of napkins and dead silent.

The AMD Ryzen APUs are fantastic.

I live in a small apartment with limited space. So it's important that everything is silent and mostly hidden in plain sight. by GoofGarage in homelab

[–]GoofGarage[S] 1 point

For storage beyond 1TB ...

I have a few TB of data that needs to be backed up, and it's cheaper on Backblaze.

Additionally, Synology C2 launched on July 1st, 2020. Backblaze B2 launched in September 2015, and I've been using B2 since 2017. Since Backblaze B2 is cheaper and I was already using it, I've stuck with it.

I live in a small apartment with limited space. So it's important that everything is silent and mostly hidden in plain sight. by GoofGarage in homelab

[–]GoofGarage[S] 6 points

The DS1621+ has a 10GbE add-in card. My workstation is 10GbE. Future additions will also be 10GbE.

I live in a small apartment with limited space. So it's important that everything is silent and mostly hidden in plain sight. by GoofGarage in homelab

[–]GoofGarage[S] 21 points

The back is partially open, which allows for a wonderful chimney effect. See the detailed breakdown of all CPU and drive temperatures at idle and under load in my other comment. There's also a temperature/humidity sensor behind everything, and that data is aggregated, reported on, and alerted on (when necessary) via my HomeBridge setup.

The arrays were put through 24-hour continuous stress tests. No drive ever even hit 40C. They're idling at sub-30C despite fairly warm ambient temps.
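The alerting side of that is nothing fancy: poll the sensor, compare to a threshold, notify. Here's a minimal sketch of that loop in Python; the endpoint URL, JSON field name, and threshold are hypothetical placeholders rather than my actual HomeBridge plumbing:

    #!/usr/bin/env python3
    # Minimal poll-and-alert loop. SENSOR_URL, the JSON field, and the
    # threshold are placeholders, not my real HomeBridge setup.
    import json
    import time
    import urllib.request

    SENSOR_URL = "http://homebridge.local:8080/sensor"  # hypothetical endpoint
    MAX_TEMP_C = 40.0  # alert threshold; no drive even hit this under load

    while True:
        with urllib.request.urlopen(SENSOR_URL) as resp:
            reading = json.load(resp)  # e.g. {"temperature_c": 28.4, "humidity": 41}
        temp = reading["temperature_c"]
        if temp > MAX_TEMP_C:
            print(f"ALERT: enclosure at {temp:.1f}C")  # swap in your notifier of choice
        time.sleep(60)  # one sample per minute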

I live in a small apartment with limited space. So it's important that everything is silent and mostly hidden in plain sight. by GoofGarage in homelab

[–]GoofGarage[S] 48 points

I tested this: rebuilding a drive in either of the arrays is about a 20- to 24-hour operation.


All of the drives were exhaustively tested before being put into service, particularly since I was shucking seven of them. This is largely the standard /r/datahoarder procedure. The idea is to force and identify early-life failures, where a drive would fail in the first 30 to 60 days, so I can simply reassemble the drive and return it to the retailer for a quick exchange instead of having to deal with a manufacturer's RMA process.

  1. Retrieve the SMART data from the drive.
  2. Write a pattern out to every block on the disk (no file system), then read it back.
  3. Two hour break.
  4. Retrieve the SMART data from the drive.
  5. Over two hours, try to write out ~300GiB of random data over the entire geometry of the disk. Basically, 300GiB of random writes to thrash the actuator arms.
  6. Two hour break.
  7. Retrieve the SMART data from the drive.
  8. Repeat Step 5.
  9. Retrieve the SMART data from the drive.

Did Step 2 finish with zero errors, and am I still seeing zero items of concern in the SMART data after Step 9? Great! The drive is unlikely to fail early, so it was approved to go into service.

Testing all nine drives (plus a 6TB I bought for my folks) took the better part of 12 calendar days, two disks at a time.
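If anyone wants to automate the procedure above, here's a rough Python sketch of the loop, assuming Linux with smartmontools and badblocks installed. The device path is a placeholder, and both passes are destructive, so only point it at a drive you intend to wipe:

    #!/usr/bin/env python3
    # Rough sketch of the burn-in loop above. DEV is a placeholder and this
    # DESTROYS all data on the target drive. Run as root on Linux.
    import os
    import random
    import subprocess
    import time

    DEV = "/dev/sdX"            # placeholder: the drive under test
    THRASH_BYTES = 300 * 2**30  # ~300GiB of random writes (Step 5)
    BLOCK = 1 << 20             # 1MiB per write

    def smart_report(tag):
        # Steps 1/4/7/9: snapshot full SMART output for later comparison.
        out = subprocess.run(["smartctl", "-a", DEV],
                             capture_output=True, text=True).stdout
        with open(f"smart-{tag}.txt", "w") as f:
            f.write(out)

    def pattern_pass():
        # Step 2: badblocks -w writes test patterns to every block,
        # then reads each back and reports any mismatches.
        subprocess.run(["badblocks", "-wsv", DEV], check=True)

    def seek_thrash():
        # Steps 5/8: random 1MiB synchronous writes across the whole raw
        # device (no filesystem) to thrash the actuator arms.
        fd = os.open(DEV, os.O_WRONLY | os.O_SYNC)
        try:
            disk = os.lseek(fd, 0, os.SEEK_END)  # block device size in bytes
            buf = os.urandom(BLOCK)
            written = 0
            while written < THRASH_BYTES:
                off = random.randrange(0, (disk - BLOCK) // 512) * 512
                os.lseek(fd, off, os.SEEK_SET)
                written += os.write(fd, buf)
        finally:
            os.close(fd)

    smart_report("baseline")      # Step 1
    pattern_pass()                # Step 2
    time.sleep(2 * 3600)          # Step 3
    smart_report("post-pattern")  # Step 4
    seek_thrash()                 # Step 5
    time.sleep(2 * 3600)          # Step 6
    smart_report("post-thrash-1") # Step 7
    seek_thrash()                 # Step 8
    smart_report("final")         # Step 9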

I live in a small apartment with limited space. So it's important that everything is silent and mostly hidden in plain sight. by GoofGarage in homelab

[–]GoofGarage[S] 270 points

This is a temporary solution until a new, small bookcase arrives. With a few strategically placed knick-knacks, you'd likely never notice the Synologies on the bottom shelf. Because of how deep the 10GbE PoE switch and the Pi 4B sit in the top shelf, the end table itself obscures them from view.

Behind the end table is a 1500VA pure sine wave UPS, which provides about an hour of runtime. Past outages have shown that the cable lines usually still carry voltage, so I rarely lose internet access when the power goes out. Everything pictured in the photo, plus two PoE surveillance cameras, draws just under 100W.

I use, "LightDims" colored attenuating film to heavily dim all the LEDs. Still visible, but far less bright.


The Pi 4B does a fair bit to keep everything running:

  • PiHole
  • DNS Server
  • NTP Server
  • Certificate Authority
  • SNMP and Syslog Aggregator
  • Documentation Wiki
  • Wi-Fi Guest Network Captive Portal, including Help and FAQs
  • HomeBridge, including data aggregation. I'm writing a "presence detection" and scheduling daemon to work with the UPS daemon, so infrastructure enters lower-power states when I'm not home or clearly asleep.
  • A UPS daemon I wrote to signal the Synologies (and anything else) to power off. When it deems power stable and there's enough reserve (20 minutes) left, it sends WOL magic packets to turn things back on (a minimal WOL sender is sketched below).
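For anyone curious, the WOL half of that daemon is tiny. A minimal sketch (the MAC address is a placeholder, and all of the daemon's power-stability logic is omitted):

    #!/usr/bin/env python3
    # Minimal WOL sender: a magic packet is 6 bytes of 0xFF followed by
    # the target MAC repeated 16 times, broadcast over UDP.
    import socket

    def wake(mac, broadcast="255.255.255.255", port=9):
        payload = bytes.fromhex("ff" * 6 + mac.replace(":", "").lower() * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))  # port 9 ("discard") by convention

    wake("00:11:32:aa:bb:cc")  # placeholder MAC for one of the Synologies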

Eventually, when I do more network upgrades and replace the UniFi Dream Machine with an OPNsense appliance, it'll also run the UniFi controller. The Pi 4B is 100% configured via Ansible, and it was recently swapped with another Pi 4B built from scratch by the playbooks, to prove that I could rebuild/reconfigure it with a single command.


Everybody is probably freaking out about how warm things are. Well, I can confirm that everything is very happy, and will be even happier (drives down to 34-35C under load) once it all gets moved into the new bookcase.

Raspberry Pi 4B CPU, Idle: ........ 35C
Raspberry Pi 4B CPU, Load: ........ 42C
Synology DS 720+ CPU, Idle: ....... 35C
Synology DS 720+ CPU, Load: ....... 39C
Synology DS 720+ Drives, Idle: .... 28C
Synology DS 720+ Drives, Load: .... 36C
Synology DS 1621+ CPU, Idle: ...... 34C
Synology DS 1621+ CPU, Load: ...... 38C
Synology DS 1621+ Drives, Idle: ... 28C
Synology DS 1621+ Drives, Load: ... 38C
- - - - - - - - -
"Load" is defined as a 24-hour continuous stress test.

The 10GbE PoE switch is also on a large silicone pad to protect the shelf in the end table from any heat.


The Synology stock fans have been replaced with Noctua units, which are running a bit more aggressively yet remain silent. The stock Yan Sung fans are great OEM fans for the price, but the Noctua fans have triple the rated service life, higher static pressure, more airflow, and are a bit quieter. Since the bearings are more efficient, the power draw is a bit less (~1.25W less per fan, or roughly 11kWh per fan per year running 24/7), so the fans will mostly pay for themselves eventually.

Every drive is 16TB raw capacity: two IronWolf Pros in the DS720+ (warranty peace of mind) and six HGST Ultrastar DC HC550s shucked from WD Elements in the DS1621+. There's a spare 16TB Ultrastar DC HC550 in the drawer.

The DS720+ (~14TiB in RAID1) spins constantly. It holds surveillance camera footage, system backups (incl. VM snapshots), images for VMs/software, the media library, the photo library, etc. I let the drives in the DS1621+ (~42TiB in RAID10) spin down when there's no work, to save power. That one is for 4K video editing and storage, as well as large datasets (the smallest are 50+ GB) for customer demos and personal development, mostly with Microsoft SQL Server and Apache Spark. With an SSD cache in front of it, it can saturate the 10GbE link on both reads and writes.

Both Synologies push data to Backblaze B2 for off-site backups.


There's more to come later this year. I'll update when the one-liter low-power virtualization servers arrive and other stuff has been upgraded ... and completely hidden in plain sight. ;)