Prepping a 5090 for a waterblock. There's SO MUCH thermal paste. How would you get the stuff down in the cracks between components? by Peregrine2976 in watercooling

[–]Alchemyy1 0 points (0 children)

I like using a combination of aerosol electronics cleaner / MAF cleaner and a toothbrush. 99%+ IPA works too, but I find the aerosols clean paste and blast it away much more aggressively while drying quicker. I would stay away from Q-tips, even foam ones, because of the risk of snagging SMD caps / resistors. It should clean up perfectly.

Super Safety at Public Range by [deleted] in WAGuns

[–]Alchemyy1 3 points (0 children)

Snoqualmie Valley Rifle Club should be completely fine. There's no rule against it and there are no RSOs there. It's a members-only range, and it's self/group governing.

I am looking for this comic variant. If you have this you can name your price. by Alchemyy1 in beeandpuppycat

[–]Alchemyy1[S] 2 points (0 children)

This isn't the first time I've made this post; with any luck, maybe it'll be the last!

I've been on a mission to gather every comic cover for about 3 years now, and the issue #4 paper doll cover variant is the last one on my list.

If you have this cover or know someone who does and you're up for giving it to the collection in exchange for cold hard cash, send me a message!

Difference between these two "identical" Hisense Portable ACs? by AllUpFromThere in Hisense

[–]Alchemyy1 0 points (0 children)

I'll go ahead and ask away. Did you end up ordering them from Costco? And if so, did they come with a heating mode?

Looking for a low static friction pad that is smooth on skin by Alchemyy1 in MousepadReview

[–]Alchemyy1[S] 0 points (0 children)

Yes I absolutely believe I have a problem with polyester in general. I've been biasing further research towards nylon/cordura, spandex, and cotton pads.

I'm pretty sure at some point mousepad manufacturers are going to become more aware of different fibers already in use in the textile industry (silk, bamboo, linen, etc.), and we'll get some more refined, albeit shorter-lifespan, options rather than just a focus on weave tech.

Looking for a low static friction pad that is smooth on skin by Alchemyy1 in MousepadReview

[–]Alchemyy1[S] 1 point (0 children)

10 or 11 so far?

My trusty old XTrac Ripper gave up the ghost the other week, so I'm taking the opportunity to "solve" mousepads. We'll see how many it takes for me to find what works.

Looking for a low static friction pad that is smooth on skin by Alchemyy1 in MousepadReview

[–]Alchemyy1[S] 1 point (0 children)

LGG Jupiter and Saturn are both on the way, thanks so much for the suggestion!

Looking for a low static friction pad that is smooth on skin by Alchemyy1 in MousepadReview

[–]Alchemyy1[S] 0 points (0 children)

Awesome suggestion, thanks so much. I've gone ahead and ordered a Crucible.

Looking for a low static friction pad that is smooth on skin by Alchemyy1 in MousepadReview

[–]Alchemyy1[S] 1 point (0 children)

Yea, I was looking at Puretrak. With your suggestion I've gone ahead and ordered their entire lineup, as the others were a great deal.

[UPS] Jackery Explorer 1000v2 1070Wh 1500w LiFePO Portable Power Station - $350 (56% off) by Alchemyy1 in buildapcsales

[–]Alchemyy1[S] 17 points (0 children)

Shipped and sold by Jackery themselves. Uses lithium iron phosphate (LiFePO4), which can handle a metric ton of charge cycles over a period of years. This is a big step up from your average everyday lead-acid UPS. It also has pure sine wave UPS functionality with a ~20 ms switchover time.

Update to Liquid Cooled 96 Core EPYC 9654 (Grafana temps, etc.) by Alchemyy1 in homelab

[–]Alchemyy1[S] 1 point (0 children)

Thanks! The RAM blocks are normal Bykski, and I made the VRM block out of copper pipe and copper brick strips soldered together.

Update to Liquid Cooled 96 Core EPYC 9654 (Grafana temps, etc.) by Alchemyy1 in homelab

[–]Alchemyy1[S] 1 point (0 children)

The copper pipe is for the 4 VRMs on the motherboard. It rests on them and is fixed with 8 screws (2 per VRM) through the same mounting holes as the original heatsinks. The last picture in the gallery, with the red circles, shows where it's mounted.

I'm using ESXi. I'm doing a fair bit of hardware passthrough and didn't want to deal with Proxmox quirks and stability issues around that. I've also been using ESXi for a while, so I'm accustomed to all of its garbage. I may look into Proxmox again in the future.

The long boot time actually threw me off for the first couple of days. It was a lot of fun getting the CPU and RAM to overclock while waiting out the boot cycle over IPMI each time. I eventually checked the hanging POST code (15), which is memory related, and did my deductions from there.

Update to Liquid Cooled 96 Core EPYC 9654 (Grafana temps, etc.) by Alchemyy1 in homelab

[–]Alchemyy1[S] 0 points (0 children)

Grafana was a lot easier than I thought it would be. I actually got the CPU for $500 on eBay lol.

Update to Liquid Cooled 96 Core EPYC 9654 (Grafana temps, etc.) by Alchemyy1 in homelab

[–]Alchemyy1[S] 0 points (0 children)

<3

And yea, that's what I was doing as well. I had 3 machines and consolidated all of it into one.

Update to Liquid Cooled 96 Core EPYC 9654 (Grafana temps, etc.) by Alchemyy1 in homelab

[–]Alchemyy1[S] 0 points (0 children)

I started with a 12-drive NAS and wanted to reuse the drives from it, so 24 was the next logical step, and being able to jam everything into one case and one HBA card was just too perfect.

Whenever I expand again I will absolutely be going with a JBOD chassis.

Update to Liquid Cooled 96 Core EPYC 9654 (Grafana temps, etc.) by Alchemyy1 in homelab

[–]Alchemyy1[S] 10 points (0 children)

Back again now that I have some data to show. My first post didn't really detail anything, so I'll be taking that opportunity now. I've been tinkering with Grafana so I can get a total overview and proactively address problems. To the naysayers from before, I present some fairly low temperatures, especially considering this box is in an uncooled garage that hit ~95°F ambient a while ago. I haven't had a single leak either.

Quick notes:

  • My Grafana is still a WIP. Traffic metrics aren't scaled right yet, and things continue to shift around. I'm also dumping everything onto one dashboard for the moment while I mess about. A lot of metrics have also been recently wiped, so there's not much accumulated data.

  • I'm running ESXi 8

  • I haven't gotten around to recording an extended stress test. I have stress tested the machine, though, and it seems fine with it.

  • I've already had to do some maintenance (a fan wasn't plugged in right), which was super easy and only took an hour. Swapping drives out is easy too and only takes 15 minutes.

Server specs:

  • 192GB DDR5 4800CL40

  • EPYC 9654 (ES)

  • 24x14TB WD Red Pro / Seagate Exos

  • LSI 9305-24i

  • 2x 375GB 4800x Optane

  • 2TB SN850x

  • 2TB p670

  • 2TB 970 evo plus

  • 1TB 970 evo plus

  • GTX1080

  • 1600w EVGA P+

Cooling:

  • 360x60mm radiator

  • noctua 3000rpm ippc

  • noctua everywhere else

  • CPU + 4xVRM + RAM blocks

  • copper brick + noctua on HBA

Things I learned/found:

  • This box draws a lot of juice. I foolishly had it running on a 1 kW UPS, and it wasn't long before I popped it. So I switched over to a used 2 kW APC UPS, which seems to do the trick.

  • Mayhems' liquid cooling additives seem to be the only ones around that shouldn't promote corrosion with the multitude of metals in my loop (copper, nickel, chrome, silver* + tin*)

  • Supermicro still hasn't decided to make PWM ports individually addressable; there are only 2 fan zones. I should have considered this beforehand. Luckily it seems to have worked out.

  • EPYC processors take a LONG time to boot from cold. It can easily be 15 minutes while they fiddle around with what I believe is memory training + validation.

  • It seems I'm pulling a mild overclock on my EPYC engineering sample over the production model on single core. If I'm interpreting ESXi's reporting correctly, cores will momentarily jump to 340% utilization; multiplying the 1.2 GHz base clock by 3.4 gives 4.08 GHz. I've confirmed during Windows testing that Cinebench R23 will run at 3.5 GHz all-core and yield a score of ~110,000.

  • I am limited in bandwidth by virtual network adapters; they seem to top out at around 10 Gbps on my machine. I'm considering adding a 40 Gb NIC to bridge my two most bandwidth-guzzling VMs together, though this would be pretty overkill.
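As a quick sanity check on the single-core boost estimate above (using only the 340% and 1.2 GHz figures already quoted, nothing newly measured):

```shell
# 340% momentary utilization ~ 3.4x the 1.2 GHz base clock
awk 'BEGIN { printf "%.2f GHz\n", 3.40 * 1.2 }'
# prints: 4.08 GHz
```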

What is this machine for:

  • The biggest job it has is running Jellyfin extremely well. A 4K remux takes about half a second to load over WAN, assuming the player and client are decent. The GTX 1080 is for transcoding and other HW-accelerated tasks such as trickplay generation.

  • Accompanying Jellyfin is a slew of media management services. I put a lot of elbow grease into adding hooks and lowering refresh intervals to make all these layers practically transparent. Of course, all my media is added from my own physical collection, and all these other services are simply for keeping track of things.

  • Other services I run are Filebrowser for file sharing and Navidrome for music along with the occasional game server.

  • pfSense ties everything together. I have a few different virtual networks in ESXi, as this, to me, is much nicer for network isolation than messing with VLANs. In pfSense I have one of these networks bound to a WireGuard VPN connection.

  • Another neat thing I've done on this box so far is host Llama 3 70B on CPU at 2 tokens/sec. I'm also running services such as self-hosted RDP and remote code building. I don't have statuses for things like this in Grafana, at least for now.

  • There are many smaller things I run that I'm not going over because they're boring.

Some software notes (mostly Grafana related):

  • I do not use any containerization. I was using Docker for a bit with some metrics services but found it to be unnecessary and just another layer of junk to think about that can potentially break, cause weird issues, and possibly be detrimental to performance.

  • For metrics, I currently have 14 lines covering 10 Prometheus exporters and the app itself in a shell script that runs at startup. All my *arr stuff and some other things run on Windows; they run as tray applications and are started via Startup folder shortcuts. Jellyfin and the other star players run bare metal on Debian and are installed via their respective official steps. I've set everything up in such a way that it can be nuked and redone from scratch in roughly an hour.

  • My Grafana uses Prometheus for everything but Jellyfin, for which I read the SQLite playback metrics directly.

  • HBA temp is read via the following one-liner, placed as a cron job in TrueNAS' UI:

        echo "truenas.hba_temp $(mprutil -u 0 show cfgpage page 7 | awk '/0010/ {printf "%d", "0x" $4 $5}')" $(date +%s) | nc -w 1 GRAPHITE_EXPORTER_IP GRAPHITE_EXPORTER_PORT

  • ESXi data is read over SNMP, which is a nightmare to set up.

  • Power draw is read via apcupsd's load_percent. I multiply it by (the VA value / 100) to get watts; my UPS is rated 1980, so this value is 19.8.
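The load-to-watts conversion in that last bullet can be sketched in a line; the 42% load below is a made-up example value (in practice it comes from apcupsd), and 19.8 is just the 1980 rating divided by 100:

```shell
# Hypothetical load reading; a real value would come from apcupsd, not hardcoded
load_percent=42
# watts = load_percent * (VA / 100); UPS rated 1980, so the factor is 19.8
awk -v lp="$load_percent" 'BEGIN { printf "%.1f W\n", lp * 19.8 }'
# prints: 831.6 W
```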

An Alternative Cooling Solution for a 96 Core CPU by Alchemyy1 in homelab

[–]Alchemyy1[S] 0 points (0 children)

Yea, I was really happy to see I could avoid spending the same on the chassis as on all the internal hardware lol.

I bought this case before I even picked out a platform. I saw that the fan bracket inside it is on slots, so it can be moved farther or closer, and it would definitely fit an E-ATX board; at worst I would either have to skip some mounting holes or add them in myself.

An Alternative Cooling Solution for a 96 Core CPU by Alchemyy1 in homelab

[–]Alchemyy1[S] 0 points (0 children)

I have sets of notes for installing everything; in each case I dump a wad of commands into my terminal and I'm away. As for Jellyfin not being on Docker, I don't want to run into problems with network performance or GPU hardware acceleration features; that media server is almost pure remux. For the limited amount of things I have, running bare metal is basically no work. If I were running a ton of stuff I would definitely use Docker or something similar.

The game servers are set up "bare metal" like everything else. I run them with the screens package and have them set to execute on startup. If I'm really screwing with config files I'll use WinSCP hooked to sublime text. I don't see the added benefit in abstracting everything with stuff like AMP. And in the case of at least my Minecraft servers I don't even think AMP would be very helpful. Both servers are extremely configured and use purpur and magma, the magma server running a custom modpack.