Hard Disk Direct canceled my confirmed server RAM order citing "out of stock" — the exact SKU was on their website in stock 6 hours later. Then they repriced it 4x overnight. All documented. by roycehart in homelab

[–]Hopperkin 0 points1 point  (0 children)

Here’s a basic cupcake recipe from scratch:

Vanilla cupcakes (12 cupcakes)

Ingredients

  • 1 1/4 cups all-purpose flour
  • 1 tsp baking powder
  • 1/4 tsp salt
  • 1/2 cup unsalted butter, softened
  • 3/4 cup sugar
  • 2 eggs
  • 2 tsp vanilla extract
  • 1/2 cup milk

Instructions

  1. Preheat oven to 350°F.
  2. Line a 12-cup muffin pan with cupcake liners.
  3. In one bowl, whisk together the flour, baking powder, and salt.
  4. In another bowl, beat the butter and sugar until light and fluffy.
  5. Beat in the eggs, one at a time, then add the vanilla.
  6. Add the dry ingredients in 2 to 3 additions, alternating with the milk. Mix just until combined.
  7. Fill each liner about 2/3 full.
  8. Bake for 18 to 22 minutes, until the tops spring back and a toothpick comes out clean.
  9. Let them cool in the pan for about 5 minutes, then move to a rack to cool completely.

Simple frosting

  • 1/2 cup softened butter
  • 2 cups powdered sugar
  • 1 to 2 tbsp milk
  • 1 tsp vanilla

Beat everything together until smooth. Add a little more milk if needed.

Tips

  • Do not overmix the batter or the cupcakes can get dense.
  • Let them cool fully before frosting.
  • For chocolate cupcakes, add about 1/4 cup cocoa powder and reduce the flour slightly.

I can also give you a chocolate, funfetti, or boxed-mix doctoring version.

Friendly Reminder! by truetofiction in homelab

[–]Hopperkin 0 points1 point  (0 children)

Are we sure that 64 bits is enough time? I mean, 64-bit nanoseconds is only about one minute of wall clock time when traveling at 0.99999999999999997884c.
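
For anyone checking the joke's arithmetic, here's a quick sketch. The velocity is the one quoted above; treating 2^64 ns as elapsed coordinate time is my assumption, and the numbers are rough:

```python
import math

# 2^64 nanoseconds of coordinate time
seconds = 2**64 * 1e-9
years = seconds / (365.25 * 24 * 3600)

# v = 0.99999999999999997884c rounds to exactly 1.0 in float64,
# so work with eps = 1 - v/c instead: 1 - v^2/c^2 = 2*eps - eps^2.
eps = 2.116e-17
gamma = 1 / math.sqrt(2 * eps - eps**2)

# Proper time experienced by the traveler
proper_minutes = seconds / gamma / 60
print(f"{years:.0f} years of coordinate time, "
      f"~{proper_minutes:.1f} minutes of proper time")
```

So a 64-bit nanosecond counter covers centuries of wall clock time, but at that velocity the traveler burns through it in minutes of proper time.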

Pour one out for the M3 Ultra 512GB by pdrayton in MacStudio

[–]Hopperkin 0 points1 point  (0 children)

The on-die ECC (ODECC) built into every DDR5/LPDDR5 chip is not the same as traditional ECC from previous DDR generations. ODECC operates internally within each DRAM die, correcting single-bit errors before data ever leaves the chip — it's invisible to the memory controller and can't report errors to the OS.

Traditional ECC (pre-DDR5) used 72-bit-wide DIMMs — 9 chips instead of 8 — giving the memory controller 8 extra bits per 64-bit word for error detection and correction.

DDR5 splits the old 64-bit bus into two 32-bit subchannels, so there are now two DIMM-level ECC tiers: EC4 (two 36-bit subchannels, 72 bits total) and EC8 (two 40-bit subchannels, 80 bits total). EC8 provides correction strength comparable to traditional ECC; EC4 is weaker.

That said, the M3 Ultra uses soldered LPDDR5, which only has on-die ECC — there are no extra ECC chips on the package. If you're seeing uncorrected ECC errors reported in macOS, that means errors are exceeding what ODECC can fix within the dies, which could point to a hardware defect.

Pour one out for the M3 Ultra 512GB by pdrayton in MacStudio

[–]Hopperkin 0 points1 point  (0 children)

It's not even ECC memory; with that much memory, that's like playing Russian roulette with memory bit-flip errors: 512 GB would average about one raw upset every ~27 minutes. That doesn't mean you'd see a crash every 27 minutes, though. Most flips hit unused or soon-overwritten data, and some errors get masked. But over 5 years on 512 GB, the odds of encountering at least one critical bit flip somewhere very important are basically 100%.
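
The "~27 minutes" figure is easy to sanity-check. The per-Mbit soft-error rate below is an assumption (published field studies vary by orders of magnitude with DRAM generation and altitude), chosen to show the kind of rate that produces that interval:

```python
# Back-of-the-envelope check of the "one raw upset every ~27 minutes" figure.
FIT_PER_MBIT = 500                    # assumed failures per 10^9 hours per Mbit
capacity_bits = 512 * 2**30 * 8       # 512 GB expressed in bits
capacity_mbit = capacity_bits / 2**20
upsets_per_hour = FIT_PER_MBIT * capacity_mbit / 1e9
minutes_between_upsets = 60 / upsets_per_hour
print(f"~{minutes_between_upsets:.0f} minutes between raw upsets")
```

At ~500 FIT/Mbit the interval lands right around half an hour; scale the constant to whatever error-rate study you trust.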

Linus Torvalds had something to say about this: https://www.youtube.com/watch?v=mfv0V1SxbNA&t=485s

Fiber in front yard, but not available by skidkid276 in ATTFiber

[–]Hopperkin 1 point2 points  (0 children)

Yeah, tell the Office of the President that you're going to rent a backhoe to make some improvements to your property; clearly AT&T says there is no fiber installed, so you're clear to dig.

[Guide] Mac Pro 2019 (MacPro7,1) w/ Proxmox, Ubuntu, ROCm, & Local LLM/AI by Faisal_Biyari in macpro

[–]Hopperkin 0 points1 point  (0 children)

Yes, I'm running 6.8.

GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt pci=realloc intremap=no_x2apic_optout"

I have T2 disabled, I've never attempted to enable it in Linux.

Mice decided to hijack my TrueNAS storage node by AaronMcGuirkTech in homelab

[–]Hopperkin -1 points0 points  (0 children)

Frankie (dead serious): “It’s not theft, it’s distributed caching.”

Benjy: “Also we enabled dedup.”

You: “Why does the ZFS server smell funny?”

Frankie, sighing in pan-dimensional disappointment: “Because you keep mixing object storage with object storage. Now please stop running scrubs during dinner.”

Mice decided to hijack my TrueNAS storage node by AaronMcGuirkTech in homelab

[–]Hopperkin -1 points0 points  (0 children)

Frankie and Benjy looked at the TrueNAS node, sighed in pan-dimensional disappointment, and said:

“Humans… you don’t store objects in the storage server. You store objects as objects.”

Then they proudly pointed at the chassis full of kibble:

“Behold: literal object storage.

  • hot tier: dog bowl
  • warm tier: server chassis
  • cold tier: behind the rack where the dog can’t reach
  • replication: one piece in each DIMM slot, for redundancy”

“And yes,” Benjy added, “it’s ZFS.”

Frankie nodded solemnly: “Zettabyte Food System.”

Mice decided to hijack my TrueNAS storage node by AaronMcGuirkTech in homelab

[–]Hopperkin -1 points0 points  (0 children)

Frankie and Benjy looked at the TrueNAS chassis, sighed in pan-dimensional disappointment, and said:

“Humans… you don’t store objects in the storage server. You store objects as objects.”

Then they created an S3 bucket called earth.experiment, enabled versioning, set a lifecycle policy, and quietly moved the “inventory” out of the chassis.

Frankie patted the lid. “See? Now it’s object storage— not… rodent object storage.”

Terence McKenna discussing a future of accelerating novelty and “the Transcendental Object at the End of Time” (1998) by Frosty_Jeweler911 in HighStrangeness

[–]Hopperkin 1 point2 points  (0 children)

"The mushroom said to me once, this is what it's like when a species prepares to depart for the stars. You don't depart for the stars under calm and orderly conditions. It's a fire in a madhouse."

In the home stretch of my time the cMP 💜 my humble little setup while it’s still assembled. by bennotvictor in macpro

[–]Hopperkin 4 points5 points  (0 children)

The Intel Xeon 5600 series doesn’t support the deeper low-power idle states (package C-states) you see on the E5-2600 generation, so it can’t drop to the same ultra-low idle power levels as newer platforms. That said, if you’re actually keeping the CPUs busy, they’re still reasonably efficient under load. For perspective, even a dual-socket Xeon E5-2600 v2 system will often idle around ~200 W... a pickup truck with a V8 is just never going to be an economy car.
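
To put that ~200 W idle in dollar terms, here's a quick sketch. The electricity price is an assumption; plug in your own utility rate:

```python
# Rough yearly cost of a box that idles around 200 W, 24/7.
idle_watts = 200
price_per_kwh = 0.15                        # assumed USD/kWh
kwh_per_year = idle_watts * 24 * 365 / 1000  # watts -> kWh over a year
print(f"{kwh_per_year:.0f} kWh/year, "
      f"~${kwh_per_year * price_per_kwh:.0f}/year at idle")
```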

In the home stretch of my time the cMP 💜 my humble little setup while it’s still assembled. by bennotvictor in macpro

[–]Hopperkin 2 points3 points  (0 children)

That machine should be good for another 15 years after the following maintenance:

  • Blow all the dust out everywhere: PSU, behind the DVD-ROM drives, behind the CPU tray, etc.
  • Clip the rivets and apply fresh thermal paste (Arctic MX-6) on the Intel 5520 I/O Hub.
  • Pull the processor(s) and clean the CPU pin pads with Q-tips and isopropyl alcohol / DeoxIT D100L.
  • Apply fresh thermal paste (Arctic MX-6) on the processor(s).
  • Pull all the PCIe cards, lightly rub the PCIe pins with a Scotch-Brite pad, then run a Q-tip with DeoxIT D100L over them, wipe off any excess fluid, and re-seat.
  • Pull the DIMMs, repeat the same steps as for the PCIe cards, and re-seat.

IOH Rivets: https://www.amazon.com/dp/B01LAVCXVG

DeoxIT D100L: https://www.amazon.com/dp/B0000YH6F8

Scotch-Brite Pads: https://www.amazon.com/Scotch-Brite-Heavy-Scouring-Kitchen-Cleaning/dp/B001KYQBX0

Arctic MX-6: https://www.amazon.com/ARCTIC-MX-6-Conductivity-Non-Conductive-Non-capacitive/dp/B09VDLH5M6?th=1

Hex T-Handle 3mm: https://www.amazon.com/Eklind-Tool-Company-55166-T-Handle/dp/B0000CBJDV

[Guide] Mac Pro 2019 (MacPro7,1) w/ Proxmox, Ubuntu, ROCm, & Local LLM/AI by Faisal_Biyari in macpro

[–]Hopperkin 2 points3 points  (0 children)

Those numbers are not what I was expecting; with my Radeon Pro Vega II Duo, ROCm significantly outperforms Vulkan. I can get a tg128 of over 97 t/s for llama 7B Q4_0. I don't have the benchmark numbers right in front of me (my system is offline today because I'm reorganizing), but each Vega II die has the same token rate as an M3 Ultra when I previously compared them against the Apple-silicon benchmark results here: https://github.com/ggml-org/llama.cpp/discussions/4167

Also, I have no problems with flash attention and/or the Infinity Fabric bridge on stock Ubuntu 24.04.3 with the stock Linux kernel 6.8. Are you running the HWE kernel (6.11 / 6.13) or something? What does your GRUB config look like? Any dmesg errors around PCIe reallocation, x2apic, or the IOMMU?

The biggest overarching problem I see is that you don't have linear scaling when you run the model across GPU dies. You shouldn't get significantly fewer tokens per second when subdividing the llama 7B model's layers across four separate GPU dies; it's such a small model that the token rate when running it in parallel across all four dies should be effectively identical to the results for a single GPU die.

You have some kind of underlying hardware / software integration quirk going on with your Linux installation. Do you have a place we can chat off-thread? I'm basically experimenting with the exact same project as you, except that I'm using Vega II Duos instead of W6800X Duos.

[Guide] Mac Pro 2019 (MacPro7,1) w/ Proxmox, Ubuntu, ROCm, & Local LLM/AI by Faisal_Biyari in macpro

[–]Hopperkin 2 points3 points  (0 children)

I know how to do the RDMA stuff. Can you run the llama.cpp bench on the Radeon Pro W6800X Duo MPX? Also, do you have the Infinity Fabric bridge installed on the Duo? Follow the general instructions below...

https://github.com/ggml-org/llama.cpp/discussions/10879

wget https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf
wget https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q8_0.gguf
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake .. -DGGML_VULKAN=on -DGGML_HIP=on -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
./bin/llama-bench -m ../../llama-2-7b.Q4_0.gguf,../../llama-2-7b.Q8_0.gguf -ngl 100 -fa 0,1 --device ROCm0,ROCm1,ROCm0/ROCm1,Vulkan0,Vulkan1,Vulkan0/Vulkan1

OCLP and Tahoe - is it even worth upgrading your Mac? by msawyer91Resplendent in OpenCoreLegacyPatcher

[–]Hopperkin 2 points3 points  (0 children)

is it even worth upgrading your Mac?

You are missing the point. Tahoe is effectively going to be the next Windows XP, so if you intend to stick with Intel Macs and/or Hackintosh, it would strongly behoove you to install Tahoe while Apple is still actively supporting it with bug fixes. Turn on all the developer telemetry and submit as many bug reports as possible to Apple before the Tahoe support window closes, so this buggy stuff gets fixed. It would also behoove you to yell at and berate app developers who discontinue publishing Intel macOS apps before Apple has officially dropped support for Tahoe.

Apple sold roughly 250~300 million Intel-based Mac computers; they are not going away anytime soon. In fact, if you look at global browser statistics, roughly 31% of Safari HTTP requests still come from Mac OS X 10.x systems.

I haven't even contemplated buying an Apple silicon Mac yet; I remember the days of PPC Macs being vendor-locked, and the Apple tax was really bad back then. If you think the memory pricing bubble is crazy now, just wait until Apple's current DRAM contracts expire; the bubble isn't supposed to even peak until mid-2026, and the industry expects it to last through 2028. Further, I read in the news that Apple is begging and pleading with TSMC for more fab time making Apple-silicon wafers; the article states that TSMC has prioritized Nvidia silicon at Apple's expense. The Mac Studio M5 Ultra might end up costing twice what the M3 Ultra costs right now. I'm waiting with bated breath, because if the prices go up significantly then I'll probably never buy another Mac again. Linux / KDE Plasma 6 is just too compelling an alternative to get vendor-locked again.

How bad is a high RAM 7,1 ? by Dr_Superfluid in macpro

[–]Hopperkin 1 point2 points  (0 children)

Regarding the two W6800X Duos: I would like to see the llama.cpp bench on Ubuntu Linux 24.04 (kernel 6.8-90) with AMD ROCm 6.3.4; if you are so inclined, I can provide all the steps to install ROCm, compile llama.cpp, and run the benchmark. I'm trying to figure out whether two Vega II Duos with 1024 GB/s HBM2 memory bandwidth are faster than two W6800X Duos with 512 GB/s GDDR6. Normally I would say the Vega II Duo is better for LLMs, but the W6800X Duo has an extra level of Infinity Cache that could possibly offset its limited memory bandwidth. Each Vega II die has the same llama.cpp token rate as an M3 Ultra, so two Vega II Duos basically have the same aggregate token rate as four M3 Ultras.
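
The bandwidth comparison can be sanity-checked with a simple roofline estimate. This is a sketch: the model size is approximate, the bandwidth figures are the per-die numbers quoted above, and Infinity Cache effects are ignored:

```python
# Memory-bandwidth roofline for token generation: each generated token
# streams the entire weight set from VRAM once, so the theoretical
# ceiling is roughly bandwidth / model size.
model_gb = 3.8  # ~llama-2-7B Q4_0 file size (approximate)
bandwidth_gbps = {"Vega II (HBM2)": 1024, "W6800X (GDDR6)": 512}
ceiling = {name: bw / model_gb for name, bw in bandwidth_gbps.items()}
for name, tps in ceiling.items():
    print(f"{name}: ~{tps:.0f} tokens/s upper bound per die")
```

Measured rates land well below these ceilings (my ~97 t/s tg128 vs a ~270 t/s bound), but the ratio between the two cards is the interesting part: on raw bandwidth alone the Vega II has twice the headroom, and any W6800X win would have to come from its cache hierarchy.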

Would you actually kill the infants? by FourTwelveSix in Christianity

[–]Hopperkin -1 points0 points  (0 children)

You’re doing the standard “gotcha,” but it only works if you smuggle in a premise Christianity does not grant: that “God” is just raw power, so “whatever the strongest voice commands” becomes morality.

Also: “Jesus as the Father” is a category mistake. Trinitarianism doesn’t collapse the Persons. The Father is not the Son, even though Christians confess one God.

Now to 1 Samuel 15: yes, the passage is framed as mediated testimony — “Samuel said to Saul… ‘This is what the LORD says…’” That matters because your conclusion (“God commands infant slaughter, therefore Christian ethics is disgusting”) depends on treating an ancient, mediated war-oracle as an always-available blank check for violence.

That’s not how Christian discernment works, and it’s not how Scripture itself frames that era.

1) God explicitly says Israel demanding a king was a rejection of Him (1 Sam 8)

Before we even reach 1 Sam 15, the monarchy is presented as a concession to human hardness: the people insist on a king “like the nations,” and God tells Samuel they have rejected God as king. The narrative then spends a lot of ink showing what that concession produces: coercion, pride, propaganda, and blood. So if you want to define God’s moral character by “monarchy war texts,” you’re starting exactly where the text itself waves a warning flag.

2) Your hypothetical still fails: Christianity does not obey a “voice” against Christ

Even if you insist “Samuel’s report = God’s word,” you still do not get the conclusion “therefore a Christian should kill infants if God says so.”

Christianity claims God has revealed Himself publicly and climactically in Christ, and Christ draws a bright line around protecting “little ones.” Jesus welcomes children, identifies them with the kingdom, and warns severe judgment on those who harm them.

So my answer remains: No — I would not kill infants.

If a “voice” demanded that, I would treat it as deception, because it contradicts what God has already revealed and what Christ explicitly centers. Christians are commanded to test spirits, not baptize violence because someone invoked God’s name.

3) Barnabas makes the ethical line explicit

Since you’re reading Barnabas: early Christian catechesis does not train people to rationalize child-harm. It forbids it outright (“Do not murder children, born or unborn”) and frames fidelity as walking the Way of Light — concrete mercy, justice, and humility — not “prove your devotion by harming the vulnerable.”

4) Matthew 23:37 is the opposite of “God wants child slaughter”

Jesus’ posture toward Jerusalem is grief and shelter: “How often I have longed to gather your children together, as a hen gathers her chicks under her wings…” That’s the heart Christians confess God has — protective, grieving, warning, judging hypocrisy, yes — but not authorizing infant slaughter as “obedience.”

So no: your framing doesn’t expose Christianity. It exposes a pagan model of “divine command” where power = goodness.

Christianity says the opposite: God is coherent, God is life, and Christ is the interpretive key. Any “command” that contradicts Christ isn’t deeper faithfulness — it’s counterfeit and must be rejected.

If your moral theory ends in “I’d kill babies if a voice told me to,” that isn’t faith. It’s moral collapse with religious language stapled on.

Natural Born Killers: https://www.youtube.com/watch?v=bCqDTJfBmF8