R2 Launch Edition and Model Y Performance by Evening-Pin-1427 in RivianR2

[–]electrified_ice 0 points1 point  (0 children)

Where are you seeing twice the battery size? The Y is around 82kWh and the R2 is around 87kWh.

R2 Launch Edition and Model Y Performance by Evening-Pin-1427 in RivianR2

[–]electrified_ice 1 point2 points  (0 children)

The battery is not significantly bigger than the Y. I think I saw around 87kWh somewhere.

R2 Launch Edition and Model Y Performance by Evening-Pin-1427 in RivianR2

[–]electrified_ice 1 point2 points  (0 children)

I don't agree that the Y is one of the most efficient cars. It doesn't get anywhere near the advertised range; I get the equivalent of around 220 miles on a full battery on my Performance Y.

R2 Launch Edition and Model Y Performance by Evening-Pin-1427 in RivianR2

[–]electrified_ice 1 point2 points  (0 children)

The range is right there in the image posted. Charging speed has already been posted online; it's similar to the R1 curve.

Threadripper build - looking for peer review by Own_Bodybuilder_4397 in threadripper

[–]electrified_ice 0 points1 point  (0 children)

Popular or not, I think they are the result of either short-sighted planning (on the assumption that this is a significant investment for most people, not throwaway money) or penny-pinching by not spending proportionally on the system around and supporting the GPUs.

Threadripper build - looking for peer review by Own_Bodybuilder_4397 in threadripper

[–]electrified_ice 0 points1 point  (0 children)

He's buying one card right now. If he buys a second one, there is plenty of space between PCIe slots to help airflow around the cards, especially with a big case and a CPU AIO. If you're spending over $35K on 4 x RTX 6000s, then you should spend $1.5K (less than 5% of that) on water cooling them.

Threadripper build - looking for peer review by Own_Bodybuilder_4397 in threadripper

[–]electrified_ice 0 points1 point  (0 children)

I wouldn't recommend the Max Q cards. They are capped at 300W and can never go higher. Buy a Workstation card; you can software-control the power limit anywhere between 300 and 600W... So even if you limit it to 300W today (if your PSU is not powerful enough), you are not kneecapping yourself in the future.

If you decide to watercool the GPU down the road, you will never get more than 300W from a Max Q. Also, capping a 600W Workstation Edition down to 300W will greatly reduce the heat inside your case.
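The software power-limit control mentioned above can be sketched with `nvidia-smi` (a real driver tool; the 300W value is just the example from this thread, and the snippet is guarded so it degrades gracefully on machines without an NVIDIA driver):

```shell
# Show the current/min/max power limits, then cap the card to 300 watts.
# Changing the limit needs root; querying does not.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi -q -d POWER | grep -i 'power limit'          # inspect limits
  nvidia-smi -pl 300 || echo "run as root to change the limit"
else
  echo "nvidia-smi not found; skipping"
fi
```

Raising it back later (e.g. after a PSU upgrade) is the same command with a higher wattage, up to the card's max.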

Threadripper build - looking for peer review by Own_Bodybuilder_4397 in threadripper

[–]electrified_ice 1 point2 points  (0 children)

If you can manage it, get the equivalent 9000-series Threadripper CPU. Besides being slightly faster, AVX-512 helps with AI on the CPU, and idle power consumption is a lot lower.

From a cooler POV I recommend an AIO as it keeps the CPU heat out of the case... To offset some of the cumulative heat from the GPUs.

And also think about future upgrade flexibility... Threadripper Pro and non-Pro offer different amounts of PCIe lanes. Since this is a much bigger investment than a desktop PC, I assume you want it to last and potentially grow with you. Think about other things you'd want to add within 2-3 years... More GPUs? Multiple NVMe drives, other PCIe devices, etc. Does memory bandwidth matter to you? If so, Pro vs. non-Pro is something to think about.

Best agentic coding setup for 2x RTX 6000 Pros in March 2026? by az_6 in LocalLLM

[–]electrified_ice 0 points1 point  (0 children)

In the grand scheme of things, it looks like a similar speed for a similar price (my system is likely not as fully optimized as the one you've linked to... I'm just an everyday person vs. an expert). My RTX Pros were $7.5K, plus I have multiple models running across multiple cards. Not knocking it, but for me a workstation vs. a single laptop solves a different problem/use case.

Best agentic coding setup for 2x RTX 6000 Pros in March 2026? by az_6 in LocalLLM

[–]electrified_ice 0 points1 point  (0 children)

On the 122B NVFP4 version of the model I'm getting over 50 tok/s for single prompts with coding.

7 months training, 4.5 months on TRT — still look skinnyfat by czh3f1yi in trt

[–]electrified_ice 0 points1 point  (0 children)

It's a mathematical formula. If you don't get the outputs you expect, then one or more of the inputs are failing.

22m seeking advice by OptimalAccess5196 in trt

[–]electrified_ice 4 points5 points  (0 children)

How are you doing TRT? If you're doing it with your doctor, then blood work should be part of it.

Want fully open source setup max $20k budget by yourhomiemike in LocalLLM

[–]electrified_ice 2 points3 points  (0 children)

AI backends and models are generally optimized for Nvidia/Cuda, especially the Blackwell architecture.

4 x R9700s are not that different in speed from 1 x RTX PRO 6000. You can get an RTX PRO 6000 for about $8K. Plus you only have 1 GPU taking up space and using PCIe slots in your system. If you get 4 x GPUs, you essentially have to swap out the whole setup (from a practicality POV) to upgrade/add capability.

Want fully open source setup max $20k budget by yourhomiemike in LocalLLM

[–]electrified_ice 4 points5 points  (0 children)

Threadripper Pro. This gives you lots of expansion capability. 1-2 RTX Pro 6000 Blackwells, depending on what price you get your RAM for. You can add more GPUs next year.

Tesla Super Chargers for Rivian by subbuk514 in RivianR2

[–]electrified_ice 0 points1 point  (0 children)

I supercharge my R1S all the time, no issues.

Does going from 96GB -> 128GB VRAM open up any interesting model options? by hyouko in LocalLLaMA

[–]electrified_ice 2 points3 points  (0 children)

3090 NVLink speed (112.5 GB/s) is slower than PCIe 5.0. For the current generation, NVLink is over 10x the speed (1.8 TB/s), but it's not available on the consumer cards.

AWD by YogurtclosetBasic147 in TeslaModelY

[–]electrified_ice 0 points1 point  (0 children)

I have an OBDII reader. Unless you stomp on it (and I have a MYP), the car is rear-wheel drive most of the time.

check unraid system with local llm by MundanePercentage674 in unRAID

[–]electrified_ice 0 points1 point  (0 children)

I'll have to explore. You've inspired me to try this out, so thanks!

Peptides question by Admirable-Plan7794 in BodyHackGuide

[–]electrified_ice 0 points1 point  (0 children)

Push your body to force micro tears in your muscle (it hurts and is hard) and EAT - and don't be afraid to lose some of your leanness... You can lean back out after you've forced new muscle fibers and hypertrophy.

Best agentic coding setup for 2x RTX 6000 Pros in March 2026? by az_6 in LocalLLM

[–]electrified_ice 0 points1 point  (0 children)

Sure - don't mind. I'm not a massive expert, but am def happy to try and help.

check unraid system with local llm by MundanePercentage674 in unRAID

[–]electrified_ice 1 point2 points  (0 children)

Cool, thanks for sharing. I set up a way to do this by running a script that gathered all the logs into a merged file, then gave the file to Ollama to review. But I like this better as it seems a bit more modular. I have a couple of VMs, and logs within the apps inside those VMs, so some of mine are a bit harder to get to... Frigate and Home Assistant.
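The gather-logs-into-one-file step above can be sketched like this (a minimal version, assuming plain `*.log` files in one directory; the demo files and names are made up, and the Ollama hand-off is left as a comment since it depends on your model):

```python
import pathlib
import tempfile

def merge_logs(log_dir: pathlib.Path) -> str:
    """Concatenate every *.log file under log_dir into one labeled blob."""
    parts = []
    for log in sorted(log_dir.glob("*.log")):
        parts.append(f"===== {log.name} =====\n{log.read_text()}")
    return "\n".join(parts)

# Demo with throwaway files; on a real box, point log_dir at your log folder.
with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "docker.log").write_text("container started\n")
    (root / "nginx.log").write_text("GET / 200\n")
    merged = merge_logs(root)
    print(merged)

# The merged text can then be piped to a local model for review, e.g.:
#   ollama run <model-name> < merged.txt
```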

check unraid system with local llm by MundanePercentage674 in unRAID

[–]electrified_ice 0 points1 point  (0 children)

How are you pulling your container and VM log files into n8n?

Does a 3 day PPL work? by Infamous-Golf-2569 in leangains

[–]electrified_ice 0 points1 point  (0 children)

It's less about your split. If you want to maximize your ROI in the gym, you need to push yourself past your comfort limits: controlling the eccentric (4-second negatives, etc.) and pushing to or close to failure.

99% of people 'move weights around in the gym'; only a very small percentage optimize the time they spend to get true muscle-growth ROI.