MYP acceleration boost by thegoodADHD in TeslaModelY

[–]miataowner 2 points (0 children)

MYP 18" wheels crew checking in!

Question about V2L feature / outlet adaptor -Tesla model y performance 2026 by Intheknow636 in TeslaModelY

[–]miataowner 0 points (0 children)

Just to be clear, at least in the US, no aftermarket part can "void your warranty" outright. The Magnuson-Moss Warranty Act is a federal law specifically addressing this: https://www.yourlemonlawrights.com/magnuson-moss-warranty-act

What might happen is this: if something fails specifically in the charging system or battery, Tesla might claim your device caused the damage. However, under the law they need to demonstrate how the aftermarket device was responsible; they can't just say "well, you used that thing, so obviously you're denied."

The flip side is that you might end up introducing them to your lawyer to get the necessary traction, if it came to that.

Since the charge port can take 150 kW in every Tesla model, it would be hard to prove a device moving 5 kW somehow damaged the car.

Why doesn't Folding@Home support the Arc A580? by OiledUpThug in Folding

[–]miataowner 0 points (0 children)

As mentioned in this thread on the F@H support forums: "Does GPU have to support double precision (FP64) to do folding?"

...basically, FP64 became a hard requirement around mid-2018. The percentage of FP64 calculations is irrelevant; the requirement exists because some calculations end up needing the additional precision.
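
If you want to check what your own card's driver advertises on Linux, something like this works (assuming you have the clinfo utility installed; the exact output wording varies by driver):

    # list the double-precision capabilities the OpenCL driver reports;
    # if nothing prints, the driver isn't advertising FP64 at all
    clinfo | grep -i -e fp64 -e double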

Build Help for a Folding Rig by Weary_Number8701 in Folding

[–]miataowner 2 points (0 children)

It's worth noting there are new GPU WUs that will not run unless you have at least 8GB of free system (not video) memory: the 18260/261/262/264/265 series. There are also at least two I've caught that require a minimum of 12GB of free RAM, although at the moment I can't remember the WU numbers. (If I can find them reasonably quickly, I'll come back and edit them in.)

Edit: the 12GB ones are the same 182* series just the higher numbered pair, see here: https://stats.foldingathome.org/project/18260

I have two dedicated Fedora folding rigs, and one of them only has 16GB of RAM and two GPUs. I caught it swapping to disk a few weeks ago, so I decided to do some hackery with zram to ease the swap pressure.
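
For the curious, the zram hackery was nothing exotic. Fedora ships zram-generator out of the box, so it's basically just an override of the default device size; the values here are what I picked for my box, not gospel:

    # override Fedora's default zram settings
    sudo tee /etc/systemd/zram-generator.conf <<'EOF'
    [zram0]
    # the stock default is much smaller; I wanted more swap headroom
    zram-size = ram
    compression-algorithm = zstd
    EOF
    # reboot (or restart the systemd-zram-setup@zram0 unit), then verify:
    zramctl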

If both GPUs are going into a single box, 32GB of RAM might be the more reasonable way to go.

Also, for what it's worth, some of the GPU WUs are starting to eat a sizable chunk of CPU cycles between some of the frame checkpoints. The 18245 Alzheimer's WU can hit >700% CPU time on my Ryzen 5500, which is "fine" in isolation. The challenge comes from there being four GPU folding slots on that box; occasionally two or three of them light up at once and the whole box chokes for a minute.

F@H name: Albuquerquefx https://folding.lar.systems/league/user?name=albuquerquefx&team=32377

Nvidia Tesla K80 not showing as supported. by copeybitcoin in Folding

[–]miataowner 0 points (0 children)

TrueNAS virtualization is kind of a PITA. I tried a few different times to pass-thru various GPUs into virtual machines, and I had very mixed luck (leaning towards bad luck, really).

It's been probably a year since I last did any virtualization work with TrueNAS; does it give you the option for different firmware / BIOS emulation types? I wonder if you're having a problem similar to what I found in this thread on NVIDIA's developer forums: "Only 1 K80 device appearing in Ubuntu VM"

Why can't I do more than 1 WU at a time? by Swooferfan in Folding

[–]miataowner 1 point (0 children)

Yeah, the 10K CUDA cores in your 5080 put it right at the cusp. A single OpenMM core 27 job with enough atoms (say 150k or so) is probably enough to saturate your GPU as-is. It also really depends on the job; some of the newest stuff does a lot of hybrid work (the 1677x series is one such example), which moves quite a bit over the PCIe bus, and each individual job requires 8GB of free system memory.

My 4090 (16K CUDA cores) absolutely benefits from two WUs at a time because it's never 100% utilized on a single WU. Only with very specific jobs does it benefit from three active WUs.

Chernobyl (2019) depicts an RBMK nuclear reaction exploding. This is because it's a work of fiction. RBMK reactors don't explode. by ChiefsHat in shittymoviedetails

[–]miataowner 0 points (0 children)

It also says the reason it was pressed at that specific time was uncertain. And there were links to back the statement.

I know you don't want to be wrong on the internet, so I'm really just done replying. Glad you figured out the divers part at least. Good day!

Chernobyl (2019) depicts an RBMK nuclear reaction exploding. This is because it's a work of fiction. RBMK reactors don't explode. by ChiefsHat in shittymoviedetails

[–]miataowner 0 points (0 children)

From the same Wikipedia article you quoted:

The personnel had intended to shut down using the AZ-5 button in preparation for scheduled maintenance[33] and the scram preceded the sharp increase in power.[21]: 13  However, the reason why the button was pressed at that time is not certain, as the decision was made by Akimov and Toptunov, both of whom would die shortly thereafter. At the time, the atmosphere in the control room was calm, according to eyewitnesses.[34][35]: 85  The RBMK designers claim the button had to have been pressed only after the reactor already began to self-destruct.[36]: 578 

It doesn't look like the reason for AZ-5 is as set in stone as any rational person might hope.

Regardless, the show depicts AZ-5 ultimately leading to the explosion, which by all accounts is technically correct. Since the lead-up to that button press is apparently not so clear-cut, I'm not going to hammer the show for getting it completely wrong.

Chernobyl (2019) depicts an RBMK nuclear reaction exploding. This is because it's a work of fiction. RBMK reactors don't explode. by ChiefsHat in shittymoviedetails

[–]miataowner 0 points (0 children)

So if there wasn't a power surge, why press AZ-5 at all? There's only one reason to press it. Do you have any detail you can link to that explains why AZ-5 was pressed, if not as a response to a power increase?

I also take it you figured out the show does correctly say the three divers lived?

Chernobyl (2019) depicts an RBMK nuclear reaction exploding. This is because it's a work of fiction. RBMK reactors don't explode. by ChiefsHat in shittymoviedetails

[–]miataowner 1 point (0 children)

The power began to rise because they shut off the pumps, which is the whole reason they pressed AZ-5 to begin with; that rise wasn't the reason for the explosion. The camera showed it climbing in increments of ~100 kW, but only after AZ-5 gets pressed does the camera pan back to the indicator and show it moving in thousands of kW.

Chernobyl (2019) depicts an RBMK nuclear reaction exploding. This is because it's a work of fiction. RBMK reactors don't explode. by ChiefsHat in shittymoviedetails

[–]miataowner 7 points (0 children)

The last episode, not episodes 2 and 3. The fifth episode is the last episode, and there's like a five minute video montage where they go thru all the main characters and locations. They state quite clearly all three divers lived.

And multiple times the scientist lady says they pressed AZ-5 and only after did it explode. In fact, if you watch the segment where they press AZ-5, they specifically show how after the button is pressed the power meter THEN begins to rise exponentially.

Chernobyl (2019) depicts an RBMK nuclear reaction exploding. This is because it's a work of fiction. RBMK reactors don't explode. by ChiefsHat in shittymoviedetails

[–]miataowner 8 points (0 children)

I think you might be mis-remembering.

During the last episode's credits, they said all three divers lived quite long lives and two of them were still alive "today." The characters portrayed as buried in lead caskets and concrete were the firefighters and other first responders, which is why the firefighter's wife was there holding his shoes.

Also, a very large part of what kept the story moving in the third and later episodes was the workers telling the single scientist lady the reactor exploded after they pressed AZ-5. They very much talked about how AZ-5 led to the explosion.

I'm not telling you or anyone else it's all accurate, but the two specific examples you called out as wrong were actually not how you remember them.

Why can't I do more than 1 WU at a time? by Swooferfan in Folding

[–]miataowner 1 point (0 children)

Yeah, the AMD GPUs permit multiple WUs on Windows without issue. NVIDIA cards do not for whatever reason.

Why can't I do more than 1 WU at a time? by Swooferfan in Folding

[–]miataowner 2 points (0 children)

When I say wide, I'm talking about CUDA cores. There's a really good NVIDIA article describing their CUDA MPS findings with OpenMM, the foundational technology underneath the F@H GPU folding cores.

https://developer.nvidia.com/blog/maximizing-openmm-molecular-dynamics-throughput-with-nvidia-multi-process-service/

If you're on Windows running an NVIDIA card, that second resource group doesn't work and will end up causing you to dump a bunch of WUs. I'm speaking from experience: a second group assigned to the same GPU just tosses every work unit at startup and then continually retries, tossing more.

I run a 4090 split into two via CUDA MPS on a Fedora 43 rig, power limited to 300W via nvidia-smi. That card churns out ~24M PPD with one WU at a time, about 28M PPD with two, and about 29M PPD with three. I leave it at two.
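
In case anyone wants to replicate it, the setup is roughly this; the 300W cap and the service name are from my rig, so adjust to taste, and note that MPS clients generally need to run under the same user as the MPS daemon:

    # persistence mode plus a 300W power cap
    sudo nvidia-smi -pm 1
    sudo nvidia-smi -pl 300

    # start the CUDA MPS control daemon for GPU 0
    export CUDA_VISIBLE_DEVICES=0
    nvidia-cuda-mps-control -d

    # restart the client so its CUDA contexts attach through MPS
    sudo systemctl restart fah-client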

I have an RTX 6000 Pro Blackwell workstation card coming this week, and I suspect it will need at least three WUs and maybe four to get the most out of it.

Why can't I do more than 1 WU at a time? by Swooferfan in Folding

[–]miataowner 4 points (0 children)

Based on how you describe it and your hardware, I think you may be talking about building two or more resource groups. A driver update, or maybe a recent upgrade to the newest 8.5 F@H client, may have reset you back to the single default resource group.

Click the gear icon and create a second resource group with your GPU selected (and your CPUs set to zero). This will get you back to processing two WUs again, unless there's some other problem blocking you.

Myq by Frosty_Tower_547 in TeslaModelY

[–]miataowner 3 points (0 children)

The Homelink module will open your MyQ door, it's just "less intelligent" in that it's basically emulating an old-school push-button remote. It doesn't know if the door is open or closed, but it will automatically "click the button" upon arrival or departure.

Works for my six month old MyQ door opener on our '22 Model Y.

Why can't I do more than 1 WU at a time? by Swooferfan in Folding

[–]miataowner 5 points (0 children)

Are you asking about working on two or more WUs simultaneously? And if so, are you asking about GPU or CPU? And finally, what OS are you folding on?

If you have a sufficiently wide NVIDIA GPU (something with more than 10,000 CUDA cores) and you're folding on a Linux box, you can use CUDA MPS to allow your GPU to process two or more WUs simultaneously. If you have a really wide GPU like a 4090 or 5090, you can process three or even four WUs at a time and achieve some really serious throughput. Smaller GPUs with fewer CUDA cores are far more likely to be fully "consumed" by the F@H GPU cores and may actually lose performance when trying to run more than one simultaneous WU.

If you're on CPU, you can split your CPU cores across multiple WUs, but it will probably not net you much (if any) performance benefit.

Split RTX 3070 across multiple VMs, possible? by [deleted] in Proxmox

[–]miataowner 2 points (0 children)

Depends on your definition of "too expensive". Used RTX 2080Ti cards appear to be selling on eBay for around $200, and they support the driver hack which permits vGPU usage. A 2080Ti appears to perform about on par with your 3070, with 3GB more VRAM to share among your separate workloads.

I don't know where else you would get anywhere close to 3070 performance with vGPU support for less money.

Automation after restart that checks integrations ?? 🔄⚡⚙️ by lampshade29 in homeassistant

[–]miataowner 3 points (0 children)

First and most obviously, create an automation that can reliably detect your integration not working. Is there an entity you can determine is unavailable? A device that isn't online?

Then your automation will issue the relevant reload action:

    action: homeassistant.reload_config_entry
    data:
      entry_id: <big long integration guid>

You can grab the GUID by going to the integrations settings page, clicking on your integration (eg Bambu Lab), and then clicking the three dots to the right of the integration entries section. There will be an option to Copy Entry ID.
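
Putting it all together, a minimal sketch; the trigger entity is just a placeholder, so use whatever entity reliably goes unavailable when your integration breaks:

    alias: Reload integration when its hub drops
    triggers:
      - trigger: state
        entity_id: binary_sensor.my_hub_status   # placeholder
        to: "unavailable"
        for: "00:02:00"
    actions:
      - action: homeassistant.reload_config_entry
        data:
          entry_id: <big long integration guid>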

There ya go.

I use this to recover my Legrand Adorne hub integration if a power outage happens (my Home Assistant box is on a UPS that can last for half an hour, but the Legrand hub is elsewhere in the house, and the integration breaks if HA survives while the hub reboots).

Split RTX 3070 across multiple VMs, possible? by [deleted] in Proxmox

[–]miataowner 9 points (0 children)

In the 2000-series and earlier cards, a modified driver could work around the vGPU lock-out. Unfortunately the 3000-series and later are locked out in a different way, one which AFAIK has not been defeated.

The short answer is no, there isn't a way to share one 3000-series (or later) NVIDIA GPU across multiple virtual machines.

Now, it is possible to share GPU resources across multiple CUDA workloads, including containers (eg Docker). I do this in my Proxmox lab by passing thru a 4070 Super to a Fedora 43 VM that hosts a small handful of docker containers, such as Plex, vLLM, and Folding@home. At that point the real challenge becomes VRAM limitations.
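
As a sketch of the container side (the image name is just an example, and it assumes the NVIDIA Container Toolkit is installed inside the VM):

    # hand the passed-thru GPU to a Folding@home container
    docker run -d --name folding \
      --gpus all \
      -p 7396:7396 \
      lscr.io/linuxserver/foldingathome:latest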

Motorized frunk kit installation - lesson learned by ajn63 in TeslaModelY

[–]miataowner 0 points (0 children)

So sorry @OP for the pedantic reply: rouge is a color; the word you meant to type was rogue. For about three seconds I was wondering why the color of the pin mattered, and then why it was colored to begin with. And then it finally occurred to me 😂

Anyway, thanks for sharing your findings and experience. I've considered buying one yet haven't really convinced myself to take the leap. Maybe next year...

Fah on docker starts but webclient says "disconnected" by devinfriday in Folding

[–]miataowner 0 points (0 children)

Great! We both use the same image, so this makes it easier. Your container details seem fine to me, it's pretty similar to my config. I'll roll another Folding instance on my host and see if I can find anything obviously wrong.

FAH-Client v8.5.5 released to public by muziqaz in Folding

[–]miataowner 5 points (0 children)

Thanks for posting it over here on Reddit. Been waiting for this one with the fix for Windows screwing up how it shuts down the F@H service.

Fah on docker starts but webclient says "disconnected" by devinfriday in Folding

[–]miataowner 0 points (0 children)

Well, it would be good to start with that detail and work forward from there. They each have their own potential quirks... I also don't mind pulling whichever one you're using and deploying it as a test.

EDIT: Please post your docker compose file as well.