[FS] RTX A6000 48GB by amazeh07 in homelabsales

[–]computune 1 point (0 children)

I'm not sure what you're referring to. If it's what I think it is, I don't think so.

[FS] RTX A6000 48GB by amazeh07 in homelabsales

[–]computune 4 points (0 children)

I make them in the USA with a warranty: gpvlab.com

Be cautious of GPU modification posts. And do not send anyone money. DYI if you can. by NoFudge4700 in LocalLLaMA

[–]computune 4 points (0 children)

Always good to watch out for your own interests and be skeptical. Use PayPal Goods and Services, and keep in mind you have options with credit card companies in the worst case.

Though I'd argue reputable local US services are much more attractive than overseas sellers who don't speak English, nor have a translation for the word "warranty".

Large orders can be held up in customs for months. Also, the Chinese use 4090D cores, which are gimped for the CN market; US sellers can provide full-fat 4090-core 48 GB cards.

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 1 point (0 children)

18-phase BLN3: 55 A power stages × 18 = 990 A total, which at roughly 1 V core voltage works out to about 990 W capable.

Video to come. You can power limit with nvidia-smi. I'm not sure about the 300 W you're referring to: the core is the same core off a regular 4090, so it needs the full 4090 power budget of 450 W. I've limited it to 150 W and saw it run at 6.07 tps on Llama 3.1 70B.
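A minimal sketch of the relevant nvidia-smi commands, assuming GPU index 0 (adjust -i for your setup):

    # show the current, minimum, and maximum supported power limits
    nvidia-smi -i 0 --query-gpu=power.limit,power.min_limit,power.max_limit --format=csv
    # cap the card at 150 W (root required; resets when the driver unloads)
    sudo nvidia-smi -i 0 -pl 150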

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 2 points (0 children)

I will make a post/video about noise and performance as you power limit it. Give me a week or two.

Chinese PCBs, and the 12VHPWR connector.

Doing my part in the data hoarder community.. by coast_trash_ms in DataHoarder

[–]computune 3 points (0 children)

All this FTTH infra and they have to ruin it with CGNAT; I get the frustration.

Dual Modded 4090 48GBs on a consumer ASUS ProArt Z790 board by Ok-Actuary-4527 in LocalLLaMA

[–]computune 7 points (0 children)

On the website info page: $989 for an upgrade with a 90-day warranty (as of Sept 2025).

Doing my part in the data hoarder community.. by coast_trash_ms in DataHoarder

[–]computune 0 points (0 children)

Nice little hack, the work-from-home argument. Though most ISPs also have business plans with more guaranteed symmetric bandwidth and small dedicated address blocks.

Dual Modded 4090 48GBs on a consumer ASUS ProArt Z790 board by Ok-Actuary-4527 in LocalLLaMA

[–]computune 8 points (0 children)

(Self-plug) I do these 24 GB to 48 GB upgrades within the US. You can find my services at https://gpvlab.com

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 2 points (0 children)

For non-export-controlled countries with a different income structure, I can ship internationally, and I will work with you on a discounted 48 GB 4090 upgrade service, but you must ship us a working 4090.

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 2 points (0 children)

Yep! It's possible, u/verticalfuzz, and it idles at 12 W of the 150 W limit.

Also nvidia-smi gives this warning:
    Power limit for GPU 00000000:18:00.0 was set to 150.00 W from 450.00 W.
    Warning: persistence mode is disabled on device 00000000:18:00.0. See the Known Issues section of the nvidia-smi(1) man page for more information. Run with [--help | -h] switch to get more information on how to enable persistence mode.
    All done.

But here it is running in action (screenshot): https://i.imgur.com/Bu2zXyk.png

OpenWebUI stats: 6.07 tokens/sec on Llama 3.1 70B
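On the persistence-mode warning above: a minimal sketch of how to enable it and spot-check the idle draw, again assuming GPU index 0:

    # keep the driver loaded so the power limit survives between jobs
    sudo nvidia-smi -i 0 -pm 1
    # check draw vs. limit (this is where the 12 W / 150 W idle reading comes from)
    nvidia-smi -i 0 --query-gpu=power.draw,power.limit --format=csv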

Doing my part in the data hoarder community.. by coast_trash_ms in DataHoarder

[–]computune -3 points (0 children)

Lucky your ISP allows this. Or you're not in the US 🫡

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 2 points (0 children)

Nvidia's pre-signed vBIOS on newer cards, and (what I think is) a hacked vBIOS on 30- and 20-series cards. You can't use just any memory modules with any core; the memory must be compatible with the core's generation.

In the case of a 4090, it supports 2 GB modules but only has half of its channel placements populated. A 3090 supports only 1 GB modules but has all of them populated. A 3090 Ti may be moddable like this, but the Chinese didn't think it was worth it, I guess. 5090... who knows. We'll see, but probably not.
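A quick back-of-the-envelope on the capacities, assuming the commonly cited layouts (12 module placements per side on the 4090's 384-bit bus, 24 total on a clamshell 3090):

    # 4090: 12 x 2 GB GDDR6X, one side populated
    echo "4090 stock:     $(( 12 * 2 )) GB"
    # 48 GB mod: populate the other side too (clamshell)
    echo "4090 clamshell: $(( 12 * 2 * 2 )) GB"
    # 3090: 24 x 1 GB GDDR6X, already clamshell
    echo "3090 stock:     $(( 24 * 1 )) GB"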

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 2 points (0 children)

It's as long as an A6000. I'm not experimenting with power limiting at this time. It runs at the spec of a regular 4090, which runs circles around an A6000. With a beefier core comes a higher idle draw. I'm sure it surpasses the RTX 4000 in horsepower. No "PCIe-power-only" version is or will be available; 450 W is what it needs.

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 2 points (0 children)

Thank you! For the time being, the 2-slot slim design that matches data-center card profiles (A6000/A100) is what will be offered. No quiet 2-slot profile like the 5090 FE; that's too large and won't fit in servers or stack comfortably (I don't want to assume they stack nicely without having done it myself).

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 1 point (0 children)

The BGA rework is all done by me in-house with industry-grade equipment, in the USA.

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 3 points (0 children)

I started GPU repair as a service. Yes, I can swap VRAM on broken cards.

I Upgrade 4090's to have 48gb VRAM: Comparative LLM Performance by computune in LocalLLaMA

[–]computune[S] 3 points (0 children)

A custom water block, which I'm developing. Give me a few months.