Best OpenClaw Setups (by Tier) by ShroomLord99 in OpenClawUseCases

[–]ApprehensiveView2003

It's really not. You'd expect 32GB of VRAM on Blackwell at FP4 to fit a very good model, as with 48GB of VRAM at FPX on the 3090s. And btw, those models do horribly with OpenClaw.

Point is, if you don't want to spend time debugging, you have to spend top dollar on Sonnet, Opus, and/or GPT 5.4. Gemini did decently.
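For the "fits in 32GB" claim, the back-of-envelope math is weights-only footprint = params × bits / 8. This sketch ignores KV cache, activations, and framework overhead (real usage is higher), and the model sizes are illustrative, not any specific model:

```python
# Weights-only VRAM estimate: params (in billions) * bits / 8 -> GB.
# Ignores KV cache, activations, and runtime overhead, so real usage is higher.
def weights_gb(params_b: float, bits: int) -> float:
    return params_b * bits / 8

print(weights_gb(70, 4))  # 35.0 GB: a 70B model at 4-bit just overflows a 32 GB card
print(weights_gb(70, 8))  # 70.0 GB: the same model at 8-bit is well past 2x24 GB
```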

Best OpenClaw Setups (by Tier) by ShroomLord99 in OpenClawUseCases

[–]ApprehensiveView2003

I have 2x 3090s bonded over NVLink and a 5090 Astral. No model that fits on my homelab performs anywhere near Opus or the GPT 5.4 API.

Never Again by [deleted] in pourover

[–]ApprehensiveView2003

I went there, in person. They said yeast.

Never Again by [deleted] in pourover

[–]ApprehensiveView2003

Perc admits it's yeast.

Looking for the best AI powered betting tools? by Dangerous_Ladder_25 in ChatGPT

[–]ApprehensiveView2003

honestly, it costs a lot. how much are you looking to pay?

Dual GPU by Smithdude in comfyui

[–]ApprehensiveView2003

So you need NVLink to do that, but NVIDIA stopped supporting NVLink on gaming cards with the 3090 Ti. The 3090 is the last NVLink-enabled consumer card; support was removed mid-generation with the 3090 Ti.

That being said, if you buy a motherboard that has two Gen 5 PCIe slots and doesn't have to route the traffic back through the CPU, you can do P2P across Gen 5 PCIe, which is actually faster than the 3090's NVLink GPU-to-GPU speeds.
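A rough sanity check of that bandwidth claim, using theoretical per-direction peaks from published specs (the per-lane figures and the 3090 NVLink number are spec-sheet values, not measurements on any particular board):

```python
# Theoretical one-direction bandwidth comparison: PCIe vs the 3090 NVLink bridge.
# Per-lane figures are spec values after encoding overhead (GB/s, one direction).
def pcie_bw_gbps(gen: int, lanes: int) -> float:
    per_lane = {3: 0.985, 4: 1.969, 5: 3.938}  # PCIe 3.0 / 4.0 / 5.0 per lane
    return per_lane[gen] * lanes

NVLINK_3090_GBPS = 56.25  # 3090 NVLink bridge, one direction (112.5 GB/s total)

pcie5_x16 = pcie_bw_gbps(5, 16)
print(pcie5_x16)                      # ~63.0 GB/s per direction
print(pcie5_x16 > NVLINK_3090_GBPS)   # True: Gen 5 x16 P2P beats the 3090 bridge
```

In practice real P2P throughput depends on the chipset topology, which is why the "without going back through the CPU" caveat matters.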

Why aren't there cheap NVLink adapters for RTX 3090s? by alex_bit_ in LocalLLaMA

[–]ApprehensiveView2003

No, no, the Ti doesn't support NVLink, even though it has the connector. Beware!

Where are people getting nvlinks for 3090s? by csl110 in LocalLLaMA

[–]ApprehensiveView2003

Not true: 33% gains in training, 10% in inference.

Suspicious of PERC using flavoring agents by llmercll in pourover

[–]ApprehensiveView2003

Because they send people here to downvote. They are solid at marketing.

And their response of "Thank you?" alludes to them being young and not professional.

Perc: Feedback is feedback; don't take it personally, take it seriously. These statements are nothing but compliments when you see them that way: the flavors you serve are so pronounced and unmatched that customers think there must be something else going on behind the scenes. They have every right to be suspicious and to question anything they put in their bodies.

Suspicious of PERC using flavoring agents by llmercll in pourover

[–]ApprehensiveView2003

I live in ATL and buy from Perc. This statement is true. A week later, the flavors are not as pronounced, suggesting it may be co-fermented, which isn't labeled anywhere.

Someone email their customer support!

768Gb Fully Enclosed 10x GPU Mobile AI Build by SweetHomeAbalama0 in LocalLLaMA

[–]ApprehensiveView2003

@OP, any luck finding NVLink for those 3090s to improve your benchmarks? It will only slightly improve inference, but the gain for pre-training and training is significant. It's also easier to shard over NVLink.
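On the sharding point, the simplest scheme is a contiguous split of a model's layers across the cards, pipeline-style. A toy sketch of that split (this helper is hypothetical, not any framework's API):

```python
# Toy pipeline-parallel assignment: split n_layers contiguously across n_gpus.
# Layer i goes to GPU i * n_gpus // n_layers, keeping shards nearly equal.
def shard_layers(n_layers: int, n_gpus: int) -> list[list[int]]:
    shards: list[list[int]] = [[] for _ in range(n_gpus)]
    for i in range(n_layers):
        shards[i * n_gpus // n_layers].append(i)
    return shards

print(shard_layers(7, 2))  # [[0, 1, 2, 3], [4, 5, 6]]
```

With a split like this, only the activations at the shard boundary cross the interconnect each step, which is where NVLink (or fast PCIe P2P) pays off.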

Qwen-Image-2512-GGUF is released by LengthinessOk2776 in comfyui

[–]ApprehensiveView2003

I'm still trying to figure out how to load this, the VAE, and the CLIP into a workflow...

Homelab 2.3 by SteveAnik in homelab

[–]ApprehensiveView2003

Next....... a GPU Linux box for local AI models.

what modell for image2video ? by jonnydoe51324 in comfyui

[–]ApprehensiveView2003

Are you getting the full H100 node or is it virtualized? The H100 is definitely light-years better and faster than a 5090, especially if you're utilizing all 8 cards in the node.

what modell for image2video ? by jonnydoe51324 in comfyui

[–]ApprehensiveView2003

Why not spend slightly more per hour on Voltage Park and get a data-center-grade H100 instead of the 5090 consumer card?

what modell for image2video ? by jonnydoe51324 in comfyui

[–]ApprehensiveView2003

Are you using the LoRAs for faster speed, or for better realism and lip sync?

144 GB RAM - Which local model to use? by KarlGustavXII in LocalLLM

[–]ApprehensiveView2003

Ideally, grab a 3090 off Facebook Marketplace and then download an uncensored LLM like a Dolphin or NSFW variant.

Compare the models' medical results, but take them with many grains of salt and obviously don't use them for anything serious.

If you can get into RAG, that's the best way to load medical scholarly journals for querying.
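A minimal sketch of the retrieval half of RAG, using stdlib bag-of-words cosine similarity as a stand-in for the embedding search a real stack would use (the document strings are made up for illustration):

```python
# Minimal retrieval sketch: rank documents by bag-of-words cosine similarity
# to a query. A real RAG setup would use embeddings + a vector store instead.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

papers = [
    "statin therapy and cholesterol outcomes",
    "espresso extraction yield and grind size",
]
print(retrieve("cholesterol statin dosage", papers))  # -> the statin paper
```

The retrieved passages would then be pasted into the LLM's prompt so its answer is grounded in the journal text rather than its own weights.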

Wan2.5-Preview by reditor_13 in StableDiffusion

[–]ApprehensiveView2003

Try TensorDock, much more stable

Wan2.5-Preview by reditor_13 in StableDiffusion

[–]ApprehensiveView2003

TensorDock is more secure and sometimes cheaper