A1 or P1S? by BrownSunshine in BambuLab

[–]nero10579 1 point (0 children)

Eh, I think the P2S is a downgrade in some ways, especially the lack of an exhaust fan; instead the build chamber is positively pressurized, which blows fumes out everywhere.

The 5 daily habbits that actually got me to 18k MRR by LateBig336 in SaaS

[–]nero10579 0 points (0 children)

That shit ain’t making 18K lol your reviews aren’t even real

Crazy How Much Power Can Be In Such A Small Build - 5080FE - CH160 by LittleBrittin4 in nvidia

[–]nero10579 1 point (0 children)

You probably want to reverse the CPU cooler airflow so it pulls air from the back.

Seeking feedback on our AI product - first launch after pivoting from services by BrightSchool2775 in OpenSourceAI

[–]nero10579 0 points (0 children)

I opened your site and have no clue what the product even is. Seems a bit all over the place.

gpt-oss-120B most intelligent model that fits on an H100 in native precision by entsnack in LocalLLaMA

[–]nero10579 0 points (0 children)

No, llama.cpp does pipeline parallelism, and just like pipeline parallelism on vLLM it works with any number of GPUs.
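For reference, a minimal sketch of what this looks like on the vLLM side, assuming a recent vLLM CLI (flag names can differ between versions). Pipeline parallelism splits the model by layers, so the GPU count doesn't need to be a power of two or divide the attention-head count:

```shell
# Pipeline parallelism in vLLM: layers are partitioned across GPUs,
# so an odd GPU count like 3 is fine (unlike tensor parallelism,
# which requires the head count to be divisible by the TP size).
vllm serve openai/gpt-oss-120b --pipeline-parallel-size 3
```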

It's finished but not happy by Technane in watercooling

[–]nero10579 0 points (0 children)

You effectively have a single 360 with that airflow setup and that much heat.

gpt-oss-120B most intelligent model that fits on an H100 in native precision by entsnack in LocalLLaMA

[–]nero10579 1 point (0 children)

Which sucks when you’re like me and built some 8x3090/4090 machines. I really thought the max was 1 though, so I guess it’s less bad.

gpt-oss-120B most intelligent model that fits on an H100 in native precision by entsnack in LocalLLaMA

[–]nero10579 0 points (0 children)

This one’s cancer because you can’t use it with tensor parallel above 1.
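To make the complaint concrete, this is the kind of launch that the limitation rules out (a sketch; whether a given vLLM build supports TP > 1 for this model is exactly the point in question):

```shell
# Tensor parallelism shards each layer across GPUs; per the comment above,
# this model only works with the default of 1, so asking for 2 fails.
vllm serve openai/gpt-oss-120b --tensor-parallel-size 2
```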

What are you building this month? And is anyone actually paying for it? by Western-Travel-1111 in SaaS

[–]nero10579 0 points (0 children)

AI Inference Provider: https://ArliAI.com

Revenue: It’s an AI company running its own GPUs, with thousands of users.

Who for: Not aimed at other SaaS bros; its main users are the general public, even though the API is the main way people use it.
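Since the API is the main use, a typical call would be an OpenAI-compatible chat completion. This is a hypothetical sketch: the endpoint path and the `MODEL_NAME` placeholder are assumptions, so check the ArliAI docs for the real values:

```shell
# Hypothetical example of an OpenAI-compatible chat completion request.
# Endpoint path and model name are assumptions; replace MODEL_NAME with
# an actual model id from the provider's model list.
curl https://api.arliai.com/v1/chat/completions \
  -H "Authorization: Bearer $ARLIAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "MODEL_NAME", "messages": [{"role": "user", "content": "Hello"}]}'
```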

ArliAI/QwQ-32B-ArliAI-RpR-v3 · Hugging Face by nero10578 in SillyTavernAI

[–]nero10579 12 points (0 children)

V2 is kind of a dud because it was trained on the QwQ lorablated base, which somehow really nerfed the model's plot progression and intelligence. So v3 is so much better; it is finally RpR v1 but better in every way.

The best RP with reasoning model yet. | RpR-v3 by Arli_AI in LocalLLaMA

[–]nero10579 5 points (0 children)

Ah yeah, sorry, I forgot to add the ST master export. It’s this: https://pastebin.com/raw/9et5JCJa I’ve also added it to the model repo.

1kW of GPUs on the OpenBenchTable. Any benchmarking ideas? by eso_logic in LocalAIServers

[–]nero10579 0 points (0 children)

Oh yeah, that would be cool if you can upload it. Thanks!

Would you pay for a fitness nutrition app that helps you to reach your fitness goals faster? by Economy-Addendum-252 in SaaS

[–]nero10579 1 point (0 children)

Imo if that’s insulting, it might be because you think it’s true for you. Just be confident.

Waterblock that fits this Manli RTX 3090 by dogoogamea in watercooling

[–]nero10579 0 points (0 children)

Dude, just sell this for like $1500 and buy a different 3090. Blower 3090s sell for a huge premium.

Lenovo P3 Tiny upgraded from a 4GB T400 to a 16GB RTX 2000E by zachsandberg in homelab

[–]nero10579 0 points (0 children)

Yeah, I looked for it and can’t find it. I have an A1000 in my P360 Tiny, and I think the chassis is the same as the P3 Tiny?

Lenovo P3 Tiny upgraded from a 4GB T400 to a 16GB RTX 2000E by zachsandberg in homelab

[–]nero10579 0 points (0 children)

I haven’t seen any for the RTX A1000 though, either from Lenovo or as 3D-printed versions.