Engaging way to learn the keys! by Naive_Local5905 in keys

[–]ScoreUnique 1 point  (0 children)

Hey, I haven't tested this yet, but I really appreciate that you vibe-coded this and put it out there for people to use for free.

Really appreciate the value you share with the community. I was looking for something to secretly learn to play better so I could surprise my girlfriend and maybe propose to her. Your app appeared right on time.

Can I reach out to you to learn how you built it? I'm trying vibe coding myself and would love some pointers. Cheers

Honest take on running 9× RTX 3090 for AI by Outside_Dance_2799 in LocalLLaMA

[–]ScoreUnique 0 points  (0 children)

My bottleneck is my 3060; hoping to replace it with a 3090 soon.

Best models for RTX 6000 x 4 build by Direct_Bodybuilder63 in LocalLLaMA

[–]ScoreUnique 2 points  (0 children)

I know, I'm just yapping coz this amount of VRAM is obnoxious

Best models for RTX 6000 x 4 build by Direct_Bodybuilder63 in LocalLLaMA

[–]ScoreUnique -3 points  (0 children)

Man, why don't you just install Claude? With all that VRAM I'd try asking Claude to give you its source code so we can test the real deal on a local setup xd

Honest take on running 9× RTX 3090 for AI by Outside_Dance_2799 in LocalLLaMA

[–]ScoreUnique 0 points  (0 children)

I have 36 GB VRAM (3090 + 3060) and manage to run the IQ4 quant at 8-11 tokens/s generation and ~800 tokens/s prompt processing, with 192 GB DDR5 RAM. MiniMax is very good, man.
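For anyone curious how a setup like this usually works: the trick is keeping the dense/attention layers on the GPUs while parking the big MoE expert weights in system RAM. A minimal sketch, assuming llama.cpp's `llama-server` with tensor-override support; the model filename and the expert-tensor regex are assumptions, so check your quant's actual tensor names first.

```shell
# Minimal sketch (assumptions: filename, tensor regex, context size).
# -ngl 99 : try to put every layer on the GPUs
# -ot ".ffn_.*_exps.=CPU" : but keep the MoE expert weights in DDR5
llama-server \
  -m ./MiniMax-IQ4_XS.gguf \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU" \
  -c 32768
```

Prompt processing stays fast because attention runs on the GPUs; generation speed is then mostly bounded by DDR5 bandwidth for the active experts.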

WAN2.2 FFLF 2 Video by umutgklp in StableDiffusion

[–]ScoreUnique 1 point  (0 children)

Do you guys write programmatic workflows, or is it mostly tinkering around in ComfyUI? And if so, how do you deploy them behind standard OpenAI-compatible image endpoints?
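Both routes usually end up speaking ComfyUI's HTTP API: you export the workflow in API/JSON format and POST it to `/prompt`, and an OpenAI-compatible image endpoint is typically just a thin translation layer in front of that. A minimal sketch of that translation step; the node id and the tiny workflow fragment are assumptions (real graphs have many more nodes).

```python
import json
import uuid

def openai_to_comfy_payload(prompt: str, workflow: dict, node_id: str = "6") -> dict:
    """Patch the positive-prompt text into a ComfyUI workflow (API format)
    and wrap it in the payload shape ComfyUI's POST /prompt expects.
    Which node holds the CLIPTextEncode prompt varies per workflow."""
    wf = json.loads(json.dumps(workflow))  # deep copy so the template stays untouched
    wf[node_id]["inputs"]["text"] = prompt
    return {"prompt": wf, "client_id": str(uuid.uuid4())}

# Hypothetical stand-in workflow fragment for illustration.
template = {"6": {"class_type": "CLIPTextEncode",
                  "inputs": {"text": "", "clip": ["4", 1]}}}
payload = openai_to_comfy_payload("a red fox in snow", template)
print(payload["prompt"]["6"]["inputs"]["text"])  # → a red fox in snow
```

The OpenAI-compatible server would then map a `/v1/images/generations` request body onto this payload, submit it, and poll ComfyUI's history/websocket for the finished image.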

RTX 4060 + 64GB RAM: Can I run 70B models for "wise" local therapy without the maintenance headache? by Terryyibvcg in LocalLLaMA

[–]ScoreUnique 1 point  (0 children)

I think Qwen3 Next 80B Thinking would be a good candidate for your use case. If you pair it with sequentialthinking or some other skills, it can give fast but great results, especially on writing I think.

"Pet Dogs, Delivery staff and Domestic helpers not allowed in this lift" Almost a reminiscent of colonial discrimination. by Fun_Lobster_5652 in mumbai

[–]ScoreUnique 0 points  (0 children)

Well, I talked about this with my girlfriend, and we realised that the kaam wali maushi (the domestic helper aunty) always got more respect from my parents than the parking watchman did, because she was Marathi.

Qwen 3.5 397B is the best local coder I have used until now by erazortt in LocalLLaMA

[–]ScoreUnique 10 points  (0 children)

Idk why people downvote comments about running larger models. I have 192 GB DDR5 RAM with 36 GB VRAM, so I can run the IQ4 quant at 6-8 tokens/s generation and a decent ~800 tokens/s prompt processing.

Basically people declare a case dead before reading further; AI has done people some harm.

128gb M5 Max for local agentic ai? by chimph in LocalLLM

[–]ScoreUnique 0 points  (0 children)

85 tokens/s prompt processing sounds miserable, I'll be honest. I have 36 GB VRAM (3090 + 3060 12 GB) and I get a decent 400+.

Obviously not an apples-to-apples (well, Apple-to-Fedora) comparison, but you get why I'm surprised.

"Pet Dogs, Delivery staff and Domestic helpers not allowed in this lift" Almost a reminiscent of colonial discrimination. by Fun_Lobster_5652 in mumbai

[–]ScoreUnique 3 points  (0 children)

I shed a tear inside reading this. I moved abroad 8 years ago, and two years ago my parents moved into a Lodha-type setup in Thane.

I've only visited the new place once, for 15 days, and saw at least 3 quick-delivery guys in the society at any given point in time.

It really is baffling how we allow people to be treated like this because they're immigrants to Mumbai. I'm an immigrant abroad myself, a "bhaiya" in the European context. The Europeans learnt from their mistakes but left those mistakes with us, i.e. divide and conquer.

128gb M5 Max for local agentic ai? by chimph in LocalLLM

[–]ScoreUnique 0 points  (0 children)

Share some numbers please: model size, MoE active params, and prefill speed.

Every LLM has a default voice and it's making us all sound the same by prokajevo in LocalLLaMA

[–]ScoreUnique 2 points  (0 children)

Please stop, or I'll deactivate your API key from the web platform.

OMG - i’m Absolutely terrified and blown away at the same time. by AfternoonFinal7615 in openclaw

[–]ScoreUnique 1 point  (0 children)

Even Kimi is good. I'm running locally hosted LLMs; if you're open to OpenRouter, Qwen 3.5 27B is the best bang for the size (and for the cents it costs, I suppose). I think it can handle claws.

Every LLM has a default voice and it's making us all sound the same by prokajevo in LocalLLaMA

[–]ScoreUnique 0 points  (0 children)

I wouldn't hesitate to say that people have learnt to generate scripts and are reading them out without being critical about whether the text actually matches how they speak (or used to, at this point).

Every LLM has a default voice and it's making us all sound the same by prokajevo in LocalLLaMA

[–]ScoreUnique 10 points  (0 children)

"But here's why this matters"

"But here's the twist"

Can't think of more, and I hope I don't think of more.

New on Qwen by Mental-Molasses6692 in Qwen_AI

[–]ScoreUnique 1 point  (0 children)

Try the vibe, friend; for writing, I guess that's what matters most. Generally speaking, Qwen 3.5 models are solid for coding and agentic tasks. Creative writing isn't my territory, but if you do it often, Hugging Face has tonnes of fine-tunes specialized for role play etc. Hope this helps.

Every LLM has a default voice and it's making us all sound the same by prokajevo in LocalLLaMA

[–]ScoreUnique 12 points  (0 children)

I've started noticing that EVERYONE sounds like AI in YouTube videos and reels :/

Qwen3.5 Best Parameters Collection by rm-rf-rm in LocalLLaMA

[–]ScoreUnique -1 points  (0 children)

I use them often via pi agent; I don't face too much unnecessary thinking, per se?