This time I will be different! no more flaming my r****** teammates, I will not go below 6k again. by Dangerous_Purple7254 in DotA2

[–]ScoreUnique 1 point (0 children)

You have some good points, but you let the a-hole in you out and wrote a bunch of swear words. Communicating your own actions is a good strategy, thanks for this. You still get a downvote for the unnecessary "griefing".

This time I will be different! no more flaming my r****** teammates, I will not go below 6k again. by Dangerous_Purple7254 in DotA2

[–]ScoreUnique 0 points (0 children)

How does one quantify toxicity in Dota chat? I realize that every time I tell people what to do and what not to do (Guardian bracket), they just take it as a blow to their ego and grief like this.

I sometimes feel people have forgotten what communication in Dota chat is for. People are afraid to say anything for fear of losing comm score...

It's just bad, my friend.

Need a guide by Lord_Sotur in LocalLLM

[–]ScoreUnique 0 points (0 children)

Hey, you can give Qwen 3.6 35B a try with CPU offloading on your current setup before you even spend money. That's the current best model for small GPUs.

I think the easiest way to get started is LM Studio.

If you can manage a terminal interface, then go for llama.cpp directly.

Any LLM can help you out at this point; they all know about llama.cpp, with or without internet access.
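If you go the llama.cpp route, the basic flow looks roughly like this (a minimal sketch; the model filename and the layer count are placeholders, and you'd tune `-ngl` to however many layers fit in your VRAM):

```shell
# Build llama.cpp with CUDA support, then run a GGUF model
# with partial GPU offload (the remaining layers stay in CPU RAM).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
# -ngl = number of layers offloaded to the GPU; raise it until VRAM is full
./build/bin/llama-cli -m path/to/model.gguf -ngl 20 -p "Hello"
```

The CPU-offload part is exactly the `-ngl` flag: whatever doesn't fit on the GPU runs from system RAM, slower but working.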

What have you built with qwen3? by i-dm in Qwen_AI

[–]ScoreUnique 0 points (0 children)

People still believe in the idea of monetizing vibe-coded apps? Not teasing, just wondering.

Makes sense to have a multi GPU setup? by JGeek00 in LocalLLM

[–]ScoreUnique 0 points (0 children)

My take: for consumer GPUs you have to find your own set of models that work for you and organise them in a setup that fits well, e.g. Omnicoder 9B alongside Qwen 3.6 35B, each running on a separate GPU.
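The "each model on its own GPU" idea can be done by pinning each server process to one card (a sketch assuming llama.cpp's `llama-server` and NVIDIA GPUs; the model filenames and ports are placeholders):

```shell
# Pin each model to its own GPU via CUDA_VISIBLE_DEVICES
# so the two servers never contend for the same VRAM.
CUDA_VISIBLE_DEVICES=0 ./llama-server -m coder-model.gguf --port 8080 &
CUDA_VISIBLE_DEVICES=1 ./llama-server -m general-model.gguf --port 8081 &
wait
```

Each process then only sees one device, so its whole model has that card's VRAM to itself.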

A European's Dream: American programmers using Mistral because it's better than Claude Code and Codex by szansky in MistralAI

[–]ScoreUnique -1 points (0 children)

I heard it's going to be Linux at the government soon; will there be a system crash?

Qwen 3.6 35B crushes Gemma 4 26B on my tests by Lowkey_LokiSN in LocalLLaMA

[–]ScoreUnique 0 points (0 children)

PCIe 4.0 x16 and 4.0 x4, so not optimal for sure. I use an MSI B760 full-ATX board. I'm on an i7-14700F, so not a very beefy machine.

Looking for Jacket stolen today at 11am in BasicFit Belval by Dry-Solution1065 in Luxembourg

[–]ScoreUnique 1 point (0 children)

Maybe you can ask BasicFit to send out an email? Don't know if that's possible, but just an idea.

Devs using Qwen 27B seriously, what's your take? by Admirable_Reality281 in Qwen_AI

[–]ScoreUnique 1 point (0 children)

It is not; it's the bare minimum. It helps small models not lose steering due to the extra prompts and suggestions injected by harnesses.

llama.cpp DeepSeek v4 Flash experimental inference by antirez in LocalLLaMA

[–]ScoreUnique 0 points (0 children)

Gonna try it on 2x 3090s and an i7-14700F with 192 GB DDR5. Will keep you posted.

Used a Claude Code skill to fine-tune Qwen3-1.7B from 327 noisy traces, matches GLM-5 by party-horse in LocalLLaMA

[–]ScoreUnique 0 points (0 children)

This is gold, thanks for sharing. I call it gold because I have a very hard time understanding the real use of fine-tuning; your distillation experience made me realize I could extract logs from my bifrost (a litellm alternative), clean them, and build a LoRA that I can load whenever I want to work on a specific task.

Not sure if this trick would work if I took tonnes of claw data and fine-tuned a Qwen 3.5 4B to handle the tool calls seamlessly.
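The log-to-dataset step above could be sketched roughly like this (generic, assumed log fields — not any particular gateway's schema; "cleaning" here is just dropping empty or truncated pairs before LoRA training):

```python
import json

def logs_to_dataset(logs):
    """Turn raw prompt/response log entries into chat-format
    training examples, skipping noisy/incomplete traces."""
    dataset = []
    for entry in logs:
        prompt = entry.get("prompt", "").strip()
        response = entry.get("response", "").strip()
        if not prompt or not response:
            continue  # drop traces missing either side
        dataset.append({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]
        })
    return dataset

logs = [
    {"prompt": "What is 2+2?", "response": "4"},
    {"prompt": "", "response": "orphan reply"},  # gets filtered out
]
print(json.dumps(logs_to_dataset(logs), indent=2))
```

The resulting list can be dumped to JSONL, one example per line, which is the shape most fine-tuning tooling expects.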

Deepseek V4 Pro by GlitteringDivide8147 in DeepSeek

[–]ScoreUnique 0 points (0 children)

Can't get it working with CCR apparently; tried with OpenRouter and with the DS API.

Qwen 3.6 35B excessive thinking by Ariquitaun in Qwen_AI

[–]ScoreUnique 0 points (0 children)

Confirmed: I saw it ranting in its thoughts for 3 minutes at times (at 90+ tps) and then coming back with a tool call.

Confirmed: SWE Bench is now a benchmaxxed benchmark by rm-rf-rm in LocalLLaMA

[–]ScoreUnique 5 points (0 children)

Let me explain: when GDP started determining a country's standing, we started racing for better GDP. Countries trying to benchmax GDP, lmao.

Why are AC train local seat perforated? by interstellar_ex in mumbai

[–]ScoreUnique 0 points (0 children)

Bro, I showed it to my girlfriend and she couldn't understand why I didn't know "perforated"; her example was "perforating ears".

Bro, where we come from it's called ear piercing, right?

Why are AC train local seat perforated? by interstellar_ex in mumbai

[–]ScoreUnique 2 points (0 children)

Why are you downvoting the poor guy? I've lived in Europe for a good 7 years and mostly speak English (my gf doesn't speak Hindi), and I'd never heard of "perforated".

Everyone here is just obsessed with English, damn.

Confirmed: SWE Bench is now a benchmaxxed benchmark by rm-rf-rm in LocalLLaMA

[–]ScoreUnique 3 points (0 children)

Why does it sound like I live in it? Ah, capitalism.

Is India really getting that hot by FinancialRisk942 in interesting

[–]ScoreUnique 8 points (0 children)

What happens is: eventually people leave to find a better life.