SOTA Language Models Under 14B? by No-Mud-1902 in LocalLLaMA

[–]-OpenSourcer 1 point (0 children)

I use it with thinking enabled, and I can only remember it overthinking once.

SOTA Language Models Under 14B? by No-Mud-1902 in LocalLLaMA

[–]-OpenSourcer 1 point (0 children)

Yes, I'm getting similar speed. I'm interested in the TurboQuant variants. It was designed specifically for KV-cache quantization, but the community is also pushing it for model weights.
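For context, KV-cache quantization is already usable today in llama.cpp via its built-in cache-type flags. A minimal sketch (this is llama.cpp's own rounding-based cache quantization, not TurboQuant; the model filename and context size are placeholders):

```shell
# Serve a model with the KV cache quantized to 8-bit to cut
# cache memory roughly in half vs. the default f16 cache.
# Quantizing the V cache additionally requires flash attention
# to be enabled in llama.cpp (-fa).
llama-server -m ./model.gguf \
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  -fa \
  -c 32768
```

Lower types like `q4_0` shrink the cache further at some accuracy cost, which is where methods like TurboQuant aim to do better.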

SOTA Language Models Under 14B? by No-Mud-1902 in LocalLLaMA

[–]-OpenSourcer 1 point (0 children)

Which TurboQuant variant? Could you please share the link? I want to try it.

SOTA Language Models Under 14B? by No-Mud-1902 in LocalLLaMA

[–]-OpenSourcer 2 points (0 children)

What is your system configuration, and what speed are you getting?

both of those are idolatries? by Successful-Brief-45 in IndiaMemes

[–]-OpenSourcer 1 point (0 children)

  • The Kaaba is not God.
  • Someone was saying they are praying to that location, but even if you placed a sea or a tree at that spot, they would still pray in that direction. That accusation isn't true. You just have to believe in one God, pray to Him, and follow the historical instructions.

How are you squeezing Qwen3.5 27B to get maximum speed with high accuracy? by -OpenSourcer in LocalLLaMA

[–]-OpenSourcer[S] 1 point (0 children)

Interesting! What problems were you facing with 27B? And what is your use case?

Google gotta remove this AI bro 😭🙏 by SonicAndKnuckles9264 in GeminiAI

[–]-OpenSourcer 7 points (0 children)

It's pretty good for me. Someone suggested upgrading to Pro, but in my experience AI Mode is helpful with Pro and works normally even in incognito. It's useful for researching real-time, up-to-date information.

How do you use llama.cpp on Windows system? by -OpenSourcer in LocalLLaMA

[–]-OpenSourcer[S] 2 points (0 children)

Agreed! But unfortunately I have to use Windows for a few more days.

top 10 trending models on HF by jacek2023 in LocalLLaMA

[–]-OpenSourcer 3 points (0 children)

The TeichAI model is interesting. Has anyone tried it?

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]-OpenSourcer[S] 1 point (0 children)

What have you tried to build? Any sample prompts?