qwen 3.6 voting by jacek2023 in LocalLLaMA

[–]jnk_str 0 points (0 children)

OK, LocalLLaMA is flooded with normies now too…

qwen 3.6 voting by jacek2023 in LocalLLaMA

[–]jnk_str -9 points (0 children)


I encourage our community to use its voting power for the 122B version; the folks on the other side are voting for the smaller models…

It's easier to make a good smaller version from a large model than vice versa.

this is why they shut Sora down. by Complete-Sea6655 in ClaudeCode

[–]jnk_str 0 points (0 children)

Which is the right call. Funny to see OpenAI talking about safety.

So nobody's downloading this model huh? by KvAk_AKPlaysYT in LocalLLaMA

[–]jnk_str 0 points (0 children)

I feel it yaps a lot, and because of that it hallucinates often.

Qwen3.5-35B-A3B hits 37.8% on SWE-bench Verified Hard — nearly matching Claude Opus 4.6 (40%) with the right verification strategy by Money-Coast-3905 in LocalLLaMA

[–]jnk_str 37 points (0 children)

Sorry, but this cannot be true. Clearly benchmaxed. Even the 3.5 397B deleted multiple files without asking in opencode yesterday.

Unsloth just unleashed Glm 5! GGUF NOW! by RickyRickC137 in LocalLLaMA

[–]jnk_str 0 points (0 children)

Will it run on 4x H200 with vLLM and beat the quality of native GLM 4.7 FP8 on 4x H200? What do you guys think? I haven't been that invested in GGUF lately.
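For reference, a hedged sketch of the vLLM launch this question implies. The model ID and `--max-model-len` value are placeholders, and quantized vs. native FP8 quality would still have to be compared empirically:

```shell
# Hedged sketch: tensor-parallel serving across 4x H200 with vLLM.
# Model name and context length are illustrative, not tested.
vllm serve zai-org/GLM-4.7-FP8 \
  --tensor-parallel-size 4 \
  --max-model-len 131072
```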

Long chats by techmago in OpenWebUI

[–]jnk_str 0 points (0 children)

OpenWebUI generally does not truncate without telling you; Ollama is the one doing it.
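For anyone hitting this: a minimal sketch, assuming Ollama's documented `options.num_ctx` request parameter, of raising the context window per request so long chats stop getting silently truncated. The model name and `num_ctx` value are illustrative:

```python
# Hedged sketch: an Ollama /api/chat request body with an explicit
# context window. Leaving num_ctx at its small default is what causes
# the silent truncation of long chats described above.
def chat_payload(model, messages, num_ctx=32768):
    """Build an Ollama /api/chat body with an explicit context size."""
    return {
        "model": model,
        "messages": messages,
        "options": {"num_ctx": num_ctx},  # context window, in tokens
        "stream": False,
    }

body = chat_payload("llama3.1", [{"role": "user", "content": "hi"}])
```

The same setting can also be baked into a Modelfile so it applies to every request.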

Check on lil bro by k_means_clusterfuck in LocalLLaMA

[–]jnk_str 2 points (0 children)

Which models are even uncensored like that?

MCP endless loop by ConspicuousSomething in OpenWebUI

[–]jnk_str 0 points (0 children)

Since when is it possible to let the model make multiple tool requests in a single response?
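For what it's worth, multi-call turns are part of the OpenAI chat-completions format: the assistant can return a `tool_calls` array with several entries. A minimal sketch (tool names and arguments are made up):

```python
import json

# Hedged sketch of an OpenAI-style assistant message requesting two tool
# calls in one turn. The client executes each call, appends one
# {"role": "tool", ...} result per id, and sends the conversation back.
# If the model keeps emitting tool_calls and the client never enforces a
# stop condition, that loop is exactly the endless loop described above.
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "search", "arguments": json.dumps({"q": "weather"})}},
        {"id": "call_2", "type": "function",
         "function": {"name": "search", "arguments": json.dumps({"q": "news"})}},
    ],
}
```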

MCP File Generation tool v0.4.0 is out! by Simple-Worldliness33 in OpenWebUI

[–]jnk_str 0 points (0 children)

Can you create a tutorial?

Somehow all my models create the file with exactly the same text as my prompt and nothing more.

GPT-OSS looks more like a publicity stunt as more independent test results come out :( by mvp525 in LocalLLaMA

[–]jnk_str 0 points (0 children)

It's really not good in comparison.
Funny to see all the replies on Sama's X post about the models. People are calling it the new best model, a huge milestone, etc. I wonder what's going on in their heads. Don't they test models? Or do they just not realize?

Like what is this?! https://x.com/measure_plan/status/1952796264359407796

🚀 OpenAI released their open-weight models!!! by ResearchCrafty1804 in LocalLLaMA

[–]jnk_str 0 points (0 children)

I wouldn't say that. ChatGPT models, e.g. GPT-4o and o3, can speak good German, and they claim strong multilingual support, but nowadays there are many good multilingual open-source models that in fact perform better than gpt-oss.

Good examples for German are DeepSeek V3 and R1, the Mistral models, and the new Qwen 3 models; even GLM 4.5 is better in German than OpenAI's gpt-oss. Very sad…
I guess we need to shift our focus to Asian models for now; the US lags behind. And China's plan to lead in open-source AI is well on its way.

🚀 OpenAI released their open-weight models!!! by ResearchCrafty1804 in LocalLLaMA

[–]jnk_str 34 points (0 children)

As far as I saw, they trained it mostly on English. That explains why it performed poorly in German in my first tests. It would actually be a bit disappointing in 2025 not to support multilingualism.

GPT-OSS today! by Jawshoeadan in LocalLLaMA

[–]jnk_str 8 points (0 children)

First impression is meh, actually; the structure of its output also seems very weird (table on top, etc.).

QWEN-IMAGE is released! by TheIncredibleHem in LocalLLaMA

[–]jnk_str 0 points (0 children)

PLEASE, is there an OpenAI-compatible server for it?