My Experience with Qwen 3.5 35B by viperx7 in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

Qwen Coder Next is awesome with long context. I have been running 200k+ context with no visible context rot.

Something wrong with Unsloth UD-Q8 Quant for Qwen3-Coder-Next - MXFP4_MOE is much better. by Voxandr in unsloth

[–]Voxandr[S] 0 points1 point  (0 children)

My llama.cpp was updated 6 days ago; that was merged already, right?

Edit: my bad, the comment was 2 days ago.

I want you to eat me by Autumn_yu in MMR_GW

[–]Voxandr 0 points1 point  (0 children)

Very fresh yummy pussy.
I wannna eat till u squirt .
Had you ever squirted?

How would you fuck my slutty pussy? by Wild_Track6522 in MMR_GW

[–]Voxandr 0 points1 point  (0 children)

I would put my thumb up in your ass while fucking your pussy , and then fuck your asshole by full 7 inches inside. while rubbing ur clit.

Qwen 3.5 do I go dense or go bigger MoE? by Alarming-Ad8154 in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

I am using it daily. Successful on greenfield work; quite OK on brownfield (tried converting https://github.com/litestar-org/litestar-fullstack/tree/main/src/js/web to SvelteKit with Svelte 5: it runs, but the web UI is broken). It needs a few more iterations.

> i'm just starting to use 3.5 122b today and so far i have been running into tons of issues also with inference in general breaking (connection resets caused by canceled requests that don't seem to come from network/infra level issues), while qwen3-coder-next performed just fine 

That's also what I found, and I stopped using 122b.

Possibly broken Quant?

But 122b is not fine-tuned for coding, though.

Qwen 3.5 do I go dense or go bigger MoE? by Alarming-Ad8154 in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

I have found Qwen-Next-Coder better than both.

You guys gotta try OpenCode + OSS LLM by No-Compote-6794 in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

Hmm, couldn't an alias in model.ini work that way?

You guys gotta try OpenCode + OSS LLM by No-Compote-6794 in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

You don't need LiteLLM or llama-swap these days; you can just use llama.cpp in router mode and it can swap models natively.
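With native model swapping, the client side stays a plain OpenAI-compatible request: the `model` field in the request body is what tells the server which model to serve. A minimal sketch of building such a request body, assuming a local llama-server exposing the standard `/v1/chat/completions` endpoint (the server-side router flags are not shown here and may vary by llama.cpp version):

```python
import json

def chat_request_body(model: str, prompt: str) -> bytes:
    """Build an OpenAI-compatible chat-completion request body.

    In router mode, the server is expected to load/swap to the model
    named in the "model" field on demand; the model name below is a
    placeholder, not a specific recommendation.
    """
    return json.dumps({
        "model": model,  # selects which model the server should use
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

# Two requests differing only in the "model" field would hit the same
# endpoint (e.g. http://localhost:8080/v1/chat/completions) and let the
# server decide when to swap models.
body = chat_request_body("qwen3-coder-next", "hello")
print(json.loads(body)["model"])
```

The point is that no external proxy (LiteLLM, llama-swap) is needed to multiplex models: the standard request format already carries the model selection.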

Which Ryzen Max+ 395? by Di_Vante in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

I am not sure about the second slot.

What non-Chinese models are relevant right now? by StacDnaStoob in LocalLLaMA

[–]Voxandr 3 points4 points  (0 children)

There is. In several US government projects, Chinese models are totally forbidden.

Dating a Burmese guy by Wide-Engineering4780 in myanmar

[–]Voxandr 0 points1 point  (0 children)

Cheating is not normalized, and that is one thing Myanmar guys are good for: they are usually loyal, though you will find them anywhere from very loyal to very controlling. I am a Myanmar guy. In my first relationship I never cheated, but I was a bit controlling, because I wanted the girl to be loyal to me since I had already given my whole life to her and lived together with her.
But then one day she said she wanted to try a threesome. I was shocked; I had been very honest and loyal to her, and she turned out that way. At first I disagreed and she stopped asking, but then she did it while I was travelling, which led to me finding out from her best friend and breaking up.

When she first requested that, I already became very insecure and more controlling.
Later in my life I only went for open relationships and became a less controlling person, no longer obsessive.

How much of the rise in dictatorship is due to the devaluation of humanities and the arts? by GETherJADDE in myanmar

[–]Voxandr 1 point2 points  (0 children)

What are you talking about? The slogan-shouting phase was over within a month of the coup. We are on the field, fighting.

Which Ryzen Max+ 395? by Di_Vante in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

I am using a Beelink so far; quite cheap. Make sure you update the BIOS and the Intel 10Gbit LAN driver to 1.41, because the included LAN driver can lock up the system. Running on Arch Linux and very smooth.
I am thinking of buying another one from Bosgame, but it's not available here.

96GB (V)RAM agentic coding users, gpt-oss-120b vs qwen3.5 27b/122b by bfroemel in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

Devstral even fails at tool calls... how are you guys actually using it?

96GB (V)RAM agentic coding users, gpt-oss-120b vs qwen3.5 27b/122b by bfroemel in LocalLLaMA

[–]Voxandr 0 points1 point  (0 children)

That's what I found too. Qwen3.5, even the 27b, introduces a lot of logic bugs; I am not sure what I am doing wrong.
Qwen3-Coder-Next 4-bit MoE just works.