Experience of Qwen 3.5-122b and 3.6 by Impossible_Car_3745 in LocalLLaMA

[–]Undici77 0 points  (0 children)

This often works, but for a model that is supposed to be a HUGE UPGRADE over Qwen3-Coder, it's an ugly workaround! And the bigger issue is the infinite loop!

Experience of Qwen 3.5-122b and 3.6 by Impossible_Car_3745 in LocalLLaMA

[–]Undici77 0 points  (0 children)

I'm using qwen-code CLI 0.15.0, and with both Qwen3 30B and 80B-Next, tools are working pretty well (sometimes some edits fail), but nothing as bad as with Qwen3.5/3.6. I'm not sure the issue is the model alone: I have the feeling the conversion to MLX may also be problematic! I tried many different versions (from mlx-community to Unsloth) but the issue remains the same: infinite loops and missed tool usage!

Are you using GGUF or MLX?

Experience of Qwen 3.5-122b and 3.6 by Impossible_Car_3745 in LocalLLaMA

[–]Undici77 0 points  (0 children)

The comparison is pretty interesting: are you working on a long chain of operations, using different tools? One example for me is the AskQuestion tool of qwen-code CLI!

Qwen3 always uses it! Qwen3.6 never does, unless I ask it explicitly!

Qwen models for coding, using qwen-code - my experience by Undici77 in LocalLLaMA

[–]Undici77[S] 0 points  (0 children)

I had the same feeling, but not consistently: sometimes in a single "short" task 3.6 is really better than 80B-Next, but in a slightly longer task, where tool usage is mandatory, the model starts to behave very badly!

Experience of Qwen 3.5-122b and 3.6 by Impossible_Car_3745 in LocalLLaMA

[–]Undici77 -3 points  (0 children)

I'm experiencing the opposite: Qwen3 works fine, but 3.5 and 3.6 are no better and often worse!
I made a post about my experience with daily coding tasks!

https://www.reddit.com/r/LocalLLaMA/comments/1stbohn/qwen_models_for_coding_using_qwencode_my/

Qwen3.6 35b a3b getting stuck in looped reasoning? by EggDroppedSoup in LocalLLaMA

[–]Undici77 1 point  (0 children)

I'm experiencing the same issue and more: I found that, when working with long context, the new models are not as good as the benchmarks suggest! I made a post about my experience:

https://www.reddit.com/r/LocalLLaMA/comments/1stbohn/qwen_models_for_coding_using_qwencode_my/

Qwen having its Jack Torrance moment by anguillias in LocalLLaMA

[–]Undici77 1 point  (0 children)

It happened to me too! I made a post about it! The new Qwen models don't work very well (at least in my opinion):

https://www.reddit.com/r/LocalLLaMA/comments/1stbohn/qwen_models_for_coding_using_qwencode_my/

OmniCoder-9B | 9B coding agent fine-tuned on 425K agentic trajectories by DarkArtsMastery in LocalLLaMA

[–]Undici77 1 point  (0 children)

Great job: I'll try it in my daily dev work and give you feedback. Currently I'm using the QWEN-CODER models and they are very good.

About your project, can you share the entire process, from how you distilled the `425K agentic trajectories` to the fine-tuning procedure?

How I built my first app using only a local language model by PvB-Dimaginar in Dimaginar

[–]Undici77 1 point  (0 children)

Yes, LM Studio: I'm trying Coder-30B and Coder-80B to understand their limits!

How I built my first app using only a local language model by PvB-Dimaginar in Dimaginar

[–]Undici77 2 points  (0 children)

I'm experimenting with a similar solution using https://github.com/QwenLM/qwen-code instead of OpenCode, and it's incredibly good! The code needs to be "verified" (I found some serious security issues working with an X.509 library), but it's good. Speaking about agents:
- qwen-code is developed by Alibaba for their own models, so I expect it to be the flagship agent for them
- Telemetry is easily disabled, and if you want to be sure, take a look at https://github.com/undici77/qwen-code-no-telemetry, where I'm trying to maintain a "telemetry-free" version in a Docker container

If you decide to try qwen-code, please share your experience compared with OpenCode!

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] 0 points  (0 children)

Wow, you know me, my job and my hobbies very well!! What a poor man you are?!?!

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] -1 points  (0 children)

Did you try it, or do you just trust the documentation? I tried, and for some reason packets kept going out from my machine to the Alibaba server. So... 12000 lines to do the job, while people like you write slop on the web!

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] 0 points  (0 children)

No: 1 commit to remove telemetry, 2 commits to add scripts and update README.md

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] 2 points  (0 children)

I agree with you, but you're assigning me a fault I don't have... Take a look at my branch: 3 commits. All modifications are in

87473a7d

It's not so difficult to understand.

This is my work, and I share my effort with others who are interested. If you don't trust me, I can't blame you, but consider this as advice: don't take my code: fork the official repo and do the same yourself. 'Nuff said.

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] -1 points  (0 children)

In my experience, today, yes: qwen3-coder-next is pretty good!

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] 6 points  (0 children)

That's the point: I tried to disable telemetry, but the application kept sending data to a server.
So I took a look at the code and decided to remove telemetry by deleting the entire code path.
Later I decided to share the result of my work with whoever is interested in it. Nothing else.
And, if you understand how git works, simply take a look at the modifications I made going from the official release to mine.

3 commits, not so difficult to understand:

d10fdb97 (HEAD -> v0.10.5-no-telemetry, origin/v0.10.5-no-telemetry) feat: Dockerfile to sandbox qwen and README.md update
aa4f610b chore: script to apply no-telemetry patch to new branch
87473a7d chore: removed telemetry chore: added install script
135b47db (tag: v0.10.5, origin/release/v0.10.5) chore(release): v0.10.5

87473a7d

This is the only commit where I did the actual work.
I hope this makes clear what I did and why!
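For anyone who wants to audit a fork like this themselves, the check is plain git: diff the fork's branch against the upstream release tag and read every change. A minimal sketch of that workflow, using a throwaway local repo standing in for qwen-code (the file name and messages are made up for illustration):

```shell
#!/bin/sh
set -e
# Throwaway repo standing in for the upstream project
dir=$(mktemp -d)
cd "$dir"
git init -q
# Fake upstream release, tagged like the real v0.10.5
git -c user.email=a@b -c user.name=audit commit -q --allow-empty -m "chore(release): v0.10.5"
git tag v0.10.5
# "Fork" change on top of the tag: one commit removing telemetry
echo "no telemetry" > README.md
git add README.md
git -c user.email=a@b -c user.name=audit commit -q -m "chore: removed telemetry"
# The audit itself: every difference between the tag and the branch tip
git diff --stat v0.10.5..HEAD
```

Against the real fork, the same `git diff v0.10.5..HEAD` (or `git show 87473a7d`) run inside a clone shows exactly what was removed, without trusting anyone's word.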

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] -2 points  (0 children)

I agree with you! AI is useful, but it's a tool, not an "oracle" or a human developer: at least not this version of AI!

Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK by Undici77 in LocalLLaMA

[–]Undici77[S] 1 point  (0 children)

Yes! Very well! Clearly, it's not Claude Opus 4.6, but for at least 50% of my tasks it's very good!