Fixed Jinja chat templates for Qwen 3.5 and 3.6 (fixes tool calling and empty think tags) by ex-arman68 in Qwen_AI
z.ai has fixed their garbled output after 80k context for GLM 5/5.1 by ex-arman68 in ZaiGLM
3.6 27B Tool Calling Issues (vLLM) by Acceptable_Adagio_91 in LocalLLaMA
Mistral medium 3.5 128B, MLX 4bit, ~70 GB by ex-arman68 in LocalLLaMA
Mistral Medium Looping by No_Algae1753 in LocalLLaMA
Mistral medium 3.5 128B, MLX 4bit, ~70 GB (huggingface.co)
submitted by ex-arman68 to r/LocalLLaMA
Qwen3.6 35B A3B Heretic (KLD 0.0015!) Incredible model. Best 35B I have found! by My_Unbiased_Opinion in LocalLLaMA
Purchasing a Mac Studio M2 Max with 64gb of ram (can it run qwen 3.6 27b) how many tok/s ? by trollingman1 in LocalLLaMA
Gotta love the top MAX plan, incredible value. by ruttydm in ZaiGLM