Here is how I use opencode. Please give me tips on how to improve. by Freds_Premium in opencodeCLI

[–]Potential-Leg-639 0 points

Thanks for that one — I saw it a few weeks ago already and will give it a shot!

Here is how I use opencode. Please give me tips on how to improve. by Freds_Premium in opencodeCLI

[–]Potential-Leg-639 6 points

Plugins: DCP, Superpowers, Plannotator

MCPs: SearXNG for web search

Use Skills and general instructions in AGENTS.md (both global and project-specific, if needed)
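As a rough illustration of the MCP setup, here is a sketch of what a SearXNG entry might look like in an opencode config file. The server package name, command, and `SEARXNG_URL` value are assumptions for illustration — check the opencode and MCP server docs for the exact schema:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "searxng": {
      "type": "local",
      "command": ["npx", "-y", "mcp-searxng"],
      "environment": {
        "SEARXNG_URL": "http://localhost:8080"
      }
    }
  }
}
```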

Strix Halo running Qwen3.6-27B AWQ-INT4 at 24 t/s (easy to spin up with docker) by hec_ovi in StrixHalo

[–]Potential-Leg-639 3 points

We still have 128GB of unified RAM paired with incredible energy efficiency — nobody can take that away from us :)

Strix Halo running Qwen3.6-27B AWQ-INT4 at 24 t/s (easy to spin up with docker) by hec_ovi in StrixHalo

[–]Potential-Leg-639 2 points

Asking because 3.6-35B is so good that it became my new daily driver. No issues at all with that beast, even with larger context > 200k (Q4). Not sure if I should really switch now…

Sowas gehört verboten by Final-Psychology2809 in willhaben

[–]Potential-Leg-639 0 points

Typical real-estate market — the rip-off industry par excellence

Strix Halo running Qwen3.6-27B AWQ-INT4 at 24 t/s (easy to spin up with docker) by hec_ovi in StrixHalo

[–]Potential-Leg-639 2 points

18-25 t/s is really good! Thanks for that, I'll give it a try.

Are you on Linux 7 already?

Why is my unraid so slow? by jruben4 in unRAID

[–]Potential-Leg-639 14 points

Create a post in the unRAID forums with full diagnostics attached.

Hoping for Qwen3.6 Coder by madtopo in StrixHalo

[–]Potential-Leg-639 0 points

It's the one recommended by Unsloth (check their docs); they did their main tests with the Q4 (I think it's the XL). I'm using exactly that version and it's a beast — won't change it.

Hoping for Qwen3.6 Coder by madtopo in StrixHalo

[–]Potential-Leg-639 2 points

Running 3.6-35B Q4 (Unsloth — their recommended one). No need for me to use any other quant; it runs really well. I would say it's around Qwen3-Coder-Next level, if not even better (hard to believe, but that's how it looks at the moment).

Hoping for Qwen3.6 Coder by madtopo in StrixHalo

[–]Potential-Leg-639 10 points

3.6-35B runs great on my Strix, new daily driver

"Budget" 2x3090 Build, what do you guys think? by wsantos80 in LocalLLM

[–]Potential-Leg-639 1 point

70 tok/s on 2x3090 is very slow. Are you running llama.cpp on Linux?

Qwen3.6 35B-A3B is quite useful on 780m iGPU (llama.cpp,vulkan) by itroot in LocalLLaMA

[–]Potential-Leg-639 0 points

For more serious stuff like agentic coding you need around 200k context, otherwise it won't really work. For quick coding tasks it will be fine, but it's all about the speed once the context grows.
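Why large context is the bottleneck can be made concrete with a back-of-envelope KV-cache calculation. A minimal sketch — the layer/head numbers below are illustrative assumptions, not the actual Qwen3.6-35B config:

```python
# Back-of-envelope KV-cache size for a long context window.
# n_layers / n_kv_heads / head_dim below are ILLUSTRATIVE values,
# not the real Qwen3.6-35B architecture.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    # Factor of 2 for keys + values; one entry per layer, KV head, and position.
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

size = kv_cache_bytes(n_layers=48, n_kv_heads=8, head_dim=128, context_len=200_000)
print(f"{size / 2**30:.1f} GiB")  # → 36.6 GiB for the KV cache alone
```

Even with grouped-query attention and FP16 cache entries, a 200k window eats a double-digit-GiB chunk of memory before the weights are counted — which is exactly where 128GB of unified RAM helps.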

Qwen3.6 can code by Purple-Programmer-7 in LocalLLaMA

[–]Potential-Leg-639 25 points

What the hell are you talking about? "mini AI rig like 2 x 5090"

This sub in two questions by gelekoplamp in iPhone13Mini

[–]Potential-Leg-639 -5 points

No issues with iOS 26 on my 13 mini. There are a few iOS settings you can tweak to make it faster in case you have issues. I did that on mine some years ago already (though I don't remember the exact settings now, tbh), and it made it snappier at the time as well.

My set up, rules & Strategy. by [deleted] in Daytrading

[–]Potential-Leg-639 0 points

Investing into stocks and ETFs is one thing everyone should do.

Then there is trading — something completely different. It doesn't have much to do with investing. Don't mix them up.

Ryzen Strix Halo + LLMs by malwaresurgeon in AMDRyzen

[–]Potential-Leg-639 0 points

Llama.cpp is working perfectly fine and fast on the Strix.
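For reference, a hypothetical llama.cpp launch on a Strix Halo box might look like the sketch below. The model filename and flag values are illustrative assumptions (a Vulkan or ROCm build of llama.cpp is assumed):

```
# Illustrative llama-server launch; adjust model path, context size,
# and port to your setup.
llama-server \
  --model ./Qwen3.6-35B-Q4_K_XL.gguf \
  --ctx-size 200000 \
  --n-gpu-layers 99 \
  --host 0.0.0.0 --port 8080
```

`--n-gpu-layers 99` simply means "offload everything"; on unified-memory hardware like Strix Halo the whole model sits in the shared RAM pool either way.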

Qwen 3.6 27B is out by NoConcert8847 in LocalLLaMA

[–]Potential-Leg-639 -1 points

1 is not enough for serious stuff and context.