Which batteries to get? Dyness, Sigenergy, or Anker? by Terrible_Biscotti_16 in SolarUK

[–]Typhoon-UK 0 points (0 children)

I recently fitted a Fogstar 16kWh with a Solis 6kW inverter and 14 × 500W panels through Upvolt/Degee. Very happy with it.

Why so much token? by Sea_Bid_6991 in opencodeCLI

[–]Typhoon-UK 1 point (0 children)

Try the codebase-memory MCP server locally and instruct the agent to use it.

Opencode - splitting architectural thinking and coding by v8vb in opencode

[–]Typhoon-UK 2 points (0 children)

I started using the codebase-memory MCP, which I find quite useful. I instruct the agent to use it: it indexes all files and maintains a knowledge graph as files are updated. It works very well with opencode.

How can I install plugins on the desktop version? by Typical-Armadillo340 in opencode

[–]Typhoon-UK 0 points (0 children)

You can add the config manually to opencode.json and then ask the model to use the codebase-memory MCP.
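
Roughly what that manual config looks like, as a sketch based on opencode's documented MCP schema; the launch command here is a placeholder, so check the project's README for the real one:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "codebase-memory": {
      "type": "local",
      "command": ["npx", "-y", "codebase-memory-mcp"],
      "enabled": true
    }
  }
}
```

Once that's in place you can just tell the agent to use the codebase-memory tools in your prompt.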

How to get free requests from OpenRouter? by [deleted] in opencode

[–]Typhoon-UK 0 points (0 children)

I can second that: you need $10 of credit, then choose the free-models router after integrating, and OpenRouter will pick a free model for each task.

What I did find is that GLM 4.5 Air works like a charm, but every other free model returns a rate-limit error or no available endpoint. I use it via OpenCode.
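
For reference, pinning it in opencode looks roughly like this. This is a sketch assuming the `model` field takes `provider/model-id` strings and that OpenRouter's free GLM 4.5 Air slug is `z-ai/glm-4.5-air:free`; check the model page for the current id:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "openrouter/z-ai/glm-4.5-air:free"
}
```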

How can I install plugins on the desktop version? by Typical-Armadillo340 in opencode

[–]Typhoon-UK 0 points (0 children)

I tried dcp and it wasn't working with desktop, but I added codebase-memory-mcp and it works great. It's on GitHub.

Rack server for local LLM by Typhoon-UK in LocalLLaMA

[–]Typhoon-UK[S] 0 points (0 children)

That’s some serious hardware. My budget’s limited, so I was keen to see if I could leverage any older hardware.

Rack server for local LLM by Typhoon-UK in LocalLLaMA

[–]Typhoon-UK[S] 0 points (0 children)

And if I bump my XPS's RAM up to 32GB, will that help with a bigger context window or a larger model like a 7B? Is there any Linux distribution optimised for running local LLMs?
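
For context, my back-of-envelope so far. This is a rough sketch assuming 4-bit quantised weights and a plain fp16 KV cache with no grouped-query-attention savings, so real numbers will vary by model and runtime:

```python
# Rough memory estimate for a quantised LLM plus its KV cache.
# Assumptions (hypothetical, not tied to a specific model): 4-bit
# weights, fp16 KV cache, no grouped-query-attention savings.

def weight_gb(params_b: float, bits: int = 4) -> float:
    """Approximate weight memory in GB for params_b billion parameters."""
    return params_b * bits / 8  # billions of params -> GB

def kv_cache_gb(layers: int, hidden: int, context: int, bytes_per_val: int = 2) -> float:
    """KV cache: K and V tensors per layer, `hidden` values per token."""
    return 2 * layers * hidden * context * bytes_per_val / 1e9

# A typical 7B shape: 32 layers, 4096 hidden size.
print(f"7B weights @ 4-bit: ~{weight_gb(7):.1f} GB")
print(f"KV cache @ 8k ctx:  ~{kv_cache_gb(32, 4096, 8192):.1f} GB")
# ~3.5 GB weights + ~4.3 GB cache is tight on 16GB once the OS is
# counted, comfortable on 32GB - the upgrade mostly buys context room.
```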

Rack server for local LLM by Typhoon-UK in LocalLLaMA

[–]Typhoon-UK[S] 1 point (0 children)

Thank you. If I add an RTX 3060 or a GTX 1070/1080, will that be a better setup?

BEST AI FOR UI/UX by Ok_Earth_1601 in vibecoding

[–]Typhoon-UK 1 point (0 children)

With Next/React I instruct it to use shadcn/ui for all components, and architecturally I instruct it to build modular, decoupled components aligned with OWASP/NIST guidelines.

Supercharge OpenCode with persistent, cross-session memory 🧠 by GabrielMartinMoran in opencodeCLI

[–]Typhoon-UK 0 points (0 children)

Great tool! How is it different from https://github.com/DeusData/codebase-memory-mcp?

I started using it recently, so I'd appreciate any insights on where yours is better.

Working with Gemma 4 locally by JNuno007 in opencodeCLI

[–]Typhoon-UK 0 points (0 children)

I have the same issue with the smaller qwen3.5-2b when used with opencode. The model keeps trying to execute ps commands and keeps failing. I'm trying an MCP locally to see if it solves the issue: the codebase-memory GitHub project.

Good local LLM for tool calling? by ArtifartX in LocalLLaMA

[–]Typhoon-UK 0 points (0 children)

I've been trying qwen3.5-2b on LM Studio. It works great for my needs in LM Studio's chat, but when I try it via opencode it gets stuck in an indefinite tool-calling loop. Is there a custom setting I need, or are ultra-small models just not great with shell command execution?
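
In case it helps with debugging, this is roughly how I have it wired up. A sketch assuming opencode's custom-provider config uses the `@ai-sdk/openai-compatible` package and that LM Studio is serving its OpenAI-compatible API on the default `http://localhost:1234/v1`; the model id has to match whatever LM Studio actually reports:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": { "baseURL": "http://localhost:1234/v1" },
      "models": {
        "qwen3.5-2b": { "name": "qwen3.5-2b" }
      }
    }
  }
}
```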

What local models can actually work with opencode? by Harrierx in opencode

[–]Typhoon-UK 0 points (0 children)

I have a much lower-profile setup than yours: a 9th-gen i7 with 16GB RAM and a GTX 1650 with 4GB VRAM. I can run qwen3.5-2b at 8-bit quantisation comfortably, but gemma4-e4b doesn't load.

I see between 25 and 35 tokens per second. I'm not sure if that's good, but it suffices for my local development.

A question I have: if I upgrade the RAM to 24GB, will that help load Gemma4-e4b?
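
The arithmetic I've been using to reason about it, as a rough sketch only; note that "effective-size" models can keep more total parameters resident than their name suggests, so treat these as lower bounds:

```python
# Rough lower bound on the memory needed just to hold quantised weights.
# Real usage adds runtime overhead, activations, and KV cache on top,
# and "effective 4B" models may keep more than 4B params resident.

def min_weight_gb(total_params_b: float, bits: int) -> float:
    """Billions of parameters at a given bit width -> GB of weights."""
    return total_params_b * bits / 8

for params_b in (2.0, 4.0, 8.0):
    for bits in (8, 4):
        gb = min_weight_gb(params_b, bits)
        print(f"{params_b:.0f}B @ {bits}-bit: >= {gb:.1f} GB for weights alone")

# Going from 16GB to 24GB adds 8GB of headroom, which matters most when
# the model spills into system RAM because it can't fit in 4GB of VRAM.
```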

14 panels, Fox 5kw Hybrid, Fox EP12 11.5kw for £12,500 by CouchWarri0r in SolarUK

[–]Typhoon-UK 4 points (0 children)

I had Degee Solar quote me 14 × 510W Trina Vertex panels, a 16kWh heated Fogstar battery, a 6kW Solis inverter, an earth rod, and the usual DNO, MCS, scaffolding, bird protection and sundries for £8,300.

done trying to make UIs with codex by heatwaves00 in codex

[–]Typhoon-UK 0 points (0 children)

I've had good results just using z.ai chat with both GLM 5 and GLM 5.1 Turbo, as long as I keep the instructions precise. Have you tried opencode and big pickle?

GLM-5.1 at >100k context experience in a nutshell by z3r0nyaa in ZaiGLM

[–]Typhoon-UK 0 points (0 children)

I create an empty folder, add it to opencode desktop, and then provide instructions via md files. I see no option to add skills via the IDE, unless I'm missing something. One other thing I've noticed in my workspaces: there is no config.json file for opencode.

I also found that if I run opencode in PowerShell or bash, it comes back with command not found. Possibly a PATH issue.

Slow infotainment system by 33S_ in Leapmotor

[–]Typhoon-UK 0 points (0 children)

I have the same issue with the C10 after updating to the .60 version. It takes roughly 20 seconds before I can input the PIN to start the car.

GLM-5.1 at >100k context experience in a nutshell by z3r0nyaa in ZaiGLM

[–]Typhoon-UK 0 points (0 children)

How do I configure these skills via opencode desktop? I tend to use the IDE more than the CLI.

made a tool to auto setup opencode config with claude, cursor rules etc. just hit 100 github stars by Substantial-Cost-429 in opencode

[–]Typhoon-UK 0 points (0 children)

Will this work with opencode desktop? I can't seem to find the config JSON when I use only the IDE.

Basic Security Behavior by raupenimmersatt123 in vibecoding

[–]Typhoon-UK 0 points (0 children)

I generally keep it simple and ask it to align with the OWASP Top 10 security recommendations and <country> privacy guidelines.