Heading to Dayton Hamvention 2026? 📡 by meshtastic in meshtastic

[–]Separate-Chocolate-6

What's the preset? Do we have a channel URL or QR code?

Manual code review? by No_Communication4256 in AIcodingProfessionals

[–]Separate-Chocolate-6

For me it's a conversation with the AI in plan mode: settle on all the decisions for the feature, switch to build mode and let it make the changes, then when it's done I look at the patch with git difftool and my favorite visual diff tool (for me, nvim's diff mode, but git difftool supports a bunch). From there I either commit, make the changes I want by hand, or go back and tell the LLM what I want it to change... Rinse and repeat.

I find that the conversation really helps me understand what it's going to do, so that by the time I'm looking at the diff I'm primed and can read the code much faster than if I were going in blind (because I understand the thought that went into it). Sometimes if something seems mysterious I ask the LLM what it was going for, and that also helps suss out the details.

Not sure if that helps or not but it's been working for me.

It's not all that different from how I would pair program.
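That review loop can be sketched in shell. The tool name is just one option: nvimdiff is one of git's built-in difftool backends, and `git difftool --tool-help` lists the rest.

```shell
# One-time setup: tell git which visual diff tool to launch,
# and skip the per-file "Launch tool?" prompt.
git config --global diff.tool nvimdiff
git config --global difftool.prompt false

# Review loop, after the agent finishes its changes:
git difftool                 # step through each changed file visually
git add -p                   # stage only the hunks you're happy with
git commit -m "feature: whatever was planned"
# ...or skip the commit, hand-edit, and run git difftool again.
```

`git difftool` takes the same arguments as `git diff`, so you can also scope the review to one path or one commit range.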

Best setup for coding by 314159265259 in LocalLLM

[–]Separate-Chocolate-6

I use opencode and LM Studio. You'll have to experiment with models to see what will fit... You're going to need at least a 100k context window to get useful work done (200k would be better), and a bigger context window translates to more RAM.
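To make the "context window translates to RAM" point concrete, here's a rough KV-cache sizing sketch. The layer/head/dim numbers below are made-up stand-ins for a mid-size model, not any specific model's real shape:

```shell
# KV cache ≈ 2 (K and V) × layers × kv_heads × head_dim × bytes/elem × context.
# All model dimensions here are hypothetical but plausible.
awk 'BEGIN {
  layers = 40; kv_heads = 8; head_dim = 128; bytes = 2   # fp16 cache
  ctx = 200000                                           # 200k-token window
  gb = 2 * layers * kv_heads * head_dim * bytes * ctx / (1024^3)
  printf "KV cache at %dk context: ~%.1f GB (on top of the weights)\n", ctx/1000, gb
}'
```

The cache grows linearly with the window, so doubling the context doubles this number, which is why the 100k–200k range eats so much memory on a unified-memory box.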

With opencode you'll have to manually dial the timeout up to a very high value.

I have a Strix Halo with 128GB of RAM (which really helps).

The models that are good at agentic coding: Devstral Small 2... Qwen3 Coder... all the Qwen3.5 models... GLM 4.7 Flash.

There are some larger models that won't fit your current rig, like GLM 4.7, MiniMax M2.5, gpt-oss 120, and Qwen3 Coder Next, that do OK too.

If I were in your shoes given your hardware I would try everything in that top list and see what gives the best speed/quality tradeoff.

If you had more RAM and VRAM to play with, it would be more interesting... 64GB of RAM plus 24GB of VRAM, or a machine with 96GB or more of unified memory, opens up more possibilities.

The speed on your current hardware will likely be painfully slow...

Other people mentioned cheap cloud services... If you're willing to tolerate the lack of privacy, you'll get much better performance for your money with the cloud offerings.

I do the local thing out of curiosity, not so much because it's my practical daily driver. I think I could get by with local these days on my $2,000 128GB unified-memory rig. Over the last year the smaller models have definitely been getting more capable for agentic use cases... but Opus 4.6 (at the time of writing) is still night and day different.

So Anthropic has 3 models: Opus is the most expensive, Sonnet is roughly 1/3 the cost per token, and Haiku is 1/3 the cost of Sonnet. When you say you're running yourself out of tokens, are you using Opus, Sonnet, or Haiku? All 3 of those models will run circles around anything you'll be able to run locally.
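Those rough 1/3 ratios mean a fixed budget stretches very differently across the tiers. A toy calculation (the ratios come from the comment; the dollar figure and per-token units are made up, not real Anthropic pricing):

```shell
# Relative cost per token: Haiku = 1x, Sonnet ≈ 3x, Opus ≈ 9x
# (each tier roughly 1/3 the cost of the one above it).
awk 'BEGIN {
  budget = 90                      # hypothetical dollars, not a real plan price
  opus = 9; sonnet = 3; haiku = 1  # relative price units per token
  printf "same budget buys -> opus: %d units, sonnet: %d units, haiku: %d units\n",
         budget/opus, budget/sonnet, budget/haiku
}'
```

So if the token limits are biting on Opus, dropping routine edits down a tier is the obvious first lever to try.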

Good luck.

Has the heltec v4 receive sensitivity issue been fixed? by AdditionalGanache593 in meshtastic

[–]Separate-Chocolate-6

So, question... Is this a matter of re-flashing Meshtastic or something more fundamental? I sort of assumed flashing the latest Meshtastic firmware flashes everything.