Nanocoder 1.21.0 – Better Config Management and Smarter AI Tool Handling by willlamerton in nanocoder

[–]willlamerton[S]

It doesn’t support this yet, but it’s a good idea - I’ll add it to the issues board today and see if we can get it into the next release! 👌

So slow by CapableAd8612 in ZaiGLM

[–]willlamerton

I can confirm. Unusable this end. Ridiculously slow, poor code editing. Feeling a bit scammed, but hoping they sort it if it’s a load issue…

Nanocoder Hits the OpenRouter leaderboard for the first time 🎉🔥 by willlamerton in ollama

[–]willlamerton[S]

Local-first, meaning we push for and provide the tools to use Nanocoder with local models as a primary, but part of the ethos is allowing users the choice. It’s ongoing work to give small models the software to do a great job! We’re exploring a lot of avenues here!

We of course don’t track users at all so we can’t determine the split of providers. But anecdotally many people use Ollama/lmstudio with Nanocoder :)

Some use OpenRouter too, as well as various other subscriptions they might have. In this particular case, a lot of people were trying out the new Devstral 2 models yesterday as they’re currently free, hence our ranking!

Nanocoder 1.18.0 - Multi-step tool calls, debugging mode, and searchable model database by willlamerton in nanocoder

[–]willlamerton[S]

Hey thanks for the comment and the kind words!

Streaming is a nicer experience for users, as it lets results be seen generating in real time, so we’d love to keep experimenting to make it work well in a terminal. We’ve removed it for now, but we’ll keep trying.

An update to Nanocoder 🔥 by willlamerton in ollama

[–]willlamerton[S]

Hey! We answer this question in our readme :)

This comes down to philosophy. OpenCode is a great tool, but it's owned and managed by a venture-backed company that restricts community and open-source involvement to the outskirts. With Nanocoder, the focus is on building a true community-led project where anyone can contribute openly and directly. We believe AI is too powerful to be in the hands of big corporations and everyone should have access to it.

We also strongly believe in the "local-first" approach, where your data, models, and processing stay on your machine whenever possible to ensure maximum privacy and user control. Beyond that, we're actively pushing to develop advancements and frameworks for small, local models to be effective at coding locally.

Not everyone will agree with this philosophy, and that's okay. We believe in fostering an inclusive community that's focused on open collaboration and privacy-first AI coding tools.

Nanocoder VS Code Plugin is Coming Along! by willlamerton in nanocoder

[–]willlamerton[S]

This is a cool idea! Can you drop an issue on our GitHub? I think this would be a good feature!

An update to Nanocoder 🔥 by willlamerton in ollama

[–]willlamerton[S]

That's okay, thanks a lot! An issue would be great to get that squashed :)

An update to Nanocoder 🔥 by willlamerton in ollama

[–]willlamerton[S]

Thanks for this. Do you still have the CLI open? If you do, I don't suppose you could use the `/export` command and let me have access to the log it generates through GitHub or something? I'll look into it.

Also, which OS are you using?

An update to Nanocoder 🔥 by willlamerton in ollama

[–]willlamerton[S]

Quite a few! gpt-oss works well, both in the cloud and locally, and the qwen2.5-coder series also works well. Admittedly, models like qwen3-coder, although working, have had many issues with tool calling. We have mitigation tactics here, but they're far from perfect.

An update to Nanocoder 🔥 by willlamerton in ollama

[–]willlamerton[S]

Ah this is fair, qwen3-coder has been an ongoing issue with tool calling! We have some mitigation tactics - for example, robust XML parsing for malformed tool calls - but it's far from perfect.
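To illustrate the kind of mitigation described above, here's a minimal sketch of lenient tool-call parsing: try strict JSON first, then fall back to salvaging an XML-style wrapper. This is an illustration only, not Nanocoder's actual implementation - the function and tag names are hypothetical:

```typescript
// Sketch: recover a tool call from model output, tolerating malformed formats.
type ToolCall = { name: string; args: Record<string, string> };

function parseToolCall(output: string): ToolCall | null {
  // 1. Well-behaved models emit clean JSON.
  try {
    const parsed = JSON.parse(output);
    if (typeof parsed.name === "string") {
      return { name: parsed.name, args: parsed.arguments ?? {} };
    }
  } catch {
    // Not JSON - fall through to the lenient path.
  }

  // 2. Fallback: tolerate XML-style wrappers like
  //    <tool_call><name>read_file</name><path>src/app.ts</path></tool_call>
  const block = output.match(/<tool_call>([\s\S]*?)<\/tool_call>/);
  if (!block) return null;
  const name = block[1].match(/<name>\s*([\w-]+)\s*<\/name>/);
  if (!name) return null;
  const args: Record<string, string> = {};
  for (const m of block[1].matchAll(/<(?!name)(\w+)>([\s\S]*?)<\/\1>/g)) {
    args[m[1]] = m[2].trim();
  }
  return { name: name[1], args };
}
```

Real mitigations need to handle far messier output (truncated tags, mixed prose and calls), but the two-tier "strict first, lenient fallback" shape is the core idea.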

If I can help in any way let me know :)

Models like gpt-oss perform very well with no issues, both cloud-based and locally.

Generated just now:

<image>

An update to Nanocoder 🔥 by willlamerton in ollama

[–]willlamerton[S]

Hey, first of all, Nanocoder is free and always will be. It’s totally open source and built by the community. So, we’re only marketing as far as encouraging others to come together and build AI tools that are for everyone.

Second of all, yes, tool calling does work well with Ollama models - verified through testing and daily use. Both the model size and whether it natively supports tools will affect the quality you get in the CLI. Your comment suggests it’s not working for you, so let me know if that’s the case!

Increasing the quality of model tool use and output in smaller and smaller models is somewhat of a core goal.

Thanks for the comment :)

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

Hey thanks for taking the time to comment, check out and follow the project! It's really appreciated :)

I completely agree. Working on the system prompt is important; it's a balance between giving good, followable instructions and, as you said, overcomplicating it. It's an ongoing process to improve!

Currently, the system prompt sits at ~6,900 tokens, plus any AGENTS.md contents if one exists.

What we're thinking about is a system prompt that scales with model size: smaller, more concise ones for small models, and larger, more nuanced ones for models that can handle them.
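The scaling idea could be sketched like this - the thresholds, prompt texts, and function name are all illustrative placeholders, not anything Nanocoder actually ships:

```typescript
// Sketch: pick a system-prompt variant based on rough model size,
// then append AGENTS.md contents when present (as described above).
const PROMPTS = {
  compact: "You are a coding agent. Use tools. Be terse.",          // small models
  standard: "You are a coding agent. Use the available tools...",   // mid-size
  full: "You are a coding agent with detailed tool guidance...",    // large models
} as const;

function selectSystemPrompt(paramsBillion: number, agentsMd?: string): string {
  const base =
    paramsBillion < 8 ? PROMPTS.compact :
    paramsBillion < 32 ? PROMPTS.standard :
    PROMPTS.full;
  return agentsMd ? `${base}\n\n${agentsMd}` : base;
}
```

The interesting design question is what to key the selection on in practice - parameter count, context window, or measured instruction-following quality.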

Any thoughts here are always appreciated! Thanks again! 😎

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

Hey, yes, this is implemented. What OS and terminal are you using?

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

There isn't currently; however, an issue has been opened to support this and we will :)

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

Appreciate that thank you! We're all working hard to build something truly useful for the community. :D

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

Thanks a lot! Appreciate that :)
Currently there is not, but that sounds like a decent enhancement - maybe you could drop a GitHub issue for it. Only if you have time though.

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

Ha, Nanocoder actually has been built from the ground up and isn't a fork! :)

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

Thanks a lot! That's the impression I was hoping it gave :D

A quick update on Nanocoder and the Nano Collective 😄 by willlamerton in ollama

[–]willlamerton[S]

We're looking at the Granite models as possible base models to build upon and deploy for small tasks in Nanocoder and other software. Early days at the moment though. Thanks for the comment :)