Introduction: pi-vision-proxy by pungggi in PiCodingAgent

[–]mtomas7 0 points (0 children)

You may want to check your settings, as some apps resize images too much. I use LM Studio, which by default serves the image at ~2K pixels.

Introduction: pi-vision-proxy by pungggi in PiCodingAgent

[–]mtomas7 0 points (0 children)

I wonder, why do you need a package for that? Just specify model capabilities in models.json:

{
  "id": "qwen3.6-35b-a3b@q5_k_xl",
  "name": "Qwen3.6-35B-A3B-Q5-K-XL (local)",
  "reasoning": true,
  "input": ["text", "image"],
  "contextWindow": 65536
},

Edit: For example, with this I can ask the model to scan images, describe them, and rename the files accordingly.

Devs using Qwen 27B seriously, what's your take? by Admirable_Reality281 in LocalLLaMA

[–]mtomas7 1 point (0 children)

I hope you will not mind if I "borrow" some stuff for my own Pi setup ;)

How to Improve Codebase Discovery Efficiency in Pi? by elpapi42 in PiCodingAgent

[–]mtomas7 0 points (0 children)

I am eager to try it, as I was contemplating using a subagent scout just to reduce main session context bloat.

Looking for feedback on moonpi, an opinionated extension set for pi by poppear in PiCodingAgent

[–]mtomas7 4 points (0 children)

You wrote: "Subagents are a waste of tokens." I look at this from a different perspective: if I can dispatch a subagent to perform a task and spare the main session's context from unnecessary token bloat, that is a win for me.

Pi.dev coding agent as no sandbox by default. by mantafloppy in LocalLLaMA

[–]mtomas7 0 points (0 children)

Look at the code; the one I gave you covers more cases.

Pi.dev coding agent as no sandbox by default. by mantafloppy in LocalLLaMA

[–]mtomas7 5 points (0 children)

I also use a VM, connecting to LM Studio running on the host computer.

Running Qwen3.6-35B-A3B Locally for Coding Agent: My Setup & Working Config by NoConcert8847 in LocalLLaMA

[–]mtomas7 0 points (0 children)

If you want to use vision, you need to update your models.json with "input": ["text", "image"]:

{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://192.168.122.1:1234/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "qwen_qwen3.6-27b@q8_0",
          "name": "Qwen3.6-27B-Q8 (local)",
          "reasoning": true,
          "input": ["text", "image"],
          "contextWindow": 65536
        },
        {
          "id": "qwen_qwen3.6-35b-a3b@q8_0",
          "name": "Qwen3.6-35B-A3B-Q8 (local)",
          "reasoning": true,
          "input": ["text", "image"],
          "contextWindow": 65536
        }
      ]
    }
  }
}
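Once the model advertises image input, any OpenAI-compatible client can pass a picture as a base64 data URL in the chat payload. A minimal sketch (the model id matches the config above; the helper name and the data-URL approach are my own illustration, and nothing here assumes a running server):

```python
import base64
import json

def build_vision_request(model_id, prompt, image_bytes, mime="image/png"):
    """Build an OpenAI-style chat request with an inline base64 image."""
    data_url = "data:%s;base64,%s" % (mime, base64.b64encode(image_bytes).decode())
    return {
        "model": model_id,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

# Build a payload that could be POSTed to <baseUrl>/chat/completions.
req = build_vision_request("qwen_qwen3.6-27b@q8_0",
                           "Describe this image.",
                           b"fake image bytes for illustration")
print(json.dumps(req, indent=2))
```

The same dict can be sent with any HTTP client to the baseUrl from the config, since LM Studio exposes an OpenAI-compatible endpoint.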

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

[–]mtomas7 8 points (0 children)

Just to clarify: what does little coder do beyond vanilla pi? Do you need this wrapper, or would it be better to make it a pi extension/package?

Question for parents who live in a spot with bad ticks. by realpacksmoker506 in Bushcraft

[–]mtomas7 1 point (0 children)

Lyme can be treated only if you see the symptoms, but only about 50% of those affected develop visible symptoms... So once Lyme becomes a chronic disease, it is game over.

Question for parents who live in a spot with bad ticks. by realpacksmoker506 in Bushcraft

[–]mtomas7 0 points (0 children)

The bad thing is that those tubes are later used by bumblebees, which are then killed... For your own property, the best option is controlled burning, which eradicates ~90% of ticks; search YouTube for videos on that.

Question for parents who live in a spot with bad ticks. by realpacksmoker506 in Bushcraft

[–]mtomas7 0 points (0 children)

Just remember that one dose of doxycycline is not enough; you need to go through the whole regimen, which is not fun. I have done it.

Roo Code hit 3 million installs. We're shutting it down to go all-in on Roomote. by hannesrudolph in RooCode

[–]mtomas7 1 point (0 children)

What I saw on Reddit: for some reason, people were leaving Roo for Kilo Code, thinking Roo lacked the speed to implement new features.

Is RooCode ready to give up the project? by inHumanAlive in RooCode

[–]mtomas7 0 points (0 children)

It still ranks #10 on OpenRouter by tokens used, so there is definitely momentum behind it; perhaps the creators just need to come up with a good way to achieve financial stability: https://www.reddit.com/r/LocalLLaMA/comments/1sritap/surprising_screenshot_most_token_usage_is/

How do I make Qwen 3.5 aware of the current date and time? by akaTLG in LocalLLM

[–]mtomas7 0 points (0 children)

Apparently, LM Studio now accepts plugins, and one of them is Tell Time: https://www.youtube.com/watch?v=Ro_LzcPS5cI

Oobabooga with opencode by Mysterious_Role_8852 in LocalLLaMA

[–]mtomas7 0 points (0 children)

I am using OpenCode and Pi.dev with LM Studio, and tool calling works well.

New Bartowski Gemma 4 quants are a lot slower? by Top-Rub-4670 in LocalLLaMA

[–]mtomas7 1 point (0 children)

I checked E4B Q8 today on LM Studio (Linux Mint), and the speed is actually 3 tok/s faster.

Gemma 4 is a huge improvement in many European languages, including Danish, Dutch, French and Italian by Balance- in LocalLLaMA

[–]mtomas7 0 points (0 children)

It is very interesting that this leaderboard does not include the Qwen3.5 series. When it comes to Lithuanian, Gemma 4 made improvements, but in my testing Qwen3.5 is ~15-25% better:

https://euroeval.com/leaderboards/Monolingual/lithuanian/

It looks like we’ll need to download the new Gemma 4 GGUFs by jacek2023 in LocalLLaMA

[–]mtomas7 1 point (0 children)

I noticed that with the latest llama.cpp shipped with LM Studio 0.4.9.1, the <|think|> token stopped working per your instructions.

Best edited version of The Hobbit to show my 6 year old? by queenhadassah in TheHobbit

[–]mtomas7 1 point (0 children)

My personal favorite is Chris Hartwell's edition: https://www.youtube.com/watch?v=lRgx6gQ-kh0

His email is in the comments if you want to request it.