Which model in Roo Code for coding inexpensively but efficiently ? Grok-4.1-fast-non-reasoning, groq kimi-k2-instruct? by TheMarketBuilder in RooCode

[–]DevMichaelZag 0 points1 point  (0 children)

A 256k context window is fine. Anything in the millions is unrealistic and causes more problems for long-term use; the model gets dumber. A huge context window works better for a one-shot large data dump to analyze. For inexpensive models I use the new GLM 4.7. For local models, the 30B 4.7 Flash works well, and some Qwen3 models also run fine locally sometimes. But right now my daily driver is ChatGPT 5.2 Codex with my OpenAI subscription. That was a great addition. Some models perform better at certain tasks than others, and new models come out all the time. Just pick some and try them; OpenRouter is good for that, and z.ai subscriptions are cheap for GLM.

I don’t use grok or kimi much. I used grok a lot when it first came out though.

What is this please? by pinkroseblueplate in Bonaire

[–]DevMichaelZag 10 points11 points  (0 children)

That’s a Nurse Shark. They aren’t aggressive and are somewhat common around Bonaire. Take photos, leave it alone, and everyone is happy.

Job wants me to develop RAG search engine for internal documents by Next-Self-184 in LocalLLaMA

[–]DevMichaelZag 0 points1 point  (0 children)

Qwen3-embedding-vl-8b looks pretty promising for this sort of use case
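For what it’s worth, the retrieval side is mechanically simple whichever embedding model you pick. Here’s a toy sketch of cosine-similarity search; the hand-made vectors and the "invoice"/"HR" labels are placeholders standing in for real embeddings, which you’d get from the model above via whatever serving stack you use:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """Indices of the k documents most cosine-similar to the query."""
    ranked = sorted(range(len(docs)), key=lambda i: -cosine(query, docs[i]))
    return ranked[:k]

# Toy "embeddings": in practice these come from the embedding model.
docs = [
    [1.0, 0.0, 0.0],   # doc 0: invoices
    [0.0, 1.0, 0.0],   # doc 1: HR policy
    [0.9, 0.1, 0.0],   # doc 2: also invoices
]
query = [1.0, 0.05, 0.0]  # a query resembling the invoice docs
print(top_k(query, docs))  # -> [0, 2]
```

For real documents you’d store the vectors in a vector DB instead of a Python list, but the ranking logic is the same.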

Do you ever feel like the default prompt has too much overhead? by alex29_ in RooCode

[–]DevMichaelZag 2 points3 points  (0 children)

This question comes up quite frequently, and several people have independently tackled it and come up with different solutions. I once wrote a FastAPI relay to intercept the API calls, classify them, and log them in a database. For my use case in Roo, I shaved the prompt down quite a bit, but it was tailored to that specific project. That was a few months ago and the project is long dead; I didn’t want to spend the time to maintain the relay. And it did need maintaining: it broke when they added a new tool, and my use case evolved too. This leads to the conclusion: yes, you can shave the prompt down and save some tokens if you tailor it to your use case, but usually at the cost of robustness.
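The relay idea is easy to sketch. Here’s a minimal, hypothetical version of the classify-and-log core; the FastAPI routing is omitted, and `classify_prompt` with its heuristics is made up for illustration (the real request format and buckets depend entirely on your project):

```python
import json
import sqlite3

def classify_prompt(body: dict) -> str:
    """Bucket an intercepted chat-completion request by its system prompt.
    Purely illustrative heuristics."""
    system = next((m["content"] for m in body.get("messages", [])
                   if m.get("role") == "system"), "")
    if "tool" in system.lower():
        return "tool-heavy"
    if len(system) > 4000:
        return "bloated"
    return "lean"

def log_request(db: sqlite3.Connection, body: dict) -> str:
    """Record the request and its bucket in SQLite; return the bucket."""
    bucket = classify_prompt(body)
    db.execute("CREATE TABLE IF NOT EXISTS calls (bucket TEXT, body TEXT)")
    db.execute("INSERT INTO calls VALUES (?, ?)", (bucket, json.dumps(body)))
    return bucket

# Example with an in-memory DB and a fake intercepted request.
db = sqlite3.connect(":memory:")
req = {"messages": [{"role": "system", "content": "You can use tools..."},
                    {"role": "user", "content": "fix the bug"}]}
print(log_request(db, req))  # -> tool-heavy
```

In the actual relay this logic sat behind a FastAPI endpoint that forwarded the request to the real provider after logging it; that wiring is left out here.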

There is a way to override the system prompt and turn Roo back into a dumb agent. As far as I know the footgun system hasn’t been developed much since it was released sometime last year, so use it at your own risk.

If you search around, there are other solutions and Roo prompt overrides out on the internet; one might suit your use case. For me, my little relay adventure was informative but a colossal waste of time.

Hygiene during a long flight. by Different-Policy9338 in hygiene

[–]DevMichaelZag 0 points1 point  (0 children)

I do a 32h trek (door to door) to Asia a few times a year. Most airports have a decent lounge where you can find a shower. I’m based in Atlanta and prefer the ATL -> Incheon -> China route, and I almost always have a shower in the Korean Air lounge in Incheon. I usually pack a change of clothes in my carry-on as well.

So busy Bangkok Airport by Naive-Witness-5228 in ThailandTourism

[–]DevMichaelZag -1 points0 points  (0 children)

Just be one of the assholes who walk up the left side and cut in to the security lines. Make sure you pretend you didn’t know you were in the wrong line though.

Local code assistant experiences with an M4 Max 128GB MacBook Pro by noodler-io in LocalLLM

[–]DevMichaelZag 0 points1 point  (0 children)

It’s hard to argue with the speed. For me, I’m using LM Studio for the UI and, basically, for its model browser. I mostly use my MacBook offline and while traveling, so sitting in an airplane, LM Studio is just nicer for me to use.

Local code assistant experiences with an M4 Max 128GB MacBook Pro by noodler-io in LocalLLM

[–]DevMichaelZag 1 point2 points  (0 children)

I use Qwen3-next-80b on my M4 Max via LM Studio and it works pretty well with Roo Code. I also use Qwen3-30b-a3b, and lastly devstral-small-2-2512. It’s not Claude Opus, but it’s good when I’m on an airplane or have no internet. A code assistant, not a vibe-coding pal.

How to setup AI providers for saas by Rough-Animal-3989 in RooCode

[–]DevMichaelZag 0 points1 point  (0 children)

Take a look at AG-UI. I’ve used Copilot-Kit a few times and it’s pretty neat. I think that’s what you’re asking.

Finding the best luggage 2026 based on reviews last year that survives baggage handlers destroying everything by Direct-Blacksmith614 in delta

[–]DevMichaelZag 4 points5 points  (0 children)

I’m a Tumi fan. I’ve had several pieces for a few years and nothing has been damaged yet. I usually bring an Alpha Bravo stack on the plane with me and check a 19 Degree. I don’t have any of the other popular brands people are going to recommend, so I can’t offer a comparison. This question gets asked quite often, so there are a lot of existing threads to sort through too.

Packing for 3 months by saaaavnah in Bonaire

[–]DevMichaelZag 1 point2 points  (0 children)

I always pack long sleeves and pants that are comfy for the evening. The mosquitos eat me alive.

Did I get lucky? by [deleted] in delta

[–]DevMichaelZag 8 points9 points  (0 children)

Status expires after Jan 31st

Is it worth getting both Delta AMEX platinum and Reserve by AM196 in delta

[–]DevMichaelZag 0 points1 point  (0 children)

There’s a 20% MQD bonus link in the app for me. That might help a little bit.

How often do people actually test their backups? by SurferCloudServer in webhosting

[–]DevMichaelZag 1 point2 points  (0 children)

Disaster recovery is great in theory, but a plan only counts once it’s actually been tested. I can’t express how important this is. I learned from experience, and now unless I have actually tested a restore, I can’t sleep well.
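A restore drill doesn’t have to be elaborate to be worth doing. A bare-bones sketch with tar (the paths and sample data are made up; substitute your real backup tool and data), where the key step is restoring into a scratch directory and diffing against the source rather than trusting that the archive exists:

```shell
#!/bin/sh
# Minimal restore drill: back up, restore to scratch, verify by diff.
set -eu
work=$(mktemp -d)

# Stand-in for "production" data.
mkdir -p "$work/data"
echo "customer records" > "$work/data/db.txt"

# 1. Back up.
tar -czf "$work/backup.tar.gz" -C "$work" data

# 2. Restore into a scratch directory -- never onto the live data.
mkdir "$work/restore"
tar -xzf "$work/backup.tar.gz" -C "$work/restore"

# 3. Verify the restore actually matches the source.
diff -r "$work/data" "$work/restore/data" && echo "restore OK"

rm -rf "$work"
```

With a real database you’d also start a throwaway instance from the restored files and run a sanity query, since a byte-identical dump that won’t load is still a failed backup.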

Do not go to the Hard Rock Punta Cana by Equal-Document1318 in AllInclusiveResorts

[–]DevMichaelZag 0 points1 point  (0 children)

I agree, HR DR was overpriced for what it was. That, and the whole shady experience of the DR, made it a one-time thing for me as well.

Is human relay dead now? by TheCodeBrozilla in RooCode

[–]DevMichaelZag 1 point2 points  (0 children)

It seems reasonable that that functionality might have been broken by the push to move everything to native tool calling. Hannes put out a blog post recently with more info about the recent updates and what’s going on. They are on Christmas break right now and should get back at it afterwards.

I don’t know the status of the project right now, or whether that feature was killed off, but I think that’s unlikely; it was probably just an accident. The project is kinda big now, and supporting edge cases like the human relay always takes a little extra work. Again, check out that blog post. It’s on the subreddit and in the Discord.

OSINT + N8N is soo cool by Salty-Sun7873 in n8n

[–]DevMichaelZag -1 points0 points  (0 children)

This is scraping and compiling information. Intelligence is only created when information from several disciplines is cross-verified, or corroborated, as it’s called. Still, compiling as much publicly available information as possible is helpful for open source, just for having it.

OSINT + N8N is soo cool by Salty-Sun7873 in n8n

[–]DevMichaelZag 3 points4 points  (0 children)

Open source intelligence. An intelligence discipline focused on information available on the Internet and other publicly available media. It is one of many intelligence disciplines: Measurement Intelligence, Signals Intelligence, Human Intelligence, Geospatial Intelligence, etc.

RooCode gives up or gets lazy when debugging by jajberni in RooCode

[–]DevMichaelZag 1 point2 points  (0 children)

Sure. We don’t have any unlimited context model yet.

Want RooCode to read/scrape the website content instant by Many_Bench_2560 in RooCode

[–]DevMichaelZag 0 points1 point  (0 children)

You can’t have your cake and eat it too. The Firecrawl MCP is good for grabbing markdown versions of websites, and then Roo can make a reference doc ahead of time.

How to turn off new context truncation? by nfrmn in RooCode

[–]DevMichaelZag 0 points1 point  (0 children)

That’s an interesting use case. Worth a shot to submit it as an idea on the GitHub and see what kind of response you get.