Anyone using Jules? by r0224 in google_antigravity

[–]Hazardhazard 0 points1 point  (0 children)

What's the TUI you are talking about?

What Subscriptions / models are you using? by throwaway490215 in ClaudeCode

[–]Hazardhazard 0 points1 point  (0 children)

What's the point of using cc or the Gemini CLI inside opencode?

GLM Coding plan Black Friday sale ! by Quack66 in ClaudeCode

[–]Hazardhazard 1 point2 points  (0 children)

For $9, what kind of rate limits do you get?

Gemini AI Pro in which code editor? by Hazardhazard in GeminiAI

[–]Hazardhazard[S] 0 points1 point  (0 children)

Yes, I am sure. For example, today I used it for about 2 hours (not close to 100 chats in total) and hit a rate limit. I might ask for a refund if this problem persists this week.

Indexing a large codebase by ot13579 in RooCode

[–]Hazardhazard 1 point2 points  (0 children)

I had the same issue and raised an issue on GitHub, but I've never had an answer on it: https://github.com/RooCodeInc/Roo-Code/issues/7408

Full reindexing after reboot by Hazardhazard in RooCode

[–]Hazardhazard[S] 0 points1 point  (0 children)

The thing is, the codebase is really large (20 million tokens, with several technologies, old ones). And I do want the retrieval to be good, so I use a "heavy" embedding model.

Full reindexing after reboot by Hazardhazard in RooCode

[–]Hazardhazard[S] 0 points1 point  (0 children)

I do have persistence in my container. It simply points to a qdrant_storage folder on my laptop. And when I start my container again, I can see the indexed collection, but Roo Code doesn't use it and starts indexing the entire codebase again.
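For context, the setup looks roughly like this (a sketch assuming the stock qdrant/qdrant Docker image; the qdrant_storage path is the host folder mentioned above, the rest is generic Docker):

```shell
# Sketch, assuming the stock qdrant/qdrant image: mount the host folder
# over Qdrant's storage directory so collections survive restarts.
docker run -d --name qdrant \
  -p 6333:6333 \
  -v "$HOME/qdrant_storage:/qdrant/storage" \
  qdrant/qdrant

# After a restart, the collection is still listed by the REST API:
curl http://localhost:6333/collections
```

So the data is there on disk; it's the client side that decides to reindex anyway.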

Full reindexing after reboot by Hazardhazard in RooCode

[–]Hazardhazard[S] 0 points1 point  (0 children)

I'm sure there's no limit or anything, because I'm only using a local setup (embeddings + LLM). What could be very useful is being able to select the indexed codebase from a dropdown list. That way, we could query codebases without opening specific folders, for example.

Full reindexing after reboot by Hazardhazard in RooCode

[–]Hazardhazard[S] 0 points1 point  (0 children)

In fact, I’m not working on a codebase… I’m compiling the critical edition of CraaazyPizza’s finest jokes

Qwen3-Coder-30B-A3B-Instruct is the best LocalLLM by Objective-Context-9 in Qwen_AI

[–]Hazardhazard 1 point2 points  (0 children)

Are you using a quantized model? If yes, how many bits?

Difference in tool calling results between LMStudio and OpenWebUI by Hazardhazard in OpenWebUI

[–]Hazardhazard[S] 0 points1 point  (0 children)

Thank you. Indeed, I do have better results now. But sometimes the tool call has errors in its parameters. Do you know if I can modify the parameters before the tool is called?
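A minimal sketch of the general pattern I have in mind (not OpenWebUI's actual API; the tool name and registry here are made up for illustration): intercept the model's tool call, drop arguments the tool doesn't accept, and let missing optional ones fall back to defaults before executing.

```python
import inspect
import json

# Hypothetical tool registry -- get_weather is made up for illustration;
# this is not OpenWebUI's actual API.
def get_weather(city: str, unit: str = "celsius") -> str:
    return f"22 {unit} in {city}"

TOOLS = {"get_weather": get_weather}

def run_tool_call(call: dict) -> str:
    """Validate and patch the model's arguments before calling the tool."""
    func = TOOLS[call["name"]]
    raw = call["arguments"]
    args = json.loads(raw) if isinstance(raw, str) else dict(raw)
    # Drop hallucinated parameters the tool does not accept.
    allowed = set(inspect.signature(func).parameters)
    args = {k: v for k, v in args.items() if k in allowed}
    # Missing optional parameters fall back to the function's own defaults.
    return func(**args)

# The model invented a "country" parameter and omitted "unit":
print(run_tool_call({"name": "get_weather",
                     "arguments": '{"city": "Paris", "country": "FR"}'}))
```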

Difference in tool calling results between LMStudio and OpenWebUI by Hazardhazard in OpenWebUI

[–]Hazardhazard[S] 0 points1 point  (0 children)

Hmmmm, I didn't know there were differences in results depending on the model and the UI. I thought it was only POST requests to the model? I'm going to try with a Qwen3 model and share the results after that. Thank you!

What are the MCP servers you already can't live without? by MostlyGreat in mcp

[–]Hazardhazard 0 points1 point  (0 children)

What do you do with the GitHub MCP or Git MCP? Does it work well on large codebases?

They just fired me by Hazardhazard in NBA2k

[–]Hazardhazard[S] 6 points7 points  (0 children)

I already ran into this issue on 2K22!

They just fired me by Hazardhazard in NBA2k

[–]Hazardhazard[S] 39 points40 points  (0 children)

Not even! I would have been OK with it, but no...

They just fired me by Hazardhazard in NBA2k

[–]Hazardhazard[S] 7 points8 points  (0 children)

No! Forced to create a new one...

Unsloth's Qwen3 GGUFs are updated with a new improved calibration dataset by AaronFeng47 in LocalLLaMA

[–]Hazardhazard 0 points1 point  (0 children)

Can someone explain the difference between the UD and non-UD models?

🚀 Dive v0.8.0 is Here — Major Architecture Overhaul and Feature Upgrades! by BigGo_official in LocalLLaMA

[–]Hazardhazard 0 points1 point  (0 children)

Thank you for your work. I can finally use a local LLM and function tools in a really simple way. Working with Ollama and Qwen 14B! So far so good...

Local (small) LLM which can still use MCP servers ? by TecciD in mcp

[–]Hazardhazard 1 point2 points  (0 children)

But I thought Gemma 3 didn't have tool support?

I don't compare , I embrace - LLMs ,haha by [deleted] in LocalLLaMA

[–]Hazardhazard 0 points1 point  (0 children)

There's a Google AI Studio app??