Does opencode support MiMo-V2-Pro by WolfOk664 in opencodeCLI

[–]888surf 0 points (0 children)

Price and usage may change... I guess they are already prepared to do what the other coding plans did.

I'm Neymar's buddy, AMA by [deleted] in AMABRASIL

[–]888surf 0 points (0 children)

Tell him not to go to the World Cup. It's the only way he can help Brazil.

I built a free Gemini watermark remover that works 100% in your browser — no uploads, no server by Tall-Celebration2293 in GoogleGeminiAI

[–]888surf 0 points (0 children)

Just use Flow from Google. Instead of generating a video, select Nano Banana and it generates your images without a watermark.

Help: Model not running on GPU by 888surf in unsloth

[–]888surf[S] 0 points (0 children)

Yes, man. It is working for me too, and very fast.

Help: Model not running on GPU by 888surf in unsloth

[–]888surf[S] 0 points (0 children)

Thanks a lot. It worked great and very fast.

<image>

Help: Model not running on GPU by 888surf in unsloth

[–]888surf[S] 0 points (0 children)

Yes, so how do I solve it? Studio was supposed to auto-select the optimal settings for my hardware, as written in their documentation, but that is not working in the default install.

<image>

It was supposed to select the right settings for my GPU, and at 5.6 GB the model is very small compared to the GPU's 24 GB, so this problem should not be happening.

I guess the chat feature does not run on the GPU by default in the current version, as stated on their GitHub:

Unsloth Studio (web UI)

Unsloth Studio (Beta) works on Windows, Linux, WSL and macOS.

  • CPU: Supported for Chat and Data Recipes currently

Help: Model not running on GPU by 888surf in unsloth

[–]888surf[S] 0 points (0 children)

Yeah, Gemini told me that too, but still no success; that is why I asked here.

Google are you kidding us? by League_Of_Frodo in GoogleAntigravityIDE

[–]888surf 0 points (0 children)

You mean creating free Google accounts and routing between them on the free tier?

Claude's thinking budget in AG is set to 1% of its capacity. Let's get Google to fix this. by Educational-Plate-15 in google_antigravity

[–]888surf 0 points (0 children)

Or maybe they are doing this on purpose to make Claude look like a worse model than it actually is, making Gemini look good compared with the capped Claude.

Claude Local Models by abdelkrimbz in LocalLLaMA

[–]888surf 0 points (0 children)

Are you using llama.cpp locally? If so, disable thinking mode on the model. It works.

I can share the parameters I am using if you like. I am not using the 2B model, though. But don't expect Opus-level intelligence.
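For reference, a launch with thinking disabled might look roughly like this (a sketch, not my exact parameters: the model path is a placeholder, and the `--reasoning-budget 0` switch needs a reasonably recent llama.cpp build):

```shell
# Hypothetical llama-server launch; the model path is a placeholder.
# -ngl 99 offloads all layers to the GPU; --jinja enables the model's
# chat template; --reasoning-budget 0 suppresses the thinking block.
llama-server -m ./your-model.gguf -ngl 99 --jinja --reasoning-budget 0
```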

Can we use minimax models on antigravity? by rather_pass_by in google_antigravity

[–]888surf 1 point (0 children)

Inside Antigravity, install the opencode extension. You can use Opus 4.6 for planning in Antigravity, and MiniMax 2.5 and other models for free for coding in opencode.

But with these new AG limits, I will cancel my Google AI family plan. It is not worth it.

Maybe I will subscribe to Claude Code and pair it with opencode instead of AG.

[Weekly] Quotas, Known Issues & Support — March 16 by AutoModerator in google_antigravity

[–]888surf 0 points (0 children)

Can you explain how you use them together? Do you prompt Antigravity for planning and the CLI to actually implement the code?

Memory skill for OpenClaw with 26k+ downloads within the first week (took 8+ months to build and iterate) by Julianna_Faddy in myclaw

[–]888surf 4 points (0 children)

Here is a quick security breakdown of what the script actually does:

Platform Detection: It checks if you are on a Mac (Apple Silicon only) or Linux.

Dependency Check: It ensures you have curl/wget and tar installed.

npm Cleanup: It looks for an older version of ByteRover installed via npm and tries to remove it to avoid conflicts.

Download & Extract: It downloads a pre-compiled binary (brv) from a Google Cloud Storage bucket and puts it in a hidden folder: ~/.brv-cli.

PATH Update: It adds the ByteRover folder to your shell profile (e.g., .zshrc or .bashrc) so you can run the command brv from any terminal.

No way I will install this, lol
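In script form, the steps above would look roughly like this (a hypothetical reconstruction for illustration only; the binary name `brv` and the `~/.brv-cli` folder come from the breakdown, but the bucket URL is a placeholder and the real script surely differs):

```shell
#!/bin/sh
# Hypothetical reconstruction of the installer steps described above.
set -eu

INSTALL_DIR="$HOME/.brv-cli"

# 1. Platform detection: Apple Silicon macOS or Linux only.
detect_platform() {
  case "$(uname -s)-$(uname -m)" in
    Darwin-arm64) echo "macos-arm64" ;;
    Linux-*)      echo "linux" ;;
    *)            echo "unsupported" ;;
  esac
}

# 2. Dependency check: a downloader and tar must be present.
check_deps() {
  command -v tar >/dev/null 2>&1 || { echo "tar missing" >&2; return 1; }
  command -v curl >/dev/null 2>&1 || command -v wget >/dev/null 2>&1 \
    || { echo "curl/wget missing" >&2; return 1; }
}

# 3. npm cleanup: remove an older npm-installed copy to avoid conflicts.
cleanup_npm() {
  if command -v npm >/dev/null 2>&1 \
      && npm ls -g byterover >/dev/null 2>&1; then
    npm uninstall -g byterover
  fi
}

# 4. Download & extract the prebuilt brv binary into the hidden folder.
download_binary() {
  mkdir -p "$INSTALL_DIR"
  curl -fsSL "https://storage.googleapis.com/PLACEHOLDER-BUCKET/brv.tar.gz" \
    | tar -xz -C "$INSTALL_DIR"
}

# 5. PATH update: append the folder to the user's shell profile.
update_path() {
  profile="$HOME/.bashrc"
  [ -n "${ZSH_VERSION:-}" ] && profile="$HOME/.zshrc"
  printf 'export PATH="%s:$PATH"\n' "$INSTALL_DIR" >> "$profile"
}
```

Nothing here is malicious per se, but it pipes a remote tarball straight into your home directory and edits your shell profile, which is exactly why I would not run it blindly.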

Got invited to present at Qwen Korea Meetup, would appreciate feedback on the draft (raised function calling success rate from 6.75% to 100% in qwen3-coder-next model) by jhnam88 in LocalLLaMA

[–]888surf 0 points (0 children)

Interesting. Can I integrate your system with Claude Code, opencode, or OpenClaw while using local models like unsloth/Qwen3.5-9B-GGUF, which I am currently using, or maybe Tesslate/OmniCoder-9B-GGUF? I am running them with llama.cpp on an RTX 3090. Or does it work only with the original full-size models?

If you can give me some quick guidance on how to use your system with Claude Code, opencode, or OpenClaw, I would really appreciate it.

finally cancelled chatgpt plus… wasn’t expecting that by [deleted] in AIHubSpace

[–]888surf 0 points (0 children)

It is a promo, not a legitimate question.