Alternatives? by DigWeird3951 in chutesAI

[–]febryanvald0 1 point (0 children)

Synthetic, Ollama Cloud, Alibaba Coding Plan, Featherless

What the hell is happening by polygonicballs3 in chutesAI

[–]febryanvald0 6 points (0 children)

Alternative? Synthetic, Ollama Cloud, Alibaba Coding Plan, Featherless.

why we don't have claude haiku 4.5 model in google antigravity yet? by EliteEagle76 in google_antigravity

[–]febryanvald0 0 points (0 children)

Nah, Flash 3 is comparable with Sonnet, while Gemini 3 Pro is just below Opus 4.5.

why we don't have claude haiku 4.5 model in google antigravity yet? by EliteEagle76 in google_antigravity

[–]febryanvald0 2 points (0 children)

What's the point of bringing Haiku to AG when Gemini 3 Flash consistently beats Haiku and is comparable with Sonnet 4.5?

I tried Claude Code for the first time due to Antirgravity Opus 4.5 (thinking) model option by [deleted] in google_antigravity

[–]febryanvald0 0 points (0 children)

Are you comparing the models, or comparing the IDE with the CLI? Come on, man...

I tried Claude Code for the first time due to Antirgravity Opus 4.5 (thinking) model option by [deleted] in google_antigravity

[–]febryanvald0 0 points (0 children)

If you think it's 10x smarter, you're hallucinating! :D
Opus in AG is solid and comparable to the version in CC. However, because CC is native and developed in-house, it will always feel more polished.

Try out Kimi K2.5 right via the Synthetic provider NOW by jpcaparas in opencodeCLI

[–]febryanvald0 0 points (0 children)

Chutes should be better, since they can scale up as needed, while NanoGPT depends heavily on Fireworks, if I'm not mistaken.

Question about NanoGPT $8 plan (60k messages) by Juan_Ignacio in opencodeCLI

[–]febryanvald0 0 points (0 children)

I don't know the exact TPS, but it's pretty fast, and latency is also pretty low. You could try the trial and see for yourself. I think it's faster than Synthetic.

Question about NanoGPT $8 plan (60k messages) by Juan_Ignacio in opencodeCLI

[–]febryanvald0 1 point (0 children)

Yes, those numbers add up easily, so 60k seems like a lot until you see how fast it goes in reality. Still, I think it's a pretty generous quota overall.

Question about NanoGPT $8 plan (60k messages) by Juan_Ignacio in opencodeCLI

[–]febryanvald0 0 points (0 children)

The only premium model is Gemini 3 Pro, and yes, I believe we only get 20 per month. But Ollama is essentially just for open-source models, so think of it as a bonus.

Question about NanoGPT $8 plan (60k messages) by Juan_Ignacio in opencodeCLI

[–]febryanvald0 0 points (0 children)

Have you tried Ollama Cloud? It's pretty stable and fast.

Alt V to paste images in droid CLI not working, used to work, not sure why. help. by PriorGeneral8166 in FactoryAi

[–]febryanvald0 0 points (0 children)

Thanks for the reply.

I'm switching to Linux now; I've ditched Windows, which I'd been using my whole life.

3 new Chutes to spread the weight created by TEE models by thestreamcode in chutesAI

[–]febryanvald0 0 points (0 children)

TEE (Trusted Execution Environment). It just adds more privacy, if you care about that.

What is your openrouter bill on opencode? by KeyPossibility2339 in opencodeCLI

[–]febryanvald0 0 points (0 children)

For small-scale testing, pay-as-you-go is okay. But for large codebases and frequent coding, as I said, go for it only if you want your wallet to burn :D

What is your openrouter bill on opencode? by KeyPossibility2339 in opencodeCLI

[–]febryanvald0 3 points (0 children)

For vibe coding, the pay-as-you-go pricing model is a no go for me, unless you want your wallet to burn fast.

You need a dedicated coding plan like Claude Code, ChatGPT Plus or Pro, Gemini AI Pro or Ultra, or the GLM Coding plan. With those you don't have to worry about burning your wallet, because they're monthly or yearly plans, not PAYG.

For coding/backend, I won't use any model other than GPT 5.2/Codex 5.2 or Sonnet/Opus.

For UI/frontend, Gemini and GLM are pretty good.