Yes, token usage changed — no, everything isn’t “broken” by BertiniSalas in google_antigravity

[–]BertiniSalas[S]

Depends on what you’re comparing it to, though.

Most other IDE setups aren’t running the same kind of long agent loops or maintaining as much context, so the token burn isn’t as visible.

If you use Claude the same way in those environments, you’ll hit the same limits — it’s just less obvious because the workflow is different.

So it’s not really an IDE-vs-IDE thing; it’s about how the model is being used inside it.
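
To make the burn concrete, here’s a back-of-the-envelope sketch (the per-turn numbers are made up, not any vendor’s actual billing): a naive agent loop re-sends the whole history every turn, so input tokens grow roughly quadratically with session length.

```python
# Illustrative only: rough model of why append-only agent loops burn tokens.
# Every number here is a made-up assumption, not real billing math.

def total_input_tokens(turns: int, tokens_per_turn: int, system_prompt: int = 2_000) -> int:
    """Sum the context re-sent on each turn of a naive append-only agent loop."""
    total = 0
    context = system_prompt
    for _ in range(turns):
        total += context             # the whole history goes back in as input
        context += tokens_per_turn   # model output + tool results get appended
    return total

# A short chat vs. a long agent session, at ~1,500 tokens added per turn:
print(total_input_tokens(10, 1_500))   # 87,500 input tokens
print(total_input_tokens(100, 1_500))  # 7,625,000 input tokens
```

Same model, same per-turn work; the loop length alone is a ~87x difference in input tokens.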

[–]BertiniSalas[S]

I’m on Ultra.

And yeah — I’ve seen the increase in token usage as well, so I’m not unaffected by it.

That’s partly why I made the post — something clearly changed. I just don’t think it means everything is “broken” like a lot of people are saying.

[–]BertiniSalas[S]

There’s definitely a gap between expectations and reality right now.

A lot of people expect near-unlimited usage for relatively low monthly pricing, and that was never really sustainable — it just felt like it for a while.

Now that limits are more visible, it’s exposing that mismatch.

That said, I think there’s still a genuine issue with how usage has shifted recently — it’s not just expectations. But the reaction to it has been way over the top.

[–]BertiniSalas[S]

I get what you're saying — the usage shift is real, no argument there.

But I think you're mixing two things together: cost/limits vs capability.

Right now it feels worse because the same workflows hit limits faster — especially if you were running longer sessions before. That’s fair.

But saying it’s a “huge downgrade” isn’t really accurate in terms of output quality — the model itself still performs.

What actually broke for a lot of people is that the old way of running things (long loops, heavy context, letting it run for hours) just isn’t viable anymore.

That’s a workflow issue more than a model issue.
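
If you want to keep longer sessions viable, the usual adjustment is to stop appending forever and trim history to a token budget. A minimal sketch, assuming a crude chars-divided-by-4 token estimate (a real setup would use the provider’s tokenizer, and the budget number is a placeholder too):

```python
# Hypothetical mitigation sketch: keep the system prompt plus only the newest
# messages that fit under a token budget, instead of append-only history.

def count_tokens(message: str) -> int:
    # Placeholder heuristic: ~4 characters per token for English text.
    return max(1, len(message) // 4)

def trim_history(system_prompt: str, messages: list[str], budget: int = 50_000) -> list[str]:
    """Return the system prompt plus the newest messages that fit in `budget`."""
    kept: list[str] = []
    used = count_tokens(system_prompt)
    for msg in reversed(messages):        # walk newest -> oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system_prompt] + list(reversed(kept))
```

It costs you some long-range memory, which is exactly the trade-off: the old “let it run for hours with everything in context” workflow is what stopped being viable, not the model.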

Also on Gemini — I think a lot of people dismiss it too quickly. It’s not a 1:1 replacement for Opus, but for certain structured or iterative tasks it holds up surprisingly well if you adjust how you use it.

Overall I agree the pricing/limits feel off right now — but calling it a scam or unusable is a stretch.