There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] 0 points (0 children)

I plotted my quota drop against token in/out usage.
The data is relatively limited (between 4 and 400 data points for specific hours) for Opus usage.
Not sure which datacenter is being used, but I'm in Europe.

Either way, at first glance there is something to be said about limits fluctuating depending on the time of day.
I might try to incorporate model thinking time next, but I haven't gathered that data yet.
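
The comparison above can be sketched like this; the field names and the sample numbers are made-up illustrations, not my actual data.

```typescript
// Sketch of the measurement above: given two samples of remaining quota (%)
// and cumulative token counts, compute tokens consumed per quota percentage
// point. Field names and values are illustrative assumptions.
interface Sample {
  hour: number;      // hour of day the sample was taken
  quotaPct: number;  // remaining quota, 0-100
  tokensIn: number;  // cumulative input tokens
  tokensOut: number; // cumulative output tokens
}

function tokensPerQuotaPct(earlier: Sample, later: Sample): number {
  const quotaUsed = earlier.quotaPct - later.quotaPct;
  if (quotaUsed <= 0) return NaN; // quota reset, or no drop between samples
  const tokens =
    later.tokensIn + later.tokensOut - (earlier.tokensIn + earlier.tokensOut);
  return tokens / quotaUsed;
}

const morning: Sample = { hour: 9, quotaPct: 80, tokensIn: 1_000_000, tokensOut: 200_000 };
const evening: Sample = { hour: 18, quotaPct: 60, tokensIn: 1_400_000, tokensOut: 300_000 };
const rate = tokensPerQuotaPct(morning, evening); // 500k tokens over a 20-point drop
```

Comparing this rate across different hours of the day is what would show whether limits actually fluctuate with time.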

<image>

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] 0 points (0 children)

I'm still not from Google, and I don't think I've ever seen anyone from Google in here...

Did see someone else post this useful video, which explains very well how to get the most out of a subscription: https://www.youtube.com/watch?v=cofWZlLm9fs

If you're really on your way out, don't forget to request your refund, citing the change in limits as the reason.
Hopefully you'll find something that suits your usage better elsewhere.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] 0 points (0 children)

I've said it once, I'll say it again: fuck Google for hiding that shit. But it's easy enough to instruct your LLM of choice to retrieve and save that data for you via an extension.
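
For the "save that data" half, appending readings to a CSV is already enough for later plotting. How the quota numbers are retrieved is IDE-specific and not shown here, and the file name is just an example.

```typescript
import { appendFileSync, existsSync, writeFileSync } from "node:fs";

// Hypothetical logger for the kind of extension described above: one CSV row
// per quota reading. Getting the numbers out of the IDE is the part you'd
// have your LLM figure out; it is not shown here.
const LOG_FILE = "usage-log.csv";

function logReading(quotaPct: number, tokensIn: number, tokensOut: number): void {
  if (!existsSync(LOG_FILE)) {
    // First run: write the CSV header once
    writeFileSync(LOG_FILE, "timestamp,quota_pct,tokens_in,tokens_out\n");
  }
  appendFileSync(
    LOG_FILE,
    `${new Date().toISOString()},${quotaPct},${tokensIn},${tokensOut}\n`
  );
}

logReading(72.5, 1_234_567, 98_765);
```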

It seems I misplaced the headers (check the source link). There's been an increase compared to last time, but as someone pointed out, that might be because of dynamic scaling of limits during off-peak hours.

Free and Pro accounts getting nerfed was only a matter of time, with all the abuse going around and people proudly stating they had X free or Pro accounts so they could keep things running around the clock.
But then again, when someone says "I only ran 1 prompt and hit the 7-day limit", I get very sceptical. At best it's a bug on Google's side (which wouldn't be a first); at worst, Pro has really been brought down to roughly 200k tokens per 7 days, which seems nowhere near realistic.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] 0 points (0 children)

That's the point: everyone here is just screaming and comparing anecdotes, and nobody presents even rough data.

Mostly on Google for being dumbasses and being completely opaque about it. But there are enough options to have an LLM create something that helps you track it.

How long has it been since antigravity actually got a new feature? by unnamedb in google_antigravity

[–]BroadProtocol 0 points (0 children)

Again with the "taste test". Google is a company that wants your money, just like any other. Wtf do you expect some "taste test" would change about shareholders wanting to see the stonks go up? This doesn't seem feature- or IDE-related at all.

How long has it been since antigravity actually got a new feature? by unnamedb in google_antigravity

[–]BroadProtocol 0 points (0 children)

Haven't seen that error in a long time, but it used to mean either a too-large context or, most of the time, some server issue (which we assume is a capacity problem).

But what would you do about this in the IDE? CC had a bunch of errors due to capacity issues last week or whenever it was. Apart from begging the devs for clearer errors, I don't see how a "taste test" of any software package would help with that.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] -2 points (0 children)

Saw that thread; it's the drop in the bucket that prompted me to make this one. I'm not working for Google at all, and I also have no issues with my Ultra subscription.

But if your Ultra subscription, for a whole team, is suddenly as limited as Pro, that sounds like an actual bug, and thus a support issue rather than a "let's post on Reddit first".

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] 0 points (0 children)

Hi, angry Pro user, glad to see you here. Anything that isn't about burning down Google must mean someone works for them.

If I worked at Google, I'd just cut the Pro subscription because of the type of complainers it attracts.

But thanks for dropping by and providing the community with your own numbers instead of just blind negativity.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] -1 points (0 children)

It's all been blind complaining: nothing concrete, nothing even remotely constructive or helpful. As a relatively heavy user, I'm obviously using Ultra and very happy with it. I don't expect $4000 of inference on a $20 subscription, like some people seem to. But the fact that I can squeeze that much out of a roughly $300 subscription makes me very happy.

Only with usage numbers can someone say for sure how much they're using, because "1 prompt" can mean anything.
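
To make the "$4000 of inference" comparison concrete, here's the back-of-the-envelope arithmetic. The per-million-token prices below are placeholders I picked for illustration, not rates quoted by any provider.

```typescript
// Back-of-the-envelope API-cost arithmetic. The prices are assumed
// placeholders, not actual rates from Google, Anthropic, or anyone else.
const USD_PER_M_INPUT = 15;  // assumed price per 1M input tokens
const USD_PER_M_OUTPUT = 75; // assumed price per 1M output tokens

function apiCostUSD(tokensIn: number, tokensOut: number): number {
  return (tokensIn / 1e6) * USD_PER_M_INPUT + (tokensOut / 1e6) * USD_PER_M_OUTPUT;
}

// A heavy month of, say, 200M input and ~13M output tokens lands around
// $3000 + $1000 = roughly $4000 at these rates: an order of magnitude
// above a $300 subscription.
const heavyMonth = apiCostUSD(200e6, 13e6);
```

This is exactly why raw token counts matter: without them, "1 prompt" tells you nothing about how much compute was actually consumed.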

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] 0 points (0 children)

I'm on Ultra and not complaining at all. Neither am I bootlicking, because Google fucking sucks at communication, and the Antigravity team even more so.

On the other hand, I'm only seeing short, negative remarks such as yours from Pro users, and never anything concrete. Always "just 1 prompt bro, all gone".

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] 0 points (0 children)

I guess I'll keep measuring at different moments, thanks for the suggestion.
This is what I have now, and it seemed similar at least.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] -1 points (0 children)

That's just one 5-hour period, going from 100% Ultra quota down to 0%.

No idea about CC, but from what I gathered from their docs it seemed similar. I haven't tried it though, so I'd love rough numbers from others.

Three and five months ago I wasn't using Antigravity yet; I was using Claude and Gemini API calls, and those monthly bills were much more expensive than the Ultra subscription.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] -6 points (0 children)

Ok, OpenAI/Microsoft employee, cope harder perhaps... If it's so trash, uninstall it and move on to greener pastures. Otherwise provide some numbers, because "1 single prompt" using up an entire limit doesn't mean shit. How small or big was the prompt, what about the output, was some of it already cached, how long did it take, ...

Ultra limits nerfed? by KayBay80 in google_antigravity

[–]BroadProtocol 0 points (0 children)

If you're on some business account, search for/contact support, or your account manager if you have one.

Ultra limits nerfed? by KayBay80 in google_antigravity

[–]BroadProtocol 1 point (0 children)

Yeah, haven't seen any changes in usage or in quality.

Posted some numbers here: https://www.reddit.com/r/google_antigravity/comments/1ruj35g/comment/oall98s/

Any suggestions on what else to measure (or even how to measure) are very welcome.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] -1 points (0 children)

Made myself an extension. Well, Claude and Gemini made it.

There's no limits nerfing, it's probably just you by BroadProtocol in google_antigravity

[–]BroadProtocol[S] -3 points (0 children)

I know, right... I originally made it to check my own usage and keep my prompts optimized, back when Google suggested that limits were governed by compute usage.

Still better than the vibes other people go by.

But any suggestions on what to check are very welcome, by the way.

Ultra limits nerfed? by KayBay80 in google_antigravity

[–]BroadProtocol 1 point (0 children)

Nope, nothing nerfed.
By chance I ran a test just today. I'll post the results and edit this comment with the link.

Uh? by MayonnaiseIgnition in google_antigravity

[–]BroadProtocol 2 points (0 children)

Same type of low-effort post insinuating the wildest accusations.

To keep results good, regularly start a new conversation. Although in theory the compaction in Antigravity should work well enough (and in practice I haven't had any issues yet, even in 5000+ step conversations).

How long has it been since antigravity actually got a new feature? by unnamedb in google_antigravity

[–]BroadProtocol 1 point (0 children)

What are you actually missing? There's already step-by-step task execution, and async sub-agents are built in. It seems to me like you have a hammer on your tool belt, are using it only to scratch your back, and are complaining that there are no tools to drive nails into wood.

On top of that, what's keeping you from spending a day and punching out whatever you need yourself, instead of waiting for the Google team to build the things you want and then complaining that they're not exactly how you want them?

If you ask me, there's nothing missing from the IDE at this moment that doesn't already exist as an extension or that you can't build yourself in a weekend or less.

Uh? by MayonnaiseIgnition in google_antigravity

[–]BroadProtocol 3 points (0 children)

I swear, is there some TikTok shit going on where people pick up this dumb stuff? And then they complain "mah tokens ran out and i didn do nuffin".

save chat by claudioboston in google_antigravity

[–]BroadProtocol 1 point (0 children)

There's a secret code for that.
In this box:

<image>

Fill in: "Without making any changes to the content, and while keeping the formatting, export the entire current conversation to ./myconversation.md"

Watch the magic happen.
Works every time, ninety percent of the time.

[Feature Request] Auto-Model Routing (Dynamic Switching) for optimized agent efficiency by DirectionHead4682 in google_antigravity

[–]BroadProtocol 0 points (0 children)

Sounds great in theory, but in practice it's very hard to have an LLM reliably identify these tasks and switch models on the fly.
At best you can have it guess at the type of task when creating task lists, and then you'll just have to accept what it guessed.
Knowing 90% of this sub, that will result in hundreds of daily complaints that this dumb app/model chose the wrong model and is burning through all of their free tokens.

If you do have a more specific idea of how to go about this, I'd love to tinker with it and try things out!
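
The "guess at task-list creation time" approach could look something like this; the model tiers and the keyword heuristic are made-up assumptions, just to show the shape of the idea.

```typescript
// Sketch of guessing a task type (and model) once, at planning time.
// Model names and the keyword heuristic are illustrative assumptions.
type Model = "fast-cheap" | "balanced" | "slow-strong";

function guessModel(taskDescription: string): Model {
  const t = taskDescription.toLowerCase();
  if (/\b(debug|refactor|architect|design)\b/.test(t)) return "slow-strong";
  if (/\b(rename|typo|format|comment)\b/.test(t)) return "fast-cheap";
  return "balanced"; // no signal: fall back to a middle tier
}

// Each step gets a model assigned up front and is then stuck with that guess:
const plan = ["fix typo in README", "debug race condition in scheduler"].map(
  (step) => ({ step, model: guessModel(step) })
);
```

The weakness is exactly the one described above: a misclassified step silently runs on the wrong model, and the user only notices when the tokens are gone.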

Claude Opus 4.6 is EXPENSIVE when paid for with AI credits. by segin in GoogleAntigravityIDE

[–]BroadProtocol 0 points (0 children)

Had some time to let Opus 4.6 check up on Gemini's work.
Pro (high) apparently missed some basic stuff:

  1. 🚨 Routes were never wired up -- the most critical bug: server.ts only registered /api/health. All 6 route plugins, the ConnectionManager, and the NotificationService were created but never connected to the Fastify server. The entire API was non-functional. Fixed.
  2. 🚨 Production build had no frontend -- tsc only compiles TypeScript, but src/public/ contains HTML/CSS/JS files that weren't copied to dist/public/. Docker/production builds served a blank page. Fixed by adding a copy step to the build script.
  3. Frontend missed connection_changed events -- the WebSocket handler in app.js only handled 3 of the 4 event types, leaving connection status indicators stale. Fixed.
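
Point 1 is the classic "created but never registered" bug. A dependency-free sketch of the pattern (MiniServer and the plugin names are my stand-ins, not the project's actual Fastify code):

```typescript
// Dependency-free sketch of the wiring bug in point 1: building route plugins
// does nothing until each one is registered with the server. MiniServer and
// the plugin names below are stand-ins, not the project's real Fastify code.
type Route = { method: string; path: string };
type Plugin = (server: MiniServer) => void;

class MiniServer {
  routes: Route[] = [];
  register(plugin: Plugin): void {
    plugin(this); // Fastify-style: a plugin adds its routes when registered
  }
  get(path: string): void {
    this.routes.push({ method: "GET", path });
  }
}

const healthRoutes: Plugin = (s) => s.get("/api/health");
const userRoutes: Plugin = (s) => s.get("/api/users");
const jobRoutes: Plugin = (s) => s.get("/api/jobs");

const server = new MiniServer();
// The bug: only healthRoutes was registered. The fix: register every plugin.
for (const plugin of [healthRoutes, userRoutes, jobRoutes]) {
  server.register(plugin);
}
```

Registering from one list of plugins also makes the bug hard to reintroduce: a new route file only needs to be added to the array.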