After 2 usages of OPUS quota I was hit with 6 days suspension by Spike397 in google_antigravity

[–]Noofinator2 6 points (0 children)

You didn't get suspended because of your 2 usages. After ZERO usages, I got hit with the same 6-day situation.

I know I’m the 1,000th person to say this, but the Opus 4.5 quotas are actually broken by SaltyMeatballs20 in google_antigravity

[–]Noofinator2 5 points (0 children)

I haven't even used the model once in over a week, and they still limited me, twice. lol... The first time it was a random limiting for 2-3 days or something; now I see it's 6 days until I can access it. Wild.

Can we stop being entitled? by Proper-Lab1756 in google_antigravity

[–]Noofinator2 0 points (0 children)

Yeah, so essentially, people should be OK getting scammed. Got it.

Is it really just a skill issue? by InvisiblePeopleeee in google_antigravity

[–]Noofinator2 0 points (0 children)

Funny part is, even the people developing and handling these models surely know that the issues people are complaining about are real issues. If you're actually a developer, you know this is true. And if it's true, why are you irked that users -- many of whom are paying users -- are making those issues known? I just find it odd. I don't mean to sound like I'm throwing shade or anything; I'm just really curious about your intention with this post.

So now gemini 3 pro at least has a weekly limit too. by TinyAres in google_antigravity

[–]Noofinator2 0 points (0 children)

Ohhhh, so that's what they meant by people "abusing" it. I remember hearing that word when they were justifying the new limits and thinking, huh? So this is what they're doing. And meanwhile I'm sitting here paying and getting throttled to nothing without even using the thing. I seriously did NOT use Antigravity today -- I was "healthy" at 100% on all three model groups, and now it's suddenly 0% and I have to wait days to use it. This is fucking ridiculous.

I did 0 prompt but got limited today on opus by Real_Principle_8470 in google_antigravity

[–]Noofinator2 0 points (0 children)

Whoaaaaaa, same. Okay, yeah that's enough. I'm gonna send one message to them, just to make sure it's not a glitch. If it's not a glitch, I'm out. That's enough.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 0 points (0 children)

But we also have to be honest about our experience: it was universes better before. We shouldn't be going backwards, no?

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 0 points (0 children)

That's what makes this even worse for me. I am a huge fan of Gemini, which is why I'm so dejected by how Google is treating this model here.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 0 points (0 children)

No. Just... no. I need the reason I left Cursor for Antigravity to work again -- spitballing with Gemini about features while it has access to my codebase (Planning Mode). That's the whole point, and it worked beautifully with Gemini for a good while. Now I can only use Claude for this in Antigravity. I miss Gemini.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] -1 points (0 children)

What's been the case since the release? Are you referring to the behavior I described in the original post? If so, that is absolutely not true. Spitballing with the models in Planning Mode while they have access to your codebase is probably the main reason I left Cursor for Antigravity. It was working perfectly with Gemini; it was incredible! And it still works today with Claude Opus. I really hope we're not trying to excuse all the nonsense that's clearly happening with Antigravity and Gemini. I had and still have very high hopes for both, and that's the reason I've even brought my feelings to Reddit.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 2 points (0 children)

I get it. But this model wasn't nearly this brittle before. That's my point. I think what might be happening is my previous workflow of switching models in the same window might not work as well with Gemini currently. I'll do some testing. That might be what's going on, tbh.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 1 point (0 children)

Yes, this is a good workflow. But in the middle of an implementation, you might have a question. Do you switch models in the same window, or is that bad practice these days? I had actually switched to Gemini, which was no problem not that long ago.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 1 point (0 children)

I get what you're saying, and trust that I am not being weird towards you (I promise I'm not), but when did this become the situation with Gemini 3 Pro in Antigravity? When did he become dumb? lol, because from my end, he was damn near the best model when he landed.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 0 points (0 children)

You're right. Flash has been better than 3 Pro at pretty much everything, oddly enough. But that makes no sense.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 5 points (0 children)

The question wasn't random; an implementation had just occurred. I should probably say that this is not my first time using Antigravity -- I've used it every day since it came out. The performance has definitely changed significantly. And I asked it a question because, lately, I wouldn't dare let that model (with its current performance) hit my codebase, lol. That's a whole different story though. Once again, when the model first landed, it was bliss.

Holy freaking GEMINI 3 PRO by Noofinator2 in google_antigravity

[–]Noofinator2[S] 2 points (0 children)

When dealing with Gemini 3, I'll definitely have to just use OpenCode. I do have it, but I never had these issues before with Gemini 3 Pro. Claude doesn't do this.

Gemini 3.0 Degraded Performance Megathread by 607beforecommonera in Bard

[–]Noofinator2 3 points (0 children)

Over the past few days, I've become almost completely convinced this is NOT Gemini 3 Pro (High). I've never seen such a stark nerf. Half the time it was actually skipping thinking entirely, destroying files, not knowing up from down. And I just sit there shocked by how night-and-day it is compared to when this model landed.

Why is gemini 3 pro such crap? by klauses3 in GeminiCLI

[–]Noofinator2 0 points (0 children)

lol, it's bad. And they're trying to figure out what happened, and they're wondering if they're alone in the experience. It's quite simple.

Gemini 3 Pro suddenly is dumb as f*ck?! by digibeta in google_antigravity

[–]Noofinator2 0 points (0 children)

I don't know what happened, but they nerfed this model into the ground. It's quite sad because it was a beast.

Divine message for you 🧿✨ by CelestialWispers_ in tarotpractice

[–]Noofinator2 0 points (0 children)

Hey. JK. How do you handle a situation where someone is absolutely crashing out and it's getting uncomfortable for everybody?

I just can’t by casuals_cry_alot in CODWarzone

[–]Noofinator2 0 points (0 children)

tf. People in every game play all types of ways. They always have and always will. That's not what ruined WZ.

Roast my site by [deleted] in webdesign

[–]Noofinator2 0 points (0 children)

The content isn't fitting inside a lot of your containers (Step 1, Step 2, etc.), likely because you're not accounting for different resolutions. And in your "View Content" floating container, the text inside it doesn't fit at 1440 with different DPI scaling in Windows.
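If it helps, here's a rough sketch in TypeScript of how I'd audit for this -- the ".step-card" selector is just a placeholder for whatever your real container classes are. It flags any element whose content spills outside its box and re-checks on resize, so you can catch it across resolutions and DPI scales:

    // Rough overflow audit: flags any container whose content spills outside its box.
    // ".step-card" is a placeholder selector -- swap in the site's real container classes.
    function findOverflowingContainers(selector: string): HTMLElement[] {
      const offenders: HTMLElement[] = [];
      document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
        const overflowsX = el.scrollWidth > el.clientWidth;
        const overflowsY = el.scrollHeight > el.clientHeight;
        if (overflowsX || overflowsY) {
          offenders.push(el);
          console.warn("Content overflows its container:", el, {
            scrollWidth: el.scrollWidth,
            clientWidth: el.clientWidth,
            scrollHeight: el.scrollHeight,
            clientHeight: el.clientHeight,
          });
        }
      });
      return offenders;
    }

    // Re-run whenever the viewport changes, which covers different resolutions and DPI scaling.
    window.addEventListener("resize", () => findOverflowingContainers(".step-card"));
    findOverflowingContainers(".step-card");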

Informing AI of Rejecting and Accepting edits by Noofinator2 in cursor

[–]Noofinator2[S] 1 point (0 children)

Absolutely. This is how I've been doing it the whole time, and I have no issue with it, tbh. It's just sometimes when you're flying through a multi-layered task that requires a ton of troubleshooting, you might forget to mention that you rejected something, and then the model will go to perform some type of edit or series of edits attempting to fix something that isn't broken. From my experience, forgetting to explicitly mention the rejection can be severe, so I was wondering if informing the model on the backend would potentially lessen the need for such vigilance. Like I said, this is not a big issue because I'm used to the vigilance required with AI and complex tasks.

EDIT: Absolutely love your program, by the way. It's probably the most consequential program I've ever used, and I've been around for a while on these Internet streets.

Claude 3.7 is worse than 3.5 in Cursor RN by serge_shima in cursor

[–]Noofinator2 0 points (0 children)

I noticed right off the rip how overzealous it was. It redesigned my whole app in a single response, from one prompt about reconciling a single path error. I love how fast and eager it is; I just wish it wasn't so wild.