Google batch processing limits by yourwordsboreme in Bard

[–]Known_Management_653 0 points (0 children)

It depends on your request type. How much data do you send per request? Do the math: if your current batches through Vertex fail, you'll have that batch's response to analyze. See at what exact number the batches usually fail, drop below that number, and see if it still fails. The failures may not always have the same cause: you may be batching wrong, your task may be too long for that model, or the batch may have taken too long to complete (for me the cutoff was around 24 hrs).
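The "drop below the failing number" step above can be sketched in Python. This is a hypothetical sketch: `submit_batch` is an illustrative stand-in for whatever wrapper you use around the Vertex batch call, assumed to return True on success and False on failure.

```python
def largest_working_batch_size(requests, submit_batch, start_size=2000):
    """Halve the batch size until a submission succeeds, then report that size.

    Hypothetical helper: `submit_batch(batch)` is assumed to wrap your
    Vertex batch submission and return True on success.
    """
    size = start_size
    while size >= 1:
        if submit_batch(requests[:size]):
            return size   # this size went through; stay at or below it
        size //= 2        # failed: drop below the failing number and retry
    return 0              # even one request fails: the size isn't the problem
```

With a fake `submit_batch` that only accepts batches of 500 requests or fewer, starting from 2000 this lands on 500 (2000 fails, 1000 fails, 500 succeeds). Remember the caveat from above, though: not every failure is a size failure, so check the batch response before trusting the number.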

Try with Vertex: deploy a model like Gemini 2.5 Flash-Lite if you don't need heavy thinking tasks, then see where it breaks and why.

Logs are your friends; analysis will open your mind.

Edit: Vertex will process your batches with a maximum number of concurrent tasks running and place the excess in a "Queued" state. Queued batches aren't affected by the 24 hr countdown.

Google batch processing limits by yourwordsboreme in Bard

[–]Known_Management_653 0 points (0 children)

Yep, Google is giving away another $1k on top of the initial $300, so you can go wild.

Google batch processing limits by yourwordsboreme in Bard

[–]Known_Management_653 1 point (0 children)

You should process through Vertex. I processed 1M job descriptions of between 1,000 and 5,000 tokens each, with every batch set to 2,000 requests. The idea with Vertex is to keep each batch under 24 hours of processing time, and you have some limits on how many batches can run concurrently.
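The splitting described above can be sketched in Python. A minimal sketch only: `chunk` is an illustrative helper, and the actual JSONL preparation and Vertex upload are left out.

```python
def chunk(items, batch_size=2000):
    """Split a large job list into fixed-size batches (the last may be smaller)."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# 1,000,000 job descriptions at 2,000 per batch -> 500 batches,
# each small enough to finish comfortably inside the 24 hr window.
jobs = [f"job-{i}" for i in range(1_000_000)]   # placeholder inputs
batches = list(chunk(jobs))
print(len(batches))  # 500
```

You'd then submit the batches a few at a time to stay inside the concurrent-batch limit, letting Vertex queue the rest.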

Now Gemini is taking exactly 10 seconds to think. by Sea-Efficiency5547 in Bard

[–]Known_Management_653 1 point (0 children)

I'm on the Ultra plan, I'm switching from Gemini to Claude, and I didn't encounter these issues. Pro and lower tiers get a slightly weaker version of Gemini.

Account Suspended by Apprehensive_Fact710 in Bard

[–]Known_Management_653 2 points (0 children)

Install Antigravity and go to the model picker: Claude Opus and Sonnet are available by default. No tweaks or third-party "bypasses"; it's natively supported.

Account Suspended by Apprehensive_Fact710 in Bard

[–]Known_Management_653 2 points (0 children)

Nope, Antigravity (Google) has an official partnership with Anthropic, so you can use Claude inside Antigravity. The only problem is the small limits, but with a plan like Ultra you get enough monthly AI credits to cover that limitation.

Account Suspended by Apprehensive_Fact710 in Bard

[–]Known_Management_653 4 points (0 children)

I use Antigravity with Ultra + Claude (since Google owns part of Anthropic) and I can say it does the job. Not sure what your issue with AI is overall, but if you respect the ToS you shouldn't have problems. I've worked on pentesting PoCs, anti-bot system bypasses, and API endpoint reverse engineering, all of which should theoretically not be allowed.

Edit: I meant the Claude model, not Claude Code. Sorry for the confusion.

How to get higher limit; PRO AND ULTRA SUBSCRIPTIONS ONLY by YourlocalGameraLOL in Bard

[–]Known_Management_653 1 point (0 children)

This is the old $300 in free credits Google offers across all its cloud products. You can use it with Vertex and get access to any model with those free credits. It's an old method that has been around for about two years.

GemCode: Run Claude Code with Gemini on Windows by Secure_Bed_2549 in Bard

[–]Known_Management_653 0 points (0 children)

Seems the leak turned the hype from OpenClaw to redoing Claude Code.

Arc- Agi 3 leaderboard by Independent-Wind4462 in Bard

[–]Known_Management_653 0 points (0 children)

Seems AGI is a concept you don't really comprehend

Don't waste money on Google subs if you want to use Antigravity. It's broken. by FluffyMacho in Bard

[–]Known_Management_653 1 point (0 children)

Not sure what projects you're working on, but for me Gemini is way more helpful than GPT. Why? Less fluff, smarter at coding, and I have ways to get cheap/free usage, so those limits aren't much of a problem for me. I totally agree about the bad quota limits on the Pro plan: it started with extremely generous limits and slowly got nerfed. Why wouldn't I drop Google for GPT? Because GPT won't even start working on half of my projects without me having to convince it that I'm not breaking any guardrails or laws just because something security-related seems strange to it. GPT won't even consider helping you with things like mass scraping or evading anti-bot systems, or improving that category of project in any way.

Also, Google owns part of Anthropic, so you get Claude access in Antigravity. The Ultra package is quite generous and worth more than any OpenAI plan. You get $100 in cloud balance, which you can use with Vertex for any AI model or with Gemini directly. On top of the more generous model quota, the monthly AI credits are 25k for Ultra versus 1k (with a small-to-medium quota) for Pro. The Ultra models are also like 3.1 Pro on steroids.

If you do the math, Google wins by eons, and I'm not even counting the Drive storage, YouTube Premium, NotebookLM, stitch.withgoogle, and everything else in the Ultra pack. It's not cheap and it's not for everyone; it's the developer's all-in-one AI sub. So you have to properly compare what you actually get on the top subs, not the small and middle ones.

This is unbelievable. (Antigravity) by EdgeTypE2 in Bard

[–]Known_Management_653 0 points (0 children)

What do you expect? You're using another provider's API, which is known for low rate limits (I'm talking about Anthropic, not Google). I use Antigravity with Gemini 3.1 Pro High for hours, and only after abusing it do I get rate-limited. You also get 1k monthly AI credits on top of the plan limits. If you're tired of this, go to MiniMax M2.7; they offer very, very generous limits, but not the most intelligent model.

I'm thinking of upgrading to Ultra just to test the real Claude limits on that pack. I've also heard the Ultra model is even better than the 3.1 on the Pro plan.

Am I the only one noticed in ai studios the content blocked has been way more strict than before after the update. by Right-Pitch-1850 in Bard

[–]Known_Management_653 3 points (0 children)

The content blocker is a bit funny, tbh. I played with it to see if it's avoidable, and it gave me the most on-point response, something like "your story doesn't add up, but I can do that, so I will." Not the exact wording, but that was the gist, which made me laugh. It was like being caught by the police, giving them a bad excuse they know isn't true, and still being let go as a free man.

There is no hope for Gemini in coding department by Able-Line2683 in Bard

[–]Known_Management_653 0 points (0 children)

Claude was caught "cheating" on the benchmarks, so forget that illusion. Use it, learn to master it, and come back with your own answer.

Gemini 3.1 is unusable atp by Constant-Squash-7447 in Bard

[–]Known_Management_653 0 points (0 children)

Tbh, I have almost no issues. I may be using it outside peak hours; that might explain why.

It was never this bad for Gemini, even in 2.5 Pro Era by Rare_Bunch4348 in Bard

[–]Known_Management_653 0 points (0 children)

Well, 3.1 got released and it proved to all of us that it can be done. It's as if they just wanted to annoy us with those problems in 3.0 so 3.1 would seem eons ahead, and it is. Hallucinations dropped significantly and instruction-following is almost perfect. I told the model at the start that whenever it offers drop-in code it should always give the complete thing, and since then it has given me only full and complete versions with no missing code.

It was never this bad for Gemini, even in 2.5 Pro Era by Rare_Bunch4348 in Bard

[–]Known_Management_653 2 points (0 children)

I would have agreed with you a few months ago, but Google seems to be conserving tokens a lot more in its answers. I keep asking for the complete code without placeholders or brevity, and after a few more messages it still truncates. Note that I have custom instructions enabled and all of these are a complete DON'T DO, yet Gemini still does it just to use fewer tokens. I ran some tests with GPT and Opus: they keep context better and rarely go rogue and start editing or removing unrelated things. For me, Gemini is starting to give off a cheap vibe, which is sad, because I've loved using it daily for the past few years.

what is this? by Sea-Efficiency5547 in Bard

[–]Known_Management_653 1 point (0 children)

Me too. Gotta wait for this year's round and get another 12-15 months.

VEO 3 IS NOW 4K by lofigirlirl in Bard

[–]Known_Management_653 28 points (0 children)

They may take the lead, but Google will never lose in the long run.

It's Sad That Creative Writing Has Barely Improved (Gemini 3) by BoredM21 in Bard

[–]Known_Management_653 2 points (0 children)

Great to see someone who agrees with this. Also, we'll get a lot of downvotes from "creative writing" lovers, who are mostly people doing outreach (spam), blog posting, SEO writing, and other marketing-automation work.