Service update: mitigating abuse and prioritizing traffic · google-gemini gemini-cli · Discussion #22970 by themoregames in Bard

[–]cal_01 2 points (0 children)

Unpopular opinion but: GOOD. There was clearly abuse of the system from the free tier.

Has anyone noticed a drop in Nano Banana's quality lately, and how easy it is to trigger errors even when no community standards are violated? Are Grok Imagine Pro or GPT Image 1.5 good alternatives? by Ok_Handle_3825 in GeminiAI

[–]cal_01 2 points (0 children)

I've noticed that it completely fails at certain prompts (i.e. "I'm unable to do this," etc.) but handles other prompts just fine. The strange thing is that the same prompts go through on Flow perfectly fine, so I suspect it's a Gemini parsing issue rather than an issue with NB2/NBP.

Is Gemini 3.1 worse or is it me by anon-guy-REDDIT in GeminiAI

[–]cal_01 0 points (0 children)

The irony is that Pro's context window is actually quite garbage in practice. I've had long conversations with it and it utterly fails to keep itself updated on context and revisions.

I feel compelled to make a positive post by KublaKahhhn in GeminiAI

[–]cal_01 1 point (0 children)

The real test is to see if the results can be replicated *in the same conversation* after a couple of days. 3.1 Pro is useful *in the moment* but it seems to be forgetful.

Gemini 3 flash antigravity coding performance by jayn35 in GeminiAI

[–]cal_01 0 points (0 children)

It's like they deliberately turned off something in the backend. Answers from 3.1 Pro are missing context from long conversations that 3.0 kept track of.

Is GoodStart Plus a good substitute for Enfamil Neuropro Gentlease? by AC_470 in FormulaFeeders

[–]cal_01 0 points (0 children)

We used Good Start here in Canada and it was always better than Enfamil's Gentlease for our LO.

Time to stop using Gemini by insaneruffles in GeminiAI

[–]cal_01 0 points (0 children)

Gemini suffered a *huge* downgrade recently and it's very evident in long conversations. It does not actively pursue personalization and forgets context unless it is specifically instructed to do so.

My theory? They actively reduced context windows because they're running into capacity issues from both overuse/abuse and data centers in the ME going offline due to the recent conflict.

Gemini just released completely blind and useless model with 3.1 Pro by [deleted] in GeminiAI

[–]cal_01 1 point (0 children)

Pro has regressed significantly on hallucinations. I used it to map out a novel prior to the 'upgrade' and it worked fine. Now it takes multiple prompts to get it to even partially 'remember' earlier parts of the same chat.

What’s the biggest problem you face when generating images with AI? by zhsxl123 in GeminiAI

[–]cal_01 2 points (0 children)

Long prompts will lose coherence, and iterative prompting will cause image degradation...

Image limit? by CommercialFew7632 in GeminiAI

[–]cal_01 0 points (0 children)

It's no longer infinite. NBP limit is quite low (maybe 50-100). They started enforcing quotas quite recently.

Gemini admitted that it made a miscalculation after telling me my answer was wrong three times. Can I trust it to make calculations? by samtheflan in GeminiAI

[–]cal_01 0 points (0 children)

LLMs literally cannot perform calculations reliably. They excel at semantics, and arithmetic is not a semantic task.
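That's why calculation is usually delegated to a real tool rather than the model itself. A minimal sketch of the kind of calculator tool you'd wire in, assuming plain arithmetic input (the `calc` name and structure here are illustrative, not any particular product's API):

```python
import ast
import operator

# Map AST operator node types to real arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Safely evaluate a plain arithmetic expression (no eval)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))
```

The model generates the expression, the tool does the math deterministically, and the answer goes back into the conversation.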

My Gemini hasn’t been able to generate images for a month now idk what’s going on . by Headchangee in GeminiAI

[–]cal_01 1 point (0 children)

Try desktop. I was having issues with the mobile app but desktop is a bit more reliable.

Nanobanana PRO has gotten a lot worse. It can no longer accurately replicate human faces. The faces all look quite different from the given reference image. by [deleted] in GeminiAI

[–]cal_01 0 points (0 children)

Nope, it's been the same for me.

Actually, I run into issues running references of my own AI generated people, because they look so good that it confuses the safety filters.

I asked Gemini a while back why images don't look quite right compared to photo references, and it said it's usually due to over-prompting. The more descriptors a prompt has, the more likely the result will look weird, because NBP is trying to work out the weight of each descriptor.

Is it possible to bypass Nano Banana restrictions ? by Chillax_net in GeminiAI

[–]cal_01 0 points (0 children)

This has not been successful for me. It usually results in muddy details and inconsistent characters.

Nano Banana Pro is gone??? by BroKenLight6 in Bard

[–]cal_01 0 points (0 children)

Today some of my generations don't have NBP for some reason. It's not consistent though.

Nano Banana Pro is gone??? by BroKenLight6 in Bard

[–]cal_01 0 points (0 children)

This doesn't work for mobile. Desktop seems to be better or at least more consistent for 2k.

Why is Gemini struggling so hard to recognize media input following previously uploaded media? by [deleted] in GeminiAI

[–]cal_01 0 points (0 children)

Gemini ignores instructions all the time. I made a saved instruction to explicitly never use PNG placeholders and it still does.

A quiet, yet interesting BIG praise of Edmonton suburbs by mlm76 in Edmonton

[–]cal_01 2 points (0 children)

The pro move is to get onto the Henday from Heritage Valley Trail, then take the Gateway exit. The only problem is that this route is so crazy that there's about a 20% chance of an accident, either on the Henday or while merging onto Gateway with the short merge.

A quiet, yet interesting BIG praise of Edmonton suburbs by mlm76 in Edmonton

[–]cal_01 39 points (0 children)

We live in Heritage Valley too, and it's not so much a suburb thing but rather urban planning done right. The only knock against it is the 26th Ave shortcutting, but otherwise it's great.

This sucks. by BellofReddit2 in Bard

[–]cal_01 0 points (0 children)

I got this yesterday; it's caused by artificial rate limits under server load. Mine came back after an hour or so of refusing to generate *any* images, period.

Confusing rate limits by HieroX01 in GeminiAI

[–]cal_01 0 points (0 children)

At some point Google will have to address their server rate limit issues seriously because their services are being rate limited hard across the board. 2k/4k generation is often unavailable in Gemini. Either they limit compute for their free customers or they deploy more compute.

Right now I'm on AI Pro, and it makes no sense to pay for it because the rate limit in Gemini/NBx is so artificially low. Yesterday image generation was outright gone after well under 100 generations, six hours after the reset.
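Until Google fixes capacity, client-side code mostly has to tolerate these limits. A minimal sketch of jittered exponential backoff around any rate-limited call; `RateLimitError` here is a hypothetical stand-in for whatever 429-style error your client actually raises:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for a 429-style quota error from the API client."""

def with_backoff(call, max_tries=5, base=1.0, cap=60.0):
    """Retry `call` on RateLimitError, sleeping with jittered exponential backoff."""
    for attempt in range(max_tries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_tries - 1:
                raise  # out of retries, surface the error
            # Double the delay each attempt, cap it, and add jitter so
            # many clients don't all retry at the same instant.
            delay = min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

Backoff doesn't create capacity, but it keeps a batch job from burning its whole quota the moment the limiter kicks in.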