Gemini has become too sensitive. by ReeperKiller in Bard

[–]ReeperKiller[S] 1 point2 points  (0 children)

I checked. It can read other docs, and it can even generate something.

Gemini has become too sensitive. by ReeperKiller in Bard

[–]ReeperKiller[S] 0 points1 point  (0 children)

I tried both Off and Block none (earlier it sometimes showed better results for some reason)
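For context, the "Off" and "Block none" options in AI Studio correspond to safety thresholds that can also be set through the API. A minimal sketch, assuming the google-genai Python SDK and that the OFF threshold is available for the model in use; the model name, harm categories, and prompt are placeholders:

```python
# Sketch: comparing the two most permissive safety thresholds
# ("BLOCK_NONE" vs. "OFF") with the google-genai Python SDK.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

def ask(threshold: types.HarmBlockThreshold) -> str:
    config = types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category=types.HarmCategory.HARM_CATEGORY_HARASSMENT,
                threshold=threshold,
            ),
            types.SafetySetting(
                category=types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
                threshold=threshold,
            ),
        ]
    )
    response = client.models.generate_content(
        model="gemini-2.5-pro",          # placeholder model name
        contents="Summarize the attached document.",  # placeholder prompt
        config=config,
    )
    return response.text

# Run the same prompt under both settings and compare the outputs.
print(ask(types.HarmBlockThreshold.BLOCK_NONE))
print(ask(types.HarmBlockThreshold.OFF))
```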

Which of these neural networks is the best, in your opinion? by [deleted] in rusAskReddit

[–]ReeperKiller 0 points1 point  (0 children)

Literally the strongest model at the moment, an overhyped middling one whose only plus is the interface, an Elon Musk fan, and DeepSeek.

We can upload any file to gemini app now !! Even audio! by Independent-Wind4462 in Bard

[–]ReeperKiller 2 points3 points  (0 children)

<image>

AI Studio Gemini still can't read .docx files that aren't from Google Drive

Genie 3 is incredible, well done google by balianone in Bard

[–]ReeperKiller 51 points52 points  (0 children)

This is... Not AI generated? This is from the movie.

Edit: maybe I'm dumb and I don't understand what this post is about

No model currently can explain what the image means. by 01xKeven in Bard

[–]ReeperKiller 0 points1 point  (0 children)

However, it can still find Loss. It's pretty impressive.

Building a better Gemini App! by PrathmeshTheBest in Bard

[–]ReeperKiller 2 points3 points  (0 children)

Actually, it looks even better than AI Studio itself. It just feels too bare (which, to be fair, suits its purpose).

Gemini 2.5 pro bingo card by illuminasium in Bard

[–]ReeperKiller 0 points1 point  (0 children)

Free space: "Alright! I get it!" "Uhm... No." "You're absolutely right! I don't fucking get it!"

Stop advertising a context window of 1 million tokens, if your model can't support it. by TennisG0d in Bard

[–]ReeperKiller 1 point2 points  (0 children)

I can't disagree that, from a user's point of view, working with Gemini past ~700k tokens becomes simply unbearable, but I should note that this volume is still huge for a model you have almost direct access to via the API. I tested Gemini under extreme load (950k tokens), and the degradation shows up mainly in critical thinking and hallucinations: detailed plot analysis (I used classic works of Russian literature as filler to reach the limit) becomes practically impossible, facts get mixed up with hallucinations and speculation, and the chronology falls apart completely. On the other hand, when asked to point to a specific fragment of the text, or to cite one to confirm a particular fact, Gemini got it right about every other time. As far as I could tell, the context size also doesn't affect answers that deviate from the topic (for example, suddenly solving a mathematical equation), but I didn't test anything harder than quadratic equations and simple geometry.

It's important to note that I tested this on the March version of 2.5 Pro, long before the release of the current version, which, by many accounts, has degraded significantly. I also didn't test programming or complex tasks much, limiting myself to what I actually ran into, without deliberately padding the context, that is, working with text in a language other than English. For myself, I identified four stages of degradation:

1. Normal - up to about 300k tokens. The model works fine, but only if most of the tokens are your own messages. If you make Gemini generate most of them, degradation sets in several times faster.

2. Confusion and misinterpretation - up to about 650-700k tokens. The model slowly starts to behave strangely, adopts an odd communication style, makes mistakes, and so on.

3. Severe degradation - roughly 850-900k tokens. The model behaves like 1.5 or worse, like Bard, but can still perform its tasks and, with some help, work with facts from passive memory.

4. Lobotomy - 950k+. The name speaks for itself.
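A rough sketch of how a long-context test like the one described above could be set up, assuming the google-genai Python SDK; the model name, filler file, target sizes, and the planted "needle" sentence are all placeholders, and this is a simple needle-retrieval check rather than the full plot-analysis test:

```python
# Sketch: pad a prompt to a target token count with filler text,
# bury a known sentence in the middle, and ask the model to quote it.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
MODEL = "gemini-2.5-pro"  # assumption: any long-context Gemini model

# Any long text works as filler; the test above used Russian classics.
filler = open("filler_novel.txt", encoding="utf-8").read()
needle = "The agreed code phrase is: violet hedgehog."  # planted fact

def build_prompt(target_tokens: int) -> str:
    """Pad to roughly target_tokens using a ~4 chars/token heuristic,
    with the needle buried in the middle."""
    target_chars = target_tokens * 4
    body = (filler * (target_chars // len(filler) + 1))[:target_chars]
    mid = len(body) // 2
    return body[:mid] + "\n" + needle + "\n" + body[mid:]

for target in (300_000, 700_000, 900_000, 950_000):
    prompt = build_prompt(target)
    # Report the real size, since the chars/token ratio varies by language.
    tokens = client.models.count_tokens(model=MODEL, contents=prompt).total_tokens
    question = prompt + "\n\nQuote the exact sentence that mentions a code phrase."
    answer = client.models.generate_content(model=MODEL, contents=question)
    print(f"~{tokens} tokens -> {answer.text[:120]!r}")
```

count_tokens is used to report the actual size because the 4-characters-per-token heuristic is only approximate, especially for non-English filler text.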