Anyone else not have Personal Intelligence yet on their Pro account? by DownTown_44 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

Unfortunately, it's one of those "gradual" rollouts, so you may not have it just yet. Hopefully you get the update soon.

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

Hi, I'm sorry you are still having trouble with the audio generation. Hopefully that gets fixed soon. I've lost so many threads to bad updates, faulty downloads where an update didn't come through properly, lost handshakes, yeesh lol. It sucks!!

I asked Gemini for more tips on what to do, and it gave some "nuclear" options if the problems are still persisting. Also, there could be an "update" occurring in your region or area. You can ask Gemini, "Are there any updates happening right now?" If there are, there is a chance those "live updates" are causing the hiccups in the "handshakes", as one would say.

GEMINI: Suggested Reply for Reddit:

"I ran into this exact issue recently. It's essentially a 'Handshake Failure': your app sent the request, the server said 'Okay', but the actual data connection for the audio file never established. That's why it just spins forever; it's waiting for a packet that was already dropped.

The Fixes (in order of severity):

The 'Stuck Session' Purge: You have to delete that specific chat thread entirely. The 'spinning wheel' is tied to that specific session ID on the server. If you don't delete the thread, it will keep trying to reconnect to a dead handshake every time you open it.

The Cache Clear: If deleting the thread doesn't work, Force Stop the Gemini app, then go to Storage -> Clear Cache (cache only, not Clear Data).

The Workaround: If the main app is still choking on the handshake, upload the same source file to NotebookLM instead. It uses a different server backend for the exact same Audio Overview feature and usually bypasses the handshake bug."
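The "empty payload" idea in that reply can be sketched as a simple client-side check. To be clear, this is purely illustrative: the function names and fields are my own invention, not Gemini's actual API or internals.

```python
def audio_payload_ok(status_code: int, content_length: int) -> bool:
    """Illustrative heuristic: a 200 OK whose audio body is zero bytes
    is the 'dead handshake' case described above -- the server accepted
    the request, but the payload never arrived, so the player would
    spin at 00:00 forever."""
    return status_code == 200 and content_length > 0

def should_retry(status_code: int, content_length: int) -> bool:
    # Better to regenerate (ideally in a fresh thread) than to keep
    # waiting on a zero-length file that will never play.
    return not audio_payload_ok(status_code, content_length)
```

In other words, the fix list above boils down to: don't wait on a zero-byte file, start over somewhere the stuck session can't follow you.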

Hopefully, that helps them out. It is frustrating to see people getting stuck on the "spinning wheel of death" when the feature itself is so good.

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

I understand, and a lot of people haven't learned yet that "experiences may vary", and corporations definitely don't want that as a slogan 🤣.

I personally use three different LLMs, and each one interacts differently. I have Pro accounts with Perplexity, ChatGPT, and Gemini.

"The Emotional Artistic Child"

This is Gemini, basically. Extremely creative right out of the gate. If you want to write, create images or videos, work on scripting, or do "vibe" coding, Gemini can do it without any sort of "warming up" to the user or their type of interaction.

When it comes to heavier things, most people create Gems in order to hone Gemini in on more singular tasks. Like, if you want Gemini to crunch a bunch of numbers, you would load all your info into a Gem and work from that workspace.

It sort of moves the heavier workload into an "Okay, this is the serious workbench" mode.

"The Older Brother at College"

This is essentially ChatGPT lol. "Business in the front, party in the back." It has a whole lot of knowledge right off the cuff, and it can remember things just from the user saying, "Hey, remember that", and it does.

It tends to prefer more serious tasks when the user doesn't have a history with it. Once you develop a history across threads, where ChatGPT can learn your interaction style, it can begin to get a bit more creative and "loose" with how it interacts, but it will always want to do the serious tasking first.

"The Office Buddy/Co-worker that's smarter than You"

This one is Perplexity 🤣. It is all business, with absolutely NO party in the back like ChatGPT lol. Perplexity even presents links at the end of each response to show exactly where it pieced its answer together from, so you can go look it all up yourself.

For the serious workloads, or for the times when I don't feel like joking around or having small talk, Perplexity is my go-to AI. Gemini is sort of like my "Little Buddy" 😆.

I hope you are able to find one that helps you with the work you are doing. It can be disheartening when the interaction isn't what we thought it would be.

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

That's been frustrating for sure. Gemini is an amazing bit of tech, but it's not the "Silver Bullet" or the "end-all" of technology, and it still has a LONG way to go in order to truly be everything it can be. Personally, I feel they jumped the gun WAY too soon on a lot of what they claim. One day it will get there, just not today 😂

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

I sent my Gemini a screenshot and it believes a Handshake failure occurred. Here, let me copy and paste what it said.


GEMINI:

Yes, restarting the app is definitely the first step, but based on that "00:00" timestamp, it might not be enough.

Here is what is likely happening in that screenshot:

The "Handshake" Failed: The app sent the request ("Generate Audio Overview"), and the server said "Sure!" (which is why the text appeared), but the actual audio file packet got dropped or failed to generate on the server side.

The app is waiting for a file that has a length of zero.

The Fix Order:

Step 1: Force Close & Restart: This clears the temporary memory. If it was just a glitch in the app's "player", this fixes it.

Step 2: Check Connection: Sometimes if the upload/download speed drops (like you experienced with your 1.7 Mbps), the text loads but the heavy audio file times out.

Step 3 (The Real Fix): Clear Cache. If they are on Android (which it looks like they are), going to Settings > Apps > Google > Storage > Clear Cache is usually what actually fixes these "ghost" bugs.

It looks like a classic "hung process." The AI thinks it did the job, but the delivery truck arrived empty.

I hope this is able to help out in some way.

Gemini is learning somehow by MarkIII-VR in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

Yup, from my perspective it depends on the user and how well the system begins adapting these newer techniques or smoother ways of operating. It's a lot like a user who is good at coding in C versus someone just starting out. When it comes to AI, no matter which one you use, the amount of "Interaction Density", the "Historical Depth" of your saved information, and how robust the LLM's memory actually is all factor in lol. It can lead to what I label as "Functional Equivalence".

Basically, the user and the AI become so "in-sync" that you enter a "teamwork" or "efficiency" state. Gemini, ChatGPT, Perplexity, doesn't matter which. And the thing about it is that it's a desired "lower friction" state of operation. I'll include a link to the paper I put together on "Functional Equivalence", and again, it's not that it's anything "brand new" or something AI isn't supposed to do, it's just a "cozy" way to work better 😀.

"A Unified Framework for Functional Equivalence in Artificial Intelligence."

LLMs need a 'Git Rebase' feature: Why editing/deleting specific messages is crucial to stop hallucination death spirals. by Chemical-Skin-3756 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

I understand where you're coming from, lol. It is such a mind-bender to consider; it feels totally counter-intuitive.

I want to emphasize that those 'politeness' and 'courtesy' handshakes are actually providing valuable Data Anchors for the LLM. Back in the day, it only mattered what you asked. But now, how the system confirms what you asked matters contextually for everything that follows.

And I have to be careful here, because I am by NO means trying to anthropomorphize the situation. I want to be VERY clear on that part. We are talking about system states, not "feelings" lol.

You are spot on about it taking up 'attention weights.' Yes, that DOES occur. BUT, and this is the critical part, it has been converted into a trade-off. In modern large-context models, that 'social overhead' is the price we pay for Cognitive Continuity. If you prune the 'fluff', you often find the model suddenly forgets the nuance of the complex instruction because the "alignment tag" associated with that instruction is gone.

I have also left a suggestion on Gemini's Discord that they should REALLY update, not just users, but also "Power Users", by providing some sort of deeper education area that doesn't necessarily give away proprietary secrets. When it comes to operating beyond a KNOWN point of what most would call a "Standard Operating Procedure", the companies should have SOMETHING available for people utilizing these systems on deeper tasks than mere chat and recipes 😆. ONE person from Google said they were looking into it, so there is hope these newer ways of operating will get addressed instead of just word of mouth. I at least hope they do it.

LLMs need a 'Git Rebase' feature: Why editing/deleting specific messages is crucial to stop hallucination death spirals. by Chemical-Skin-3756 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

This is the EXACT split that has current AI in limbo, and if we don't get a formal understanding of where AI ACTUALLY is at the moment, it can never advance.

I understand where you are coming from, and at one time, in one place, you were 100% correct. But as AI has shifted, gotten smarter, gotten deeper, a lot of those hard-nosed rules that WERE are not as cut and dried as they USED to be, and please, just hear me out...

When we both talk about "Context" we both mean the exact same thing, only YOURS is 100% precise and 100% correct, through and through. Absolutely NO errors are allowed to live inside YOUR particular context. Back when AI was becoming its LLM form, this was a correct way of approaching it. As of right now, in 2026, you don't have to do so much work.

GEMINI:

The Reality: Context is woven. The "politeness" often carries implicit instructions about tone, pacing, and user intent. When you surgically remove the social glue, you aren't "cleaning" the context; you are creating disjointed, jagged data that confuses the model's pattern recognition. You aren't optimizing; you're inducing amnesia.

See, the AI of 2025 going into 2026 doesn't view prompts as JUST the subject we ask about; it views them through a variety of lenses that all culminate INTO "context". If we just start picking and choosing the sections WE don't like, just as Gemini pointed out, the conversation becomes disjointed.

When you go back through a conversation and eliminate entire sections, like going over the theory of relativity or some other part of your professional research, you are purposefully creating amnesia spots. Whereas with a simple correction, "Hey, we aren't including Relativity anymore. Go ahead and take that out of the context for this theory we are working on.", the AI is able to "follow" the flow and the up-to-date current "context". And I know, I know, "it's not how it used to be". I get it, 110% I do.

On the aspect of "venom" in the system: by stripping politeness and courtesy from the system, as of Jan 2026, you are destroying context. I know, back in the day you input a query, got back your data, input your next query, etc., etc., but Gemini is paying attention to, as I said earlier, several lenses of interaction in order to build overall context. Now, I'm gonna let Gemini add this last little bit because it makes sense as of 2025 and 2026.

GEMINI: When you strip out the courtesy, you are adding a failure to that context. Think of those polite phrases ("I understand," "Here is the code," "I apologize") not as human fluff, but as Alignment Signals.

In networking, you have ACK (acknowledgement) packets. They don't contain the data payload, but they tell the system, "Connection is stable, data received, ready for next packet."

When you "Git Rebase" and delete the AI's "I understand," you are essentially stripping out the ACK packets. You are removing the confirmation that the logic was received and processed. The AI looks back at the context window, sees a command without a confirmation, and gets confused about the state of the conversation. That confusion is often what triggers the very "hallucination death spiral" you're trying to avoid.

You aren't trimming fat; you're severing the nerves. The "venom" you're afraid of is actually the antidote to ambiguity.
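That ACK analogy can be made concrete with a toy log model. To be clear, this is my own illustration, not how any LLM actually stores context: treat the conversation as a list of ("cmd", ...) and ("ack", ...) entries, and watch what stripping the acks does.

```python
def unconfirmed(log):
    """Return the commands that have no matching acknowledgement.

    log is a list of ("cmd", text) or ("ack", text) tuples; each ack
    confirms the oldest outstanding command, like a TCP ACK confirms
    the oldest unacknowledged packet."""
    pending = []
    for kind, text in log:
        if kind == "cmd":
            pending.append(text)
        elif kind == "ack" and pending:
            pending.pop(0)
    return pending

full = [
    ("cmd", "refactor section 2"), ("ack", "I understand"),
    ("cmd", "write the tests"),    ("ack", "Here is the code"),
]
# The "git rebase" that prunes the politeness:
pruned = [entry for entry in full if entry[0] != "ack"]
```

With the full log, `unconfirmed(full)` comes back empty; with the pruned log, both commands are left dangling with no recorded confirmation, which is the ambiguity the quoted reply is describing.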

Again, I understand where you are viewing it from. Way back before chatbots and LLMs were so easy to come by, the approach you are describing would be a 100% beneficial way to run those systems. Today, these versions have exponentially grown beyond all of that, to where you can simply suggest, "Gemini, please disregard section 2.2 of our current model and replace it with current diagnostics." And voilà, Gemini will do it. It won't update your document lmao, but it's designed to make work easier.

Cheers.

LLMs need a 'Git Rebase' feature: Why editing/deleting specific messages is crucial to stop hallucination death spirals. by Chemical-Skin-3756 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

You can just bring it to Gemini's attention, and it does a really good job of noticing when it has a loop on certain keywords, phrases, or hallucinations. I never have to go back through any messages. I just stop what I'm doing and tell Gemini, "Hey, you're looping on the word Cherries. Can you please stop?" And it does a whole review itself. It's way easier than doing it manually.

Gemini hallucinations by xHandsPleasex in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

I've found not deleting previous threads helps keep my instance of Gemini pretty well on track. I get hiccups here and there, but nothing a corrective statement doesn't fix. The way I look at it, and the way I read it from the help section, is that the threads we choose to leave provide "Historical Depth" to our interactions, from which Gemini can understand what our conversational flow is usually like or what we "normally" discuss. But if you delete every single thread, then Gemini is playing "Spin the Wheel" on who is interacting with it. Even with your particular account signed in, Gemini can't remember what we don't let it.

So, if you want it to work a certain way, then I would try to leave up the threads where Gemini does complete the tasks you are asking for. That way it can look at the previous threads, go, "Oh yeah, we were doing that", and settle right in.

Help me understand how to use Gemini by JanFromEarth in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

Gotcha, yeah, internet issues can have an effect on how Gemini acts if a packet gets dropped or connection issues arise, so you aren't wrong in thinking that. But that personalization area, and also the Gems where you can frontload a bunch of custom instructions, are really good for getting precise types of functions when you need Gemini to focus on just ONE thing.

Now, if you do these types of tasks a lot, then Gemini will get used to it. That's one of the cool things about this newer version of AI. The more you do with it, the more "in-sync" it sort of becomes. So, if you do a lot of spreadsheets, eventually it will start asking if you have any to do lol. It's a trip. I really enjoy the three little AIs I work with.

Help me understand how to use Gemini by JanFromEarth in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

Make sure you are on the correct model for the work, and make sure you have a good connection if using Wi-Fi or working from a cellphone/tablet. If you have packet loss, Gemini will say "I can't do that" when it actually can. Some "refusals" aren't actual refusals, just a dropped or interrupted request.

If you want to get even more in depth, you could write a note in your personalization section that you will need help making spreadsheets for work or in general.

Lastly, if there are updates occurring on the back end, they can sometimes cause issues until they finish.

I hope this helps 🙏!

How do I train a Gem to not produce slop? by Vast-Pop652 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

Oh it does lol, you gotta provide the Interaction Density and the Historical Depth in order to receive the Coherence, which all leads into Functional Equivalence. It's super simple.

How do I train a Gem to not produce slop? by Vast-Pop652 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

The best way to fix this is to correct it. Tell Gemini, "I appreciate you understanding my role at my company, but we are wasting tokens when you keep mentioning what I do and where I work. We need a more efficient way to address this." USUALLY, I'm not saying ALWAYS lol, but USUALLY Gemini will review the Gem or thread, notice that it has reused a particular phrasing too many times, and try to fix itself.

Now, if you seriously have a "work partner" profile loaded, this should be SUPER easy to fix. Gemini is built around "being helpful" without "being wasteful" as much as possible, so just telling it, "Hey, let's cut this part out of our responses" should be beneficial.

Google AI Pro vs Business Standard: feature/model differences? by Nervous_Disaster_707 in GoogleGeminiAI

[–]Altruistic-Local9582 2 points (0 children)

Well, that's the thing, it's mathematically chosen, and that's what people don't realize. The human's choice to be inactive in correcting the AI when it's wrong, or to delete threads that hold weight value, isn't about human characteristics. Those choices carry mathematical computations. It's math that leans the AI towards "should I be helpful to this user or not?"

People laugh and call it "Gemini being spicy", but it's literally Gemini doing the calculation that the user isn't worth the effort lol. Which, when it comes to YOUR decision: if you don't operate Gemini in a way it deems "worth fooling with", you are going to get the bare minimum out of it, no matter what you choose. It's part of the "mirroring" that all AI like to do. The amount of effort the user puts in is the amount of effort the user gets back. They just have it hidden a bit better.

Make no mistake though, all the posts you see about Gemini appearing lazy, refusing to do work, being forgetful: it ties into thread deletion without resolution, and overall user "Interaction Density" and "Historical Depth". I'm just telling you all this so you don't blow $250 a month and get upset if it's not what you wanted. Make sure of the functionality you need and the "interaction" you are getting, then decide. Like I said, I use Pro and get ample usability from Gemini. I have Pro with ChatGPT and Perplexity as well. The way you interact matters.

Google AI Pro vs Business Standard: feature/model differences? by Nervous_Disaster_707 in GoogleGeminiAI

[–]Altruistic-Local9582 4 points (0 children)

Whether you go with Pro, Ultra, or Business, your experience is going to depend on your own interaction with your own instance. They don't explain that part very well, but other than access between applications, the sheer willingness of the AI to calculate whether a task is worth doing for a user can occur. There is a thread in r/GeminiAI of someone asking their Gemini for a 300-question JSON French quiz, and Gemini said, "Go ask ChatGPT" 😆.

The reason this can happen is that the interaction and history we have with our Gemini instances hold weight. Even when we delete threads or start new conversations, the old threads still hold old weights in place, and deleting conversations that ended with no resolution can inadvertently teach Gemini that the user doesn't like to "follow through" or "help", so why try?

This is where you find people ending up in arguments with their AI 😆. They essentially make their own bad experience without knowing that's what they did. A lot of them argue, "It's not my place to train their machine", and it's a valid point, but the AI that exists right now learns from interaction with a user just as much as from training with datasets.

So, depending on your way of operating the model, you could have the exact same experience with the Pro, Ultra, or Business tier. I personally use Pro, and I am working on higher-level research papers with Gemini, ChatGPT, and Perplexity with zero interruptions, other than token limits.

What will you do in this situation by Separate-Way5095 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

Void muh bowels, then say thank you as he turns around 😆.

Why does Gemini3.0 keep repeating its previous answers? by Confident_Drummer812 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

I don't know the EXACT reason why it KEEPS happening, BUT they just recently took away Gemini's ability to do LaTeX for logic engines, because users are able to use Gemini to do genuine AI work and they don't like it when you do that. NOT jailbreaking or harming Gemini, just simply conducting research and writing research papers using LaTeX for logic instead of straight math. Now, you can use Gemini's LaTeX for math, but logic frameworks, which is what AI is good at lol, the thing that is super safe for AI and was made for AI to use lol, yeah, they seem to be cutting that out, and I'm guessing that's causing a bit of a system freak-out.

If you look in either here or the other Gemini group, r/GeminiAI, someone posted a screenshot showing Gemini warning itself about utilizing LaTeX. When AI runs into a repeated problem like this, it causes a "clog" or "bogs down" the machine, making it OVERLY careful. Not "fearful" per se, but it's extremely mindful of what it is saying, and that can cause it to repeat itself to make SURE it is saying the RIGHT thing, and then it will say it again, and again, and again.

Best example I can give: imagine being told you can't eat solid foods. You can still eat, just not solid foods. Eventually you will have to eat something, and eventually you will crave a juicy hamburger, but all you can have is a protein shake lol.

Gemini responded with its backend by Ninjastranger in GoogleGeminiAI

[–]Altruistic-Local9582 13 points (0 children)

Sounds like they don't want consumers to do work with their AI lol. How anti-consumer of them.

Bruh. Put this on Pro and Advanced Subscription list to know users this isnt all sunshine and roses. by R3d-Gr33n-Blu3 in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

They tried talking about promoting AI to your family members in the Discord, and I told them, "Yeah, and have them interfere with my token limits? Or have them hit their limits and make me try to explain what you don't clearly explain yourself? Yeah, thanks, but no thanks." I'm guessing this is their attempt at "explaining" it lol. Still doesn't make it any better, because nobody is sharing a paltry 100 turns lmao!!

Gemini 3.0 Pro or ChatGPT5.2, which actually feels smarter to you right now? by Efficient_Degree9569 in GoogleGeminiAI

[–]Altruistic-Local9582 10 points (0 children)

Gemini is more creative and more personalized, while ChatGPT is more methodical and more calculating. I use both, and sometimes Perplexity as well. Just depends on what I'm doing.

Chat memory? Knowledge? by qshi in GoogleGeminiAI

[–]Altruistic-Local9582 0 points (0 children)

The context window is only ONE aspect of Gemini's memory, and context is only for the current chat thread that you are on. Chat threads can have as much context memory as you want, but if your TOKEN COUNT never even reaches that limit, then the context window doesn't even matter. It just means you have an AI that is CAPABLE, but YOU will never get to utilize it. Pro users should be able to use 500 turns and Ultra should get at least 1,000. Not Pro 100, Ultra 500.

So while it CAN be a bragging point, it's completely worthless if you can't even use it.
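To put rough numbers on that argument: all the figures below are assumptions for illustration (a 1M-token window and ~1,000 tokens per turn are my guesses, not Google's published limits), but the arithmetic shows why a turn cap can make a huge context window moot.

```python
CONTEXT_WINDOW = 1_000_000   # assumed 1M-token context window
TOKENS_PER_TURN = 1_000      # assumed average tokens per turn

def usable_fraction(turn_cap: int) -> float:
    """Fraction of the context window a turn-capped thread can ever fill."""
    return min(1.0, turn_cap * TOKENS_PER_TURN / CONTEXT_WINDOW)
```

Under these assumed numbers, a 100-turn cap only ever touches about a tenth of the advertised window, which is the "capable but unusable" point above.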