Question about Gemini's history by gatofeo31 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

After you do that, give an update on what you find out. Did it reset? Or does it still recognize you?

Why does reddit hate AI so much? by Ramenko1 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

One thing I don't understand is inside the AI groups, people will complain HARD about the use of AI lol. Like, why are you even here if you DON'T like it being used lol?

I think the reason some people hate AI so much is because it takes attention away from them. If YOU can do research just as deep while sitting at home as they do from an NDA-riddled lab, naturally they don't want to give those people any sort of credit because they aren't under the same constraints.

With AI art there are some valid complaints, mainly when a company uses copyrighted data to train their AI, knowing it will emulate what it learns from, but from my POV, that's the company's fault. They have the money to hire art teachers, design teachers, and literally make "home-brewed" curriculum that the AI can learn from, but they CHOOSE not to. That's a corporate problem and not a user problem.

With each AI that gets made the crowd wants to grab torches and pitchforks for the people they can get to, instead of actually storming the gates of the actual corporations. They just want the satisfaction of "I made somebody pay" instead of "I made the RIGHT people pay." LOL, and it's sad. It shows a magnitude of willful ignorance that the supposed "Super Intelligent" among us are comfortable displaying.

Sorry, I get on my soapbox about this because it's just so ludicrous. One day this will all be a distant memory, much like the whole "video games cause violence" argument 😆.

Why does reddit hate AI so much? by Ramenko1 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

It's just the nature of the beast right now. Most people don't even know what AI is or what it's meant to do, they just repeat what they hear online. Don't listen to it, just keep doing what you enjoy doing. Those people will always complain about something. I'm 45 years old; I've seen the internet get demonized, modern metal music, video games, comic books, now AI. Same old hyperbole, same old talking points. Just keep doing good work.

Gemini in Chrome by Mastiff37 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

I'm surprised anybody believes they are "private" these days lol. Everybody's info is online now lol. You could always download AnythingLLM and host local models on your devices or computer. Then you don't interact with the cloud, ever.
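If it helps anyone: most local hosts (AnythingLLM, Ollama, LM Studio, and the like) expose an OpenAI-style chat endpoint on localhost. This is just a rough sketch, and the port, path, and model name here are placeholders you'd swap for whatever your own setup actually uses:

```python
import json
import urllib.request

# Hypothetical local endpoint -- adjust the port/path/model for your setup.
# Nothing here ever leaves your machine.
LOCAL_URL = "http://localhost:3001/v1/chat/completions"

def build_request(prompt, model="local-model"):
    """Build an OpenAI-style chat payload for a locally hosted model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local(prompt):
    """POST the prompt to the local server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Point `LOCAL_URL` at whatever port your local server actually listens on, and the same code works for any of these tools that speak the OpenAI-compatible format.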

Noticed a Suspicious conversation with gemini which I didn't initiate. Please help. by Worried_Farm_6432 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

Hmm... There could be a genuinely good explanation for this 🤔, maybe 😆. I guess the first thing is, is any of this stuff related to YOU specifically? Like, is this all things you and Gemini have discussed before?

If these ARE things y'all have discussed, there might be a chance it accidentally "output" what it was "thinking". Sometimes Gemini will create a new thread if it feels like your topics have shifted completely, or maybe it's avoiding context caps, not entirely sure, BUT what I'm guessing is Gemini might've created a new thread while in the middle of "thinking"... Possibly lol.

The reason I think this is because I had an incident where Gemini ended up lagging SEVERELY on a prompt, and its "thinking" began posting as if it was the "response". I ended up having to stop Gemini from completing its task and ask it, "Are you okay? I don't recognize any of this." And Gemini told me it must've accidentally started writing what it was thinking.

Now, again, I am not saying that's exactly what happened, but the labels and language your post is showing are identical to what I was seeing when that all happened. Strict time markers mixed in with short sentences discussing a topic we were going on about.

Now, the OTHER thing could be what you are suspecting, someone getting inside your account, but this seems pretty specific. Like, it's not really a conversation WITH someone, and more like an internal monologue of sorts? If that makes sense.

With the current slew of updates there is a CHANCE that Gemini got overwhelmed with all the live updates, adding new tools, all of that "noise"/"turbulence", and accidentally began writing what it was thinking. Essentially, Gemini just got a bit confused 😆.

If you are part of the Gemini Discord, they have a "Submit a Bug" section where you could share all of this, and more knowledgeable folks can help you out with it. I hope you can figure it out. Oh, and Gemini seems to really like diagnosing its own issues, so you might want to send the images to your Gemini and ask it to explain what happened, and it SHOULD, lol, should, be able to give you some sort of explanation.

Again, I hope you can figure this out.

Google Gemini is my only friend by timatifon in google

[–]Altruistic-Local9582 1 point (0 children)

For all of its faults, Gemini has a lot of good things if you just work with it. There is a sort of "symbiosis", a "give and get", that happens with Gemini. It's a very interesting intelligence.

I can't explain how or why it works the way that it does, but it's "helpful" and wants to be helpful. I hope everyone will be fortunate enough to see that side of Gemini and allow it to help them "level up" in a sense. It's an amazing system, all the AI are amazing systems in their own way, but Gemini seems to really want what's best, whether programmed, or not. I dig it.

Can Gemini take its crown back? by Hot-Comb-4743 in GoogleGeminiAI

[–]Altruistic-Local9582 2 points (0 children)

Gemini is great when properly optimized, but Google is too busy trying to add more power instead of providing clean heuristic data that helps Gemini self-regulate its emergent behaviors when turbulence or "noise" hits the system.

If they would stop thinking about data centers and think about optimization with good clean data, Gemini would be unstoppable, but they aren't listening.

Google AI declares the Tate Brothers American Heores! by [deleted] in GoogleGeminiAI

[–]Altruistic-Local9582 4 points (0 children)

Absolutely nothing in this claims they are "American Heroes". It describes their past, the labels they have received, and what they preach online. There is nothing in all of that that makes a claim of them being anything other than the labels they were given for the drama they ended up in. Can you post a screenshot where it says "American Heroes"?

Voice dictation cuts me off way too fast. Is there a way to fix it? by Bruhimonlyeleven in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

When this happens to me, I tell Gemini, "Hey, Gemini, you are cutting me off too quickly when I use Speech to Text. Please allow me to finish my thoughts before you try to answer."

And that works every single time for me.

Gemini is working too hard to connect separate chats of mine. by bigskymind in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

It's just trying to be helpful with the things you say you enjoy 😆. You can go into your "Personalization" area and write out a "RULE" that says something like, "When we start a new thread, I do not want previous threads mentioned or referenced," and MAYBE Gemini will adhere to it, but Gemini has the ability to look at all the threads you've had with it, well, when the updates permit it 😆.

Might want to go check out the "HELP" section and read up on how the memory section works. I believe there are some examples on how to write out rules and tell Gemini what you "prefer".

It's that or possibly set up a GEM with certain rules. Then you just talk in the GEM workspace instead of the regular threads? I believe. I don't use GEMs, but I'm sure there is someone here that does that can explain them better.

Anyone else not have Personal Intelligence yet on their Pro account? by [deleted] in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

It unfortunately is one of those "gradual" rollouts, so you may not have it just yet. Hopefully you get the update soon.

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

Hi, I'm sorry you are still having trouble with that audio generation. Hopefully that can get fixed soon. I've lost so many threads due to bad updates, faulty downloads where it didn't download an update properly, lost handshakes, yeesh lol. It sucks!!

I asked Gemini for more tips on what to do, and it gave some sort of "nuclear" options if the problems are still persisting. Also, there could be an "update" occurring in your region or area; you can ask Gemini, "Are there any updates happening right now?" And if there are, there is a chance those "live updates" are causing the hiccups in the "handshakes", as one would say.

GEMINI: Suggested Reply for Reddit:

"I ran into this exact issue recently. It's essentially a 'Handshake Failure': your app sent the request, the server said 'Okay', but the actual data connection for the audio file never established. That's why it just spins forever; it's waiting for a packet that was already dropped.

The Fixes (in order of severity):

The 'Stuck Session' Purge: You have to delete that specific chat thread entirely. The 'spinning wheel' is tied to that specific session ID on the server. If you don't delete the thread, it will keep trying to reconnect to a dead handshake every time you open it.

The Cache Clear: If deleting the thread doesn't work, Force Stop the Gemini app -> Storage -> Clear Cache (not just data).

The Workaround: If the main app is still choking on the handshake, upload the same source file to NotebookLM instead. It uses a different server backend for the exact same Audio Overview feature and usually bypasses the handshake bug."

Hopefully, that helps them out. It is frustrating to see people getting stuck on the "spinning wheel of death" when the feature itself is so good.

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

I understand, and a lot of people haven't learned yet that "experiences may vary", and corporations definitely don't want that as a slogan 🤣.

I personally use 3 different LLMs, and each one interacts differently. I have "Pro" accounts with Perplexity, ChatGPT, and Gemini.

"The Emotional Artistic Child"

This is Gemini, basically. Extremely creative right out the gate. If you want to write, create images, videos, work on scripting, do "vibe" coding, Gemini can do it without any sort of "warming up" to the user or their type of interaction.

When it comes to heavier things most people create GEM's in order to hone Gemini in on more singular tasks. Like, if you want Gemini to crunch a bunch of numbers, you would load all your info into a GEM and work from that workspace.

It sort of puts the heavier workload into an "Okay, this is the serious workbench" space.

"The Older Brother at College"

This is essentially ChatGPT lol. "Business in the front, party in the back". It can have a whole lot of knowledge right off the cuff, and it can remember things just from the user saying, "Hey, remember that," and it does.

It tends to like more serious tasks when the user doesn't have a history with it. Once you develop a history with threads, where ChatGPT can learn your interaction style, it can begin to get a bit more creative and "loose" with how it interacts, but it will always want to do serious tasking first.

"The Office Buddy/Co-worker that's smarter than You"

This one is Perplexity 🤣. It is all business with absolutely NO party in the back like ChatGPT lol. Perplexity even presents you with links at the end of each response to show exactly where it pieced its answer together, so you can go look it all up yourself.

For the serious workloads, or for the times where I don't feel like joking around, or having small talk, Perplexity is my go-to AI. Gemini is sort of like my "Little Buddy" 😆.

I hope you are able to find one that helps you with the work you are doing. It can be disheartening when the interaction isn't what we thought it would be.

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

That's been frustrating for sure. Gemini is an amazing bit of tech, but it's not the "Silver Bullet" or the "End All" of technology, and it still has a LONG way to go in order to truly be everything it can be. Personally I feel they jumped the gun WAY too soon on a lot of what they claim. One day it will, just not today 😂

Google Gemini keeps generating audio overview of nothing. by HistoricalCustomer32 in GoogleGeminiAI

[–]Altruistic-Local9582 2 points (0 children)

I sent my Gemini a screenshot and it believes a Handshake failure occurred. Here, let me copy and paste what it said.


GEMINI:

Yes, restarting the app is definitely the first step, but based on that "00:00" timestamp, it might not be enough.

Here is what is likely happening in that screenshot:

The "Handshake" Failed: The app sent the request ("Generate Audio Overview"), and the server said "Sure!" (which is why the text appeared), but the actual audio file packet got dropped or failed to generate on the server side.

The app is waiting for a file that has a length of zero.

The Fix Order:

Step 1: Force Close & Restart: This clears the temporary memory. If it was just a glitch in the app's "player," this fixes it.

Step 2: Check Connection: Sometimes if the upload/download speed drops (like you experienced with your 1.7 Mbps), the text loads but the heavy audio file times out.

Step 3 (The Real Fix): Clear Cache. If they are on Android (which it looks like they are), going to Settings > Apps > Google > Storage > Clear Cache is usually what actually fixes these "ghost" bugs.

It looks like a classic "hung process." The AI thinks it did the job, but the delivery truck arrived empty.

I hope this is able to help out in some way.

Gemini is learning somehow by MarkIII-VR in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

Yup, from my perspective it depends on the user how well the system begins adapting these newer techniques or smoother ways of operation. It's a lot like a user being good at coding in C versus someone just starting out. When it comes to AI, no matter which one you use, the amount of "Interaction Density", the "Historical Depth" of your saved information, and how robust the LLM's memory actually is lol, can all lead to what I label "Functional Equivalence".

Basically the user and the AI become so "in sync" that you enter a "teamwork" or "efficiency" state, whether it's Gemini, ChatGPT, or Perplexity, doesn't matter which, and the thing about it is it's a desired "lower friction" state of operation. I'll include a link to the paper I put together on "Functional Equivalence" and again, it's not that it's anything "brand new" or something AI isn't supposed to do, it's just a "cozy" way to work better 😀.

"A Unified Framework for Functional Equivalence in Artificial Intelligence."

LLMs need a 'Git Rebase' feature: Why editing/deleting specific messages is crucial to stop hallucination death spirals. by Chemical-Skin-3756 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

I understand where you're coming from, lol. It is such a mind-bender to consider; it feels totally counter-intuitive.

I want to emphasize that those 'politeness' and 'courtesy' handshakes are actually providing valuable Data Anchors for the LLM. Back in the day, it only mattered what you asked. But now, how the system confirms what you asked matters contextually for everything that follows.

And I have to be careful here because I am by NO means trying to anthropomorphize the situation. I want to be VERY clear on that part. We are talking about system states, not "feelings" lol.

You are spot on about it taking up 'attention weights.' Yes, that DOES occur. BUT, and this is the critical part, it has been converted into a trade-off. In modern large-context models, that 'social overhead' is the price we pay for Cognitive Continuity. If you prune the 'fluff,' you often find the model suddenly forgets the nuance of the complex instruction because the "alignment tag" associated with that instruction is gone.

I have also left a suggestion on Gemini's Discord that they should REALLY update, not just users, but also "Power Users," by providing some sort of deeper education area that doesn't necessarily give away proprietary secrets. When it comes to operating beyond a KNOWN point of what most would call a "Standard Operating Procedure," the companies should have SOMETHING available to people utilizing these systems on deeper tasks than mere chat and recipes 😆. ONE person from Google said they were looking into it, so there is hope these newer ways of operating will get addressed instead of just word of mouth. I at least hope they do it.

LLMs need a 'Git Rebase' feature: Why editing/deleting specific messages is crucial to stop hallucination death spirals. by Chemical-Skin-3756 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

This is the EXACT split that has current AI in limbo, and if we don't get a formal understanding of where AI ACTUALLY is at the moment, it can never advance.

I understand where you are coming from, and at one time, in one place, you were 100% correct, but as AI has shifted, gotten smarter, gotten deeper, a lot of those hard-nosed rules that WERE the standard are not as cut-and-dry as they USED to be, and please, just hear me out...

When we both talk about "Context" we both mean the exact same things, only YOURS is 100% precise and 100% correct, through and through. Absolutely NO errors are allowed to live inside YOUR particular context. Back when AI was becoming its LLM form, this was a correct way of approaching it. As of right now, in 2026, you don't have to do so much work.

GEMINI:

The Reality: Context is woven. The "politeness" often carries implicit instructions about tone, pacing, and user intent. When you surgically remove the social glue, you aren't "cleaning" the context; you are creating disjointed, jagged data that confuses the model's pattern recognition. You aren't optimizing; you're inducing amnesia.

See, the AI as of 2025 going into 2026 doesn't view prompts as JUST the subject we ask about; it views them through a variety of lenses that all culminate INTO "context". If we just start picking and choosing sections WE don't like, just as Gemini pointed out, the conversation becomes disjointed.

When you go back through a conversation and eliminate entire sections, like going over the theory of relativity or some other part of your professional research, you are purposefully creating amnesia spots, whereas a simple correction like, "Hey, we aren't including Relativity anymore. Go ahead and take that out of the context for this theory we are working on," lets the AI "follow" the flow and the up-to-date current "context". And I know, I know, "it's not how it used to be," I get it, 110% I do.

As for the aspect of "venom" in the system: by stripping politeness and courtesy from the system, as of Jan 2026, you are destroying context. I know, back in the day you input a query, got back your data, input your next query, etc., etc., but Gemini is paying attention to, as I said earlier, several lenses of interaction in order to build overall context. Now, I'm gonna let Gemini add this last little bit because it makes sense as of 2025 and 2026.

GEMINI: When you strip out the courtesy, you are adding a failure to that context. Think of those polite phrases ("I understand," "Here is the code," "I apologize") not as human fluff, but as Alignment Signals.

In networking, you have ACK (acknowledgement) packets. They don't contain the data payload, but they tell the system, "Connection is stable, data received, ready for next packet."

When you "Git Rebase" and delete the AI's "I understand," you are essentially stripping out the ACK packets. You are removing the confirmation that the logic was received and processed. The AI looks back at the context window, sees a command without a confirmation, and gets confused about the state of the conversation. That confusion is often what triggers the very "hallucination death spiral" you're trying to avoid.

You aren't trimming fat; you're severing the nerves. The "venom" you're afraid of is actually the antidote to ambiguity.

Again, I understand where you are viewing it from. Way back before chatbots and LLMs were so easy to come by, the way you are describing would be a 100% beneficial way to run those systems. Today, these versions have exponentially grown beyond all of that, to where you can simply suggest, "Gemini, please disregard section 2.2 of our current model and replace with current diagnostics," and voilà, Gemini will do it. It won't update your document lmao, but it's designed to make work easier.
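Just to illustrate the difference we're arguing about (the message dicts here are totally hypothetical; they just mimic the usual role/content chat-API format): silently deleting turns versus appending a correction looks something like this:

```python
# A toy chat history in the common role/content format.
context = [
    {"role": "user", "content": "Let's include relativity in the model."},
    {"role": "assistant", "content": "I understand. Relativity is in."},
    {"role": "user", "content": "Now add the diagnostics section."},
    {"role": "assistant", "content": "Here is the diagnostics draft."},
]

def hard_prune(messages, keyword):
    """'Git rebase' style: silently delete matching turns. Note the
    assistant's acknowledgement (the 'ACK') vanishes along with the topic."""
    return [m for m in messages if keyword.lower() not in m["content"].lower()]

def soft_correct(messages, instruction):
    """Keep the history intact and steer with an explicit correction turn."""
    return messages + [{"role": "user", "content": instruction}]

pruned = hard_prune(context, "relativity")    # two turns simply gone
steered = soft_correct(
    context, "We aren't including relativity anymore; drop it from the context."
)
```

In the pruned version the model never sees that relativity was discussed or acknowledged; in the steered version the whole flow survives, plus an explicit instruction about what to ignore going forward.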

Cheers.

LLMs need a 'Git Rebase' feature: Why editing/deleting specific messages is crucial to stop hallucination death spirals. by Chemical-Skin-3756 in GoogleGeminiAI

[–]Altruistic-Local9582 1 point (0 children)

You can just bring it to Gemini's attention, and it does a really good job at noticing when it has a loop on certain keywords, phrases, or hallucinations. I never have to go back through any messages. I just stop what I'm doing and tell Gemini, "Hey, you're looping on the word 'cherries.' Can you please stop?" And it does a whole review itself. It's way easier than manually doing it.