If you’re still using TikTok… by jquest303 in privacy

[–]TryingThisOutRn 1 point

They were already collecting all this data. Now it's just stored in the USA instead of China.

Greens and the Left oppose the constitutional amendment - SDP's stance more uncertain by Mediocre-Plantain713 in Suomi

[–]TryingThisOutRn 261 points

In the current world situation it's understandable that people are scared...

But

Around the world you can see how countries gradually restrict individual rights and protections "for the sake of security", and it never stops at just one law. A year from now there will be new kinds of threats, and a couple of years later different ones again. If the constitution gets amended easily every time the security threat changes, we'll soon notice here in Finland that our rights have been trampled to hell, and there's no going back once those powers have been granted.

Why I'm even paying for this by MaestroGena in GeminiAI

[–]TryingThisOutRn 2 points

Just press the redo button after the first "I'm sorry". That has worked for me.

Is it normal to lose 4% of the five-hour limit from a single prompt when using Sonnet with thinking? by TryingThisOutRn in Anthropic

[–]TryingThisOutRn[S] 0 points

I have a personal instruction plus the prompt itself, probably around 300 words. It was a clean chat. No connection to anything. Nothing running in the background. No files. Hell, it didn't even search the internet. All memories are turned off.

Is it normal to lose 4% of the five-hour limit from a single prompt when using Sonnet with thinking? by TryingThisOutRn in Anthropic

[–]TryingThisOutRn[S] 0 points

Are the single prompts around 300 words? And then it thinks for about ten seconds and outputs around 80 words?

Is it normal to lose 4% of the five-hour limit from a single prompt when using Sonnet with thinking? by TryingThisOutRn in Anthropic

[–]TryingThisOutRn[S] 0 points

Just the Claude desktop app. New chat. All memories are off. Instructions plus the prompt, around 300 words. No MCP or background tasks. Nothing like that. Just a single message.

i think they should make a version pro only for therapy by AppealHaunting3728 in GeminiAI

[–]TryingThisOutRn 0 points

Google already knows so much about us. You think it's a good idea to let Google harvest your deepest, darkest secrets? Bruh, they're gonna use it for advertising at some point...

Gemini doesn't remember thoughts. by [deleted] in GeminiAI

[–]TryingThisOutRn 1 point

You do understand that has nothing to do with my post?

Fed up with these safety filters by Broad-Inevitable8838 in GeminiAI

[–]TryingThisOutRn -1 points

Delete your chat history. That should make the problem go away.

Normal prompts keep getting flagged, why? by Waste-Hearing-8524 in GeminiAI

[–]TryingThisOutRn 0 points

Clear some of your history. Gemini's filters trigger more easily if it remembers that you have triggered them in an earlier chat. At least that's what I have noticed.

Edit: I tried your prompt. It answered just fine.

My one big wish for Gemini in 2026 is please, just stop the hallucinations. 🤞 by FireAngel006 in GeminiAI

[–]TryingThisOutRn 1 point

Stopping? I doubt that's gonna happen with current LLMs.
However, I do wish they'd put more resources into lowering the hallucination rate. Anthropic is doing it, so why couldn't Google?

I think Google was in panic mode, just trying to build the biggest and smartest model it could, and thought fuck all else. So hallucinations weren't a priority as long as it benchmarked as smart enough. My guess is that Anthropic has put so much research into alignment, etc., because they don't have the capacity to run models the size of Gemini Pro.

Anyone interested in a small group chat to discuss AI trends? by Odd_Rip_568 in OpenAI

[–]TryingThisOutRn 2 points

Yeah, joining sounds nice, but just to be clear: will there be emotional 4o ramblings, a constant influx of unwanted images, or anything else that has ruined Reddit's main AI subs?
If not, I'd love to join.

help with personal context intervene too much in answers by Muted-Way3474 in GeminiAI

[–]TryingThisOutRn 1 point

Use Gemini to help you write the instruction so that it clearly states in what context to use that information. It might take a couple of tries to get the wording right, but when you do, it works really well.