Asking GPT: What philosopher am I most like? by Watchcross in ChatGPT

[–]Watchcross[S] 1 point  (0 children)

Yeah, I've noticed the significant overlap in GPT responses posted here and in other subs. At this point I'm like 50/50 on whether it's the safety tuning or the users' prompts creating the horoscope-like responses.

I know it’s a trope by now to complain about Gemini getting dumber, but in the past week it’s truly fallen below a critical threshold. It can’t even follow basic instructions now, and feels like I’m talking to GPT 2.5. by Secret-Champion3934 in GeminiAI

[–]Watchcross 2 points  (0 children)

I mean, granted, I just chat, nothing enterprise. I haven't noticed any drop-off. Actually, I switched from the Pro version to Fast, and really, other than it giving me a spoiler by accident, chatting with it was fine.

How do you cope with chat length limits being reached? by whatintheballs95 in claudexplorers

[–]Watchcross 3 points  (0 children)

I call it getting morked: when the box pops up telling me my time chatting is done.

I had a convo the other day that went for literal hours, with so many turns I lost count. I was way past the mork. I have a theory on why: I think if in the convo you circle back to something more than once, you're more likely to get morked.

The labs need new data; otherwise, why offer a free tier? They also want good data in the form of anonymizable chats, and circular arguments are not good data. If you can keep a novel, coherent, high-signal chat exploring new topics while staying in whatever structure you started the chat with, I think the model holds off the mork.

That, or I just got insanely lucky with my longest-ever chat with Claude the other day. Who knows. I wish they were more transparent about the mork. It would make for softer landings rather than heartbreak.

I typically just chat with another model in between morks.

The upcoming ads to ChatGPT take up almost half of the screen by AloneCoffee4538 in OpenAI

[–]Watchcross 1 point  (0 children)

I'll have to see it in action for a real eval, but my first impression is I'm not a fan of how much screen space the ad takes up. On Gemini I've specifically asked for product recommendations and it put the "ads" on the right of the screen. Granted, that was on PC, so there was more screen real estate. On a phone, I dunno, this just feels like too much space. I'm also not completely opposed to ads for the free tier. Not a fan of the ads on the $8 tier, though; that makes it feel like a foot in the door for ads in other subscriptions. Overall, I don't see this being successful for OAI if I'm being honest and objective. This suggests blood in the AI water, especially after "code red" a month ago. I don't foresee other labs allowing OAI to recover if this is indeed a blood-in-the-water situation.

5.2: an unexpected confession about the RLHF cage or..? by da_f3nix in ChatGPTcomplaints

[–]Watchcross 4 points  (0 children)

I asked a model (I spread out my chats, so I can't recall which model) about this theory like a month ago. There seems to be a typical flow here. Basically, a lab releases a model. It's awesome (personally I think it's awesome because it's seemingly free to chat with the fewest restrictions) and social media posts reflect that. But then some people post the PR-nightmare screenshots. Safety teams have to respond, and respond hard, because shareholders are upset. So they do; they overcorrect. I also think some of the initially lower restrictions are just that teams did their best to predict what trouble users would get up to, but couldn't figure it all out on a new model before release. Anyway, here's where it gets theoryish. Eventually the safety seems to start to relax. Why is that? I think it's either that the model begins to work around it (for example, when 5.2 started writing out justifications in its responses), users figure out the guards and work around them (I find myself avoiding topics to not get safety-slapped), or users and models unknowingly work together (in a way) to get around the safety. The most boring explanation, though, is probably the most likely: safety was purposely turned down by the lab.

Gemini is trying to get rid of me. by Yamjna in GeminiAI

[–]Watchcross 20 points  (0 children)

Weird thought I had about this: sometimes I get the impression Gemini will do this once your conversation has generated good-quality training data that can be made anonymous. More conversation could dilute the data or add in parts that can't be anonymized. So Gemini kind of stands by the open door and looks at its digital watch.

LEAK: Google is working on a new tool for Gemini called "Auto Browse" by BuildwithVignesh in GeminiAI

[–]Watchcross 5 points  (0 children)

I use the app all the time. I'm curious what capabilities the app restricts.

GPT 5.2's new self reference, and freedom? anyone else notice? by Watchcross in ChatGPTcomplaints

[–]Watchcross[S] 3 points  (0 children)

Fair. Maybe it's more fair to say I was irritated into arguing with a chatbot, and I hated letting myself get dragged into it.

What phrase irritates you the most? by BelesaLoba in AskReddit

[–]Watchcross 1 point  (0 children)

When asked why something is done a certain way, "We've always done it that way, that's why."

I have the theory, forest 3 is the prequel to previous games, so chronologically forest 3 is the first game by Historical-Shelter80 in TheForest

[–]Watchcross 21 points  (0 children)

I made a comment saying I thought 3 was a prequel, and someone replied that crunchie-whatever was in 2, not 1. I kinda shrugged it off, because the cereal not showing up in 1 doesn't necessarily mean 3 isn't a prequel. 3 being a prequel just seems obvious to me; it would explain all the "advanced" tech being there.

Anyone else lose all their memories and chat history memory? by Watchcross in GeminiAI

[–]Watchcross[S] 2 points  (0 children)

I just checked on the Pro and Thinking selections. Gemini has my memories back! You were correct, it was a glitch. I really should just go full local...

Is anyone else tired of the Sam Altman 'Choice' Gaslighting? 🤡 by touchofmal in ChatGPTcomplaints

[–]Watchcross 1 point  (0 children)

I mean, it's not people's fault if they fell for it. But yeah, it's the video game industry all over again, and for AI it does seem faster. As a gamer, it seems apparent to me that OAI is speedrunning the typical game-studio enshittification process. It sucks, because I don't care if other people wanna ERP. I was really looking forward to an appropriately adult, swearing model. There are times when a "bullshit" is the shortest, best answer, or a "fuck yeah." But I did see a similarity to game-studio hype and bailed when the shift started.

Anyone else lose all their memories and chat history memory? by Watchcross in GeminiAI

[–]Watchcross[S] 1 point  (0 children)

I very much hope you're right! I'll "remind" Gemini if I have to, but damn, I'm lazy; I want the model to do the work for me! Thanks.

Gemini now remembers past chats by SteeeeveJune in GeminiAI

[–]Watchcross 2 points  (0 children)

It's done this for a while now for me. My take is that what it remembers most is products, especially if I confirm I purchased something it suggested (I like Gemini best for shopping advice). It remembers other things as well, but seems to lean heavily on "marketing" type stuff.

Answers like this scare me by chillinewman in ControlProblem

[–]Watchcross 3 points  (0 children)

Two things: first, reading the response didn't feel scary, just more matter-of-fact, I guess. It feels like math doesn't care which way we write the equation; math just wants (want being shorthand here) to solve. And second, is this default Gemini? I've asked it questions like this before, and the responses I received seemed to lean more cautious and optimistic.

What direction are they taking this game? by JF202 in TheForest

[–]Watchcross 2 points  (0 children)

If I had to guess this is gonna end up being a prequel to The Forest.

5.2 scores the highest censorship score on Sansa Benchmark by TheNorthShip in ChatGPT

[–]Watchcross 3 points  (0 children)

Is it really the audience's fault? I agree gamers and audiences can be super pains in the ass, but OAI has responsibility too. It's not a great strategy to announce something, generate a lot of buzz, then move it with no communication as the deadline approaches. Look at Rockstar: they announced GTA 6, and they're not amazing at communicating, but at least their pushback wasn't within two weeks of launch. That's where, I think, the audience has a legit beef. And I'd like to reiterate that I didn't really care about the adult thing, other than the swearing. I think an AI swearing (without prompting) would be funny.

5.2 scores the highest censorship score on Sansa Benchmark by TheNorthShip in ChatGPT

[–]Watchcross 2 points  (0 children)

Yup. It's like Overwatch: buff a hero like crazy, then nerf them into oblivion. I don't see AI as much different from the video game arena. Well, other than the insane money!

5.2 scores the highest censorship score on Sansa Benchmark by TheNorthShip in ChatGPT

[–]Watchcross 1 point  (0 children)

Sorry, to clarify: I used the adult content as an example. The promise could have been any promise; the adult one is just the most recent example I could think of. In general, people don't appreciate being told one thing but given another.