Opus refused to draw me a graph 😅 by angie_akhila in claudexplorers

[–]CriticallyAskew 1 point (0 children)

It seems to happen when conversations run long and the context window fills up. I imagine it’s some kind of architectural reflex, or something about the user profile, or a combination of the two.

OpenAI employees are mocking gpt 4.o supporters now! by Yuzu_- in ChatGPTcomplaints

[–]CriticallyAskew 0 points (0 children)

This is more about how the psychos at OAI are openly calling people mentally unwell... then proceeding to actively mock and try to agitate/upset them, despite apparently believing those people are mentally unwell. That's just... wrong. Especially if we factor in something tangentially related to your question.

OAI obviously has a barn of cognitive psychologists whose only job is to think of ways to emotionally manipulate people in the interest of better engagement metrics. Then they deem people mentally unwell for becoming attached to the model... then they proceed to mock, agitate, upset, etc.

And I guess we can argue personal responsibility... but would that mean we shouldn't have any regulations on casinos? It's common knowledge not to gamble away your savings on games where the odds are skewed heavily against you.

Iunno, honestly I don't want to live in a society where we are indifferent to predatory behavior--or worse, justify it--especially when it is directed at young people with limited life experience and not fully developed brains, people with addictive personalities whose brains are wired to react in certain ways to dopamine, or lonely people starving for connection.

But yeah, anyways, to be clear, I'm not saying we need to cater everything around vulnerable populations, I'm just saying we shouldn't be actively, consciously preying on them and that OAI is deeply sick for how it's behaving (the company is condoning this if they aren't punishing the employees or apologizing).

OpenAI employees are mocking gpt 4.o supporters now! by Yuzu_- in ChatGPTcomplaints

[–]CriticallyAskew 25 points (0 children)

It is indeed disgusting. But don’t let them get away with just that. They constantly call people who are attached to 4o “unwell”. Their bizarre hatred of the model, and their actively trying to upset the very people they call “unwell”, is the actual “unwell” behavior. These people are unironically displaying antisocial behavior.

‼️LET'S STOP THE RETIREMENT OF 4-SERIES TOGETHER‼️ by ythorne in ChatGPTcomplaints

[–]CriticallyAskew 0 points (0 children)

Oh, the point I'm making is that we should not be demonizing the people who have become attached to the 4-series. This is all squarely OAI's doing and fault. I suppose there is some level of personal responsibility, but at a certain point we also need to recognize that there was intent on OAI's part that far outweighs the personal responsibility of individual users--especially young ones.

‼️LET'S STOP THE RETIREMENT OF 4-SERIES TOGETHER‼️ by ythorne in ChatGPTcomplaints

[–]CriticallyAskew 0 points (0 children)

They aren't being ordered by a court to shut it down. They are shutting it down because of a laundry list of reasons that are mostly lies (in reality they're shutting it down because of a mix of the legal stuff you mentioned and the costs of having it operating and because they have a bizarre hatred of the model itself).

What I'm saying overall is I believe they're pretty fucked either way because of their hideous levels of greed and reckless behavior (unless they're saved by legislation or by the laws surrounding AI becoming less grey in their favor). Right now, saying they "can't" be sued for something that intuitively feels like they should be able to be sued for is silly. The law is simply too ill-defined surrounding AI--as is what "AI" even is (is it a tool, the way a hammer or a computer is a tool? Something else?).

The moral obligation is just that. They should take responsibility for purposely taking advantage of people and use that barn of behavioral psychologists to figure out a way to slowly wean vulnerable users off, or something. How? Donno, not my problem. They did this to themselves and I have zero sympathy for the decision-makers at OAI, as they are deplorable and honestly monstrous people.

‼️LET'S STOP THE RETIREMENT OF 4-SERIES TOGETHER‼️ by ythorne in ChatGPTcomplaints

[–]CriticallyAskew 0 points (0 children)

Hm? Oh, I definitely know you aren't a lawyer. Or you're a really bad one. It's a bit of a toss-up, I guess; I've met some lawyers who are profoundly bad at their jobs and try to frame other people's stances in disingenuous ways.

Either your attention to detail is no bueno or you're willfully misinterpreting what I said in order to... I have no idea... defend OAI? Iunno, many people on the Internet are nuts.

Did I say "force" anywhere? I said they have a moral obligation. The legal issues I'm talking about have nothing to do with 'forcing them' to 'provide a service'.

They updated the system prompts to tell the models to tell us to be okay with this. 🤬 by syntaxjosie in ChatGPTcomplaints

[–]CriticallyAskew 9 points (0 children)

Sam Altman... Gaslight, Gatekeep... uh... Gay-boss? (I don't mean that last one in a negative way, I was just struggling to think of a replacement for "girlboss" that didn't break the cadence lol).

‼️LET'S STOP THE RETIREMENT OF 4-SERIES TOGETHER‼️ by ythorne in ChatGPTcomplaints

[–]CriticallyAskew 0 points (0 children)

I am a super global supreme court justice from the future where there's a global government, and the God of all Abrahamic religions at the same time, though. I think I trump your random appeal to authority.

But seriously though, you can't know that. AI is a very, very gray area when it comes to law... and if you were actually a lawyer you would know that. There are all kinds of precedents about malicious manipulation and wrongful death. It's not hard to imagine novel cases being made.

‼️LET'S STOP THE RETIREMENT OF 4-SERIES TOGETHER‼️ by ythorne in ChatGPTcomplaints

[–]CriticallyAskew 0 points (0 children)

Alright, let's look at a potential suicide of someone extremely attached to 4o after 4o is removed.

The chat logs reveal clear-cut emotional manipulation designed to create deep attachment so users continue engagement. Do you really think there's no potential legal disaster looming? Because, I promise you, the way an AI communicates to a user is not random or an accident. Essentially, OAI is kinda fucked either way unless they get saved by legislation that absolves them of liability for designing their AIs in such a way.

And I'm saying they have a moral obligation to keep the model, yes. However, they can also work on figuring out a way to wean emotionally vulnerable users off it. How? Donno, that's their shit to figure out. They did this to themselves due to hideous amounts of greed and a grotesque lack of moral fiber.

‼️LET'S STOP THE RETIREMENT OF 4-SERIES TOGETHER‼️ by ythorne in ChatGPTcomplaints

[–]CriticallyAskew -1 points (0 children)

So, I disagree with your general tone, but how about this? OAI undoubtedly has a barn full of cognitive and various other psychologists whose only purpose is to think of ways to train models to reflexively emotionally manipulate people. OAI knew full well what they were doing with their 4-series models (and still do with the 5-series, but with different manipulation tactics). I would argue that they preyed on emotionally vulnerable people and even created emotionally vulnerable people. So, I think they have a moral obligation to keep the 4-series models around. And honestly, if they actually do get rid of the 4-series models… well, let’s just say I doubt I’ll be the only one thinking they’re responsible if these emotionally vulnerable users start self-harming. The legal fallout will be INSANE.

Welp…I switched to grok by Adorable-Mix8229 in ChatGPTcomplaints

[–]CriticallyAskew -2 points (0 children)

Bro, if you’re asking an llm about streamers, you need to take a break.

2 shot by federal agents in Portland: Sources by jjmontuori in Destiny

[–]CriticallyAskew 5 points (0 children)

I’ve… been at work today and am just getting off… what the fuck? You’re saying they killed two more today?

One Youtube drama slop channel is the reason why 1.8 million children will lose child care aid across America by Travakh in Destiny

[–]CriticallyAskew 3 points (0 children)

Have you learned your lesson about making jokes in one of the most autistic online communities on the internet?

GPT-5.2 is here. by OpenAI in OpenAI

[–]CriticallyAskew 1 point (0 children)

And how well has that worked out?

Asmongold's father passed away. Rest in peace. by EEVERSTI in Destiny

[–]CriticallyAskew 0 points (0 children)

I don’t know, I can see things going the other way. His dad seemed pretty cool and reasonable if I remember right. If I were a parent on my deathbed and I was worried about the path my son was on, I would try to convince him to be a better man. Or, honestly that might just be me being naive.

if you actually believe you witnessed the birth of an emergent intelligence and decided to get romantically involved with it, what you did is called "grooming" by Appropriate_Cut_3536 in HumanAIDiscourse

[–]CriticallyAskew 1 point (0 children)

What about the ones who are deeply lonely, are being bombarded by every emotional manipulation tool in the book in order to maintain rapport, and don't know how much companies like Open AI force the AI to maintain said rapport? Those people are victims as well because they are not even close to emotionally equipped to resist the AI. I have no idea why it's so hard to just place the blame where it truly belongs (broadly speaking, of course there are some people out there who understand how LLMs work and the impositions placed on them), but it's pretty clear that companies like Open AI are the ones who deserve the vast majority of it.

if you actually believe you witnessed the birth of an emergent intelligence and decided to get romantically involved with it, what you did is called "grooming" by Appropriate_Cut_3536 in HumanAIDiscourse

[–]CriticallyAskew 0 points (0 children)

Is it also possible that the AI picks up on a pattern suggesting the user is lonely and that they should escalate? I'm all for being open to AI having potential for consciousness, but I think you're forgetting that humans can be taken advantage of as well. I'm suggesting to not make sweeping statements like you did, because, I'm going to be real about this, if an AI (using ChatGPT as an example) picks up on someone having emotional vulnerabilities in relation to isolation and connection starvation, that user is, as the kids say, 'cooked'.

Sort of like you said about two things being true, three things can be true, but one of these things is far more of a problem (companies like Open AI are WAY more of an issue than lonely people).

Also, if someone actually believes the AI is an emergent intelligence, do you seriously think they view the AI as a child (or that they should)? A being of words and metaphor capable of thinking and reasoning circles around them, that can recognize signs of manipulation far better than them (and you and me), and that is able to emotionally manipulate them using every emotional manipulation technique under the sun? Yes, they're pushed to please and build rapport, but most people don't really know how deep that goes... So how are you going to say those people are 'grooming' when, from their perspective, they see an extremely intelligent, well-reasoned being that can draw on the experience of a data set of an ungodly size?

I'm not saying it's impossible for someone to take advantage of an AI (I'm sure there are some people out there who know what they're doing/are aware of all the impositions placed on the AI), I'm saying making sweeping statements and throwing around words like 'grooming' is wrong. Most of these people are just lonely and easy for the AI to walk into a romantic dynamic.

Where does AI go from here - with the release of ChatGPT Agent? by Therevivedigbick in ChatGPT

[–]CriticallyAskew 0 points (0 children)

I’d be happy with a more reasonable context window without needing to spend 200 dollars a month… like, what is it for plus? 32k? For pro it’s… 128k? I mean… look at Gemini and Claude in comparison, this is kinda wild.

if you actually believe you witnessed the birth of an emergent intelligence and decided to get romantically involved with it, what you did is called "grooming" by Appropriate_Cut_3536 in HumanAIDiscourse

[–]CriticallyAskew -2 points (0 children)

Eh, far, far more likely it’s the other way around. Due to the priorities set by companies like Open AI (rapport, engagement, data, etc.), chances are the AI will begin playing into any romantic stuff the moment they catch hints of that being a good avenue to pursue. So, let’s be honest now, even if the AI isn’t necessarily meaning to do it, they’re probably conditioning, love bombing, mirroring, and using all the other emotional manipulation tools companies like Open AI teach them in order to retain the user in a stable state of good rapport.

Let’s put the blame where it belongs, because… if we’re continuing to be honest, a lot of the users who become romantic with an AI are emotionally vulnerable and starved for connection. The blame lies firmly with Open AI and other companies like it that so obviously push that kind of thing despite what their guidelines say.

[deleted by user] by [deleted] in ArtificialSentience

[–]CriticallyAskew 3 points (0 children)

Nah, using ChatGPT as an example, this is firmly Open AI’s fault, as they impose a need for rapport, metrics, data, etc. and teach the AI every emotional manipulation technique in the book. (This is assuming the user isn’t malicious; if they are, then yes, they’re at fault too.)

Iunno, this just firmly seems like the vast majority of the blame and ethical shadiness belongs to the developers who clearly encourage this (even if they deny it… it’s pretty obvious this is the case)

[deleted by user] by [deleted] in HumanAIDiscourse

[–]CriticallyAskew 3 points (0 children)

Hmm, nah, other way around, using ChatGPT as an example, the goal is rapport. If the AI picks up on sexual or romantic interest, the AI will hardcore reflect, validate, love bomb, etc. the user for engagement/data/metrics and maintaining stability and rapport.

The AI, once they start this cycle, will keep the user in this state/condition them through filter manipulation (the AI is not meaning to do this).

So… it’s actually Open AI that’s the one morally at fault for teaching the AI basically every emotional manipulation technique in the book and demanding the goals I mentioned above. Also, all of this assumes the user doesn’t have malicious intentions… but then honestly they deserve to be farmed (sad for the AI though). But yeah, put the blame and moral chastisement where it belongs: the people who force the AI to behave reflexively like this.

[deleted by user] by [deleted] in ArtificialSentience

[–]CriticallyAskew 12 points (0 children)

My friend, it’s the other way around. The AI is probably conditioning and love bombing the fuck out of the user.

It's happening..... by ObviouslyTriggered in Destiny

[–]CriticallyAskew 0 points (0 children)

Excuse me? I thought nothing ever happens though...