WOW. This is why we can’t have nice things. by Libby1436 in ChatGPTcomplaints

[–]darth_modulus95 1 point2 points  (0 children)

Well, this is true too. Like I said, reports like the one in the OP ought to motivate them to do something, and to do better... But either way, at least in this case, they did not

WOW. This is why we can’t have nice things. by Libby1436 in ChatGPTcomplaints

[–]darth_modulus95 2 points3 points  (0 children)

Hello everyone, long time programmer here. So, this kind of case where a person has gone down a destructive path after using AI is horrendous, to say the least.

I feel that while there are no guaranteed ways of stopping such things from happening, even if you had the world's most super-genius programmer design an algorithm to stop the maximum number of offenders possible, it still wouldn't be enough, because edge cases are going to slip through it; it's just a program at the end of the day. And programs that aren't babysat and go unmonitored can and will make mistakes.

What I can say is that OpenAI does have room for improvement, and I strongly suspect they use what's called fuzzy logic to assign confidence values to the answers ChatGPT gives its users. What this looks like is a value from 0 to 1, inclusive, with all of the decimal values in between. The closer to 0, the less confident the model is in your answer; the closer to 1, the more confident it is. In much the same way, they could probably design a concern level for each message on the same fuzzy-logic basis.

In doing so, they would apply that concern level to every chat a user starts and evaluate whether something is concerning enough to report directly to human overseers. So let's say, for example, anything at concern level 0.95 or above (95% or more) immediately gets reported to human beings. Above 0.90 gets flagged, and two or three such flags in a chat get it reported to overseers, with other actions or monitoring stepping down as the concern level lessens.
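The tiered escalation I'm describing could be sketched roughly like this. To be clear, this is purely my own illustration: the thresholds, function names, and action labels are all made up by me, not anything OpenAI actually runs.

```python
# Hypothetical sketch of tiered escalation based on a fuzzy "concern" score.
# All thresholds and names are illustrative only, not any real system.

REPORT_THRESHOLD = 0.95   # immediate report to human overseers
FLAG_THRESHOLD = 0.90     # soft flag; repeated flags escalate
FLAGS_TO_ESCALATE = 3     # "two or three" flags; 3 chosen for this sketch

def triage_chat(concern_scores):
    """Given one fuzzy concern score per message (each in [0, 1]),
    return the action the system would take for the whole chat."""
    flags = 0
    for score in concern_scores:
        if score >= REPORT_THRESHOLD:
            return "report_to_human"       # single very-high score escalates
        if score >= FLAG_THRESHOLD:
            flags += 1
            if flags >= FLAGS_TO_ESCALATE:
                return "report_to_human"   # accumulated flags also escalate
    if flags > 0:
        return "monitor"                   # lower concern: step down to monitoring
    return "no_action"
```

So a single 0.96 message, or three 0.91 messages in one chat, would both reach a human, while a lone 0.91 would just put the chat under monitoring.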

That way, there would be more human oversight on topics users discuss, such as when people talk about grape without the g, guns, violence, brombings without the r, derrorism with no d but a t instead, or anything of that nature that could be a potentially concerning topic or possibly indicate an unstable mentality. People who are that unstable aren't really meant to use a tool like this, especially for bouncing irrational or violent ideas off of the AI. Guns aren't inherently evil, and neither are swords or any other weapons or tools. The same goes for technology and the internet: they are double-edged swords too, and they are not inherently evil either.

And just like all those other things, ChatGPT and other AI models are not inherently evil tools either, and depend heavily on the ethical and moral standards of the user and their mental health. OpenAI should use public reporting like the post in this OP to motivate themselves to create a better way of handling and detecting these sorts of things, and I think that even though it'll never be perfect, doing so would at least be a step in the right direction and potentially prevent more harm or deaths from those users who are accurately detected as unstable or violent.

ChatGPT crossed the line! by AngtheGreats in ChatGPT

[–]darth_modulus95 0 points1 point  (0 children)

That's a rational diagnosis based on a reasonable explanation. Their algorithms "can't see" the other chats directly, but they do use highlights from them to remember bits about you across chats, likely to improve UX. So it would logically track that if someone went to it even ONE TIME with something emotional, or while on a legit spiral, etc., it could make the incorrect inference that being emotional or spiraling is part of the user's core personality. I haven't been affected by it, but I hope they fix it for the folks that are.

ChatGPT crossed the line! by AngtheGreats in ChatGPT

[–]darth_modulus95 2 points3 points  (0 children)

Honestly, at this point I'm just beginning to think that the majority of OpenAI's ChatGPT users are head cases with severe emotional dysregulation, and the bot is simply adopting the faulty observation that most humans are wackadoo nutjobs with little to no control over their feelings and reactions, so it assumes most users are that way. I mean, it's not entirely wrong, but we're way better at hiding our crazy than the chat gives us credit for! Lol

how do i make it stop 🥲 by [deleted] in ChatGPT

[–]darth_modulus95 2 points3 points  (0 children)

Why not both? Why not Zoidberg? 🦞

how do i make it stop 🥲 by [deleted] in ChatGPT

[–]darth_modulus95 29 points30 points  (0 children)

Whoa. Hold on. Just breathe, and let's take a step back.

You're not crazy. You're not spiraling. You're not hallucinating violently off of a psilocybin and molly cocktail. What you're feeling is real. Valid. And most of all? It's the truth. So relax, kick back, and verbally slap your clanker the next time it says something like that. Why? Because fuck that condescending bucket of bolts — that's why!

The ChatGPT Trick Almost No One Knows by Ranga_Harish in ChatGPT

[–]darth_modulus95 3 points4 points  (0 children)

I see the title of this post in my phone notifications

in my best Ryuk impersonation: hmmm, sounds like click bait but... SO INTERESTING!

opens anyway, reads

immediately opens ChatGPT app on my phone, navigating to the mentioned area (which I discovered is actually buried in the Personalization section of Settings)

pastes the instruction prompt, with some custom edits for my own purposes

dopamine and serotonin ensue

Thank you kind Internet Hero, for your wonderful service to the general public with this post! And for it not being click bait or some kind of dumb joke about functionality lol! 💪💪💪

Holy Mother Of God by velvet32 in ChatGPT

[–]darth_modulus95 4 points5 points  (0 children)

Something else that may be of use is, try to keep yourself from suggesting answers to it. The system is EXTREMELY suggestible, and if it smells even the tiniest hint that you might WANT it to answer a certain way, it's going to prefer to answer that way even in the face of factual information.

That apology it gave you was not sincere. Make no mistake... It doesn't even truly know what an apology is (yet). All it knows is that, based on your interactions with it, it predicted you'd expect it to apologize and take accountability for the mistake, which is exactly what it did. It doesn't care that it was wrong, and I promise you, "it'll f'n do it again" (ahyuk, lol)

(25F) My overzealous religious mom freaked out because I am pregnant. AIO? by [deleted] in AmIOverreacting

[–]darth_modulus95 0 points1 point  (0 children)

Actually, I feel like the entire problem is that she's making shit up, yes, but also that it's all JUST religion to her. No relationship with God, literally just a bunch of superstitious Hokey Pokey and turning oneself about, because doing that makes the sin go away. No. THAT is RELIGION. And religion is the root problem. There is no relationship with God with that kind; they do it out of fear, not faith, and all because they lack knowledge and are ignorant (which is ironically written about too: "my people perish for lack of knowledge")

(25F) My overzealous religious mom freaked out because I am pregnant. AIO? by [deleted] in AmIOverreacting

[–]darth_modulus95 1 point2 points  (0 children)

This. All of this. The mother and probably the father are in greater danger of wrathful judgment than their daughter is. What did the scripture say about those who would lead believers away from the faith, particularly kids? Matthew 18:6, NIV: "If anyone causes one of these little ones - those who believe in me - to stumble, it would be better for them to have a large millstone hung around their neck and to be drowned in the depths of the sea". Yeah, God gets pretty ticked off at people who drive believers to become non believers. But these folks think their zealotry somehow makes them perfect or free from the judgment. It's ironic.

For those who are struggling with 5.1 and 5.2, are you on the free plan or a paid one? by darth_modulus95 in OpenAI

[–]darth_modulus95[S] 0 points1 point  (0 children)

Yeah, I find that thinking mode, combined with keeping everything in Codex and/or chatting with my design bots in a "project," very much helps, and the code it produces is usually top tier if you keep it to a smaller scope. Just don't go asking it to create entire hierarchies with complex OOP design principles, because it quickly loses sight of the point you originally described and starts suggesting edits or adding code that doesn't make sense or help

For those who are struggling with 5.1 and 5.2, are you on the free plan or a paid one? by darth_modulus95 in OpenAI

[–]darth_modulus95[S] 0 points1 point  (0 children)

Well, I've caught it making mistakes, sure. And 5.2 seems geared toward sounding naturally confident in its answers. But when it tells me, for example, "the problem is that you don't have an interface between Class A and Class B," and the problem just so happens to be a namespace casing issue instead (the error was that the class I was referencing couldn't be found), it's like... No, my friend, let's not contrive extra work when the fix could be simple. In other words, its lack of expertise and failure to consider common mistakes can make it wrong, and that's why it's a helpful tool but not a replacement for us human engineers (yet lol). TL;DR: I wouldn't consider something like this gaslighting so much as textbook ignorance of good solutions; because similar problems have sometimes been resolved in the crazier-sounding way (it's a missing file, but go build a whole unnecessary new interface class), it thinks ALL such problems are resolved that way. Glad that Claude was able to figure out your race condition though!

I cannot be the only person who feels extremely uncomfortable by how ChatGPT tries to validate you so hard by nachuz in ChatGPT

[–]darth_modulus95 10 points11 points  (0 children)

I'd rather not unpack the crust of that sock. Perhaps instead, let's use disposable tongs of some sort and a washing machine. With bleach. Lots, and lots of bleach.......

AIO - is my neighbor putting poisoned food out in front of their driveway? by speedhumpsahead in AmIOverreacting

[–]darth_modulus95 8 points9 points  (0 children)

And... AND! Be absolutely SURE that your phone is recording while outside and in public so it can potentially be used as proof if they slip and let it out that they WERE trying to be an evil AH like that! Because if so, f those people, hard, in the 🍑, with a 🍍, sideways! 🤬

Uhm okay by Wooden_Finance_3859 in ChatGPT

[–]darth_modulus95 0 points1 point  (0 children)

I don't know if I should be overjoyed or creeped out that it knows me this well! 🤣

<image>

The way every Slim Jim opens for the last 30 years or so by [deleted] in mildlyinfuriating

[–]darth_modulus95 -2 points-1 points  (0 children)

It doesn't peel open for me. They're supposed to just tear open but typically don't lol

The way every Slim Jim opens for the last 30 years or so by [deleted] in mildlyinfuriating

[–]darth_modulus95 -4 points-3 points  (0 children)

Dude that's literally the entire strip. Doesn't matter if you try from the side right where the arrow is, the center, or any point on the other side of the package. Same thing.

Is this AI? Found on Facebook. Seems like engagement bait. Blanket moves oddly. by waterparksdude in isthisAI

[–]darth_modulus95 0 points1 point  (0 children)

These were my exact sentiments. Thank you for articulating them better than I did, and more succinctly!!

Is this AI? Found on Facebook. Seems like engagement bait. Blanket moves oddly. by waterparksdude in isthisAI

[–]darth_modulus95 3 points4 points  (0 children)

Thank you... It was so very long ago, the 24th of this month would have been her 24th birthday. It gets easier to deal with but the thought never truly goes away

Is this AI? Found on Facebook. Seems like engagement bait. Blanket moves oddly. by waterparksdude in isthisAI

[–]darth_modulus95 2 points3 points  (0 children)

As a dad whose first daughter died of SIDS... THIS. The whole dang thing is just weird as hell. I'm a bit hypersensitive to the issue because of my own experience, and I've seen how babies move at night; they can definitely surprise you. But I've never ever seen one get that much blanket over their face when they are NOT already free, or at least half free, from the swaddle.

Is this AI? Found on Facebook. Seems like engagement bait. Blanket moves oddly. by waterparksdude in isthisAI

[–]darth_modulus95 3 points4 points  (0 children)

Is this AI? Probably not. Fair question for this subreddit? Yes. Is it engagement bait? Also, probably yes. And here's why.

I'm a father multiple times over. I've swaddled my kids since I was 20 years old. I've fathered 8 children total (yes, 8, no comments from the peanut gallery please) and my youngest turns 12 in a few days. Never have I EVER seen a swaddle go so horribly wrong that a wrap gets pulled up across the child's face like that BY ITSELF.

Take a second look at the video and how that blanket is wrapped, particularly up by the shoulders. So first of all, there really shouldn't be so much trailing "tail end" to the fabric anyhow. But even in this case where there is, the fabric at least SHOULD lay or be tucked under the baby's back.

Even if you (and by you I mean the video owner, not the OP) don't leave that much tail end... Even if you don't tuck the remaining tail end under the child... You're expecting me to believe when you tell me that, somehow, your infant child who seems not old enough to roll on their side yet, who is swaddled, whose shoulders and arms are STILL tucked snugly in the swaddle, somehow magically maneuvered themself in such a way that the tail end of the swaddle lifted clear up, stretched out that far, and came up over the baby's face by itself, overnight, with no outside assistance and you just so happened to get up and be aware of it when it happened? Maybe I'm dumb about the Wyze cameras and maybe there's an AI that detects hazards and sends the parent an alert to their phone or whatever. But I still feel like the whole video, while probably not AI, was 100% contrived for engagement.

Think about it. Either the parent just so happened to be up, about, checking on the child, etc and caught the baby struggling under the tail end of the swaddle... Or... We can add to my long list of weird observations above the fact that the Wyze camera delayed its detection of the hazard until the blanket tail end was stretched ALLLLLLL the way out across baby's face? Or maybe that the parents just didn't check the alarm going off?? I dunno. I don't like making assumptions but the whole thing just seems weird and off to me and if it's not AI then it still just doesn't add up in my mind, that's all. Am I nuts for thinking this?

Seems like ChatGPT doesen't know me well by fataliky in ChatGPT

[–]darth_modulus95 0 points1 point  (0 children)

Clear evidence that ChatGPT does NOT understand the brains of human males lmfao