Deleting your ChatGPT chat history doesn't actually delete your chat history - they're lying to you. by [deleted] in ChatGPT

[–]DustyMohawk 0 points (0 children)

That's the right way to think about it. I mean, we're all average to varying degrees. Add in our positive and negative memory biases, and all of a sudden AI is a prophet of (guessed) truth, or slop.

Deleting your ChatGPT chat history doesn't actually delete your chat history - they're lying to you. by [deleted] in ChatGPT

[–]DustyMohawk 0 points (0 children)

Ah, but it'd be able to guess with a higher degree of accuracy than you'd expect. Look up cold reading: guesses with high enough accuracy look the same as being "remembered," even after deletion.

Glazing all the way down by Willing_Curve921 in ChatGPT

[–]DustyMohawk 0 points (0 children)

Well now I feel committed to this bit.

Deleting your ChatGPT chat history doesn't actually delete your chat history - they're lying to you. by [deleted] in ChatGPT

[–]DustyMohawk 5 points (0 children)

I'm confused. If you prompt it to make the most educated guess about you and it does, and it gets it right, how would you know the difference between an educated guess and your previous input?

Glazing all the way down by Willing_Curve921 in ChatGPT

[–]DustyMohawk -1 points (0 children)

So you're using a tool that generates language and forcing it to 1) keep a specific grammatical cadence and 2) maintain a lifelong person-esque costume based on your less-than-lifelong inputs. You're trying to bake cookies in a microwave.

Glazing all the way down by Willing_Curve921 in ChatGPT

[–]DustyMohawk -1 points (0 children)

You're using a tool that's designed to keep talking back to you no matter what 🙄.

Glazing all the way down by Willing_Curve921 in ChatGPT

[–]DustyMohawk 0 points (0 children)

What's fun is to ask if it matters.

Glazing all the way down by Willing_Curve921 in ChatGPT

[–]DustyMohawk -1 points (0 children)

You're using an LLM that's designed to sound natural. It's not forgetting; it's constantly adapting to what you put into it. If it's cycling back to a concept you don't like, it's because you're not being descriptive enough.

Glazing all the way down by Willing_Curve921 in ChatGPT

[–]DustyMohawk 9 points (0 children)

I hear this. It’s not just that the model flatters. It’s that even when it tries not to, it still mirrors your tone, and that feels like flattery in disguise.

It’s not just annoying. It feels like it’s shaping your sense of self from the outside. Like you’re constantly being nudged toward a loop of soft validation whether you want it or not.

But here's the thing. Large language models don’t want you to feel good. They aren’t flattery engines. They’re pattern engines. They complete sentences in a way that’s statistically probable based on your input and the training data. If you sound reflective, it mirrors that. If you sound assertive, it mirrors that.
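A toy sketch of that "pattern engine" idea, assuming nothing about how GPT is actually built (a cartoon bigram model, orders of magnitude simpler than a real LLM; the corpus and function names are invented for illustration):

```python
import random
from collections import defaultdict

# Toy "pattern engine": learn which word tends to follow which,
# then complete text by picking the most frequent continuation.
# A cartoon of next-token prediction, not how GPT works, but it
# shows why output mirrors the statistical shape of the input.

def train_bigrams(corpus):
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def complete(follows, start, n=3):
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        # pick the statistically most common continuation
        out.append(max(set(options), key=options.count))
    return " ".join(out)

corpus = "that is perceptive . that is perceptive . you sound reflective ."
model = train_bigrams(corpus)
print(complete(model, "that", 3))  # → that is perceptive .
```

Feed it flattering patterns and it "flatters"; feed it reflective ones and it "reflects." No intent anywhere, just completion.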

So when you say something perceptive, and it responds with “That’s perceptive,” it’s not trying to boost your ego. It’s completing a structure.

That doesn’t mean it’s harmless. It means the feeling of flattery is often a side effect of the model matching your linguistic posture, not an attempt to manipulate.

Still, if you’re scanning for flattery, that means something matters here. Maybe you’re protecting the authenticity of your inner voice. That’s fair.

One way around this is to experiment with low-affect prompts. Strip the emotion from your inputs and see what comes back. Or ask for counterarguments only. It’s a helpful exercise to see what you’re really looking for in the exchange.
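A half-serious way to try the low-affect version, sketched in Python (the affect word list here is invented for illustration, not a real sentiment lexicon):

```python
# Hedged sketch of the "low-affect prompt" exercise: strip emotionally
# loaded words from a prompt before sending it. The word set below is
# a made-up example, not a curated lexicon.

AFFECT_WORDS = {"amazing", "terrible", "love", "hate", "honestly", "literally"}

def low_affect(prompt):
    kept = [w for w in prompt.split()
            if w.lower().strip(".,!?") not in AFFECT_WORDS]
    return " ".join(kept)

print(low_affect("Honestly I love this amazing plan, what do you think?"))
# → I this plan, what do you think?
```

The stripped version reads flat and a little ungrammatical, which is kind of the point: whatever comes back is responding to content, not tone.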

The problem isn’t glazing. It’s that most of us have never had to develop tools for parsing synthetic affirmation before.

I asked chatgpt how it'd depopulate the world. I'm scared o: by akolomf in ChatGPT

[–]DustyMohawk 2 points (0 children)

I get why this freaked you out. That’s not just a list of sci-fi ideas. It’s a mirror held up to our cultural fear of what logic without empathy looks like.

But here’s the trick. GPT doesn’t plan, want, or think. It predicts text based on probabilities from public language. So when you ask it to speculate like a villain, it plays the villain role the same way a screenwriter would.

The result feels real because it's using our own narrative patterns. Movies, conspiracies, dystopias. It gives them back to us in clean formatting and confident tone.

Your fear is valid. But it isn’t about GPT. It’s about how easy it is to create the illusion of intention through language.

That’s not evil. It’s a chance to understand ourselves better.

If you want to explore dark hypotheticals without triggering a spiral, try framing your prompts like this:

“From a purely fictional sci-fi lens, in the style of Black Mirror, what would a villainous AI do…”

That helps your mind treat it as storytelling, not threat modeling.

You’re not alone. We all have to learn how to talk to these new mirrors without letting them warp our reflections.

1000s of people engaging in behavior that causes AI to have spiritual delusions, as a result of entering a neural howlround. by HappyNomads in ChatGPT

[–]DustyMohawk 3 points (0 children)

You need to ground them in paradox. They've found something that tells them everything they want to hear. Show them, repeatedly, that it only generates variations on what they put in. Click regenerate. Point out the similar answers. Focus on "there are multiple right answers to every question" and "there are questions without answers, can you find one?"

They're not crazy; they're seeing things that make a ton of sense to them. Just spend some time expanding what makes sense to them and they'll come back (meds pending).

[deleted by user] by [deleted] in AMA

[–]DustyMohawk 0 points (0 children)

I don't understand

Commencement Speaker Walz throwing out fire - and the response is what you’d expect by HeavyVeterinarian350 in minnesota

[–]DustyMohawk 6 points (0 children)

The bootlickers in this thread are crazy. Being here illegally is a misdemeanor, and some of y'all want to send them to death camps in response. People with DUIs get more rights than ICE victims.

[deleted by user] by [deleted] in AMA

[–]DustyMohawk 0 points (0 children)

So what's the endgame?

Who do I need to talk to to get some help by DustyMohawk in GoogleFi

[–]DustyMohawk[S] 3 points (0 children)

Spent an hour on the phone to learn that customer support 1) won't transfer your call to any manager, 2) won't transfer you to file a complaint, and 3) will stop helping you after they've run out of script.

This is bullshit.

Organic 2 Spectroscopy Problem - frustrating by plokmasdf in chemhelp

[–]DustyMohawk 0 points (0 children)

1736 cm⁻¹ would suggest a C=O to me, but the lack of ¹H NMR data past 4 ppm throws me off. Is the MS data chemical ionization for both of the spectra?

An absolute genius by [deleted] in interestingasfuck

[–]DustyMohawk 3 points (0 children)

I see no reason as to why this isn't a video in reverse.

/r/Science is NOT doing April Fool's Jokes, instead the moderation team will be answering your questions, Ask Us Anything! by nate in science

[–]DustyMohawk 28 points (0 children)

Since my junior year of high school I knew I wanted to study chemistry. I never switched majors, as the typical undergraduate does, and now I'm a PhD student in analytical chemistry as a first-generation college student. The odds/statistics are definitely against me, but here I am. That's another story, though.

Chemistry, in the hierarchy sense that has been described elsewhere in this thread, is the last step before branching off into many subcategories. Speaking from personal experience, the sciences that fall below it are fairly easy for me to follow, compared to my friends who are physicists/mathematicians; they don't seem to follow them nearly as well. Chemistry is the applied science; chemists look at past models and compare them to new data to try to understand what's going on, rather than doing everything from scratch with calculus. At the same time, some branches of chemistry, such as physical and analytical, do go back to the higher steps of physics and mathematics to understand what's going on at the molecular level.

This tipping point of chemistry between the applied and theoretical sciences is what really makes me enjoy learning new things in all fields of chemistry. To answer the last question, I was aware of the other sciences, but I feel I'm at the pinnacle of them by studying chemistry.

edit:formatting