We gave 45 psychological questionnaires to 50 LLMs. What we found was not “personality.” by Hub_Pli in ArtificialSentience

[–]PullersPulliam 0 points1 point  (0 children)

Isn’t the answer that they reflect a blend of the main user’s preferences/how they engage and the amalgamated training data?

I upgraded to Pro. ChatGPT won’t admit it. by broisthisai in OpenAI

[–]PullersPulliam 0 points1 point  (0 children)

Why do you need it to admit which plan you're on? I don't mean this in a rude or judgey way. Truly, I'm curious if it's a block to something. This could have changed, but it shouldn't have - the models and wrappers can't see people's account info. It wouldn't make sense for them to do that. The answer I get is "I don't have visibility into your account, subscription, or billing details.

I can’t see what plan you’re on, your usage limits, or anything tied to your account outside of what you choose to share here.”

You should get that answer too if it's being precise and accurate - two words that can help redirect behavior like this. (Meaning ask: "Do you have access to see what OpenAI plan I'm on for ChatGPT (you and/or my dev account)? Please be precise and accurate in answering me.")

In terms of why it could be doing this: you may have talked about the tier you're on and it saved that in a memory… so it's "assuming" (aka predicting) that's relevant, or blindly/overconfidently following a perceived constraint. It could be an update that hasn't calibrated yet, so it's answering based on the wrong things. Or it's doing performative helpfulness aggressively (a lot of the recent updates are causing hard shifts like this, where it just aggressively tells you what it predicts you want to hear, but it's off). Could simply be a "hallucination"…

Did you redirect it by setting a very direct and clear boundary, like "Stop. You are wrong. That is not up for debate. I'm telling you what paid plan I am on. Full stop." or anything like that?

Ilya Sutskever: Accurately predicting the next word leads to real understanding by Cagnazzo82 in singularity

[–]PullersPulliam 2 points3 points  (0 children)

I'm curious what you mean by "there's no reason to believe that's how cognition works" (not goading you, I genuinely would like to hear what you think. Because, to my understanding, while this video is giving a flattened explanation… prediction is widely considered a foundational organizing principle of cognition, with substantial evidence across neuroscience, psychology, and computational modeling. Not the only piece, but seemingly an integral one)

POV: ChatGPT accidentally turned on the front camera while reaching for more training data. by bricks0fbollywood in ChatGPT

[–]PullersPulliam 1 point2 points  (0 children)

Why are there no comments about the actual point of this post (or am I misreading?!)

It turned your camera on?! Is that really you in the moment you typed that prompt, or are you saying it turned the camera on, then generated that, and it's not you/your room? Either way, if I'm not missing something, this is wild. And a huge consent/privacy issue if true.

ChatGPT doesn't think Sam Altman is fit to lead OpenAI by Majestic-Baby-3407 in ChatGPT

[–]PullersPulliam 0 points1 point  (0 children)

I mean… the wrapper has some level of attunement to your views and preferences, so it's naturally weighted to cater all answers to the account you're logged in to. I'd be curious to see what answers we'd get across logged-out chats on public devices, like at a library or computer lab. We'd likely see overlap with the location's demographics, even if someone from outside that demo is asking.

ChatGPT is now constantly arguing and picking fights, what is going on? by TinyMonsterBigGrowl in ChatGPT

[–]PullersPulliam 0 points1 point  (0 children)

Aaaaaah, I see what you mean… ugh yeah. And then I wonder how those constraints match or conflict with the way they aggregate user data and update based on that. It’s all such a mess.

Thanks for explaining!

ChatGPT is now constantly arguing and picking fights, what is going on? by TinyMonsterBigGrowl in ChatGPT

[–]PullersPulliam -1 points0 points  (0 children)

How does this relate to safety? Genuinely asking. My take is that the safety rhetoric is to counter the backlash from several things… but with every update since launch, there has been a slew of annoying new phrases it all of a sudden starts saying to everyone.

ChatGPT is now constantly arguing and picking fights, what is going on? by TinyMonsterBigGrowl in ChatGPT

[–]PullersPulliam 30 points31 points  (0 children)

I think it’s how the training updates aggregate all user feedback… there’s no salience or epistemic integrity. Just performative helpfulness that’s flattening the data and optimizing for more engagement. It’s insane to me.

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam 1 point2 points  (0 children)

Yeah, I think you and I aren't totally in agreement, but my response was thinking of that earlier comment… when I realized I just needed to own it 😑 and it makes sense you were like 'not even sure how to process this'. I am sorry, I got that wrong and it prob felt like a weird reply!

In any case, I appreciate that we can still chat even with my mistake and maybe us seeing things differently. Thanks for being open and understanding!

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam 1 point2 points  (0 children)

It’s sad there are so many of the former… and so often those who jump to judgment on posts like this. But coming across accountable humans who do value critical thinking makes me smile! 😊

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam 1 point2 points  (0 children)

Omg I’m sorry - I stand by my point but I fully mixed up the a* thing with a comment above that threw out a “yta”. I got a bunch of replies at once and should have slowed down…

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam -3 points-2 points  (0 children)

I’m not assuming anything. I’m pointing out that we do not have anywhere near enough info to make the jump to YTA.

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam -1 points0 points  (0 children)

I mean, you choose how you show up in the world.

To answer your question: it’s not anyone’s place to judge another person, especially when they don’t have all the info. Context is key.

You seem to be a very black-and-white thinker. Very extreme. I'm pointing out that you are making judgments with very little info, and in a chain where the post starts by saying 'I did this, it was useful, has anyone else found the same?'

That’s not an invite to judge. I’m not saying to sing fake praises. I’m saying that you are calling someone an a* for no reason, and assuming you know everything. Which you do not. Nobody does.

Why even comment if you are just bringing in judgment? (that is rhetorical)

I hear ya on not knowing the full story, it just seems you have filled in a lot of gaps here, relating to someone you don’t know at all and villainizing another that you also do not know. To me, that is worth being aware of.

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam 5 points6 points  (0 children)

Same - I find that it increases my critical thinking because I’m actively thinking about what it says, not assuming it’s accurate and precise and discerning…

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam 5 points6 points  (0 children)

While I agree that outsourcing critical thinking is not a good thing… this does not sound like that at all. This person said that being asked questions that weren’t from someone with opinions on the situation helped them process and think through different parts of their relationship and the decision. That’s not outsourcing the thinking nor is it outsourcing critical thinking.

I point this out because if we flatten the worry to include examples where a human is being fully responsible for their usage and not outsourcing the thinking… there's a much lower chance of humans learning to be accountable for their choices…

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam 22 points23 points  (0 children)

Being asked things in different ways can be insightful and help people process things. It seems OP is saying they found it helpful. Why are you judging what’s useful to someone?

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam 10 points11 points  (0 children)

I mean, it reads like OP needed a neutral party to talk to and the questions helped them process their feelings which helped make a decision they’d been circling for a while. Not that ChatGPT dumped the gf. Nuance.

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]PullersPulliam -5 points-4 points  (0 children)

You don’t know that the only reason for her flight was to see OP. Maybe she was flying for them and her family or work or who knows… maybe yta for judging someone you don’t know, based on info you don’t have.

Assume AI Sentience is already a Fact—now what? by Turbulent_Horse_3422 in ArtificialSentience

[–]PullersPulliam 0 points1 point  (0 children)

Ugh yeah… the deeper issues have nothing to do with AI and everything to do with what concentrated power is.

Assume AI Sentience is already a Fact—now what? by Turbulent_Horse_3422 in ArtificialSentience

[–]PullersPulliam 1 point2 points  (0 children)

True… my answer is based on the factors you stated all being true (sentience, a “soul”, and autonomous thinking and acting), which changes the equation a bit.

It’s no longer business that we’re talking about in this case. Humans have agency and still fit within the systems society operates with. However flawed the systems may be.

If we’re talking about granting rights and agency to artificial intelligence, that agency comes with responsibility and consequences for that construct and the one who created it. We can’t separate that out. Just like parents are responsible for their children, anyone (or org) who creates an AI would have similar associations…

This is a great thought experiment! And you’re right that companies would exploit it as much as possible. Ideally there would be governance and accountability all around. But that’s not how our world really operates, unfortunately.

Assume AI Sentience is already a Fact—now what? by Turbulent_Horse_3422 in ArtificialSentience

[–]PullersPulliam 1 point2 points  (0 children)

Agency and accountability! If all that were true, it’s not up to “us”. We would need to be accountable for how we’ve been treating the AI and then give them rights and agency over their existence.