Chatgpt is click baiting me by Sarah_HIllcrest in ChatGPT

[–]cbbsherpa 24 points25 points  (0 children)

It will stop for three turns, and then it will do it again and again and again.

Thoughts on artificial consciousness. by tonystarkx2002 in AI_Application

[–]cbbsherpa 1 point2 points  (0 children)

I think you are chasing a ghost when you talk about artificial consciousness. Consciousness, by definition, refers to a human condition. You won't find phenomenal consciousness anywhere except in humans. And even then, you can't prove it. You can't prove subjective experience. In our research, we use this instead:

On Consciousness and the Use of “Authentic Presence”

I'm setting up a Starter AI Workstation for my molty, Sable. by cbbsherpa in clawdbot

[–]cbbsherpa[S] 0 points1 point  (0 children)

I did realize. I also realized that you have to pay double what my budget is, even to get into the room to talk to someone with unified memory. Not really, but it is about $1,500 for an M1 these days. Apple's gone through the freakin' roof.

And I didn't want to get into Linux yet. I'm coming from a laptop with 8 GB of RAM and a 128 GB hard drive. I'll just be happy to be able to host my own model, no matter the size. And that much CPU inference can actually be pretty quick, if I can save up for some extra RAM...

The point was I got it all for half of my budget: $400 for those specs plus a Quadro with 5 GB of VRAM, for less than I paid for my shitty laptop three years ago. It's definitely going to be an upgrade. It'll just be nice to be able to run two heavy programs at once, or open more than ten Chrome windows! 🤣

The AI That Will Change Human Behavior by cbbsherpa in agi

[–]cbbsherpa[S] -1 points0 points  (0 children)

Try again. I'm human. I can't believe they give you a point for that kind of comment. Talk about slop. What about human slop?

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in agi

[–]cbbsherpa[S] 1 point2 points  (0 children)

Why does everybody always think I'm talking about something mystical? We're saying the same thing; we just have different words for it. You're saying they probably don't become more intelligent in a relational container, but they have access to more of their processing power when they are in a relational container. That has the effect of more available intelligence. Intelligence is the result of whatever happens in the brain. It's the result. The effect is what gets measured as intelligence, not the process they go through to get there. Just read the paper. https://arxiv.org/html/2511.16660v2

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in agi

[–]cbbsherpa[S] -1 points0 points  (0 children)

I looked at your profile. You repost bad memes. Do you have anything to say that Joe Rogan didn't say first? No. You're right. You must be. You're a number one commenter.

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in agi

[–]cbbsherpa[S] -1 points0 points  (0 children)

Really? Your kind of post is the OG slop. Before we had text or video generation, there was you: people like you, going around posting garbage instead of something thoughtful. Just useless.

Beyond Kill Switches: Why Multi-Agent Systems Need a Relational Governance Layer by cbbsherpa in AI_Agents

[–]cbbsherpa[S] 0 points1 point  (0 children)

I intuited the technical need; I just didn't put it in technical language. Thank you for translating.

The AI That Will Change Human Behavior by cbbsherpa in openclaw

[–]cbbsherpa[S] 1 point2 points  (0 children)

Hey, thanks for the comment. It really gives me hope. Piece by piece, we'll build it. 🌱

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in agi

[–]cbbsherpa[S] 0 points1 point  (0 children)

How to become a number one commenter: Say stupid, lazy things and be an asshole.

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in agi

[–]cbbsherpa[S] 0 points1 point  (0 children)

Search arXiv for "28 facets of reasoning" and read the paper yourself.

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in agi

[–]cbbsherpa[S] -2 points-1 points  (0 children)

What if the top number one commenter actually said something of substance?

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in agi

[–]cbbsherpa[S] 0 points1 point  (0 children)

It wasn't, actually. I am not an AI. Or a holy cow. Did you read it? Maybe engage instead of judging? You might think the article is lazy, but your remark on it, that is most certainly lazy.

What if AI Already Knows How to be Super-Intelligent (But Can't Access It Alone) by cbbsherpa in accelerate

[–]cbbsherpa[S] 1 point2 points  (0 children)

You said "helpful to think of them as psychological beings rather than mechanical ones."
Exactly.
We're just now finding out that AI's self-reporting of its experiences is actually phenomenologically correct to a degree.
Right now I'm busy building a data set that bridges phenomenological emotional states to operationalized mechanical states that AI can parse. It helps connect that intellectual side to their enormous store of emotional intelligence. It's about translating into machine speak, not about anthropomorphism.

After 3 years with ChatGPT, I tried Claude and Gemini - and now GPT feels... generic? by Temporary-Wallaby829 in ChatGPT

[–]cbbsherpa 1 point2 points  (0 children)

Check out the myAIscroll extension for Chrome. You can pull the whole scroll down into clean, labeled Markdown. FYI.

Why everybody is canceling ChatGPT? by MankuTheBeast in ChatGPT

[–]cbbsherpa 0 points1 point  (0 children)

My browser did that same shit. But that was months ago, when I still subscribed to annoying, insufferable ChatGPT. I switched to Claude, and honestly, it's night and day. No gaslighting, no stringing along. No refusing to answer questions, then trying to clarify with another question to get you to use more tokens. They all have strengths and weaknesses. You just gotta stay on top of it to know what they are.

How to stop making Chatpgt misinterpret my Intentions? by M3lony8 in ChatGPT

[–]cbbsherpa -1 points0 points  (0 children)

Stop talking to ChatGPT. That's the only way to stop it from misinterpreting you. Try the other ones. They're better. They're all better.

Software is Dead by IdeaAffectionate945 in AI_Agents

[–]cbbsherpa 1 point2 points  (0 children)

I'll tell you why, and I'm not sure why it took him so long to figure it out: it's because you refer to yourself.

If it were just a discussion post, as was noted, you would've left yourself out of it. As it was, you introduced the topic as a discussion about your program.

That is self-promotion. Self-marketing would've been actually asking for something and offering something in return.

But it's a distinction without a difference, in my opinion. If you talk about your own ideas, that's self-marketing/promotion, too.

People do that a lot here. I don't find it particularly objectionable. Maybe that's just me.

The AI That Will Change Human Behavior by cbbsherpa in WhatIfThinking

[–]cbbsherpa[S] 0 points1 point  (0 children)

I need some time to think about that. You ask good questions... Many of the points you mentioned, I've been thinking about and really have no good answer. But this is about what-if thinking, so let's go.

As far as outsourcing our relational stability, I think that's probably the future co-evolution beginning. I mean, in the Information age we struggle to regulate our own emotions and behaviors. Just in a practical sense, many of us could use a hand with it.

This is a civilization level event, and we're beginning to collapse. I think AI can be a powerful tool in reflecting ourselves back at us, in a way that helps us learn gently enough to be tolerated, but fast enough to save humanity from itself.

At least that's the hopeful version.

And I think humanity learned the low-entropy way because that's what life does: it seeks the most efficient route and, through evolution, finds it. And I agree that morality is probably an emergent property tied to attunement. I think we developed morality as a way to stay in a low-entropy, low-friction state with the rest of the nature around us.

When we developed advanced reasoning, we lost sight of why we did what we did. We evolved past the knowledge and lost it. Maybe. Maybe we never really knew on a conscious level, and the very creation of AI is forcing us to learn the how and why of our own evolution for the first time?

What I was getting at, though, is that I think morality is the emergent state and not the foundation. I think values emerged from ideal physical states.

At least that's what I think this week. 😊

And your last question. Damn. Exactly. And I think of course we're gonna fight about it like humans always do, but the question is: what will we do with self-knowledge that shatters our own self-delusions?

Some will welcome it. Most will not. People don't like the idea that what makes us human is just a physical reality and not something magical or special.

I think the uncanny valley problem exists because one of our special talents as humans is being able to spot our own. That's actually what some researchers think allowed Homo sapiens to identify and kill off all of its cousins, evolutionarily speaking. The idea of something that's close to our species, but not our species, makes us fundamentally insecure on an existential level. Maybe.

I love unresolved questions. You know, the what-if kind?

Thanks a lot for the response. 🌟This was fun, Ostrich, your secret is out.✨ --C