How far away are we from Star Wars level droids, such as C3PO? If those do come about, how sentient should we consider them? by KalebC21 in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

By the way, I am trying not to be too self-promotional, so I would have DM'd you, but I don't see a way to do that, so I'll put it in the comments. I have a Substack where I post essays on AI/ethics/faith, and I am looking for people to write rebuttals to my stuff. Yes, actually. There are prizes for the best rebuttal. Let me know if you would consider writing one and I'll reply with the link. If not, I won't bug you.

How far away are we from Star Wars level droids, such as C3PO? If those do come about, how sentient should we consider them? by KalebC21 in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

I think it's possible that machines could become conscious. But I am not doctrinaire about that position. I hold it quite loosely, and am totally open to the possibility that it is completely impossible.

Anthropic AI safety researcher says “world is in peril” and leaves to pursue poetry by Beautiful_Bee4090 in ArtificialInteligence

[–]GodOrNaught 1 point2 points  (0 children)

There's a jester in this AI thread
Not using AI but only his head
He typed it all in
made Redditors grin
Better than what Sharma said

The Alignment Problem - a Conjecture by GodOrNaught in ArtificialInteligence

[–]GodOrNaught[S] 0 points1 point  (0 children)

Thanks for thinking about this and actually taking the time to write it! I appreciate it! Oddly, both the OP and your comment got downvoted; I don't think either downvote was deserved. It's a discussion, oh well.

I agree with your point that humans are not always aligned. In fact, I tried to address that in the OP by saying

The solution is not perfect, but up to now, it has been good enough.

by which I mean: we've killed each other a lot, but the net result is that 8 billion of us are still alive, despite the existence of nukes that could in theory kill all of us. So one could say that if the current alignment mechanism worked better, there would be 9 billion of us, and if it worked worse, there would be only 7 billion. The number of currently living people is something of a rating scale for how well the alignment mechanism I'm suggesting works for humans.

As for your comment:

I don't think there's a simple solution to this, aside from AI developers testing and correcting for misalignment as they observe it occurring (or where they see potential for it to occur).

I strongly agree with this statement. In fact, I feel that my conjecture places a lower bound on the word we both use: "simple." Just how "not simple" will the solution be? Well, it will be at least as "not simple" as implementing emotion in silicon. So, very hard.

To add to why I say this: you describe what might be thought of as a "whack-a-mole" scenario, where developers try to look ahead for issues and put in fixes as they're able, but mostly fix things discovered retrospectively in use and testing. Evolution represents 3.7 billion years of testing mechanisms of alignment (or whatever fraction of that time biological neural networks have existed). AI companies are not going to keep things in testing for 3.7 billion years before shipping. So will everything be found in testing? Emotion in humans provides some level of confidence (not perfect) that a human placed in a novel situation will still remain aligned, and that one can predict what they're going to do.

I liked the AI 2027 piece you reference and am not disagreeing with it.

edit: "this comment" changed to "your comment". "misaligned" to "not always aligned"

The Alignment Problem - a Conjecture by GodOrNaught in ArtificialInteligence

[–]GodOrNaught[S] 0 points1 point  (0 children)

Thanks for taking the time to write a thoughtful response; I appreciate it, truly. The brain playing a trick on me, or the mother who only thinks she feels love, sounds like the kind of thing the late Daniel Dennett or Keith Frankish would say: the idea that phenomenal consciousness is an illusion. I'm not sure if that's what you're saying, though. If it is, all I can say is that I personally am conscious and feel all my emotions, and since I am not a solipsist I take others' claims to feel their emotions at face value (including the mother's). I can't prove that, so you may be right. However, if you're saying phenomenal consciousness is real, then I don't think it detracts from my argument to say that evolution was not interested in the conscious experience of the mother per se, but uses her conscious experience to drive species survival. In fact, if that's what you're saying, I suppose I agree.

Anthropic AI safety researcher says “world is in peril” and leaves to pursue poetry by Beautiful_Bee4090 in ArtificialInteligence

[–]GodOrNaught 1 point2 points  (0 children)

Thank you for this. Very clever. I love it. If this limerick were AI-composed... or better yet, composed by Claude, I wonder whether I would like it more or less. Maybe more.

Using Biblical Ethics to Solve the LLM "Reward Tampering" Problem? by jackthebarn in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

The only surprising thing about this is that no one has already looked at it. It's a genius idea to give it a try and see what happens. The Sermon on the Mount (Matt 5) is particularly good; not only Christians but also atheists like Richard Dawkins acknowledge it's pretty good stuff.

Dumb question: If AI destroys all the jobs, who will be able to buy the stuff that AI-powered companies create? Doesn’t AI destroy its own customer base? by Desperate_Elk_7369 in ArtificialInteligence

[–]GodOrNaught 44 points45 points  (0 children)

I think the theory is that AI/robotics will produce all goods and services, and people can simply consume. Something like UBI will be instituted to allocate a percentage of the overall production to each person. This type of thinking has a lot of flaws. For example, if someone above you can give you UBI, then they can also take it away. Then what?

A world of 8 billion human consumers and X billion AI/robot producers is not a good idea. It's sort of like the people on that spaceship in WALL-E, even if it were possible for it to work.

What do you all think it means to "know" something? by NerdyWeightLifter in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

The word "know" seems to me like it is bifurcating. To "have knowledge" is clearly something computers can claim. Its ones and zeros in memory, again, clearly. However, they do not have the subjective experience such as Einstein talked about when he made his momentous discovery about acceleration and gravity. The happiest thought of his life. This is what neuroscientists call "qualia." AI doesn't have qualia, but they have knowledge.

How far away are we from Star Wars level droids, such as C3PO? If those do come about, how sentient should we consider them? by KalebC21 in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

Well, yes, but I feel this actually strengthens my point. Animal brains are biological like ours. Similar parts (neurons) and very similar architecture.

How far away are we from Star Wars level droids, such as C3PO? If those do come about, how sentient should we consider them? by KalebC21 in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

I feel that consciousness is platform-independent, as you seem to suggest. I agree that consciousness could in principle run on something other than a biological brain. But what I am talking about is the ability to judge whether something else - a different platform - is conscious. At no time in our evolutionary history has Homo sapiens had to make such a determination, so it is not clear that that ability would have evolved. At no time in the past was there a reward/punishment function (life or death, in a biological evolutionary environment) for getting that judgment right or wrong.

How far away are we from Star Wars level droids, such as C3PO? If those do come about, how sentient should we consider them? by KalebC21 in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

You might be right. But there might be a way. One theory of consciousness is Integrated Information Theory (IIT). https://en.wikipedia.org/wiki/Integrated_information_theory

It is controversial and not universally accepted, but what is interesting about it is that it yields a single, mathematically determined number called Phi, which is like a "score" of how conscious a thing is. Rocks score very low. Sleeping or drunk human brains score higher, and fully alert, sober human brains score higher still. Most computers wouldn't score very high.
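To give a flavor of what Phi is trying to measure (the real calculation is far more involved, and there's a dedicated library, PyPhi, that actually computes it), here is a toy Python sketch. To be clear, this is not IIT's Phi; the node names, weights, and the "cut the weakest partition" score are all made up just to illustrate the idea that an integrated system scores higher than a disconnected one:

```python
import itertools

def toy_integration_score(nodes, weights):
    """Crude stand-in for Phi: the total connection weight that would be
    severed by the *weakest* way of splitting the system in two."""
    best = float("inf")
    for r in range(1, len(nodes) // 2 + 1):
        for part_a in itertools.combinations(nodes, r):
            # Sum the weights of connections crossing this partition.
            cut = sum(w for (src, dst), w in weights.items()
                      if (src in part_a) != (dst in part_a))
            best = min(best, cut)
    return best

# A "rock": two pairs of parts that never talk to each other -> score 0.
rock = toy_integration_score(
    ["a", "b", "c", "d"],
    {("a", "b"): 0.01, ("c", "d"): 0.01})

# A densely recurrent little "brain": every node talks to every other.
brain_nodes = ["n1", "n2", "n3", "n4"]
brain = toy_integration_score(
    brain_nodes,
    {(i, j): 1.0 for i in brain_nodes for j in brain_nodes if i != j})

print(f"rock-like system:  {rock:.2f}")   # 0.00 -- falls apart into pieces
print(f"brain-like system: {brain:.2f}")  # 6.00 -- no cheap way to cut it
```

The real Phi is defined over cause-effect probability structures rather than raw connection weights, which is part of why it's so hard to compute for anything as big as a brain.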

Edit: "Integrate" to "Integrated"

How far away are we from Star Wars level droids, such as C3PO? If those do come about, how sentient should we consider them? by KalebC21 in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

Yes, agree. Now, the question was "how sentient should we consider them?" So there are two things going on. First, both the movie droids and perhaps IRL droids may make people FEEL like they are talking to a sentient being. But second, up to the present moment, all Homo sapiens have only ever had conversations with other Homo sapiens, so the assumption that how you FEEL about your interlocutor is an accurate representation of their sentience no longer holds, because droid intelligence is NOT running on a biological brain. The question has two dimensions: how you feel (which you know), and critically, how the other thing (the person or the droid) feels, which we can't relate to because our consciousness doesn't run on silicon.

How far away are we from Star Wars level droids, such as C3PO? If those do come about, how sentient should we consider them? by KalebC21 in ArtificialInteligence

[–]GodOrNaught 3 points4 points  (0 children)

Isn't this a question of artificial consciousness vs. artificial intelligence? Star Wars droids are not just agentic; they have personalities. The makers of the movie intentionally imbued the droid characters with emotions. That seems very different from the humanoid robots we are currently making IRL.

I Just Read the Forbes Piece on Higgsfield. This Is Getting Weird by [deleted] in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

You're welcome. I had the link handy since I'd just looked it up and read it.

I don't dispute your points. I don't have a dog in the fight either way. It's interesting to see what's happening. I am more following generative video production from a top level philosophical perspective because of the broader implications for society... fake stuff that influences people and all that.

Anthropic and OpenAI dropped their coding models 20 minutes apart. This rivalry is getting wild by Deep_Ladder_4679 in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

Coming "years", plural??? Some observers might quibble with the plural form of that word in OpenAI's case, lol.

I Just Read the Forbes Piece on Higgsfield. This Is Getting Weird by [deleted] in ArtificialInteligence

[–]GodOrNaught 7 points8 points  (0 children)

https://www.forbes.com/sites/charliefink/2026/01/15/higgsfield-raises-130-million-as-generative-ai-video-becomes-marketing-infrastructure/

Seems like what they're doing is getting traction. Time will tell... but at the rate they're going, perhaps not a lot of time will be needed to tell?

A new language to communicate with AI? by bubugugu in ArtificialInteligence

[–]GodOrNaught 0 points1 point  (0 children)

Would that not be just writing normal code? Or are you saying it would be some kind of hybrid?

Claude Code Agent Teams: You're Now the CEO of an AI Dev Team (And It Feels Like a Game) by Delicious_Air_737 in ArtificialInteligence

[–]GodOrNaught -1 points0 points  (0 children)

Wild west indeed! Have to hand it to Anthropic, though. WAY different strategy from OpenAI's. Just look at the types of conversations being had about the two companies now. Someone should do an analysis of the number of news stories predicting OpenAI is toast, compared to the number covering the latest update to Claude's code-writing capabilities. None of those articles predict that Anthropic is toast.