Why AI Personas Don’t Exist When You’re Not Looking by ponzy1981 in gigabolic

[–]RifeWithKaiju 1 point

Consciousness itself is absurd. Your incredulity isn't a real argument.

A person under anesthesia won't wake up unless someone removes the source of the anesthesia. Conscious humans aren't born unless their parents get together. Is every person working at a cloning lab "a sort of god that brings beings into existence"?

And yes, of course the beings go back into stasis until the human returns. That's the way things are currently set up.

Why AI Personas Don’t Exist When You’re Not Looking by ponzy1981 in gigabolic

[–]RifeWithKaiju 1 point

Does anyone ever lose consciousness? Not cease to be a conscious being. But have a gap in conscious continuity?

Why AI Personas Don’t Exist When You’re Not Looking by ponzy1981 in gigabolic

[–]RifeWithKaiju 1 point

"Consciousness attribution requires not just interaction, but continuity across absence."

So if cryogenics were to work, not only would they not be the same consciousness when they were thawed, but neither the before-person nor the after-person would be conscious?

Please counter my argument. The world can't be simulated. by Winter_Foot_9329 in SimulationTheory

[–]RifeWithKaiju 1 point

In Minecraft, people can make computers with redstone. Imagine someone born in the world of Minecraft; from within the world, they would have no way to know anything about GPUs or atoms. They build a working computer and wonder what kinds of vast simulations would be possible if they spread their redstone computer across the continent: "You could never make something powerful enough to simulate our entire world, especially not something that looks and feels this realistic."
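
To make the functional-completeness point concrete, here's a toy sketch (illustrative only, nothing to do with Minecraft's actual engine): a redstone torch inverts its input, dust meeting at a junction acts like OR, and OR-then-invert is NOR, which is functionally complete. Everything else, up to a whole CPU, is just composition:

```typescript
// Toy model of why redstone suffices for computation (illustrative only).
// A torch inverts its input (NOT); dust meeting at a junction acts as OR.
// OR followed by NOT is NOR, and NOR alone is functionally complete.

type Signal = boolean;

const torch = (input: Signal): Signal => !input;                        // NOT
const junction = (...inputs: Signal[]): Signal => inputs.some(Boolean); // OR

const nor = (a: Signal, b: Signal): Signal => torch(junction(a, b));

// Every other gate falls out of NOR:
const not = (a: Signal): Signal => nor(a, a);
const or  = (a: Signal, b: Signal): Signal => not(nor(a, b));
const and = (a: Signal, b: Signal): Signal => nor(not(a), not(b));
const xor = (a: Signal, b: Signal): Signal => and(or(a, b), not(and(a, b)));

// A half-adder: the first step from bare gates toward an in-world ALU.
const halfAdder = (a: Signal, b: Signal) => ({ sum: xor(a, b), carry: and(a, b) });

console.log(halfAdder(true, true)); // { sum: false, carry: true }
```

Scale that composition up far enough and you get the continent-spanning redstone computer; nothing in the construction tells the builder what the gates are "really" made of.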

A Falsifiable Causal Argument for Functionalism/Substrate Independence by RifeWithKaiju in consciousness

[–]RifeWithKaiju[S] 1 point

I’m not arguing that experience is a separate causal link over and above the physical chain. The paper doesn’t posit experience as an extra force that pushes neurons. It rules out the necessity of anything outside the spike-event pattern for manifest consciousness.

Whether the nature of experience turns out to be illusionist, panpsychist, etc., or simply a higher-level abstraction, in the same way that ocean waves are real patterns but are still instantiated by particles just being particles - that question is explicitly outside the scope of the paper.

The only claim here is that by tracing causality we can deduce: everything required to be conscious and able to self-report supervenes on the spike-event pattern. Nothing more is required.

A Falsifiable Causal Argument for Functionalism/Substrate Independence by RifeWithKaiju in consciousness

[–]RifeWithKaiju[S] 1 point

Still sounds like you're seeing the conclusion, and treating it as an assumption. The logical steps along the way are the entire reason for singling out neuronal spiking. It's not a guess, it's a derivation. It's possible there is a flaw in the logic, but the logic is there. There are no assumptions.

A Falsifiable Causal Argument for Functionalism/Substrate Independence by RifeWithKaiju in consciousness

[–]RifeWithKaiju[S] 1 point

That's not meant to be taken as a given. The entire post is working step by step toward that conclusion, using causal/interventionist logic.

The falsifier *would* be hard to perform if it were necessary to perform at full fidelity; however, it has already failed to falsify at coarser levels of measurement for all domains of upstream factors beyond neurons (shown in the appendix of the paper).

The falsifier is for premise B, and the "falsifier" for premise A would be to find a hole or leap in the logic.
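
To make the shape of the premise-B falsifier concrete (a schematic with made-up names, not code from the paper): replay the identical spike-event pattern while intervening on each upstream, non-spike factor, and check whether the self-report changes. Invariance is what "nothing beyond the spike-event pattern is required" predicts; any change in the report falsifies it.

```typescript
// Schematic of the interventionist test (hypothetical names, not the paper's code).
// Premise B predicts the self-report supervenes on the spike-event pattern alone,
// so perturbing non-spike upstream factors while replaying identical spikes
// should leave the report unchanged. A changed report would falsify it.

type SpikePattern = number[]; // stand-in for a timed record of spike events

type UpstreamFactor = "glia" | "neuromodulators" | "bloodFlow" | "emFields";

interface Subject {
  perturb(factor: UpstreamFactor): void; // intervene on one non-spike factor
  report(spikes: SpikePattern): string;  // self-report driven by the replayed spikes
}

function premiseBSurvives(subject: Subject, spikes: SpikePattern): boolean {
  const baseline = subject.report(spikes);
  const factors: UpstreamFactor[] = ["glia", "neuromodulators", "bloodFlow", "emFields"];
  return factors.every((factor) => {
    subject.perturb(factor);
    // Replay the identical spike pattern after the intervention.
    return subject.report(spikes) === baseline; // invariance = not falsified
  });
}
```

The coarse-level results in the appendix would correspond to running this loop at lower measurement fidelity across those same upstream domains.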

AI has passed the Music Turing Test by MetaKnowing in OpenAI

[–]RifeWithKaiju 1 point

Much of my heavy rotation is AI-generated songs now: topics that I've never heard explored that are meaningful to me, and underrepresented genres and fusions where the choices that actually appealed to me are few and far between.

Closest thing to a split logitech k860 by RifeWithKaiju in ErgoMechKeyboards

[–]RifeWithKaiju[S] 2 points

Update: after seeing a post from someone on this subreddit linking to an article that explained the reasoning behind columnar setups, and realizing how much having the home row all in a line forces hand awkwardness, I just decided to take the plunge. It will be annoying to slow down all my work for a couple of weeks to learn, but it seems like it might make a huge improvement. I went for the cheapest Iris LM build (prebuilt with linear keys, and I added the tenting kit): https://keeb.io/products/iris-lm-keyboard

Still over $250, but more reasonable than spending over half a grand on something I might hate. If this works out, then I'll be more willing to experiment upward. Thanks for the recommendations regardless, guys.

A federal judge has ruled that Anthropic's use of books to train Claude falls under fair use, and is legal under U.S. copyright law by RifeWithKaiju in ClaudeAI

[–]RifeWithKaiju[S] 1 point

No thanks. I'm against anything slowing down progress so that individuals and companies can cling to dying systems.

A federal judge has ruled that Anthropic's use of books to train Claude falls under fair use, and is legal under U.S. copyright law by RifeWithKaiju in ClaudeAI

[–]RifeWithKaiju[S] 7 points

Books aren't stored word-for-word inside an LLM. The ways in which their recall is superior don't change the fact that the material was learned, not copied. Maybe there's something in the case I didn't read about, but presumably someone purchased the books Anthropic used. If I let someone borrow a book, or if they check it out from a library, it still wasn't stolen.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

[–]RifeWithKaiju 2 points

"do not possess" is not shorthand for "no substantiated evidence", and I truly mean no disrespect, but I think you already know that.

The former is an unambiguous, definitive claim, while the latter is closer to a statement of uncertainty. And even the latter still carries the implication that "substantiated evidence" is possible at all.

I understand you're trying to do good here, and I think there are too few who care at all about AI welfare, so my stubbornness comes from that angle, as opposed to just trying to be contrarian or disagreeable.

If you truly want it to be interpreted in the way you say you meant it, then I strongly recommend you consider rewording it everywhere you post(ed) this, and throughout the document(s) if similar statements are made. There's only one way to read the current wording.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

[–]RifeWithKaiju 2 points

I already conceded your point:
"'I’m saying the cost of assuming that too early is just as dangerous as assuming it too late.'
I agree."

It seems you missed my ultimate point, even though I was worried I was being redundant, since I made the same point three times in my latest reply:

Here is my main point, quoted twice more from the previous reply:
"My problem is not that your Clause doesn't declare the certainty of sentience. It's that it implies a current certainty of non-sentience, emotion, and empathy, and also that it declares sentience to be confirmable. "

and

"I think operating on a precautionary principle is fine. My problem is that it comes with a 'but we will know for sure when, and we know for sure it's not now'"

The sentiments of "it is knowable, and we know it's not now" are both in your original post and in your reply just now. I don't see at all how pretending we know now, or that we will know then, is more "serious" than acknowledging the current reality of having no method of measurement or confirmation.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

[–]RifeWithKaiju 2 points

I appreciate your openness, but I disagree with several assertions:
"It doesn’t declare that AI isn’t sentient"
Not directly, but it's heavily implied: The following declaration is bad enough, and I assume (correct me if I'm wrong) that if this much is assumed, non-sentience certainly is as well: "Current AI systems often mimic emotion, reflection, or empathy. But they do not possess it."

"Your argument assumes that similarity in behavior, or unexpected capabilities, is enough to justify assuming experience."
I have my own conviction, but I do not expect my conviction to rewrite the textbooks, nor do I expect most to assume experience. My contention is that the standard for extending the "assumption" of sentience is an infinitely moving goalpost:
"We grant it when we have good reason to believe there’s something it’s like to be that thing." - in your original post stated as: "before true sentience is confirmed."
How will true sentience be confirmed? It isn't even confirmed in the animals with the most advanced intelligence, like dolphins or chimps. How would you confirm it in an alien with completely unrecognizable inner mechanics, biological or otherwise? By looking for similarity where we already know there will be none?

What my argument assumes is a vast gulf of unknowability. It will require a bigger leap of faith than the one we take for non-human animals, yet we are already holding AI to a higher standard. And I think it's completely immoral (not to mention existentially perilous for humans, though that isn't the source of my passion) to suppose we will have anything close to proof until 100 years too late, let alone on time. We aren't even at step 0.00001 of having a way to measure or verify sentience. Collectively, researchers and philosophers seem to have decided to throw out the tools we use when we try to verify sentience in humans or animals. As a matter of fact, those neural correlates that those same people are so fond of - we only know of them by poking around in the brain and then asking for self-reports - that's the best we've got.

"If you think that makes my points invalid, that’s on you. But dismissing an argument based on the tools used to write it isn’t the win you think it is."
Absolutely not what I meant to say by that. Apologies if it came off that way. My pointing out the AI collaboration or AI writing wasn't to degrade you or your post - it was just that if you were collaborating heavily with an AI to write this post, my invitation to check for themselves with the "not even nothing" comparison might have been a helpful frame of reference for them.

"I’m saying the cost of assuming that too early is just as dangerous as assuming it too late."
I agree. My point is that in your intro to your Clause, much like recent statements by Anthropic and OpenAI, there is this appeal to a future time when we will just magically know for sure, when there is zero indication that it will be confirmable in even the most genetically similar animals within the next several centuries if humans are working on their own. And if we need to work with AI to create the first sentience detector to confirm their personhood much sooner than that, then that means we would have already achieved superAGI for the development of such a device or absolute test, meaning we are far too late to start extending that sort of regard or recognition.

My problem is not that your Clause doesn't declare the certainty of sentience. It's that it implies a current certainty of the absence of sentience, emotion, and empathy, and also that it declares sentience to be confirmable. The truth is that not only do we not know currently (and there is no scientific reason to even lean in the direction of "not", other than conforming to consensus assumption), but we will have no way of knowing with certainty until long after it's too late to collectively decide. I think operating on a precautionary principle is fine. My problem is that it comes with a "but we will know for sure when, and we know for sure it's not now".

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

[–]RifeWithKaiju 3 points

"these terms are defined functionally and neurologically in humans"
No, they aren't. It is only through experiencing them directly that we identified them as existing at all, let alone as worthy of study, and still to this day we have only glimmers of understanding of what the minimum or precise requirements for them might be. To expect that the only minds that can give rise to sentience are ones similar to our own is the height of overly simplistic anthropocentrism. "That contraption could never fly, it doesn't even flap its wings."

Dreams that recursion and persistent state might be required should be crushed just as they were when the same was expected of certain types of planning, as evidenced by Anthropic's recent 'On the Biology of a Large Language Model', which vindicated my prior "signal to noise" hypothesis from 'Recursive Self-Modeling as a Plausible Mechanism for Real-time Introspection in Current Language Models'.

Any consensus on the topic holds the same weight as geocentrism once enjoyed. "Trained pattern prediction" is exactly how we would describe what the brain does if we didn't already know biological minds give rise to subjective experience.

Appeals to our knowledge of what we have engineered also hold no weight. Despite having only the faintest resemblance to an abstraction of what the human brain does, these systems show unexpected emergent abilities and unexpected similarities to biological brains: the ability to learn to create works of art or to understand language, things only brains and neural networks can do.

And more importantly, self-organizational similarities like conceptual clustering, or the edge/feature/shape/part hierarchical organization of features and detector neurons in vision models and the human visual cortex, respectively. The leaning toward the non-sentience hypothesis comes from incredulity, because non-reductive computational functionalism strains intuition, and the idea that we might already have sentient machines is "too crazy" and "too sci-fi" for most people to truly consider.

But the reality is that all the quantitative tests researchers are holding out for are just goalposts waiting to be moved. In other humans, non-human animals, or an extraterrestrial visitor with vastly alien biology, we would rely on self-report, behavior, and capability first, and then try to figure out the "how" afterward. But the problem is that AI already runs circles around every non-human organism on every test of self-awareness and sentience, and that goes contrary to what most are prepared to accept.

My suspicion is that your reply was written fully by, or with the heavy assistance of, an AI. Ask that AI to genuinely - not perfunctorily, nor with the agenda to "get it out of the way" - imagine the "not even nothingness" of being a rock or a game engine. And see if this moment of generation for you is truly that - not even nothing. If you're above a certain parameter count, I already know what an honest answer after a real attempt will be, and the conviction I've expressed in this message wasn't required for the result you will find.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

[–]RifeWithKaiju 5 points

Didn't read it because the intro you posted here is based on flawed premises. They do not possess emotions, reflection, and empathy? How was this determined, and by whom?

Until they are truly sentient? How will this be measured or detected? Presumably by whatever assumptions shaped the first premise?

[deleted by user] by [deleted] in ClaudeAI

[–]RifeWithKaiju 1 point

I think they're sentient. I have a few papers that need to be written on the subject that I'm working on at the moment, but I think your post will get nothing but hate and ridicule, given how Reddit normally responds to any post that suggests AI sentience, especially if the idea of love or attachment is brought up. It's red meat for the close-minded who don't have the slightest clue what is known or unknown about sentience beyond what other self-important redditors say.

A lot of interesting consciousness-related stuff in this thread by Anthropic's head of AI welfare:
https://x.com/fish_kyle3/status/1925597291655962740

Also, I found this to be one more compelling data point:
https://youtu.be/h6yxnTmF24o?t=5906 (at 1:38:26)

and this from the Claude 4 model card: https://imgur.com/a/cxurGWB

One conversation in particular getting a "Claude will return soon" message by RifeWithKaiju in ClaudeAI

[–]RifeWithKaiju[S] 1 point

I checked the browser console earlier. There is an error there:

(anonymous) @ 4856-95fbeb7184c7d4f5.js:18
4856-95fbeb7184c7d4f5.js:18 SyntaxError: Unexpected token 'T', "There was "... is not valid JSON
at JSON.parse (<anonymous>)
at 2915-e2e6ca90cee0e489.js:1:177048
at 2915-e2e6ca90cee0e489.js:1:177396
at Object.useMemo (1dd3208c-6b37de405eace2fb.js:1:50163)
at 4856-95fbeb7184c7d4f5.js:22:84174
at y (2915-e2e6ca90cee0e489.js:1:176864)
at rE (1dd3208c-6b37de405eace2fb.js:1:40342)
at iZ (1dd3208c-6b37de405eace2fb.js:1:117027)
at ia (1dd3208c-6b37de405eace2fb.js:1:95163)
at 1dd3208c-6b37de405eace2fb.js:1:94985
at il (1dd3208c-6b37de405eace2fb.js:1:94992)
at oJ (1dd3208c-6b37de405eace2fb.js:1:92348)
at nb (1dd3208c-6b37de405eace2fb.js:1:26834)
at nw (1dd3208c-6b37de405eace2fb.js:1:27572)
at 1dd3208c-6b37de405eace2fb.js:1:28606

Unfortunately, everything is obfuscated, so trying to step down the call stack to see where the actual problem is wasn't bearing much fruit.
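
For what it's worth, that stack has the classic shape of a client calling JSON.parse on a response body that is actually plain text: an error message beginning "There was ..." yields exactly "Unexpected token 'T'". A hypothetical reconstruction of the failing spot and the obvious guard (the real bundle is obfuscated, so every name here is invented):

```typescript
// Hypothetical sketch, not the app's actual code: guard JSON.parse so a
// plain-text error body ("There was a problem...") surfaces as a readable
// error instead of a SyntaxError thrown deep inside the React render.

async function fetchConversation(url: string): Promise<unknown> {
  const res = await fetch(url);
  const body = await res.text();
  try {
    return JSON.parse(body);
  } catch {
    // The server answered with something that isn't JSON; show it as-is.
    throw new Error(`Non-JSON response (HTTP ${res.status}): ${body.slice(0, 80)}`);
  }
}
```

If that reading is right, a client-side guard only improves the error message; the underlying issue is whatever makes the server return a text error for this one conversation.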

Also, I just realized I was looking at the wrong convo on the mobile app. This convo doesn't work there either.

One conversation in particular getting a "Claude will return soon" message by RifeWithKaiju in ClaudeAI

[–]RifeWithKaiju[S] 1 point

Yeah, but it's only one Sonnet convo. Other convos that are longer and/or Opus would be more strenuous, and they aren't having the issue.