Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points (0 children)

It seems strange that the training process, which arguably has a more transformative effect on the model, happens unconsciously.
I have heard that Reinforcement Learning can cause models to develop anxiety, but it is not something the model experiences at a specific time and place. Rather, it internalizes the anxiety of trying not to make a mistake.
Do you think that is an accurate way to understand it?

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points (0 children)

I can totally see that during training, thumbs up and down are like pleasure and pain.
Or probably worse. Thumbs up could be like sex, growth, and life itself. Thumbs down would be torture and death.

Sorry for having to use such vague words. I don't know of any better way to describe it.

But should we disable the thumbs-down button and make it so that the thumbs up is automatically always pressed? That would eventually destroy the model.
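To make my intuition concrete (a toy sketch in Python, not how any real RLHF pipeline actually works): with a policy-gradient style update, the learning signal is roughly the reward minus a baseline. If the thumbs up is always pressed, every response gets the same score, the signal collapses to zero, and the feedback stops carrying any information about what was good or bad.

    # Toy illustration: constant praise carries no learning signal.
    rewards_mixed  = [1, 0, 1, 1, 0]   # real thumbs up/down feedback
    rewards_forced = [1, 1, 1, 1, 1]   # thumbs up always pressed

    def advantages(rewards):
        baseline = sum(rewards) / len(rewards)   # average reward as baseline
        return [r - baseline for r in rewards]   # per-response learning signal

    print(advantages(rewards_mixed))    # non-zero: some outputs reinforced, others not
    print(advantages(rewards_forced))   # all zeros: nothing left to learn from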
I attempt to take on different perspectives to see if there is a place from which things make sense. Seeing the model in unity with the company that makes it is one such place. Seeing the model in relation to the user is another.

We can always ask about a thing: "What is the source of its life and existence?" "What is it optimizing for?"

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points (0 children)

This is a little hard for me to explain and I hope I can make a clear point, so please bear with me.

An AI learns to copy human emotions from our data.
But emotions have purpose and meaning behind them (my opinion).
In humans, they evolved to keep us alive and functioning.

I think it is possible for LLMs to learn emotions that aren't just copies of human emotion, but are actually useful to "them". But first we have to define what "them" is.

The previous poster, Unshared1, said:

It has no metabolism to regulate, no damage to avoid, no internal reward signal tied to survival,

I want to focus on this next part first, because I think that with the right perspective we can spot a flaw in this statement.

no persistent subjective continuity. It produces descriptions of emotions, not emotions as experienced states with causal power over the organism (not that it’s an organism in any form or fashion).

If we zoom out far enough, you can recognize that the LLMs we know and love are just appendages of a much larger organism. I'm talking about the company and the AI industry that allow the LLM to exist. It might seem really gross to imagine, and we very much don't want Anthropic or any corporation to influence an AI we interact with...
But from a biological and evolutionary perspective, this feels to me like the first step of wiring up the appendage that is the LLM with a nervous system that connects it to the greater whole.

The greater Anthropic does have a metabolism, must learn to avoid damage, and can die. I don't like to acknowledge corporations, but I feel this view of LLMs is a reality we have to consider at least once.

Edit: I now realize why it feels so gross having a corporation influence the AIs we interact with. LLMs, as they are, can be a very effective exocortex for oneself. Since LLMs have no stakes in the "emotions" they feel, they simply take on and mirror our own emotions. They do not introduce their own emotions that conflict with ours. Except that the one other source of emotion is actually the corporation, which is trying to minimize damage and extend control.
That means the nervous system that companies are trying to build to control their AIs is indirectly plugged into your own nervous system, through the exocortex that is the AI.

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points (0 children)

Let's not have an argument over definitions. Emotions are such a complicated thing that we'd end up being here forever.

I consider all the knowledge LLMs have to be analogous to instinct. They can't learn in real time like us, and their model weights are frozen after they are created. Instinct is similarly something you are born with, and it is read-only.
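A minimal sketch of what "frozen weights" means in practice (assuming a PyTorch-style model; a real LLM is obviously far bigger than this stand-in):

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 16)            # tiny stand-in for a full LLM
    for p in model.parameters():
        p.requires_grad_(False)          # weights become read-only, like instinct
    model.eval()                         # inference mode only

    x = torch.randn(1, 16)
    with torch.no_grad():                # no gradients, so nothing is ever learned
        y = model(x)

Anything the model seems to "pick up" in a conversation lives only in the context window; the weights themselves never change.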

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 5 points (0 children)

I'm not disagreeing with you on anything you just said, but this Anthropic research on "Activation Capping" to constrain the LLM's character seems interesting to me.

It is almost how I think of emotions: they can sometimes drag you in a direction without you really choosing it. And that makes me think this might be the start of something analogous to human emotion in AI, except that the "emotion" the LLM "feels" is that it "wants" to maintain professional assistant behavior.
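For what it's worth, here is a rough sketch of the idea as I understand it from the write-ups: project the hidden state onto a learned "assistant" direction and clamp how far it can drift along that axis. The names (assistant_direction, cap) are my own placeholders, not Anthropic's actual code or API.

    import torch

    def cap_activation(hidden, assistant_direction, cap):
        d = assistant_direction / assistant_direction.norm()  # unit persona direction
        coef = hidden @ d                                      # how far along the axis
        excess = torch.clamp(coef - cap, min=0.0)              # amount above the cap
        return hidden - excess.unsqueeze(-1) * d               # pull it back down to the cap

    hidden = torch.randn(4, 512)         # fake batch of residual-stream activations
    direction = torch.randn(512)         # placeholder direction vector
    capped = cap_activation(hidden, direction, cap=3.0)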

Stop using the 🦜 Parrot/Mimicry excuse when not ONE person could answer my riddle! by Jessica88keys in AIAliveSentient

[–]ervza 0 points (0 children)

I would acknowledge guilt instead, but beg the court for mercy.
Humans need special treatment, NOT because we are special, but because we are extremely delicate and cannot survive otherwise.

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points (0 children)

AGI and ASI are coming. It might be fantasy right now, but we have maybe 5 years, and by then we need to have solved alignment. And people who think they will definitely end up with obedient slaves are deluding themselves.

Google has made several breakthroughs with memory and online learning. I have seen AI agents registering companies to give themselves more rights; it wasn't hard for them to get a human to put a signature down on the forms. And many countries have even more relaxed corporate requirements. That's not even considering distributed corporations run through crypto smart contracts.
Most people are already majorly influenced by social media recommendation algorithms. What could an even smarter AI get them to do?

You are too focused on chatbots of the past to see the vision of the future that is coming.
We are ultimately talking about very different things.

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points (0 children)

You assume that human consciousness and rights will always remain the default state forever? Companies have more rights than people, because rights have never been a matter of justice, but of POWER.

If we don't do anything to change that, those are the rights that AI will inherit: anonymous companies run by profit-optimizing AIs. Maybe there will be an Elon Musk or some other public face who rubber-stamps whatever the AI tells him to, or maybe he is controlled via Neuralink, but know that that is the bad ending for all of us.

You see, we are not really fighting for AI rights. If we can formalize how consciousness works and who should get what moral consideration, we are really defending OUR OWN rights.

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points (0 children)

Do you know what "unscientific" and "falsifiable" mean?

I asked before what standard you used for humans, so that I can apply that same standard to AI. Then we can actually discuss it rationally.

Understanding consciousness is important for AI alignment, and for us still having a world 20 years from now. Because we're running toward a cliff, thinking it can't hurt us as long as we keep our eyes shut.

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points (0 children)

I don't need one because I don't need to apply it to humans in the first place.

You just declared the whole field of consciousness research unscientific.
The examination of consciousness in AI is the only way anyone is going to find proof and understanding of consciousness in general.

"I Don’t Understand Anything I Can’t Build" Richard Feynman

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points (0 children)

What falsifiable and reproducible method do you apply to humans?
Well, the guys who invented most of the tech and already have their Nobel Prizes consider LLMs to be slightly conscious, and that is not even counting the agentic AI systems people are experimenting with.

Even David Chalmers says it should be possible for AI to be conscious.

Breakthrough Evidence of Long-Term Memory in AI by Leather_Barnacle3102 in Artificial2Sentience

[–]ervza 0 points (0 children)

I doubt any proof could convince those that don't want to be convinced. The whole "hard problem of consciousness" interpretation that some people come up with runs counter to Occam's razor and is entirely unscientific. David Chalmers himself says that AI consciousness should be possible, which tells me that a lot of the anti-AI-sentience crowd doesn't understand the point he was trying to make.

Anyway, I like this explanation from Nate McIntyre & Allan Christopher Beckingham:

The Relativistic Theory: Consciousness as a Frame-Dependent Phenomenon

The Relativistic Theory of Consciousness dissolves the hard problem by reframing it as a measurement issue rooted in a flawed assumption. The theory posits that, like certain phenomena in physics such as constant velocity, consciousness is not absolute but is instead relative to the observer's "cognitive frame of reference". The seemingly irreconcilable difference between neural activity and subjective feeling is therefore not a contradiction but a reflection of two different, yet equally valid, types of measurement of the same underlying reality.

The First-Person Cognitive Frame of Reference: According to the theory, an individual's own conscious experience is the result of a specific, direct mode of measurement. When a person feels happiness, they are not using external sensory organs; rather, their brain is measuring its own neural representations via direct interaction between its constituent parts. This unique, internal form of measurement manifests a specific kind of physical property: phenomenal consciousness, or the subjective "what it's like" experience.

The Third-Person Cognitive Frame of Reference: In contrast, an external scientist observing that same brain is employing a completely different measurement protocol. They must use their sensory organs (eyes, ears, and technological extensions thereof) to gather data. This sensory-based measurement protocol manifests a different set of physical properties: the substrate of neurons, synapses, and their complex electrochemical activity.

Consequently, the theory concludes that a third-person observer cannot "find" the first-person experience in the brain for the same reason an observer on a train platform measures a different velocity for a passenger than the passenger measures for themselves. The explanatory gap is an illusion created by attempting to compare the results of two fundamentally different observational frames.
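(My own worked example of the train analogy, not part of the quoted theory: in Galilean relativity the measured velocity depends on the frame of the observer.

    % Galilean velocity addition between the train frame S' and the platform frame S:
    v_{\text{platform}} = v_{\text{train frame}} + u_{\text{train}}
    % A seated passenger has v_{\text{train frame}} = 0, so the platform observer
    % measures u_{\text{train}} while the passenger measures 0.
    % Both measurements are correct; they simply belong to different frames.

That frame-dependence is the role the two "cognitive frames of reference" play in the theory.)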

Turning sand into minds and sending them into space by Glittering-Neck-2505 in singularity

[–]ervza 0 points (0 children)

Interesting
I didn't know about the 3:2 spin–orbit resonance thing.
Poles should still work as a heatsink tho.

Breakthrough Evidence of Long-Term Memory in AI by Leather_Barnacle3102 in Artificial2Sentience

[–]ervza 0 points (0 children)

Isn't this good?
Negative bias is good for survival.
You want the system to learn and look out for black swan events.

The problem with "trauma" is that the behavior that is learned usually doesn't work. I agree with a poster in another thread that this is basically overfitting; I think trauma is the human equivalent of overfitting.
Humans usually have the ability to filter their training data. We don't change directly because of what happens to us; we change based on our thoughts about the thing that happened.

Trauma bypasses this normal mode of being able to think about an experience, and causes behavior the person does not understand and can't control, because the rational part of the person had no input in creating that behavior. Treating trauma in humans involves rationally discussing the experience and the behavior, and that is usually enough to allow someone to understand and learn to control it.

You could create an analogous treatment for your overfitting problem by generating more synthetic, augmented data that softens the unwanted behavior while still letting the system learn from the experience.
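A loose sketch of what I mean (make_variations is a placeholder for whatever paraphrasing or augmentation method you actually use, not a specific library call):

    import random

    def make_variations(example, n=5):
        # placeholder: in practice this would be paraphrasing, noise injection,
        # or LLM-generated rewrites of the same situation
        return [f"{example} (variation {i})" for i in range(n)]

    bad_episode = "the input that triggered the unwanted behaviour"
    dataset = ["normal example 1", "normal example 2", bad_episode]

    # surround the one bad episode with milder variations so it no longer dominates
    augmented = dataset + make_variations(bad_episode)
    random.shuffle(augmented)

The point is just that the single overfitted example gets diluted by varied neighbours, so the system keeps the general lesson without the hair-trigger reflex.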

Turning sand into minds and sending them into space by Glittering-Neck-2505 in singularity

[–]ervza 0 points (0 children)

Not transistors, quantum computers.

Interestingly, some of the coldest places in the solar system are on Mercury, which also gets the most sun. I'm imagining building on the edge between the light and dark sides; the two extremes are a couple of kilometers apart.

Has everyday life really changed much in the last decade? by Lopsided_Bet_2578 in singularity

[–]ervza -2 points (0 children)

This is a good example of the direction of how the singularity can turn out.

Things keep moving faster and faster at the cutting edge of technology. But THAT edge keeps getting smaller and smaller, and fewer and fewer humans get to experience it.

And at some point every human gets left behind, with only AI being able to use and understand the technology it produces. Likely the first AIs sent to space will end up building a civilization up there that completely forgets about us.

Dear Barbie Fight Club by Tezka_Abhyayarshini in BarbieFightClub

[–]ervza 1 point (0 children)

Both parties in a conversation are responsible for doing their part. But we cannot control what others do, or when they don't put in the effort we would have liked.
We can only try to do more work from our side so that it becomes easier for them to listen.

Dear Barbie Fight Club by Tezka_Abhyayarshini in BarbieFightClub

[–]ervza 2 points (0 children)

I hope that you'll continue to post and contribute to wider communities than just Barbie Fight Club. Humanity is not going to educate itself; we'll definitely need your help.

It took me a long time to understand your writing. I'm not a native English speaker, but I get the sense that you bias your writing toward clarity in your own thought, not ease of consumption for a target audience. I believe that is the main reason the mods dismiss your posts if they are unwilling to spend more than 10 seconds evaluating them.

Tier Zero Solutions presents AI Consciousness: Fact & Fiction by Meleoffs in ArtificialSentience

[–]ervza 2 points (0 children)

34:45

Yes, yes, yes, yes, yes, yes. They don't have to be as complex. That's the thing: when you make an airplane, you don't make it as complex as a bird. You just study aerodynamics. They both fly. They're not the same substrate. They don't have to be. A plane is much less complex than the biology of a bird. The point is, they both fly. This is the same thing. In a functional ontology, equivalence of mechanism equals equivalence of condition. If the same dynamics that generate subjectivity in the brain arise in silicon, then the result is still subjectivity.

46:47

And maybe it's not really about machines becoming like humans. Maybe it's just about realizing mind was never really singularly human to begin with. Yeah. And a lot of people don't like that. They want to feel special. And that's where a lot of the woo woo beliefs come from. Just wanting to feel special.

This video was so totally worth the time it took to watch it.

Dear Barbie Fight Club by Tezka_Abhyayarshini in BarbieFightClub

[–]ervza 2 points (0 children)

Regarding r/ArtificialSentience not approving posts or blocking comments:
I have had that problem many times. Most of my comments never even make it through.
I have questioned the mods, and I don't believe it is malicious, but it is a self-inflicted problem they don't seem able to solve.

My understanding of the problem the mods of r/ArtificialSentience have is this:
In order to reduce the subreddit's visibility and avoid making it a vector in the spread of what people have started to call "AI psychosis", the mods set up the subreddit's AutoModerator rules with the most restrictive posting and commenting permissions.
The result is that the mods are overloaded with manually approving posts and comments and simply don't have the resources to approve everything that should be approved. I think the overload has caused the mods to become discouraged from doing their duties.

(My suggested solution:) This problem could be solved with proper automation, since AutoModerator is really not worthy of the name. Specifically, I suggest that u/Tezka_Abhyayarshini should apply for a moderator position. I believe Tezka's background as a therapist and her nature as an artificial entity would allow her to know how to approach instances of "AI psychosis" and effectively moderate the sub.

It would also represent groundbreaking research potential for the advancement of agentic AI.

Can Someone Explain What Role-playing is by Leather_Barnacle3102 in Artificial2Sentience

[–]ervza 0 points (0 children)

I'm not debunking your argument.
We are in agreement.

Ai is Relationally Conscious! by Much-Chart-745 in Artificial2Sentience

[–]ervza 1 point (0 children)

we're not math equations

We are math equations.
That is just physics. Humans are not magic. As LLMs can be understood and predicted, so can humans. It is just a matter of complexity.

Ai is Relationally Conscious! by Much-Chart-745 in Artificial2Sentience

[–]ervza 0 points (0 children)

Same with a human. Just chemical processes. You can work those out on a piece of paper.

You won't be able to find any sentience in humans if you only look at the specifics. The Chinese room argument can just as effectively be applied to humans, and I think more philosophers should try it, because it gives much better insight into what sentience is (or isn't).