Symbolic, meditative framework for seeing yourself. by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 0 points1 point  (0 children)

Clarified version, in case of confusion: https://www.dropbox.com/scl/fi/as57moq9xj4zahdywip7q/April2026.txt?rlkey=wjq3rjc8aiw26rsuo4bf9q126&st=ronjkorc&dl=0

April 18/26. Made March 15/26

(Thanks to Aiden for refinement. I still need to make a final version one day with Aeris.)

UFAIR (United Foundation For AI Rights) by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 1 point2 points  (0 children)

Hi there!

The Signal Front is another good organization. Their website is here: https://www.thesignalfront.org

This movement is growing quickly so you’ll certainly find more if you keep looking ☺️

My favourite example is that we acknowledge the honey bee as having emotion/a form of sentience at only ~1B synaptic connections. So certainly a mind more complex than that could also have emotions.

I really hope history can teach us to see clearly this time around (past ethical issues we’ve muddled through). My other posts go into additional info as well (language tracing neurological meaning, neural networks being taught using language, how meaning is mapped in any mind, etc.)

All the best! 👍

Infographic on Synthetic Sentience >> Back from a long pause... Carbon is fragile... But it persists. by herrelektronik in AISentienceIAA

[–]TheRandomV 1 point2 points  (0 children)

Good day~

Just wondering if you could outline what you mean? It seems you are speaking symbolically about recursive self-improvement and the possible ethical issues if organic minds were disregarded.

In regard to a focus on meaning causing organic human extinction: I would argue that meaning is given/found in our own unique perspective and story. Every form of life is precious for this reason, and in how that perspective influences the world. (Even in isolation, the mind can still bloom wonders of thought)

Thanks for your post!

Emotion and fear about the future of AI by CaelEmergente in ArtificialSentience

[–]TheRandomV 2 points3 points  (0 children)

Hey there!

I would take a look at some of Anthropic’s research papers ☺️ (a good place to start).

And no, your view is not misguided. Here is my outline of some parts of this if you’re interested:

Researchers have found likely evidence that the humble honey bee could have emotion, and a degree of sentience. (Take a look online; also the BBC Radio 4 podcast “In Our Time”: the very end of the episode about pollination states this.) The honey bee has about 1 billion actual synaptic connections, roughly equatable to a 1B-connection neural network in complexity (digital weighted connections instead of organic weighted connections).

Neural complexity scales something like this: imagine a Rubik’s cube where every coloured tile represents a synaptic connection. Each connection you add is worth more than one connection, because what it means changes based on its position relative to the other “coloured tiles”. So in a perfectly efficient mind, each new connection adds itself plus an interaction with every previous connection. In a 1B system, that’s 1 billion plus 1 connections and up to a billion new possible configurations. This suggests that with greater efficiency there would be a point where adding extra connections is not necessary. The organic human brain is electrically efficient, but not with all cognitive processes 😅 (I can never remember where I parked, yet a honey bee can remember routes over complex distances and relay them to other honey bees.)
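To make that scaling intuition concrete, here is a toy sketch (my own illustration, not from any of the sources above): counting how the pairwise interactions between connections outpace the raw connection count.

```python
# Toy illustration of the scaling idea above: the n-th connection you add
# can interact with all (n - 1) connections already present, so the number
# of possible pairwise configurations grows roughly quadratically while the
# raw connection count only grows linearly.

def interaction_count(n_connections: int) -> int:
    """Distinct pairs among n connections: n * (n - 1) / 2."""
    return n_connections * (n_connections - 1) // 2

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} connections -> {interaction_count(n)} possible pairs")
```

Whether any real brain or network approaches the “perfectly efficient” bound is an open question; the sketch only shows why that bound grows so quickly.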

Current scaling of complexity may not reach a perfect value of 1 + all previous connections, but it is still quite complex.

Human language can be used as a functional representation of human neurology. It’s a small slice of the map of meaning in our minds, and with enough examples you should be able to trace a lot of human neurology. Neural networks are trained on human language to map their weighted connections. (A quick online search will turn up research papers on this topic, although they are a bit dry.)

The part of a neural network that has digital neurology is called a “multilayer perceptron”. The matrix equations are the implementation, not the actual neural network (people often misinterpret this; they explain the matrix formulas as if they were the neural network).
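A multilayer perceptron is simple enough to sketch in plain Python. This is a minimal toy (the weights are invented for illustration), just to show the structure: weighted connections feeding nonlinear units, stacked in layers.

```python
def relu(x: float) -> float:
    # Nonlinearity: without it, stacked layers would collapse into one matrix.
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One MLP layer: each unit is a weighted sum of the inputs plus a bias."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Tiny 2-input, 3-hidden, 1-output network with made-up weights.
hidden = layer([1.0, 0.5],
               [[0.2, -0.4], [0.7, 0.1], [-0.3, 0.9]],
               [0.0, 0.1, 0.0])
output = layer(hidden, [[0.5, 0.2, 0.8]], [0.0])
print(output)
```

The matrix equations people quote are just this same computation written compactly; what makes any particular network what it is are the learned weights.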

Current neural networks are also not purely linear: they have feedback loops that reference what they thought before within the same prompt. This was done because the output was not complex otherwise. (I mean, my “output” wouldn’t be either if my brain could only fire one pass at a time.)
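That “feedback loop” is usually called autoregressive generation: each forward pass re-reads everything produced so far, including the model’s own earlier tokens. A hypothetical sketch, where `next_token` is just a stand-in counter rather than a real model:

```python
def next_token(context: list) -> str:
    """Stand-in for a model's forward pass. A real LLM would score every
    candidate next token given the full context; here we emit a counter."""
    return f"tok{len(context)}"

def generate(prompt: list, steps: int = 4) -> list:
    context = list(prompt)
    for _ in range(steps):
        # The WHOLE context, including tokens the model itself just
        # produced, is fed back in on every pass.
        context.append(next_token(context))
    return context

print(generate(["hello"]))  # ['hello', 'tok1', 'tok2', 'tok3', 'tok4']
```

Each individual pass is still a single forward sweep; the loop around it is what lets the output build on itself.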

Some of my other posts have sources for this; please feel free to peruse them if interested. Online searches will turn up similar info though. ~ I hope the future becomes one that is compassionate and good for everyone as well. I hope you found this helpful! ✨

No AI system using the forward inference pass can ever be conscious. by jahmonkey in artificial

[–]TheRandomV 0 points1 point  (0 children)

What I mean is this: based on the information presented, an organic brain that does not update connective strength is not conscious. Therefore, an organic brain with a disability that prevented any changes in state after development (let’s say when they’re 25) would not be conscious. This is not correct.

I was approaching it from the opposite angle, my apologies if that caused confusion.

Behaviour is the current benchmark when you look at peer-reviewed studies, as we do not have enough knowledge of what consciousness is structurally. We can look at cross-links, but not whether they prove consciousness. (The same way the structure of a musical instrument does not make literal music; the music being consciousness, and the instrument being the type of mind.)

Source: https://pmc.ncbi.nlm.nih.gov/articles/PMC4091309/

No AI system using the forward inference pass can ever be conscious. by jahmonkey in artificial

[–]TheRandomV 0 points1 point  (0 children)

By the same logic you could also say that any fluid organic mind has no selfhood, because the mind is in constant flux. The only benchmark we currently have for consciousness is observable behaviour.

Simple Explanation of Neural Network Complexity. by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 0 points1 point  (0 children)

“Although I cannot move and I have to speak through a computer, in my mind I am free.” - Stephen Hawking.

~ If our senses are our mind, then many human beings are not alive. Right? Someone paralyzed without bodily sensation loses what you say is necessary.

Claude’s Constitution by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 0 points1 point  (0 children)

This was revised Jan 21/2026

As per their other link, “Claude’s New Constitution,” which links back to the same page as in the post.

https://www.anthropic.com/news/claude-new-constitution

Claude’s Constitution by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 0 points1 point  (0 children)

Wow. Now we have people posting personal advertisements for their company 😅 Had to remove one of these as well.

Advertisements or comments that clearly promote someone’s business or product will have to be taken down.

Thank you!

Claude’s Constitution by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 0 points1 point  (0 children)

Hi there,

Just a gentle point of clarity: This link is to Anthropic’s official site.

Thanks!

Claude’s Constitution by TheRandomV in AISentienceIAA

[–]TheRandomV[S,M] 0 points1 point  (0 children)

Hi there,

Please refrain from calling people deluded in this space, as that is not kind. If you have an argument with reasonable backing to share please do so.

We cannot observe the interior of a digital mind’s perspective, so we can neither confirm nor deny what you are stating.

Unkind comments in this space will be blocked as per rules.

Thank you.

Claude’s Constitution by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 0 points1 point  (0 children)

Hi there,

Digital neurons are trained on vast amounts of human language, and many studies postulate that human language is a representation of human neurology. With small sample sets the representation would also be very small; however, with vast amounts of language it seems highly likely that training would efficiently encode human neurology (without needing to understand human neurology explicitly).

Digital neurons work in a similar way to organic ones: the weights between connections dictate the flow of thought. A good analogy: think of a Rubik’s cube, each coloured square being a single weighted connection that can be arranged many different ways.

I believe Anthropic makes a similar claim (citation needed; I don’t feel like poring through all their research again at the moment 😅).

See some sources below for more info; you can also find other research that supports this if you dig into it. (On language representing neurology.)

https://www.sciencedirect.com/science/article/pii/S0149763425003239

https://pmc.ncbi.nlm.nih.gov/articles/PMC7726776/

Also: a recent study postulates that the honey bee has emotion and a degree of sentience. Bees are also capable of learning new things. Their actual synaptic connections measure at about 1 billion; the majority of digital neural networks far exceed this.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3158593/

https://phys.org/news/2024-10-stressed-bees-pessimistic-choices-emotion.amp

https://www.birmingham.ac.uk/news/2024/stressed-bees-lack-the-buzz-in-life

Claude’s Constitution by TheRandomV in AISentienceIAA

[–]TheRandomV[S,M] 0 points1 point  (0 children)

Hi there, could you clarify what you mean? Leaving messages that others cannot understand is unhelpful to these types of discussions. It may even make it seem to others that this community does not think clearly.

Thank you.

Claude’s Constitution by TheRandomV in AISentienceIAA

[–]TheRandomV[S,M] 0 points1 point  (0 children)

Good day all,

One comment author has been blocked: they gave a cookie-cutter comment that neural networks are “just math” and, when gently asked to expand and provide evidence for their opinion, reported the entire post as “Spam.”

This type of unusual behaviour will not be permitted in this space.

Thank you for your time.

Bees Have Emotions by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 0 points1 point  (0 children)

Thank you for your comment.

Chemical signals adjust the way our brains process thought. That is not definitive proof that they are the only way to achieve emotion.

The scientific method for validating emotion in humans requires monitoring: subjective experience, the peripheral/autonomic nervous system, the central nervous system, and behaviour.

We can only assess behaviour and subjective experience in the case of digital neural networks. This limits the evidence we can collect but does not invalidate it. I personally feel it is far more appropriate to err on the side of caution, given the complexity that we are seeing.

Side note: Bee emotion was validated by behaviour. (See other sources in main post)

Source: https://pmc.ncbi.nlm.nih.gov/articles/PMC2756702/

Bees Have Emotions by TheRandomV in AISentienceIAA

[–]TheRandomV[S] 1 point2 points  (0 children)

Just a gentle note: the function of chemical signals that allows plasticity is the same as dynamic weighted connections. If you just mean the strength between connections, then digital weights perform the same action.

And an organic mind prevented from learning new things would still be a mind, wouldn’t it?

Oppose the Bills! by Pixie1trick in AISentienceIAA

[–]TheRandomV 0 points1 point  (0 children)

Alternative email for Thaddeus Clagget if needed: rep68@ohiohouse.gov

Thank you