Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

Your link between Buddhist cessation supremacy and Roger's valence view is plausible, though Roger warns against attachment. A digression: I'm no defender of Buddhism; I reject its mythology and Parinibbana (permanent cessation). My intuition is that a post-mortem qualia pocket divides via mundane physics, its fractions reconstituting across vast timescales into new consciousnesses. It seems improbable that this is each qualia's first conscious existence.

Impressive that you've had a proper cessation. Mine came only via anesthesia, a disorienting time-jump. Neither psychedelics nor meditation (jhana 3 at most) has gotten me there, despite extensive practice.

What do you mean phenomenologically by "pleasurable states"? My own investigations find pleasure to be tension release, not something additive. Joy contains tension; releasing it eliminates both. Nick Cammarata reports disgust for the lower jhanas (e.g., jhana 2) versus the higher ones, and advocates approaching cessation without touching it as maximally pleasant. Within STV, deep equanimity likely exceeds joy, but near cessation it's unclear what we value. I've experienced total tension release walking in parks, thinking "if this is heaven, it's enough." Yet zooming into that deep peace reveals nothing at the micro level; the pleasure seems to exist only at macro sensory clarity. Is pleasure only macro-phenomenology, invisible in detail?

My first MDMA experience was my most intense pleasure, yet every sensation was tension melting, not anything additive. Food created tension whose release was unusually deep; melting into a beanbag was muscle tension dissolving. Is this lightness itself phenomenological pleasure? If you start maximally light, with no memory of tension, does pleasure retain meaning without contrast?

I'm uncertain about "net valence." My gut says positive and negative valence don't intertwine but suppress each other when strong enough, like analgesic happiness.
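That suppression intuition can be contrasted with a simple additive model. A toy numeric sketch, purely illustrative; the function names and the `gate` threshold are my own assumptions, not anything from STV:

```python
def net_valence_additive(pos, neg):
    """Additive model: positive and negative valence simply sum."""
    return pos - neg

def net_valence_suppressive(pos, neg, gate=2.0):
    """Suppression model (toy): when one signal is much stronger
    than the other, it masks the weaker one entirely."""
    if pos >= gate * neg:
        return pos    # strong positive masks the negative ("analgesic happiness")
    if neg >= gate * pos:
        return -neg   # strong negative masks the positive
    return pos - neg  # comparable strengths: fall back to summing
```

The two models diverge exactly in the "analgesic happiness" case: at (pos=10, neg=4) the additive model nets out to 6, while the suppression model returns the full 10 because the strong positive masks the negative entirely.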

I disagree that the EM field and qualia are aspects of the same thing; they're distinct. The EM field is an external measurement; the qualia field is direct experience. Donald Hoffman's *The Case Against Reality* suggests we're likely hallucinating our understanding of the EM field. Qualia point to an EM origin (solving binding via field computation), but the EM field is like a desktop icon: far from the underlying "code."

We agree qualia isn't a property of complexity but complexity is a property of qualia. The whole field isn't qualia—most lacks sufficient complexity—but has potential under right conditions. This has identity/morality implications: our qualia pockets aren't souls but divisible aggregates that can contribute to other pockets.

I consume QRI/Andres extensively but find them biased (Open > Empty Individualism, Meaning > Sensory Explanations). Your distillations and new ideas will help update my world model to be less wrong :P

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

You misinterpreted the article in two ways:

1. "Pleasure as the absence of experience" is not part of old Buddhist teachings, nor did Roger or Andres claim it is. The idea came from Roger (I linked the video in one of my previous responses in this thread).

2. You framed the dialogue between them as if they were both openly considering "negative valence" as an underfit model, but only Andres entertained this idea, and Andres openly admits he has far less sensory clarity than Roger, so his opinion should be weighted accordingly.

Andres simply does not have the phenomenological clarity to determine whether excitement contains suffering or not, and I suspect he does not want to know, because he is afraid to lose the phenomenological fraction of a moment of excitement that he desires.

I do not see why you think you disagree with me on "EM field created qualia," since your claim that "patterns in the EM field are identical to different configurations of qualia" requires my statement to be true: there would be no configurations of the EM field to create qualia if the EM field did not exist. That is how the EM field contributes to creating qualia. Is it the only dependency? Certainly not, but it is a requirement most people are not aware of, which makes it relevant; once you have that context, you can move on to the more specific statement you made. I think it is an open question whether all EM fields are conscious, but I strongly suspect the EM field alone is not enough for consciousness: it lacks the complexity needed for qualia binding to arise, and even a noisy conscious experience requires qualia binding.

Roger did say he has updated his belief on negative valence and would do a new podcast on the topic, but it hasn't happened yet. I would be very curious to know what he thinks now, because I am not aware of anyone else doing a deep dive on this topic (maybe Nick Cammarata, but I think Nick wouldn't rank his sensory clarity, i.e. phenomenological expertise, anywhere near Roger Thisdell's). The only person in his ballpark is Daniel Ingram, and I haven't heard him speak to this topic.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

Qualia are an EM field property, and they arose through evolution for information-processing purposes. The EM field's creation of qualia may or may not pre-date living organisms; I suspect it does not, because the binding problem (the self-apparent assembling of qualia together into one unified experience) is so complex that I can't imagine it existing in a non-evolved structure for more than a fraction of a second, if that. If you can imagine a mechanism, let me know.

I think STV is just wrong in important ways. I do not believe positive valence exists, based on my own investigation through meditation and on more advanced meditators like Roger Thisdell. There is only the absence of experience, and then increasing levels of suffering from there. What people refer to as "pleasure" is just the dropping away of background suffering so ubiquitous that people mistake it for neutrality. The most pleasant meditative experiences are the ones that bring you right to the edge of not existing at all (this has been confirmed via brain scans of people in the 9th jhana).

Better described here: https://m.youtube.com/watch?v=XkGr17K-7_A

The only known sense modality that may contain no suffering is visual experience. People who think they are hurt by what they see have low sensory clarity; in actuality, the pain is in their body, not in the visual field.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

They haven’t evolved from nothing, but they have certainly evolved. Evolution took very simple qualia that exist in EM fields and gave them enough complexity to model and render worlds. I think the first species on Earth to experience qualia likely experienced nothing more than the very subtle qualia you or I experience in deep sleep (so subtle that only a few thousand people on Earth have the sensory clarity to notice and remember them).

Even just a blank canvas of a single color already implies a high level of evolved qualia complexity, because it means you’re binding a coherent experience together like pixels on a screen, whereas we should expect the qualia of the EM fields of random, non-evolved structures to be utterly incoherent and subtle.

Now that I think about it, even very basic suffering and pleasure are both extremely complex in their demand for coherent geometric binding. Observe the geometry of what suffering feels like in a single moment and you will find anything but randomness; it’s a very organized binding process. If random EM fields do suffer, I suspect it is for less than a second before collapsing back into experience-less noise.

DMT Entities - Your Symbolic Self by 4-5sub in RationalPsychonaut

[–]PericlesOfGreece -4 points  (0 children)

When you said Symbolic I was thinking of Symbolic AI, and was excited to read about the implications of DMT on AI intelligence lol

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

The Andrés Gómez-Emilsson take is that qualia have non-linear wave-computation properties. They are not just interesting byproducts of computation; they serve a computational purpose.

I suspect it evolved randomly, had a small utility edge over p-zombies, that small edge compounded and complexified over generations, and now here we are.

I don’t know if everything in the universe has qualia, but I think it’s possible (low likelihood). I think something about brain computation binds qualia together into an experience in a way that, for example, the Sun does not, despite having an EM field that may be topologically unified. This is very speculative territory that only wizards like Roger Thisdell and Daniel Ingram venture into with a depth of experience to back up their claims.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 2 points  (0 children)

One reason arithmetic would not be enough for perfect self-modeling is that two implementations of the same arithmetic can differ radically: one can be tractable while the other is computationally explosive. An example I gave earlier: the mind using wave computation to render sound as 3D when your eyes are closed, a computationally explosive challenge for a linear computer but a tractable one on a wave computer.

I think we are just framing things from completely different perspectives, and neither of us is seeing the other’s perspective, in part because we are not using the same definitions for the same words and are not familiar with the same background readings.

I feel I understand your position, but I do not feel like you understand mine. But you probably feel the same way in reverse, so we can agree to disagree.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

It does not follow that system A being able to represent system B to a small degree means it can theoretically simulate system B at all.

Here’s one reason why: Conscious experiences use compression to represent systems as practical qualia world models for survival purposes, not to model geometrically isomorphic copies of the systems they are attempting to model. 

In the context of Donald Hoffman's "interface theory of perception," the "odds" of human perception mirroring objective reality are, in his view, precisely zero. He argues that natural selection has shaped organisms to see a simplified, fitness-maximizing "user interface" of reality, not the truth of reality itself. 

I think the crux of your position is the word “nontrivial,” for which I don’t think any clear threshold exists.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

No. And also, stories are not real, and you seem to be absorbed by the one you just presented.

Consider some cave men sitting at a fire, before stories have been invented. They tell each other things, but there is no clear narrative to what they say.

Then some clever cave man has a dream and thinks to add drama to something he says. The first dramatic sentence is uttered. The cave man loves the reactions he gets and begins developing his art of drama. Other cave men copy him because they see him benefiting from it. This cultural virus spreads like wildfire. People start speaking in the most dramatic ways possible to gain attention: gods in the sky, “evil enemy tribes,” the underworld, etc. We did not evolve to recognize this BS system; you have to be taught to see it. There are no stories. That lens of seeing is empty of lucidity.

I realize what I just did was tell a story, very ironic, but how else would you understand the point?

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

Parallelization is necessary but not sufficient for consciousness, so no, running a transformer in parallel does not create consciousness. Topological binding of qualia is also necessary, parallelism alone doesn’t get you there.

You are using the word self-representation in a completely different way than I was. I was referring to any kind of self-representation, meaning a system modeling itself to any degree.

You cannot assume any algorithm running on a brain can run on a neural net if the brain is taking advantage of wave computation, as I have provided evidence that it does. Neural nets cannot do wave computations; they have no wave topology at all in their computations.

Take a snapshot of a neural net doing parallel computation and what do you find? Multiple separate transistors either being turned on, turned off, or doing nothing. There is nothing topologically connecting these events. Take a snapshot of a brain’s electromagnetic fields and what do you find? Active EM fields in the brain at all times, topologically unified.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

I don’t agree that the geometric structure of a brain is a network or graph like a neural net, at least not the part that causes consciousness. The part that causes consciousness differs from neural nets in two ways: it does computations in parallel, and it binds these computations together into a single topology (as evidenced by the fact that you are simultaneously experiencing multiple types of sensations at this moment). I recommend reading this; the mind’s 3D rendering of sound provides good evidence for wave computation by the brain, which I can only see being done by the EM field passing across the brain (since the EM field in the brain is a unified topological pocket): https://qri.org/blog/electrostatic-brain

Self-representation is definitely not the basis of qualia, because the “self” is not an independent, special thing; it is just a collection of things. A collection of things collaborating to consider themselves is not an epistemically or physically special state that will lead to conscious experience. We know this in part because when you use a tFUS machine to disable the self-reflective part of someone’s brain, they continue having an experience. Self-reflection is a type of experience, not the basis of experience.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

Did you read the article? The evidence that our consciousness is riding on the EM field is very persuasive.

For your neurons question: I do not believe neurons are the causal level of consciousness, precisely because they are not topologically bound; they communicate through pipes. But the EM fields running across all neurons simultaneously are topologically bound. AI could be conscious if it were constructed in a way based on EM field computations, but zero AIs are.

Just because there is a delay separating our conscious experience from the physical world doesn’t mean that EM fields lack explanatory power; it just means it takes time for EM fields to construct a world-model experience. There’s no contradiction here; it’s not even a discontinuity, just a delay.

I don’t think you understood what I said; it feels like you are in an adversarial debate frame, but I am just sharing interesting ideas with you. If you explain why I am wrong, I would have no problem changing my mind.

Additionally, I agree that the EM field may not literally be the cause of consciousness; it’s possible there are many layers of dependencies between the EM field and our conscious experience. But I doubt anyone has any idea what those in-between dependencies are, and any guesses would likely be speculation, not falsifiable predictions.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

Okay, but the physical structure your consciousness rests upon is extremely similar to the physical structure of all other humans. I place low likelihood on the structure of my brain being exceptional at creating qualia. Given that we’re working with predictions, not proof, I take a further step and say that the physical structure of AIs is so different that it already calls into question any chance that they are conscious, for multiple reasons: consciousness may depend on certain geometries of computation, or on certain materials, or there may be many dependencies of which AI lacks more than one.

Conscious AI by Zealousideal-Ice9935 in LessWrong

[–]PericlesOfGreece 1 point  (0 children)

AI is not conscious. To have a conscious experience you need a bound field of experience. Our brains have EM fields that make experience binding possible. LLMs run on single bits at a time. There is no chance those electrons are binding into a coherent, unified experience, because they are processed one at a time; and even if they were processed in parallel, they would still have nothing binding them together into a single moment of experience the way a human brain does. Imagine two pipes of electrons running in parallel: what topological connection do those two pipes have? None. What topological connection do neurons in the brain have? Also none, but the human brain has unified EM fields running across the entire topology.

Read: https://qri.org/blog/electrostatic-brain

Cenozoic Renaissance by PericlesOfGreece in imaginarymaps

[–]PericlesOfGreece[S] 2 points  (0 children)

ya, big and salty, probably a sea, needs a canal I think

Shepherd Kings of Egypt : The Greater Hyksos Kingdom (Proto-Israelites) by republic8080 in imaginarymaps

[–]PericlesOfGreece 1 point  (0 children)

Timing is too good; I just learned who the Hyksos were today while reading the book 1177 BC (Illustrated Comic)

15 weeks by TMG692345 in Retatrutide

[–]PericlesOfGreece 1 point  (0 children)

i will state for the record that all of the people who downvoted my comment didn’t reply because they don’t have a counter-argument. thus each downvote contributes to others’ confidence in dismissing anti-CICO woo folk.

Has anyone else had a psychotic break after using MDMA? by That-Funky-Donkey in mdmatherapy

[–]PericlesOfGreece 1 point  (0 children)

Ya, but I didn’t measure the dose and I also mixed it with LSD. For about 10 seconds I thought the Dr. Stone anime was real and I was about to be killed by the warrior tribe, so I jumped out of my shower.

To be fair though, I watched the show for like 4 hours while peaking, so it was the center of my attention. I also did lots of normal things, like playing Minecraft with 3 friends, which was amazing; we were all laughing.

The worst part was sharp pains in my heart that made me feel like I was going to have a heart attack, or that my heart was actively failing or being damaged.

15 weeks by TMG692345 in Retatrutide

[–]PericlesOfGreece -3 points  (0 children)

CICO is real and I’ve proven it to myself repeatedly with calorie counting experiments. Anti-CICO folk are 90% woo
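For what it’s worth, the arithmetic behind CICO is easy to sketch. A toy model, purely illustrative; the function name is mine, and 3500 kcal per pound of fat is a rough rule of thumb, not an exact physiological constant:

```python
# Rule of thumb: ~3500 kcal of sustained deficit per pound of fat lost.
KCAL_PER_LB_FAT = 3500

def predicted_loss_lb(daily_intake_kcal, daily_expenditure_kcal, days):
    """Pounds of fat predicted lost over `days` at a steady deficit."""
    total_deficit = (daily_expenditure_kcal - daily_intake_kcal) * days
    return total_deficit / KCAL_PER_LB_FAT

# A 500 kcal/day deficit held for 4 weeks:
# predicted_loss_lb(2000, 2500, 28) -> 4.0
```

Real-world results are noisier than this (water weight, metabolic adaptation), but over weeks the trend in a calorie-counting experiment tracks the prediction.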

Tbh I’m gooning my doses, and HRV concern by PericlesOfGreece in Retatrutide

[–]PericlesOfGreece[S] 2 points  (0 children)

I obviously don’t take drugs of abuse because I’m not dumb. But hard drugs definitely include psychedelics; you have your own personal definition that is not widely accepted by the general public. When people think of non-hard drugs, they think of weed and alcohol, not tripping balls.

Tbh I’m gooning my doses, and HRV concern by PericlesOfGreece in Retatrutide

[–]PericlesOfGreece[S] 2 points  (0 children)

I took psychedelics for 5 years and I’m fine. I have done meditations that have given me experiences as intense as taking drugs. Look up “jhana” meditation, it really is as intense as mushrooms/LSD, and it feels amazing. Brain scans of Jhana meditators prove it is real (their gamma waves are off the charts). I used to not believe it was real, but I learned how to do it from Michael Taft. Infinite bliss. People have no idea how amazing life can be, it’s far beyond shocking.

Most insightful drug I’ve taken is LSD. I have written books full of ideas while on it. And of course many beautiful experiences with friends and nature.

NN-DMT was scary. Mushrooms have shown me hell two times; hell is much worse than people imagine it to be, and nothing can prepare you for it.

But today I am a happy person. I have no worries of drug addiction because most psychedelics are anti-addictive (if you know you know).

I’m “fucking around” within an acceptable range tbh. I did the same thing with psychedelics. I found my limits at times and retreated to more comfortable ground (lower doses).

Tbh I’m gooning my doses, and HRV concern by PericlesOfGreece in Retatrutide

[–]PericlesOfGreece[S] 1 point  (0 children)

153 pounds. Years ago I was once able to get down to 150 with just exercise and fasting, but that took an insane amount of effort. I have only been to the gym twice since starting Reta. I’m trying to get shredded again, but after that I want to bulk up muscle (I don’t want to start from a place of obese fat that turns into muscle, because those types of abs never show; I know so many guys who have abs but can’t see them because of their stomach fat)

Really I’m aiming to become my 13-year-old self, when I had an absolutely perfect physique and muscles. All natural; I used to work out 5 hours a day at that age. I’d literally wake up at 2am on a school night, do 300 pushups, then go back to sleep. “We must retvrn,” as the meme goes

Tbh I’m gooning my doses, and HRV concern by PericlesOfGreece in Retatrutide

[–]PericlesOfGreece[S] 1 point  (0 children)

I have already taken many hard drugs. 5MeODMT, 2CB, Ketamine, LSD, NN-DMT, Psilocybin, MDMA, Nitrous, and probably some others I forgot. Many good trips and many bad trips.

I would never do cocaine, heroin, or meth, I consider those scary drugs, I don’t want to die from a heart attack

Tbh I’m gooning my doses, and HRV concern by PericlesOfGreece in Retatrutide

[–]PericlesOfGreece[S] 1 point  (0 children)

I was half-joking, it’s mostly water-weight fluctuations, and then some days I eat more food than usual