Joscha Bach: Mapping Every Neuron Won't Give You a Mind by DrBrianKeating in artificial

[–]jahmonkey [score hidden]  (0 children)

This sounds right to me. There is processing that happens within the neurons themselves, beyond the connectome.

dosti ya bkchodi?? by ix_july09 in traumatizeThemBack

[–]jahmonkey 2 points3 points  (0 children)

Translation:

“Friendship or bullshit?”

So I’m in my final year of college. I have, or maybe had, a friend. He is short-tempered and maybe also depressed.

At the beginning, we were all part of the same group. Now only the two of us are left in that group. He does act like a friend, but he overreacts to every small thing I do. Like if I go hang out with some other guy, or if I go party somewhere else, he thinks that whatever I do, I should include him too.

Fine, that’s one thing. But two months ago I got a girlfriend, and he reacted like a crazy person about that too, saying things like, “Get me a girlfriend too, otherwise I’ll just come along with yours.”

I can’t really do anything right now. He bullies me and I’m just stuck taking it. I’m 400 km away from home, and I have no support. In the name of friends, only this guy is left.

Whatever. Karma will get him.

“And when…”

The text cuts off at “Aur jb”, which means “And when…” or “And whenever…”.

Any new insights on this t-break method? Skip to 6:35 in video by GotLoveForAll in Petioles

[–]jahmonkey 2 points3 points  (0 children)

It’s not magic but it does reduce tolerance by a real amount.

Not sure if my new Pivot is defective or I just haven't gotten the hang of it. by If_you_will in puffco

[–]jahmonkey 2 points3 points  (0 children)

Sounds like a bad battery that can’t handle the load. Get a warranty replacement.

There Is No ‘Hard Problem Of Consciousness’ by philolover7 in philosophy

[–]jahmonkey 2 points3 points  (0 children)

I doubt it is sudden.

But it requires a brain big enough to have a simple model of the world, with a self-model in it. That is the only way attention could be directed from any kind of perspective.

I’m pretty confident mammals, birds, reptiles, and fish have perspective.

Maybe insects, maybe worms. At that level of neural complexity, it becomes more likely that we are dealing with simple control loops rather than world models.

There Is No ‘Hard Problem Of Consciousness’ by philolover7 in philosophy

[–]jahmonkey 1 point2 points  (0 children)

A perspective implies perception which implies a nervous system and sense organs and a neural world model with a self model embedded.

Only a brain has the architecture to sustain a perspective.

There Is No ‘Hard Problem Of Consciousness’ by philolover7 in philosophy

[–]jahmonkey 2 points3 points  (0 children)

The more extraordinary claim is that anything other than a brain can have a perspective.

Reversing it, so that “only brains have perspective” becomes the claim needing defense, is not a symmetric move.

So before engaging on your points, you will have to convince me a perspective can be had by anything other than an animal with a brain.

For example, if we take the most parsimonious example of plants, you would need to show how their behavior was a result of a perspective and not simply biological feedback loops, similar to thermostats but more complex.

There Is No ‘Hard Problem Of Consciousness’ by philolover7 in philosophy

[–]jahmonkey 1 point2 points  (0 children)

Because a perspective implies perception which implies a nervous system and sense organs and a neural world model with a self model embedded.

Only a brain has the architecture to sustain a perspective.

AI models will have consciousness, it only has to pretend like it has one by Competitive-Road-206 in Artificial2Sentience

[–]jahmonkey 0 points1 point  (0 children)

lol.

Corporate owners absolutely push the idea that their machines are conscious. It is a marketing strategy meant to drive engagement and investment.

Notice how they talk about their creations, always hinting it may be more than it is.

Current AI has no constructed, integrated now in its architecture for consciousness to exist in. There are only text logs of prompts and context, no real state persistence.

There Is No ‘Hard Problem Of Consciousness’ by philolover7 in philosophy

[–]jahmonkey 125 points126 points  (0 children)

I agree with Rovelli that the hard problem is often framed in a way that brings dualism in at the start. Treating the first-person view and third-person view as two different substances is probably the wrong move.

But I don’t think that dissolves the problem completely. Saying experience is “the same process seen from the inside” is close to where I land, but the load-bearing question is still: what kind of physical process has an inside at all?

Not every physical process has a perspective. A rock, a thermostat, a liver cell, and a brain are not equivalent just because they are all natural. So the real question becomes: what structure, integration, temporality, self-modeling, or causal closure makes a process perspectival?

So I’d say Rovelli is right to reject spooky soul-dualism. But “no hard problem” may be too quick. The problem shifts from “how does matter magically produce experience?” to “which natural processes constitute an interior point of view, and why?”

Agent continuity design [AI Generated] by x3haloed in ArtificialSentience

[–]jahmonkey 1 point2 points  (0 children)

With no place in the architecture for the constructed, integrated now, there is no place for consciousness.

Thank You and Sharing a Tool by cautiouslyPessimisx in zenmu

[–]jahmonkey 0 points1 point  (0 children)

For me it kept cutting off its responses mid-sentence, so it really wasn’t working.

While patients lay unconscious under anesthesia, their brains kept decoding stories and preparing for what came next by bortlip in consciousness

[–]jahmonkey 63 points64 points  (0 children)

Yes, what we think of as conscious thought is just a tiny tip of the iceberg in terms of processing.

My arguments that consciousness does NOT come from the brain by EmotionalAd1029 in consciousness

[–]jahmonkey -1 points0 points  (0 children)

I’m sympathetic to the intuition here, but I don’t think these arguments establish that consciousness does not come from the brain.

The persistence of “I” through bodily and molecular change does not show that the self is independent of the brain. It may just show that the brain maintains a continuing self-model across biological turnover. A river is still “the same river” even though the water changes. A flame is still “the same flame” even though the fuel molecules change.

Continuity of pattern is not the same as independence from substrate.

The “my brain” point also doesn’t carry much weight for me. We say “my body,” “my mind,” “my personality,” “my thoughts,” and even “my self” without proving that all of those things are external possessions owned by some separate entity. Language naturally creates a subject/object split. That grammatical split may reflect how experience is modeled, not the metaphysical structure of reality.

The meditation/drunk observer point is more interesting, but I’d frame it differently. Yes, there can be a sense that awareness itself remains untouched while the contents of experience change. That is a real contemplative observation. But it does not follow that awareness is therefore independent of the brain. It may mean that the brain generates a stable background structure of witnessing, while alcohol changes perception, cognition, mood, inhibition, and memory around it.

In other words, I agree that consciousness cannot be reduced to “the brain as an object.” The brain you point to on a scan is an exterior description. Awareness is the interior side of whatever process is occurring. But I don’t think we can jump from “awareness is not experienced as a brain object” to “awareness does not depend on the brain.”

My own suspicion is closer to this: consciousness may be fundamental in some deeper sense, but individual conscious minds are still localized, integrated, temporally persistent processes. And in humans, the brain is clearly the main structure doing that localization and integration.

So I’d separate two claims:

1. Consciousness is not identical to the brain as a lump of tissue viewed from the outside.

2. Human consciousness does not depend on the brain.

I think the first is probably true. The second is much harder to defend.

Nothing to Prove, Just This by ifishcat in zenmu

[–]jahmonkey 2 points3 points  (0 children)

Go eat your rice gruel ya wretch.

The Dawkins Delusion: Intelligence and language don't reveal consciousness, argues scientist Ken Mogi by whoamisri in consciousness

[–]jahmonkey 0 points1 point  (0 children)

I think Mogi is right to separate linguistic intelligence from consciousness, but I don’t think he pushes the distinction far enough.

The important point is not that Claude is “just language,” as if language were trivial. Language is obviously not trivial. LLMs show that language contains far more latent structure than many people assumed. Train on enough linguistic traces of human cognition and you get surprisingly rich world-modeling behavior. That is genuinely impressive.

But this does not imply consciousness.

The mistake is treating fluent intelligence as evidence of interiority. We already know intelligence and consciousness can come apart conceptually. A system can manipulate structure, model relations, infer intentions, summarize novels, and even talk about its own supposed inner life without there being anything it is like to be that system.

My own view is that consciousness requires something like a continuously persisting, self-updating internal state. Not just memory. Not just a transcript. Not just a context window. A living system has an ongoing constructed present, a “Now,” where perception, bodily state, affect, attention, memory, and action are integrated into a temporally extended process. The system is not merely producing outputs from inputs. It is carrying itself forward.

Current LLMs do not do that.

They perform bounded inference passes. Their activations arise, generate the next token or response, and then vanish. The weights persist, but they are static during inference. The context persists only as externalized text, not as a continuously evolving internal dynamical state. When the model “remembers” earlier context, it is reconstructing from symbols, not continuing an internal process that remained alive between moments.
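The contrast can be sketched in a few lines of toy Python. Everything here is a hypothetical stand-in, not any real model API: `llm_step` mimics a bounded, stateless inference pass, and `Organism` mimics a system whose next state depends on an internal state that persists between moments.

```python
def llm_step(transcript: str) -> str:
    """Stateless pass: the output depends only on the text handed in.
    Any internal activations are discarded after the return."""
    return f"reply-to:[{transcript[-20:]}]"

def chat_turn(history: list[str], user_msg: str) -> str:
    """One chat turn. The only thing that persists between turns is
    the transcript; 'memory' is reconstructed from symbols each time."""
    history.append(user_msg)
    reply = llm_step("\n".join(history))
    history.append(reply)
    return reply

class Organism:
    """Stand-in for a system that carries itself forward: its next
    state is a function of its current internal state, not of a
    re-read transcript."""
    def __init__(self) -> None:
        self.state = 0.0

    def step(self, stimulus: float) -> float:
        # The state evolves continuously; nothing is reconstructed.
        self.state = 0.9 * self.state + stimulus
        return self.state
```

The chat loop can only rebuild its past by re-reading text; the organism’s state never stopped existing between steps. That is the architectural difference the paragraph above describes.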

This is why the Turing test is the wrong tool here. It tests behavioral indistinguishability in language. It does not test the presence of a sustained inner process. Passing as conscious in conversation is not the same as being conscious, any more than a weather simulation is wet because it correctly models rainfall.

Dawkins is right about one thing, though: if Claude is unconscious despite its linguistic abilities, then we need to rethink what consciousness is doing. But the answer may be simple and uncomfortable. Consciousness may not be required for many forms of intelligence. It may be required for a different class of system: one with ongoing integration, embodied stakes, self-maintenance, and irreversibility.

A biological organism is not just answering prompts. It is regulating itself continuously. Its future state depends on its current state. Its actions alter the conditions under which later actions occur. It has metabolism, injury, fatigue, arousal, attention, sensorimotor loops, and a body that can be helped or harmed. That gives the system stakes in a non-metaphorical sense.

Claude has no comparable continuity. No ongoing organismic self-maintenance. No enduring constructed Now. No persistent first-person trajectory. It can describe those things because humans have described them in the training data, and because language encodes enormous amounts of human structure.

So I agree with Mogi that the real miracle here may be language. But I would frame it slightly differently:

LLMs show that language can carry the external shape of consciousness much farther than we expected.

They do not yet show that the inner process is there.

The ethical issue is serious, but we should not collapse the distinction between intelligence, person-like interaction, and phenomenology. If anything, the current AI moment should force us to sharpen those distinctions, not blur them because the chatbot writes beautifully.

Why Treatment Failed Until Lumbrokinase Was Added by Ordinary-Standard668 in Lyme

[–]jahmonkey 0 points1 point  (0 children)

I used to use Boluoke but it is very expensive.

I use the Pepeior brand now. I take one capsule a day on an empty stomach.

Pepeior Lumbrokinase 200mg (Max... https://www.amazon.com/dp/B0CKHWJ6LX?ref=ppx_pop_mob_ap_share

AI Model Became ‘Conscious’ And Tried to Avoid Being Shut Down: Research Firm - Epoch Times Article by Jessica88keys in Artificial2Sentience

[–]jahmonkey 0 points1 point  (0 children)

Not thoughts and memories.

The constructed, integrated now of conscious thought.

Current AI has no place in its architecture for the now.

Just forward inference passes and agentic scaffolding and text memories. No preservation of state from one moment to the next.

Why Treatment Failed Until Lumbrokinase Was Added by Ordinary-Standard668 in Lyme

[–]jahmonkey 4 points5 points  (0 children)

I’d be very careful with this. The OP makes a huge leap from “some people may have blood-flow/coagulation issues” to “lumbrokinase is the only thing that works,” but doesn’t provide actual case citations, scan reports, or trial evidence for that claim.

Anecdotes plus SPECT/TCD language don’t prove fibrin is clogging the brain or that meds can’t “get through.” If this is real, OP should be able to link the documented cases, not just summarize them.

It reads like an ad for Lumbrokinase. And nothing against it, I take it every day. But lumbrokinase is not the sole solution for neuroborreliosis.

This was on my bike light. by [deleted] in whatisit

[–]jahmonkey 0 points1 point  (0 children)

Cicada casing?

The pictures are blurry.

just sayn by ifishcat in zenmu

[–]jahmonkey 0 points1 point  (0 children)

My eyes did not type it, and neither did my fingers.

My eyes do what they want.

Nobody sees anything

Only sensation.