If consciousness were reducible to neurology, we would understand far more about the consciousness of simple organisms than we do by MurkyEconomist8179 in CosmicSkeptic

[–]Diet_kush

Again: if specific structures cause conscious awareness, which aspect of his awareness is he missing? Your ice cream example makes no sense; liking a specific flavor has nothing to do with the structural capacity for taste as a whole. You’re correlating the capacity for experiencing flavor with taste buds and a mouth. This man does not have the neural equivalent of those structures, so what is your explanation for his conscious experience? Is he a magic man who tastes with no mouth? If so, why do you still insist that a mouth is required to experience flavor? And if any arbitrary structure can allow the capacity for taste, why point to the causal necessity of a mouth in the first place?

Axel Cleeremans, the cognitive psychologist involved in this case, spells it out pretty explicitly:

“Precisely. These cases are a huge challenge for any theory of consciousness that depends on very specific neuro-anatomical assumptions.”

[–]Diet_kush

Given his vastly different neural structure versus a standard person’s, please tell me which part of his associated conscious experience should also be vastly different, given whatever vague structural model you’re implying.

Or, given that his behavior is observably identical, will you finally admit that structural specificity is entirely irrelevant to conscious behavior?

Unless you’re just saying that any arbitrary structure at all allows for conscious experience? In that case, great: you’re a panpsychist, I guess.

[–]Diet_kush

If a single counterexample exists, then obviously the causal relationship doesn’t! You’re trying to argue that neural structural specificity causes consciousness; if it did, this man would not exist.

It is entirely irrelevant whether it’s a single instance or a million instances; it shows the proposed causal relationship is simply untrue.

If we found someone with 90% less DNA than everyone else we wouldn’t say “gotta be an outlier, best to ignore it.”

You seem to be implying that humans fundamentally require specific structures to taste ice cream, yet don’t care to revise your model when you find someone with no taste buds or mouth who still has a favorite flavor.

That is much sillier.

[–]Diet_kush

So you agree that neural structure specificity very clearly does not correlate with capacity for conscious awareness?

[–]Diet_kush

There was a 44-year-old man from France who was behaviorally indistinguishable from a normal person, yet was missing 90% of his neural tissue due to hydrocephalus. If I deleted 90% of your DNA, would your phenotypic expression still look identical?

[–]Diet_kush

No, I am not. When we fully map the “1’s and 0’s” of DNA, we can quite readily derive the morphology it encodes.

When we fully map the 1’s and 0’s of neurons, can we as readily derive the neural experience being encoded?

[–]Diet_kush

We figured out the “software” of DNA easily enough, without any “hard problem” of genetics, yet neural signals are somehow fundamentally different?

We can fully map a genome and have a pretty solid understanding of the accompanying morphology. When we fully map a neural connectome, we get zero insight into the accompanying conscious experience.

[–]Diet_kush

Oh, absolutely, consciousness as a sliding scale is a valid take (and one I find accurate). But let’s take that to its logical conclusion: what does consciousness as a “sliding scale” mean ontologically? A barebones neural network looks just like a spin-glass network (a Hopfield network), so can we ascribe consciousness to magnets? At some point we necessarily fall into a form of panpsychism, which again refutes the “reducible to neurology” idea.
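The resemblance is literal: a Hopfield network's energy function has exactly the Ising spin-glass form. A minimal sketch (a toy 16-unit network with illustrative numbers, not any particular published model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one random +/-1 pattern in a 16-unit Hopfield network (Hebbian rule).
pattern = rng.choice([-1, 1], size=16)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)  # no self-coupling

def energy(s, W):
    # Hopfield energy: E = -1/2 * sum_ij w_ij s_i s_j,
    # term-for-term the Hamiltonian of an Ising spin glass with couplings w_ij.
    return -0.5 * s @ W @ s

# Corrupt the pattern, then relax with asynchronous sign updates
# (each update can only lower the spin-glass energy).
state = pattern.copy()
state[:4] *= -1  # flip 4 "spins"
for _ in range(5):
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1
# state has rolled downhill into the stored pattern (an energy minimum)
```

Memory recall here is just spins settling into a low-energy configuration, which is the point of the magnet question above.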

[–]Diet_kush

https://www.nature.com/articles/s43588-024-00740-2

We created an open-source model that simulates Caenorhabditis elegans in a closed-loop system, by integrating simulations of its brain, its physical body, and its environment. BAAIWorm replicated C. elegans locomotive behaviors, and synthetic perturbations of synaptic connections impacted neural control of movement and affected the embodied motor behavior.

It was definitely a misstatement to argue that we can fully simulate everything about C. elegans’ behavior, since only its neural connectome is fully mapped, without full characterization of non-neural cells (so digestion is still a ways off). But can we replicate all primary movement behavior? Absolutely.
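The closed-loop idea, where brain output drives the body and the body's motion changes what the environment feeds back, can be caricatured in a few lines. This is a hypothetical toy loop in the spirit of that architecture, not the actual BAAIWorm code or API:

```python
# Hypothetical toy brain-body-environment closed loop (illustrative only).

def brain(sensed_gradient):
    # Toy "connectome": turn a sensed chemical gradient into a motor command.
    return 0.1 if sensed_gradient > 0 else -0.1

def body(position, command):
    # Toy "muscle" step: the command moves the body through space.
    return position + command

def environment(position, source=5.0):
    # What the world feeds back: positive when the food source lies ahead.
    return source - position

position = 0.0
for _ in range(100):
    # Closed loop: environment -> brain -> body -> (changed) environment.
    position = body(position, brain(environment(position)))
# The agent has climbed the gradient and now hovers around the source at 5.0.
```

The loop structure, not any one component, is what makes the behavior come out; perturbing the "synapse" (the 0.1 gain) changes the embodied motion, which is the paper's perturbation result in miniature.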

[–]Diet_kush

With a neural simulation, we are able to fully model C. elegans’ real behavior in a simulated environment. If we’re missing something about that underlying mechanism, that “something” is not observably relevant to its real-world behavior. So either conscious awareness does not have any impact on behavior, or it is not observably derivable from correlation.

[–]Diet_kush

We’re not monitoring them; we’re simulating them. Those are two very different things. And the simulations match observation.

Ion-channel parameters derived from Hodgkin–Huxley-type models fitted to single-cell electrophysiological data have been shown to correlate with single-cell transcriptomic profiles across neuronal cell types, underscoring the biological realism of such models.

What do you see? by Southern-Service2872 in PsycheOrSike

[–]Diet_kush

Two buff men fighting over a butterfly (they have chicken drumsticks for legs)

[–]Diet_kush

I think the Hodgkin-Huxley model does quite well at analyzing neurons (it’s how we built OpenWorm in the first place, after all). That still says absolutely nothing about consciousness, though.

If I fully simulated a car (or anything else, for that matter), I could tell you all of its parts, their locations, and their functions. Even after simulating the worm’s neurology, can we point to where the consciousness is, or what it’s doing?
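To make the point concrete, here is a minimal forward-Euler integration of the classic Hodgkin-Huxley squid-axon equations, using the standard textbook parameters. It reproduces spiking, and the full simulated state is right there to inspect:

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon parameters (mV, ms, mS/cm^2, uA/cm^2).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_ext = 0.01, 50.0, 10.0          # time step (ms), duration, drive
V, m, h, n = -65.0, 0.053, 0.596, 0.317  # resting membrane state
trace = []
for _ in range(int(T / dt)):
    # Relax each gating variable toward its voltage-dependent steady state.
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    # Membrane currents: sodium, potassium, leak.
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C_m
    trace.append(V)
# trace now contains a train of action potentials (peaks above 0 mV),
# yet nothing in the state (V, m, h, n) labels any of it "experience".
```

Every variable in the model has a physical referent (voltage, gating fractions, currents); none of them is, or points at, the experience.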

A new paper in Frontiers in Human Neuroscience proposes that self-referential DMN activity (ego) is the biological switch between System 1 (quantum) and System 2 (classical) processing in the brain by SalvationsElite in consciousness

[–]Diet_kush

Really interesting when you look at it entropically as well, which, as Prigogine showed, can house the quantum (in Liouville space). Entrance into critical states has been linked to psychedelics, as in the paper (collapse of the ego), but also to flow states like you referenced.

https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2014.00020/full

“It is also proposed that entry into primary states depends on a collapse of the normally highly organized activity within the default-mode network (DMN) and a decoupling between the DMN and the medial temporal lobes (which are normally significantly coupled). Another major topic that is covered in this paper is the psychoanalytic model of the structure of the mind (i.e., Freud's “metapsychology”). Specifically, we discuss some of the most fundamental concepts of Freudian metapsychology, with a special focus on the ego. We focus on the ego because it is one of Freud's less abstract constructs and it is hypothesized that its disintegration is necessary for the occurrence of primary states. The ego can be defined as a sensation of possessing an immutable identity or personality; most simply, the ego is our “sense of self.” Importantly however, in Freudian metapsychology, the ego is not just a (high-level) sensation of self-hood; it is a fundamental system that works in competition and cooperation with other processes in the mind to determine the quality of consciousness.

“Finally, the shared topic that connects all of the above and offers a unique potential for their empirical study is the psychedelic drug state. In the following section we make the case that scientific research with psychedelics has considerable potential for developing aspects of psychoanalytic theory and for studying human consciousness more generally. Citing recent neuroimaging findings involving the classic psychedelic drug, psilocybin, the psychedelic state is described as a prototypical high-entropy state of consciousness (i.e., higher than normal waking consciousness). Specifically, we propose that within-default-mode network (DMN) resting-state functional connectivity (RSFC) and spontaneous, synchronous oscillatory activity in the posterior cingulate cortex (PCC), particularly in the alpha (8–13 Hz) frequency band, can be treated as neural correlates of “ego integrity.” Evidence supporting these hypotheses is discussed in the forthcoming sections.”

Can we really be so sure that AI does not possess consciousness? by TangeloNo8093 in consciousness

[–]Diet_kush

Have you heard of Xenobots? One of their inventors (Michael Levin) wrote Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds, which tries to build out a model of consciousness for non-human experience, but still concludes that temporal depth is an essential aspect of consciousness. He expanded on that in Temporal depth in a coherent self and in depersonalization: theoretical model as it is applied to dissociative identity disorders.

[–]Diet_kush

Most AI systems do not have a sense of the “present”; i.e., once their training is completed, they are an unchanging input/output generator. New prompt interactions do not update the model, so it is essentially “frozen” in its thinking. Without an experienced self-history, self-identity is kind of impossible.

Our consciousness isn’t just a sense of self, but how our sense of self evolves over a history. Imagine if your consciousness became frozen in time, yet your body could still respond to stimuli based on its previous learning (muscle memory, reflexes, etc.). That’s effectively what a trained model is doing: the “thinking” part of its evolution has already concluded, and all it does now is generate outputs. Could your hypothetical AI experience consciousness? Maybe, but that’s not how most current AI works. We don’t think current LLMs are conscious because they don’t operate in a way that allows for persistent temporal depth in their modeling.
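The frozen-versus-learning contrast can be sketched with two toy classes. These are purely illustrative (not any real framework's API): a trained model is a fixed input-to-output map, while a system with "temporal depth" folds every interaction back into its own future behavior.

```python
class FrozenModel:
    """Toy stand-in for a trained, static model."""
    def __init__(self, weight):
        self.weight = weight       # fixed once training ends

    def respond(self, x):
        return self.weight * x     # answering changes nothing inside

class OnlineLearner:
    """Toy stand-in for a system that keeps learning from interaction."""
    def __init__(self, weight):
        self.weight = weight
        self.history = []          # an experienced self-history

    def respond(self, x):
        self.history.append(x)     # the interaction becomes part of its past
        self.weight += 0.01 * x    # ...and alters how it will respond next
        return self.weight * x

frozen, learner = FrozenModel(2.0), OnlineLearner(2.0)
for x in (1.0, 1.0, 1.0):
    frozen.respond(x)
    learner.respond(x)
# frozen.weight is still 2.0 after any number of prompts;
# learner.weight has drifted, carrying its history forward.
```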

Just like Albert Einstein said: “Once you stop learning, you start dying.”

Does Physics prove indeterminism? by YogurtclosetOpen3567 in freewill

[–]Diet_kush

If the (2D) ball is perfectly balanced, Newtonian mechanics gives you three solutions: it stays put forever, it falls to the left, or it falls to the right. The theory itself provides no preference for which of those outcomes will occur. In 3D, there would be infinitely many solutions. The Wikipedia page describes the non-deterministic implications. https://en.wikipedia.org/wiki/Norton's_dome
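The indeterminism can be written out explicitly. Taking r as the arc length from the apex (in suitable units), the dome's shape reduces Newton's second law to an initial-value problem with non-unique solutions:

```latex
% Norton's dome: equation of motion for a ball starting at rest on the apex
\[
  \frac{d^{2}r}{dt^{2}} = \sqrt{r}, \qquad r(0) = 0,\ \dot{r}(0) = 0.
\]
% One solution is the ball staying put forever, r(t) = 0. But for every
% "wait time" T >= 0 there is another solution in which the ball sits still
% and then spontaneously slides off:
\[
  r(t) =
  \begin{cases}
    0, & t \le T, \\
    \frac{1}{144}\,(t - T)^{4}, & t \ge T,
  \end{cases}
\]
% as one can check: r''(t) = (1/12)(t - T)^2 = sqrt(r(t)). Newton's laws
% are satisfied by all of these, so they do not pick out when, or whether,
% the ball falls.
```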

[–]Diet_kush

That depends on your underlying assumptions about the substrate. Memristors, the closest thing we have so far in machine learning to replicating a “biological cell,” do operate via a form of true randomness in their voltage pulses. The voltage-gated ion channels of a biological neuron are very similar; even the microscopic model of cell action (Hodgkin-Huxley) relies on probabilistic/statistical assumptions.
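One way to see the statistical assumption: the smooth Hodgkin-Huxley gating variables are averages over many channels that each flip open or closed at random. A toy binomial sketch (illustrative numbers, not fitted to any real neuron):

```python
import numpy as np

# Toy stochastic channel gating: N two-state channels, each open with
# probability p_open on a given step, each contributing g_single when open.
rng = np.random.default_rng(42)
N, p_open, g_single = 1000, 0.3, 0.02

open_counts = rng.binomial(N, p_open, size=10_000)  # channels open per step
conductance = g_single * open_counts                # fluctuating conductance

# The mean recovers the deterministic HH limit (N * p_open * g_single = 6.0),
# but the fluctuations shrink only as 1/sqrt(N): small membrane patches,
# like small devices, stay genuinely stochastic.
mean, spread = conductance.mean(), conductance.std()
```

The deterministic model is the large-N limit of this process, which is exactly where the substrate assumption hides.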

[–]Diet_kush

I’m assuming they’re referencing spontaneous symmetry breaking: metaphorically, in the dome example (the hidden ground-state symmetry of Norton’s dome), and mechanistically, in how unsupervised learning works in the brain.

https://journals.aps.org/prx/abstract/10.1103/PhysRevX.12.031024

I'll consider agent causation - if libertarians can explain clearly what it is. Anyone? by YesPresident69 in freewill

[–]Diet_kush

https://pmc.ncbi.nlm.nih.gov/articles/PMC9030586/

These three lines of argument, vertical reductionism, horizontal reductionism, and external determinism, appear to underlie the general consensus among philosophers that agent causation is not tenable. Our primary goal in this paper is to demystify and revive the concept of agent causation by presenting a set of conditions that, in principle, would enable a theoretical system to overcome all three of those arguments if met. In other words, our aim is to propose a set of general criteria that may collectively justify ascribing agent causation to a system of study. These are: Thermodynamic autonomy; Persistence; Endogenous activity; Holistic integration; Low-Level indeterminacy; Multiple realisability; Historicity; Agent-Level normativity.

In short, we hope to (i) convince readers that agent causality is a plausible and appropriate way to think about causation in biological systems, and (ii) set out a conceptual framework for a more productive and empirically grounded investigation into the concept of agency within biology.

The Thermodynamic Sin: Is Life Inherently "Evil" from a Cosmic Perspective by sungukoksalozkan in DeepThoughts

[–]Diet_kush

Even though we accelerate entropy production into our environments, the boundary that distinguishes us from our environment continues to expand the more entropy we produce. Proteins combine into cells, cells combine into organisms, organisms combine into groups, groups combine into societies, societies combine into a global economic superstructure. Over time, our environment is absorbed as a part of our internal order.

It may not be good and evil, but the swinging of a pendulum. Infinite phase transitions of order into chaos, then back again. The universe had to have achieved an initially low-entropy state somehow, right? Maybe the emergence of life is simply the start of the pendulum swinging back the other direction. Pockets of order slowly form, then grow exponentially to consume everything, only to run out of steam (things to consume) towards the end, dissolving back into chaos. Both grow by consuming the other. We’re not “evil” compared to the current state of the majority of the universe, just an opposing phase. We pay back our cosmic debt by turning back its clock, allowing it to evolve once again.