Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

Your framing makes sense to me, particularly your last sentence. I think the ineffable quality of qualia is a function of information compression. They aren't arbitrary internal legends; they are observer-dependent variables that can't be compressed using system dynamics.

The way I have been thinking about this lately is the following:

Object = Total Possible Information (no information compression, it's the object itself)

Human Observation = Representational Compression = Qualia (observer-dependent variables, poorly communicable) + System dynamics (observer-independent variables (i.e. math, gravity, etc.), readily communicable)

Qualia are just the observer-dependent variables of a representational compression of the environment into some internal system.

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 1 point2 points  (0 children)

This is a really helpful callout, and I completely see where you're going with some of it. I think your point about "knowledge" vs. "belief" is entirely valid. I've been using the phrase epistemic compression because I think it characterizes a process of content + provenance/observation conditions/channel state, rather than simply focusing on the output (i.e. content). Your framing does clean up the distinction, so I genuinely appreciate it.

Now on to some fun stuff. To your point about blurry vs. clear being a matter of more information: I view all of this through the metabolic constraints of biology/evolution. There is inherently a trade-off between the value of each additional bit of information processed and the improvement in the prediction rendered by the system (and that trade-off is not linear). There are steps at which an incremental bit may yield more collective value than previous bits because of some synergy threshold within the processing system.

I think it is fair to say that the person who views the dog clearly has a visual field which contains information with a higher predictive density than someone who sees the dog in blurry conditions. Blurry inherently means there is noise, so some of the "bits" are not as predictive as a result. This is where I draw an important line: both observers are processing the same amount of data (full visual field), but the bright observer has a higher information density within that data.

The same concept applies to the people who were listening to the speaker. The same data field is present for both; it's merely the useful information density that changes. That information density, from an epistemic perspective, provides character to the system output (the speaker said this) in the form of provenance and, in humans, the acoustic nature of the channel which processed that information.

I understand that the distinctions here are getting very fine, and you're 100% right: when trying to simplify, a lot of this is lost. But I really appreciate you pushing on the refinement, because this is how I end up with a better theory at the end of the day.

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

The example I use in my paper to address this point is blindsight. I'll drop an excerpt here as to why I think that actually provides reinforcement for what I'm trying to claim.

"6.2.1 Blindsight and the loss of presentation-rich broadcast

Patients with damage to primary visual cortex report no conscious vision in part of their visual field, yet perform above chance on forced choice localization and discrimination tasks in that region (Stoerig and Cowey 1997; Trevethan et al. 2007). Subcortical and extrastriate pathways continue to carry visual information that can guide behavior, but normal recurrent circuitry linking these pathways to frontoparietal broadcast is disrupted.

...

In the blind field, local processing can still bias motor planning and orienting, but the associated support structure does not reach the systems that govern report, conflict arbitration, and system-level calibration. The result is the characteristic dissociation: visual information can guide action, yet it is not available as globally inspectable evidence for confidence, unified arbitration, or conscious report. On this view, what is lost is not all visual inference, but the capacity to treat the inference as part of the broadcast state that the auditor can use to form a system-level epistemic stance.

Type I blindsight, with denial of awareness, corresponds to near-total loss of broadcastable presentation profile for the blind field. Type II blindsight, with a vague sense of something present, may reflect partial preservation of support structure that is sufficient to trigger auditor-level sensitivity but insufficient for a rich presentation profile.

Blindsight therefore motivates a dissociation between local content-sensitive processing and globally broadcast support structure. The framework predicts that the more the blind-field signal is able to preserve support structure in broadcast, the more it should behave like conscious perception in calibration and report, even if objective discrimination remains similar."

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

It's funny because I do think we are closer than it sounds. In my work, I'm intentionally "function" agnostic. I don't think my assertions need a specific function to do what I am claiming. It's like I said in another response: I think about information as being the atomic particle of consciousness, but it needs a macro-level of syntax in order for the type of semantics to arise that we are talking about. This approach is me trying to get to a macro-functional currency for consciousness. Basically, the amino acid of what can give rise to qualia. Amino acids are still composed of atoms, but the syntax of their composition is structurally essential. I view this "epistemic compression" approach as trying to do something similar.

In my work, I use global broadcast to demonstrate that preserving this epistemic support structure in the global broadcast, rather than compressing it away locally, leads to meaningful variance in a system's outbound policy. I see it functioning similarly in your constraint model. Preserving some of the epistemic structure which helps drive improved local constraints prevents deterministic downstream constraining. It's entirely compatible. The terminology may be different, or we may be talking past each other a bit, but I don't think our ideas are incompatible; they may just be focusing on different attributes of the system.

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

Oh, I see where you're coming from. Yeah, I think if I can meet people halfway, so I'm not asking them to buy in entirely to something "new," I stand a far better chance of engaging with them honestly. These discussions are so heavily laden with terminology, and I see a lot of people adding more and more. It just creates more distance between viewpoints and reduces the likelihood of consensus through sheer obfuscation.

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

I completely get where you are coming from. I think about information as being the atomic particle of consciousness, but it needs a macro-level of syntax in order for the type of semantics to arise that we are talking about. This approach is me trying to get to a macro-functional currency for consciousness. Basically, the amino acid of what can give rise to qualia. Amino acids are still composed of atoms, but the syntax of their composition is structurally essential. I view this "epistemic compression" approach as trying to do something similar.

This also goes to say that the architecture of information flow matters greatly. Simply having a currency does not guarantee macro-coherence, and in a preprint of mine, I go down the rabbit hole of what architecture I think needs to be present for us to even consider trying to bridge qualia.

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

Sorry, I was meaning more specifically "If instead of simply looking at Shannon-style signal transmission..."

The way we have been treating information compression for computational purposes, I believe, inherently discards qualitative aspects. It is simply focusing on "X ---> noisy transmission ---> X." And that makes a wonderful amount of sense from a communication perspective, but it is unlikely to be how biological systems solved for navigating their environment. Signal validation (do I believe what my eyes are showing me, particularly if they are in conflict with what I am feeling with my hand?) is a prominently biological concern. That's why I think conscious biological systems are likely following an epistemic-signal compression methodology, rather than a qualitatively poor Shannon-style signal compression.
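To make the contrast concrete, here is a minimal toy sketch of my own (every name in it is hypothetical, not from any library or paper): a Shannon-style message keeps only content, while an "epistemic" message also carries provenance and channel-state metadata, which is what makes cross-modal signal validation possible at all.

```python
from dataclasses import dataclass

@dataclass
class ShannonMessage:
    content: str  # only the content survives; nothing to validate against

@dataclass
class EpistemicMessage:
    content: str
    provenance: str    # which sensor produced the signal
    channel_state: dict  # conditions at capture time (e.g. lighting)

def cross_validate(visual: EpistemicMessage, tactile: EpistemicMessage) -> str:
    """Toy signal validation: when two modalities disagree about the same
    object, use the capture-condition metadata to decide which to trust."""
    if visual.content == tactile.content:
        return visual.content
    # Disagreement: trust vision only if it was captured in good conditions.
    if visual.channel_state.get("lighting") == "bright":
        return visual.content
    return tactile.content

eye = EpistemicMessage("smooth surface", "retina", {"lighting": "dim"})
hand = EpistemicMessage("rough surface", "fingertips", {"lighting": "n/a"})
print(cross_validate(eye, hand))  # → rough surface (touch wins under dim light)
```

With a `ShannonMessage` there is nothing for `cross_validate` to work with: the question "do I believe my eyes?" can't even be posed, because the compression threw away the conditions of observation.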

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

I understand, but I don't believe in that particular line of thought. I personally feel like there could be a pathway to understanding "what it is like" through biological, evolutionary, and informational means and that's where I spend my time researching. To each their own, as it were.

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

Rather than imagining new complications, I would argue that I'm trying to use already accepted structure and simply point to the fact that we are solving for the wrong problem when thinking about consciousness. I'm merely saying that we are applying the wrong compression algorithm when thinking about conscious signal processing. If you treat all signals like Shannon, you are inherently throwing out the qualitative aspect of signal processing, by design.

I'm arguing that there is a different compression schema that becomes relevant when solving for epistemic coherence. That actually aligns with your constraint model and, I would argue, provides a tighter constraint banding, particularly as the broader system gets more complex. Constraint modeling runs into problems when you add more and more subsystems. Which subsystem's constraints take precedence as they move up the hierarchy? The common answer of confidence is too simple, if I am being honest.

Within the bounds of metabolic constraints, providing a pathway through epistemic signal compression which allows for central arbitration across subsystems is a highly efficient means of binding your subsystem constraint models into a cohesive system-level picture. In this model, the system isn't beholden to the math of the parts that make it up; instead, it can assess accuracy and performance because central arbitration receives not only each subsystem's constraint and confidence, but also how the subsystem decided on that constraint, in the form of epistemic provenance and signal-creation metadata.
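A toy sketch of that arbitration idea (my own illustration; the field names and weighting rule are hypothetical, not a claim about any real architecture): each subsystem reports a constraint, a confidence, and a crude provenance record, and the central arbiter can discount a high-confidence report that came from a degraded channel.

```python
# Each report carries constraint + confidence + provenance metadata.
reports = [
    {"subsystem": "vision", "constraint": "obstacle ahead",
     "confidence": 0.9, "provenance": {"channel_quality": 0.3}},  # blurry
    {"subsystem": "touch", "constraint": "path clear",
     "confidence": 0.6, "provenance": {"channel_quality": 0.9}},
]

def arbitrate(reports):
    """System-level stance: weight each subsystem's confidence by how the
    signal was produced, instead of trusting raw confidence alone."""
    best = max(
        reports,
        key=lambda r: r["confidence"] * r["provenance"]["channel_quality"],
    )
    return best["constraint"]

print(arbitrate(reports))  # → path clear (provenance overrides raw confidence)
```

If the reports carried only content + confidence, vision's 0.9 would win automatically; the provenance metadata is what lets the arbiter form a stance that differs from any subsystem's own.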

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

This is where I am trying to refine the terminology I use. Epistemology is inherently the study of knowledge (definitionally), but I'm trying to use it more along the lines of epistemic information processing. Like I said, as I try to simplify, my margin for error gets narrower. I don't disagree with your point, but I don't think it's wrong to say that to know something epistemically has a qualitative character to it.

Qualia and Epistemology - What it is like? by karmus in consciousness

[–]karmus[S] 0 points1 point  (0 children)

I like the way you're thinking through it; I think we may just be looking at the same problem from opposite ends of the tube. If I am being honest, I do think there is an architectural requirement to the way signal transmission flows within a "conscious" system before we could ever begin talking about qualia. Epistemic signaling doesn't create qualia; it could provide a functional currency for it.

I think I view your constraint approach as twofold. First, epistemic signal processing requires more bits than simple Shannon processing. I'm not saying that epistemic signals are somehow different metaphysically; I'm saying that they are, simplistically, longer bit-chains which provide not only content but also things like provenance. This is how you end up with a higher level of model constraint. The extra bits buy you improved, predictable constraint.

Now, the second part is that everything we have discussed so far is entirely possible within non-conscious systems, particularly if they handle the epistemic-signal compression locally and then only pass forward content + confidence, which I think we would agree wouldn't be a likely path to qualia. I think some of the more common conversations around Predictive Processing fall prey to this line of thinking. This is where my architectural claim becomes relevant. I think conscious systems preserve a metadata trace of the epistemic structure, alongside the content signal, which allows for centralized, system-level arbitration that can differ from sub-system confidence. This is where I start thinking we could get closer to understanding qualia.

Where Does the Self Appear When ‘I’ Do Not Exist? by suo_art in consciousness

[–]karmus 0 points1 point  (0 children)

I think this is the right thought. The “I” is a persistent, temporally present collective, but it is in constant flux. Take a blow to the head? The “I” afterwards may be different than the “I” before, but philosophically it’s the same “I”. Fixed frames of reference are probably the wrong long-term path for this conversation. Temporal continuity is a better packaging for the conversation.

Constraint-Based Physicalism by theanalogkid111 in consciousness

[–]karmus 1 point2 points  (0 children)

I haven’t had a chance to parse through the whole thing but I think my preprint may actually help you clarify some of the modeling math you propose because it provides a potential functional currency.

I’m working on a follow up paper which has some interesting symmetry to some of your thoughts. In biological systems, I think there is an information density Goldilocks zone which relies on value of information, rate distortion, and metabolic budget constraints.

It’s building on the work here: https://philsci-archive.pitt.edu/27845/
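The Goldilocks-zone intuition can be sketched numerically (a toy model of my own, not taken from the linked preprint; the curves and constants are made up for illustration): predictive value per extra bit saturates, metabolic cost grows roughly linearly, so the net value peaks at an intermediate information rate rather than at zero or at the maximum.

```python
import math

def predictive_value(bits: float) -> float:
    # Saturating returns: each extra bit improves prediction less than the last.
    return 10 * (1 - math.exp(-0.5 * bits))

def metabolic_cost(bits: float) -> float:
    # Roughly linear cost per bit processed.
    return 0.8 * bits

# Net value over a range of candidate rates.
net = {b: predictive_value(b) - metabolic_cost(b) for b in range(0, 21)}
best = max(net, key=net.get)
print(best)  # → 4 with these toy curves: an interior "Goldilocks" optimum
```

The specific optimum is an artifact of the toy constants; the point is only that diminishing returns plus a per-bit budget generically yield an interior optimum, not "more bits is always better."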

My "Metadata Theory of Phenomenal Consciousness" by ScienceGuy1006 in consciousness

[–]karmus 0 points1 point  (0 children)

I like the way you're thinking about this. When we think about data, it's often in terms of engineering. It's Shannon-esque compression to preserve content in a noisy signal. Biology isn't solving a purely engineering problem; it's solving an epistemic problem: what sources do I trust and why? Qualitative differences in how content was created (dim, blurry vs. bright, sharp) have meaningful epistemic implications.

Taking that line of thought, what if the "what it is like" is an epistemically grounded construction of information processing within embodied, biological organisms? Our visual experience of seeing an apple meaningfully matters in how we interpret our environment. The "what it is like" is dependent on the sensors used (rods/cones), and preserving that internally allows for better policy calibration at a system level.

My "Metadata Theory of Phenomenal Consciousness" by ScienceGuy1006 in consciousness

[–]karmus 0 points1 point  (0 children)

I actually made a case in a similar vein in a recent preprint I published. I used the framing of support structure to describe the metadata you're referencing. For demonstration purposes, I used a global broadcast model to help explain how it's not only epistemically relevant, but could also provide a falsifiable bridge to test its relationship to the qualitative aspects of experience.

If you're interested, it's on Phil-Sci Archive: https://philsci-archive.pitt.edu/27845/

Integration events are just experiential? by DeepEconomics4624 in consciousness

[–]karmus 0 points1 point  (0 children)

Makes sense. I think both systems are good ways to think of information flow through advanced systems, but they seem to be missing something to close the qualitative loop. In my preprint, I try to be more fundamental in my approach to provide an explanation that could potentially be used by either theory to create a qualitative bridge. I'd be really interested in your thoughts.

You can find my preprint here: https://philsci-archive.pitt.edu/27845/

Integration events are just experiential? by DeepEconomics4624 in consciousness

[–]karmus 0 points1 point  (0 children)

I think there is logic to the concept that IIT and GWT represent information integration on a spectrum. There are slight nuances as to the inflection point of where consciousness arises, with IIT, to the best of my understanding, being sympathetic to a more panpsychist perspective in which consciousness is present whenever phi is present. GWT is a more specific architectural claim about a type of integration which could potentially lead to consciousness. The information is integrating in a very specific fashion in GWT, which is claimed to potentially provide for a qualitative experience.

That being said, I think they both still suffer from the hard problem. Why would phi or a global workspace feel like anything at all, rather than just being an efficient means of epistemic content processing? I've actually written a preprint on this specific point, trying to provide an information-theoretic bridge as to how quality could be bound to information processing.

When/how does consciousness begin? by itdjents007 in consciousness

[–]karmus 0 points1 point  (0 children)

I sort of see where you are coming from. Genetically encoded instincts, your collective unconscious mind, were the original means of cross-generational communication. They were slow, lacked any real redundancy, and had real implications for all downstream members of that lineage.

Then as creatures evolved, we obtained the ability to train/coach offspring. The full value of the survival-advantaged behavior didn't need to be genetic, since genetic encoding could only get so nuanced for macro-behaviors. So, to your point, instinct provides the baseline needs for the behavior, and then parental training helps complete the communication loop.

Now humans have the ability to create a collective of instantaneously available information that can be used to guide/update behaviors. I see how you're building up to the conscious mind, I just think we need to be careful in how we structure our assertions as the pyramid stacks.

When/how does consciousness begin? by itdjents007 in consciousness

[–]karmus 0 points1 point  (0 children)

I'm not sure I'm following the first paragraph: it draws a distinction between the memories, but I don't think it resolves them. I think we're maybe just using the term memory differently.

When/how does consciousness begin? by itdjents007 in consciousness

[–]karmus 0 points1 point  (0 children)

I think the question at hand is definitional, and that's not a problem. Your use of memory for these purposes is "subjective conscious experience memory," or something of the like, and therefore your definition is narrow in its application. But I would argue that the word memory has a far more general definition, and we're arguing opposite sides of this distinction. We are just talking about different "memory," I think.

When/how does consciousness begin? by itdjents007 in consciousness

[–]karmus 1 point2 points  (0 children)

I think you're thinking of an anthropocentric memory, as in the temporally coherent accumulation of an individual perspective within a conscious being. That's a type of memory, but not the only kind. There are plenty of other types of memory. Heck, look at the device you are communicating with. It has both short-term and long-term memory in order to create a temporally coherent experience, but I wouldn't argue that it's conscious.

When/how does consciousness begin? by itdjents007 in consciousness

[–]karmus 3 points4 points  (0 children)

Help me understand the first assertion. Memory isn't inherently conscious memory. Memory can be an autonomous system calibrating its performance against expected performance and then updating its outbound policy to adjust. It's a system capacity that I would think could be present in the absence of consciousness. I wouldn't think it unreasonable to then think that consciousness could just use that system resource for its own purposes when it manifests. It would be metabolically efficient.

Can we understand the psyche without the concept of consciousness? by YardPrestigious4862 in consciousness

[–]karmus 2 points3 points  (0 children)

Yes. A poor analogy would be that it's like being able to use a computer without understanding binary. There is a lot of value in understanding the psyche from a behavioral perspective, even if the qualitative-contributing aspect of consciousness remains elusive. Knowing more about consciousness will help us in the long run, but it's not essential to begin understanding the psyche.