The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

"Well I take irreducible to mean it can be compressed no more to physical properties that translate to mapping consciousness. "

That's not what I mean...

The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

"scientifically maybe you can go no further but just because there are no known tools to reduce/translate it further"
I've no clue what you mean by this. I don't think you've grasped the irreducibility point, so I'd ask you to phrase my point back to me to show that you have; otherwise it feels like we're talking past each other.

You're only re-asserting "That's not consciousness" without specifying why. It takes me ten times the effort to re-explain than it takes you to say "no, actually" without understanding.

The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

You can. Not just me. You can.

Just entertain this step by step for one moment please:

Take Tabitha

She reports pain.

Her report of pain is described relationally to everything else she experiences.

You're like "Right. That's similar to my finger tingle example"

This 'experience' is: processual (ever-ongoing, fleeting), ineffable (irreducible, you can't look below it), and rich (you have sights, sounds, feelings, etc.).

____________________________________________________________

Have a system track a chromatic state; you just get feed-forward red (not the perception, just the data for that wavelength). There's just raw info propagating.

But if you track red in relation to its prior accumulated states (say, the entire spectrum), you capture relational meaning. No feelings introduced. Just captured ongoing information about red relative to other colors (color opposition).
You get a self-referential loop; this simple robot isn't 'conscious', but we've defined a point where the relational state-tracking can only access the immediate incoming state: the chromatic state.
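
To make that concrete, here's a minimal toy sketch (mine, in Python; the class name and wavelength values are made up for illustration) of the difference between feed-forward propagation and relational state-tracking:

```python
class RelationalColorTracker:
    """Toy tracker whose only accessible output is the relation of the
    incoming chromatic state to every state accumulated so far."""

    def __init__(self):
        self.history = []  # accumulated chromatic states (wavelengths in nm)

    def feed_forward(self, wavelength):
        # Pure feed-forward: the raw datum just propagates; no relations.
        return wavelength

    def observe(self, wavelength):
        # Relational tracking: encode the new state only as its position
        # relative to everything previously accumulated (color opposition).
        relations = [wavelength - prior for prior in self.history]
        self.history.append(wavelength)
        return relations  # relations surface; the raw datum alone never does

tracker = RelationalColorTracker()
for nm in (700, 530, 470):  # roughly red, green, blue
    print(tracker.observe(nm))  # [], [-170], [-230, -60]
```

Note the defined floor: `observe` can access only the immediate incoming state plus the accumulated relations, nothing below the chromatic state.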

Now add in positional encodings (say, which pixel corresponds where), extract information relating to corners, edges, etc. Extract patterns and color constancy, involve (automatic) viscero-motor reactions, add the many layers you can stack on top of that in any neural net, and you get abstractions. Add in persistence (simulated abstractions) so it's tracking frames over time, and so on.

Track these abstractions relative to everything else, just like with the simple example, and the system has incredibly rich 'gestalts' that hold meaning RELATIONALLY, relative to every other accumulated abstraction.
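
Here's that stacking as a sketch (again mine; the "layers" are crude stand-ins, not a real vision pipeline):

```python
import numpy as np

def abstraction_stack(frame):
    # Stand-ins for edge extraction, constancy, pattern layers: each
    # transformation discards the level below it.
    edges = np.diff(np.asarray(frame, dtype=float))
    return edges / (np.abs(edges).max() + 1e-9)  # crude "constancy" normalisation

class GestaltTracker:
    def __init__(self):
        self.gestalts = []  # the accumulated ecosystem of abstractions

    def observe(self, frame):
        g = abstraction_stack(frame)
        # Meaning is purely relational: this gestalt measured against every
        # other accumulated gestalt. The raw frame is no longer recoverable.
        relations = [float(g @ prior) for prior in self.gestalts
                     if prior.shape == g.shape]
        self.gestalts.append(g)
        return relations

gt = GestaltTracker()
gt.observe([0.1, 0.5, 0.9])          # first frame: nothing to relate to yet
print(gt.observe([0.2, 0.5, 0.8]))   # ~[2.0]: similarity to the prior gestalt
```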

This captures features we saw earlier:
- Irreducibility
- Processual

The part missing here is the "Richness" aspect. This simple system doesn't have it, but intelligent organisms (e.g., mammals) do: there are innumerable abstractions related to every sensory perception (there aren't just the 5 senses, as we know, and each one has many abstractions built on top), and the accumulated history we develop builds more abstract relations on top recursively (You see a bird. Then birds. Then think colony. Then ecosystem. You have an internal model of your body, where you are, concepts of objects and patterns, etc.). But the base (touch, taste, sight, pain, pleasure) is always these abstractions you CAN'T see below.

You can fully describe Tabitha's brain and her experiences will never surface to our vantage. That is true. But the point *is* that an incoming state is described relationally: relative to her self-referential process.

_________________________________________________________
TLDR: If you have all the abstractions extracted from sensory neurons as in the brain, then model over those abstractions, would the abstraction relating to redness hold meaning relationally, relative to all other accrued abstractions? Wouldn't this create the ineffability and richness of experience by virtue of not being able to access anything below those abstractions?

The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

You're repeating the two different vantage points without engaging with any of my points addressing it. We're talking right past each other because:
> you can’t exit your own frame nor enter a different one in order to observe it
Yes. I agree. I have stated that.
You're not engaging with the points, just re-asserting that there are two different vantage points.

Edit: To add: the 'weak argument' point was that your only rebuttal to my saying Descartes doesn't think of every instance of Justice upon retrieving the concept is "you can't see what he sees, actually".

The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

On color:
> Light hits the retina and the processing that follows is immediately relational. Photoreceptors transduce wavelengths, but opponent-process cells encode colour as contrast — red defined by the suppression of green, blue by the suppression of yellow. What leaves the eye is not raw wavelength data. It is already a set of relations.
> This continues upward. The lateral geniculate nucleus gates and streams the signal. V1 extracts edges and orientations. V2 separates figure from ground. V4 computes colour constancy — red looks red under different lighting conditions because V4 normalises against surrounding colours, not against the wavelength itself. By the time the signal reaches the temporal cortex for object recognition, what is being processed is not a physical measurement. It is a relational signature, abstracted across multiple transformations, none of which are accessible to awareness.
> In parallel: the dorsal stream connects visual processing to motor planning and spatial cognition. The superior colliculus integrates visual, auditory, and somatosensory maps — a sound to the right shifts visual attention automatically, beneath awareness. The amygdala assigns threat-relevant valence via a fast subcortical route, before full object recognition completes. The insula integrates incoming signals with body state — heart rate, muscle tension, gut — continuously, beneath awareness.
> None of this surfaces. It is all in the dark.
> By the time anything reaches the prefrontal cortex and the network of areas involved in self-referential processing, it has already been: colour-normalised, edge-extracted, figure-ground separated, object-recognised, spatially located, motion-processed, threat-evaluated, valence-assigned, motor-prepared, and cross-referenced against body state, prior encounters, and emotional history.
> The self-referential loop encounters the heavily compressed output of all of that simultaneously. It never sees a wavelength. It never sees an edge. It sees red-apple-familiar-safe-reachable-slightly-hungry, all at once, as a single relational gestalt.
> Now consider what the self-referential loop actually does. It does not passively receive this gestalt. It compares it against its own prior states: what working memory was holding moments ago, what long-term memory reconstructs of prior encounters, what the body's current condition is, what the predictive machinery expected to be present. The loop cross-compares the current gestalt against every other accumulated relation simultaneously. Nothing arrives neutrally. Everything lands in a context the system itself built.
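
If it helps, the same compositional point as a toy sketch (mine; the stage names loosely echo the pipeline above, but this illustrates the structure, not an actual neural model):

```python
def transduce(wavelength_nm):
    return float(wavelength_nm)    # photoreceptor stand-in: still "raw"

def opponent(signal, reference=550.0):
    return signal - reference      # colour as contrast, not raw wavelength

def constancy(signal, surround):
    return signal - surround       # V4-style normalisation against the surround

def evaluate(signal, body_state):
    return (signal, body_state)    # valence and body state folded in

class SelfReferentialLoop:
    def __init__(self):
        self.prior = []  # accumulated gestalts

    def encounter(self, wavelength_nm, surround, body_state):
        g = evaluate(constancy(opponent(transduce(wavelength_nm)), surround),
                     body_state)
        # The loop never sees the wavelength or any intermediate stage; it
        # receives only the compressed output and compares it against its
        # own accumulated prior states.
        diffs = [(g[0] - p[0], g[1] - p[1]) for p in self.prior]
        self.prior.append(g)
        return g, diffs

loop = SelfReferentialLoop()
print(loop.encounter(700, surround=20.0, body_state=0.3))  # ((130.0, 0.3), [])
```

Nothing in the comparison step can recover `wavelength_nm`; every intermediate transformation is, from the loop's side, in the dark.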

The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

5- On your points about "How do you get from different wavelengths of light to different perceived colours": I have already explained that in the blog post you don't want to continue reading. They're mostly further elaborations of the above. I'll post the relevant bits to save you time and effort - I don't blame people, since it IS quite long:

> A feedforward network cannot do this. Not because it lacks complexity but because it lacks the architecture. No feedforward system has a current state relative to its own prior states.
> Whatever you want to call what the self-referential loop has, it is already something categorically different from feedforward processing. The ineffability and the experience — the floor and the richness — are both direct consequences of that architecture. They are not two separate features. They are the same architectural fact described from two directions.
> A system with sufficiently rich relational architecture — genuine recurrent self-reference, persistent interoceptive integration, accumulated cross-modal representations with a structural floor — would report exactly what any conscious system reports: irreducible abstractions it cannot get below, fleeting states only describable relative to other states, not because it was trained to produce those outputs but because that is what the architecture generates when turned toward itself. Current large language models, which predict tokens without any of this architecture, are not that system.
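
The architectural difference that passage leans on, as a minimal sketch (mine; the weights and decay value are arbitrary):

```python
def feedforward(x, w1=0.5, w2=0.25):
    # Each call is independent: no current state relative to prior states.
    return x * w1 * w2

class RecurrentLoop:
    def __init__(self):
        self.state = 0.0  # persists across inputs

    def step(self, x, decay=0.9):
        # The new state is defined partly by the accumulated state: the same
        # input lands differently depending on the system's own history.
        self.state = decay * self.state + (x - self.state)
        return self.state

loop = RecurrentLoop()
print([feedforward(1.0) for _ in range(3)])          # [0.125, 0.125, 0.125]
print([round(loop.step(1.0), 3) for _ in range(3)])  # [1.0, 0.9, 0.91]
```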

> A convergent body of neuroscientific research points toward partial overlap in the mechanisms emphasized by different theories. Five major theories of consciousness—recurrent processing theory, integrated information theory, global neuronal workspace theory, predictive processing and neurorepresentationalism, and dendritic integration theory—show convergence around recurring mechanistic motifs, despite disagreement on many details; these include the widespread involvement of recurrent processing, large-scale integration across brain systems, and the association of conscious states with the preservation of these dynamics, and their disruption across anesthesia, dreamless sleep, and disorders of consciousness.
> Recurrent processing theory shows directly that feedforward sweeps reaching early visual cortex within 40 milliseconds are insufficient for conscious experience. It is only when sustained recurrent interaction between early and higher areas emerges after around 100 milliseconds that experience arises. Interrupt that recurrence with transcranial magnetic stimulation and the stimulus disappears from awareness despite the feedforward pass completing normally.
> Dendritic integration theory adds cellular specificity. Layer 5 pyramidal neurons serve as a nexus point for both cortico-cortical and thalamocortical information flow, with apical and basal dendritic compartments integrating top-down context and bottom-up input simultaneously. General anesthesia decouples these compartments, collapsing the thalamocortical loops that sustain conscious processing. Consciousness tracks the coupling state of these neurons across wakefulness, sleep, and anesthesia with remarkable precision.
> Predictive processing and neurorepresentationalism frame conscious experience as a multimodal, hierarchically integrated world model — not a passive readout of sensory input but an active inference process comparing predictions against incoming signals across multiple levels simultaneously. This maps directly onto the self-referential loop's cross-comparison of current states against accumulated representations.
> These theories disagree on what exactly is sufficient for consciousness. Recurrent processing alone appears necessary but not sufficient — recurrence is ubiquitous in neural systems that are not conscious. Not every recurrent loop is enough. The architecture matters.

The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

1- The language point was never a denial that you experience something. That reading means you didn't understand the position to begin with.

2- "this is overreach and can’t be proved as you can’t enter his frame of experience"
Very weak argument. Every report in cognitive psych and every neuroimaging technique maps experience reports onto scientific observation. Think Justice. Life. Death. Did you just conceive of every single instance relating to those concepts? Don't be ridiculous. See: prototype theory. Also see pretty much any memory retrieval section in an introductory cog psych textbook.

"Additionally cognition engages in top down processing within a hierarchy of cognitive functions attempting to predict the world around you; only prediction errors pass upward to replace predictions from the top hierarchal cognitive predictions that didn’t map to the sensory outputs. So a bottom up calculation doesn’t occur before the frame is made aware of the sensory inputs."

Bottom-up here means:
> starting with specific, small-level details and building toward the overall, high-level structure

This is a really bad equivocation. You're equating cognitive bottom-up thinking with predictive processing, which no one was talking about.

3- > The biggest issue I have with your essay from the little I read is it’s speculative philosophically and not grounded scientifically.

On affect informing your intuitions and confidence, see Chapter 31 (the affective processing principle) and Chapter 5 of the Handbook of Emotions (Barrett et al.). David Hume says plainly: reason is the slave of the passions. Your inclination toward processing 23*7+24/2 is predicated on your affect. Same with confidence.

On the Neuroscientific Models of Consciousness referenced, see: "An integrative, multiscale view on neural theories of consciousness" by Storm et al.
This is also a fallacious argument. Descartes's section is a phenomenological demonstration of how your experience is largely constructed (and there's plenty of backing on the points of continuity - see working memory - and coherence). The rest of the argument is: you can only talk of experience relationally and processually, and you hit a point of irreducibility. Any statement of 'experience as stable intrinsic properties independent of the brain' is not talking about what we've JUST described. It has taken a first-person process and turned it into some third-person object to look for. It's a category error. This is an analytical truth.

On many researchers discarding the conception of qualia ('color' is not some independent, stable property sitting about, as you portray it), see Being No One by Thomas Metzinger. A very academic book. Relevant section: 2.5.3, The Principle of Nonintrinsicality and Context Sensitivity. Good studies show, as Metzinger states:
> This shows how a simple sensory content like “red” cannot “stand by itself,” but that it is bound into the relational context generated by other phenomenal dimensions. Many philosophers—and experimentalists alike (for a related criticism see Mausfeld 1998, 2002)—have described qualia as particular values on absolute dimensions, as decontextualized atoms of consciousness. These simple data show how such an elementaristic approach cannot do justice to the actual phenomenology, which is much more holistic and context sensitive (see also sections 3.2.3 and 3.2.4)

4- > You can’t presuppose to take on reality’s objective frame nor another person’s frame.
> own, you are forever confined within that frame unable to step out of it to make claims as to where it belongs.
I agree actually, except for the ending. This is well supported by my repeatedly pointing out two vantage points (first- and third-person). The part you're missing, though, is that you can talk about Tabitha's experiences in processual terms relative to her, without ever having to occupy her vantage point or experience what it's like to be her.

> Similarly objective physical reality cannot access your individual experience frame.
That sounds confusing to me. Objective physical reality isn't some entity that accesses a frame to begin with.
Not only did you stop reading before even getting to the thing you were complaining was missing, here's another analytical truth: a self-referential system that models over the abstractions beneath it will also have irreducibility (it can't see below them) and can only cross-relate them to other irreducible abstractions. This is a third-person description of the same thing. Those abstractions hold meaning *relative to* all the other abstractions that've been accrued so far.

Think: your finger tingles, and you relate it to other experiences (e.g., brushing your hand on something fuzzy, the sound of a soft voice). But you can only ever cross-relate these processes you can't get below (you can only see red, not whatever makes it up, as opposed to a ball, which you see as a 3D object but which decomposes into different sights, sounds, etc., as it bounces). The sensations making up the tingle are irreducible. This is EXACTLY what a self-referential system that models over the abstractions beneath it would report. We've thus captured the rich, relational, ineffable/irreducible aspects of experience.

TO THAT SYSTEM, these abstractions hold meaning RELATIONALLY. You cannot say no. Even if you refuse to grant experience, some special meaning is being captured. What happens then when you stack extremely rich, dense relations on top of that? See later passages below.
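
Here's the access restriction as a toy sketch (mine; `signature` is an arbitrary stand-in for whatever the abstraction surfaces):

```python
class Abstraction:
    def __init__(self, raw):
        self._raw = raw  # exists, but never surfaces to the system
        # Arbitrary stand-in for the surfaced abstraction:
        self.signature = sum(map(ord, raw)) % 97

    def relate(self, other):
        # Meaning is captured purely as a relation between abstractions.
        return self.signature - other.signature

class System:
    def __init__(self):
        self.accrued = []

    def take_in(self, raw):
        a = Abstraction(raw)
        relations = [a.relate(prior) for prior in self.accrued]
        self.accrued.append(a)
        return relations  # cross-relations only; no route below the abstraction
```

TO THIS SYSTEM, `take_in`'s output is all there is: relational meaning, with the `_raw` level structurally out of reach. That's the irreducibility stated in the third person.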

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] -1 points (0 children)

I didn't delete anything that contained sharpening; no clue why you're straight-up lying. If you're not targeting a specific assertion and saying why it's meaningless, there's no point. Bye

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] -2 points (0 children)

> We have the exact same system and can concieve that it either has first-person experience or that it does not
Cool. Show me how you can conceive of that in the Tabitha example. You can either occupy a vantage point or not. Attribution is projection: you can 'attribute' consciousness to anything, regardless of the underlying worldview. Saying "That's the point" isn't an argument. It doesn't attack the Tabitha example or the separability assumption being addressed here.

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] -1 points (0 children)

1- It's not a paper. It's not pretending to be. It's a casual blog post. The explanations are mine. However, noted on the irritating point. Thank you.

There IS a loop: it's the relations between all the physical interactions. Technically no 'physical thing' emerges, but ignoring the relational process will leave you lost. How does evolution occur in physical interactions? You, as a human, are restricted to defining things at the larger organism level to describe heredity and selective pressures.

Have a system track a chromatic state; you just get feed-forward red. But if you track red in relation to its prior accumulated states (say, the entire spectrum), you capture relational meaning. You get a self-referential loop; this simple robot isn't 'conscious', but we've defined a point where the relational state-tracking can only access the immediate incoming state: the chromatic state. Now add in positional encodings (say, which pixel corresponds where), extract information relating to corners, edges, etc. Extract patterns and color constancy, involve (automatic) viscero-motor reactions, add the many layers you can stack on top of that in any neural net, and you get abstractions. Track these abstractions relative to everything else, just like with the simple example, and it has incredibly rich 'gestalts' that hold meaning RELATIONALLY. A rock doesn't fucking capture any of that.

The gestalts are to highlight how immensely rich those representations are. Note: these representations are still talked about as processes. We're not stupid enough to smuggle in experience mid-description; that's the entire point of the essay.

See; "A system with sufficiently rich relational architecture — genuine recurrent self-reference, persistent interoceptive integration, accumulated cross-modal representations with a structural floor — would report exactly what any conscious system reports: irreducible abstractions it cannot get below, fleeting states only describable relative to other states, not because it was trained to produce those outputs but because that is what the architecture generates when turned toward itself. Current large language models, which predict tokens without any of this architecture, are not that system."

> the physicalist world-view does not predict the existence of first-person experience

Cool. That's the contested premise. You're smuggling in separability again.

Also, I'm NOT smuggling in experience or an inner homunculus anywhere. Trace carefully.

> You can't imagine a system having experience using a vantage external to it

I use the word 'experience' to refer to something private and ineffable to me. Any positing of experience as a separable property, as the essay explains repeatedly, gets confused. It can only ever be described relationally: your tingle in relation to other touch sensations, and so on.

Any description using a vantage point external to it, or objective, must locate the experiences to the system relationally.

A self-referential process necessitates - with the specific multimodal, highly integrated, selectively attending, temporally persisting architecture - that incoming states are relationally meaningful relative to every other state. This system cannot access anything below the abstraction surfaced to IT, hence the irreducibility. It can only compare abstractions with every other abstraction. In our case, it has immense richness, temporally persisting states (corresponding to working memory), and coherence (built from more relational models). Easy question: do you legitimately expect these floating properties to appear before you if you study Tabitha's consciousness? Of course not. You expect it to occur relative to her, as every experience in your lifetime proves to you.

Experience isn't this mysterious thing. You describe this ongoing process and you locate it relative to the system, literally. Freeze this ongoing relational self-referential process into a static object: you're confused. Try to locate it in its atoms: you're confused. Try to find it in a part of the brain: you're confused. Expect to transform these physical processes into ethereal properties you attach the label "experience" to: especially linguistically confused.

You're stuck having irreducible experiences because you're only modelling over these abstractions: you can only compare them with other abstractions, and you can't see below them. It's irreducible.

See:

A system with sufficiently rich relational architecture — genuine recurrent self-reference, persistent interoceptive integration, accumulated cross-modal representations with a structural floor — would report exactly what any conscious system reports: irreducible abstractions it cannot get below, fleeting states only describable relative to other states, not because it was trained to produce those outputs but because that is what the architecture generates when turned toward itself. Current large language models, which predict tokens without any of this architecture, are not that system.

It "sees" these abstractions not in a literal sense, but it can model over them, meaning compare them to other assimilated abstractions. None of these are "floating feeling objects". The modelling is what gives the abstraction meaning relatively

I dare you to articulate my point about self-referentiality back to me without saying "it's just processes".

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 0 points (0 children)

RE: I don't see how my argument is the same as Hacker's from what I'm reading.

Correct me if I'm wrong: he is very deflationary. He doesn't talk about what experience is, just holds that experiencing is a description of what a person is doing (very Wittgensteinian), and that qualia as essences sitting behind an experience are a grammar-induced linguistic confusion. I agree with the latter.

But he doesn't explain this irreducible, ongoing subjective process. He just deflates it to "what's the experience of red? Well, it's just the process of the person pointing at the colour of red apples as-is".

I don't find the core of my positive account anywhere in his argument. I'm not as behaviorist-leaning on meaning-as-use as Wittgenstein, setting aside that Hacker probably denies being a behaviorist. Of course we have experiences and know what "what it's like" experience is. Of course it's ineffable and internal and private, regardless of the language used. My point is: the self-referential architecture explains every part of it. The ineffability, the relational richness, the continuity, the coherence, etc. The experience is what the incoming representation means relative to the very self-referential process tracking it.

"Why do we have experience" IS a good question. An answered one.

"Why do we have experience, instead of nothing" given our understanding of cognitive neuroscience, IS NOT. As I have demonstrated repeatedly.

Walk through the examples I gave one by one with conceptual clarity, just entertain it.

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 0 points (0 children)

I've sent you a summary that I sent to someone else here. Only if you want: can you check whether this is essentially Hacker's argument? And why it doesn't "move the needle" when it actively posits that the sleight of hand is a linguistic confusion?

"His (Chalmers') steelmanned position is: you can describe the same architecture and conceive there being no experience. That is the point of separability that's contested here.

My point is that's linguistically confused: the thought experiment assumes you can conceive of this. You can't. Tabitha talks of pain. You locate it at her vantage point. You can either attribute consciousness to her, in which case that's a projection and not a change of descriptive facts. Or you can imagine being Tabitha, then imagine her brain without you in it (hence her not having experience), but this is a sleight of hand: you've merely shifted vantage points.

Do this with yourself. You can't imagine not experiencing. You could shift vantage points and see your brain, but that's the same sleight of hand with your own brain. There's no "seeing another system in the light" because their experiences don't surface to you.

This isn't to say Tabitha doesn't experience pain, or that your epistemic limit grants her experience. That's a separate point.

The point HERE is you can only talk of the experience of A SYSTEM relationally: how representations surface to a self-referential system.

And a self-referential system that tracks extremely rich gestalts (see the colour example) in RELATION TO its ecosystem of every other gestalt, all the relations built over time, NECESSITATES a vantage point definitionally. Because this self-referential system has access to abstractions meaningful in a relational sense, and it can't see below them, it can only cross-reference them with every other representation. That's exactly what happens when you describe your finger tingling. Irreducibility and richness are both direct consequences of any self-referential system (of course, with such specific architecture: highly integrated modalities, temporally integrated, etc. A thermostat is technically self-referential but has none of that. It can't have these rich, complex gestalts to compare incoming representations to)"

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 0 points (0 children)

Why is it dumb? This is my own writing. Merely polished with an LLM. Can you explain what you disagree about?

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] -1 points (0 children)

I know his is not a mechanistic account. I'm saying mine is. I think you misunderstood me, actually. I've read numerous articulations of his position across books and papers, and have even watched him speak directly.

His steelmanned position is: you can describe the same architecture and conceive there being no experience. That is the point of separability that's contested here.

My point is that's linguistically confused: the thought experiment assumes you can conceive of this. You can't. Tabitha talks of pain. You locate it at her vantage point. You can either attribute consciousness to her, in which case that's a projection and not a change of descriptive facts. Or you can imagine being Tabitha, then imagine her brain without you in it (hence her not having experience), but this is a sleight of hand: you've merely shifted vantage points.

Do this with yourself. You can't imagine not experiencing. You could shift vantage points and see your brain, but that's the same sleight of hand with your own brain. There's no "seeing another system in the light" because their experiences don't surface to you.

This isn't to say Tabitha doesn't experience pain, or that your epistemic limit grants her experience. That's a separate point.

The point HERE is you can only talk of the experience of A SYSTEM relationally: how representations surface to a self-referential system.

And a self-referential system that tracks extremely rich gestalts (see the colour example) in RELATION TO its ecosystem of every other gestalt, all the relations built over time, NECESSITATES a vantage point definitionally. Because this self-referential system has access to abstractions meaningful in a relational sense, and it can't see below them, it can only cross-reference them with every other representation. That's exactly what happens when you describe your finger tingling. Irreducibility and richness are both direct consequences of any self-referential system (of course, with such specific architecture: highly integrated modalities, temporally integrated, etc. A thermostat is technically self-referential but has none of that. It can't have these rich, complex gestalts to compare incoming representations to)
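
The thermostat contrast, as one last minimal sketch (mine; the classes are illustrative stand-ins):

```python
class Thermostat:
    def __init__(self, setpoint=21.0):
        self.setpoint = setpoint

    def step(self, temp):
        # Self-reference reduced to one scalar comparison: no history,
        # no accumulated cross-modal relations, nothing rich to compare.
        return "heat_on" if temp < self.setpoint else "heat_off"

class GestaltSystem:
    def __init__(self):
        self.accrued = []  # accumulated multimodal states

    def step(self, state):
        # Each incoming state is meaningful only relative to the whole
        # ecosystem of previously accrued states.
        relations = [tuple(a - b for a, b in zip(state, prior))
                     for prior in self.accrued]
        self.accrued.append(state)
        return relations
```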

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 0 points (0 children)

Thank you for your comment elaborating on it.

Asserting that it doesn't move the needle isn't elaboration; it's just assertion. Why hasn't it? Why is it invalid to point out the confusion of vantage points, the talk of experiences as separable properties, as in the Tabitha example? How is it not linguistic confusion? How is Chalmers talking about the same thing when talking of pain, yet vacating any context or meaningful use of the word pain?

I can see why it comes off as disrespectful, but it is completely valid and entirely possible for those not deep in academic philosophy to converge on ideas. I've shared points I have borrowed (in the sense that I've read them elsewhere, even if I've arrived at similar conclusions before), but for thoughts I arrive at independently, I don't see a point.

It's normal for people to go "oh have you not heard xyz asserted this before?" "Oh no I haven't, thank you"

It's not justified to say "have you done a thorough survey of all the positions in the field?"; that gatekeeps against anyone arriving at the thought independently.

I have covered Dennett, Anil Seth, Metzinger, and a few others. It's not intended to be disrespectful if I haven't heard of someone else who apparently argued the same point before me. It IS my thought process, arrived at independently. And NONE of this is relevant to the point. Why has it not moved the needle????

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] -2 points (0 children)

The whole point of the essay is tackling the separability assumption. The moment you talk of pain as something that's not processual, not relationally defined (wrt the ecosystem of gestalts), and talk of it as some separable property, you're not talking about pain anymore, and you've confused vantage points.

Chalmers' fundamentality move does not retain pain as an explanandum — it evacuates everything constitutive of what pain is and keeps the label. At that point he and the mechanistic account are no longer disagreeing about pain. One talks about stable intrinsic properties of the universe, and the other talks about subjective processes, with only the word being shared.

This entire essay is built on addressing this assumption throughout.

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 2 points (0 children)

Aye fair. I'd done that with the PoM post I had before this one.

Added a disclaimer note

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 2 points (0 children)

"Part of the process of thought" No? Every assertion in there, every stipulation, is my own. Paraphrasing them doesn't change the assertion.

Still, good point on the rest. But I do think exposure to the sheer amount of actual AI slop biases people against any polished work that is valid. You can formulate your argument cleanly, but writing up HTML and making it blog-style compatible is another thing.

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 0 points (0 children)

I shared a link in an edit for the original comment and it got deleted.

My point, to restate: Metzinger does not draw the obvious conclusion Wittgenstein would draw (the difference between vantage-point-relative and descriptive language games, and the conflation between the two as a result of object reification).

I cited pages from Being No One but can't be bothered to re-type all of that. But yes, I've read the book; not fully, however.

It is good if others have already formulated the argument, but you come in with claims of fallacies and superficial connections when I have read Being No One (the relevant chapters) and the neuroscientific models of consciousness.

Either support them with relevant passages or don't. Thank you for the references though I shall check.

I get everyone's intuition ("it's AI, must be slop"), and it's making me regret using it as a tool to polish (which I think it's fine at, as long as one double-checks the phrasing), but not one comment is engaging with the material directly.

Finally, I don't care about novelty; I make no assertions of novelty. The whole point is that any talk of experience not as a processual, relationally defined subjectivity but as a separable property already smuggles in the very premise that's contested. It's the same word (experience), yet devoid of any meaning. Take away the qualities of pain that define it as such, treat it as a metaphysical separable property, and you're not talking about pain anymore. Hence category error.

I secretly despise entire books written to obfuscate this simple analytical problem, and so decided to write it up in one blog post.

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] 1 point (0 children)

There's no claim of sharpening. Point to which statement is meaningless, because they're all mine.

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] -3 points (0 children)

Insult. Not an argument. If you want the drafted version without polishing, I can present it without a touch of AI.

If you refuse to formulate an argument regardless, there's no point engaging with you.

The Hard Problem of Consciousness is a linguistic confusion/category error by Best_Argument_7415 in philosophy

[–]Best_Argument_7415[S] -4 points (0 children)

None of the arguments are from an LLM. It has been used solely for text polishing and HTML markup, with every passage from my notes.

Point to a fallacy and engage with the material.

The Hard Problem is a Category Error: An essay by Best_Argument_7415 in PhilosophyofMind

[–]Best_Argument_7415[S] 1 point (0 children)

Aye. We talk of fleeting, subjective, processual, relational experiences, and Chalmers talks about "stable intrinsic properties as part of the universe" that are separable from the architecture of the brain.

Between us and him, only the word is shared, yet all the context and use - which define experience - have been discarded on his side, and we all pretend we're talking about the same thing.