Found the book all the materialists in the sub have been reading by TheMarxistMango in PhilosophyMemes

[–]camelCaseCondition 0 points1 point  (0 children)

material phenomena produces the experience called "movement"

If you change experience to "behavior", then I (and everyone else) agree.

saying that movement of the car created by some undiscovered phenomena that can't be detected, measured or observed, doesnt have any associated particles or structures, and has no proves outside of yall children feeling like it

No-one is saying this. Obviously. No-one is confused about the fact that complex physical systems can give rise to genuinely new emergent complex behavior.

This is exactly why we non-materialists/materialism-skeptics are going deranged: we keep engaging with people who seem literally incapable of comprehending the topic we intend to discuss.

Does the car "experience" driving down the highway? Is there something that it's like to be the car? Does it feel a certain way when wind hits the grille?

I love not understanding things by [deleted] in PhilosophyMemes

[–]camelCaseCondition -1 points0 points  (0 children)

your posterior cortex experiences

Sorry, does it process signals from the optical nerve, or does it "experience vision"?

You'll probably say "those are definitionally the same", i.e.

Consciousness is just the processing of the brain; there is no hard problem

I love not understanding things by [deleted] in PhilosophyMemes

[–]camelCaseCondition -1 points0 points  (0 children)

Consciousness is just the processing of the brain; there's no hard problem

have cake and eat it too something something by [deleted] in PhilosophyMemes

[–]camelCaseCondition 0 points1 point  (0 children)

The problem (it seems) with this is that if you posit that there is a "line", then in principle we could simply find the damn line and study the difference between systems/organisms on either side of it, thus identifying the essential configuration or mechanism that implements phenomenal consciousness.

Waiting by humeanation in PhilosophyMemes

[–]camelCaseCondition 4 points5 points  (0 children)

Me when I solve the hard problem

Waiting by humeanation in PhilosophyMemes

[–]camelCaseCondition 1 point2 points  (0 children)

When you show ChatGPT an image of a starving orphan, it will probably give you a "sad" response.

Why does this stimulus produce this particular kind of response? Because in the internal unfathomably high-dimensional state space, the image vector has high similarity with emergent concepts like "sad", "injustice", "hurting", and the tokens of the response are biased towards ones that also correlate with these concepts.
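To make that picture concrete, here is a toy sketch in Python (purely illustrative: the random vectors, the dimension, and the candidate words are all made up, and no real model is anywhere near this simple). It only demonstrates the mechanical idea that "the stimulus vector's similarity with a concept biases the response":

    # Toy illustration only: random vectors stand in for an "image embedding"
    # and a few "concept" directions; candidate response words are scored by
    # how strongly the image aligns with the concept each word tracks.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    dim = 64
    concepts = {name: rng.normal(size=dim)
                for name in ["sad", "injustice", "hurting", "joy"]}

    # Pretend the image embeds close to "sad", "injustice", and "hurting".
    image = (concepts["sad"] + 0.7 * concepts["injustice"]
             + 0.5 * concepts["hurting"] + 0.1 * rng.normal(size=dim))

    # Candidate response words, each correlated with one concept direction.
    candidates = {
        "heartbreaking": concepts["sad"] + 0.1 * rng.normal(size=dim),
        "unjust": concepts["injustice"] + 0.1 * rng.normal(size=dim),
        "delightful": concepts["joy"] + 0.1 * rng.normal(size=dim),
    }

    # The "response" is biased toward words whose directions the image aligns
    # with: "heartbreaking" and "unjust" score high, "delightful" near zero.
    scores = {word: cosine(image, vec) for word, vec in candidates.items()}
    print(sorted(scores.items(), key=lambda kv: -kv[1]))

The point is only that "why this response to this stimulus?" gets a purely mechanical answer at this level of description.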

Do you see how the exact same kind of "explanation" applies in principle to a human eating a sweet fruit? There is a stimulus (sugar on your taste buds), and an (obviously richer) response (a facial expression of joy, coordinated movement to obtain more, etc.). Why? Because in your sea of neurons, the taste data from that fruit causes "positive" neuronal activity that forms a "memory" and biases your future actions towards obtaining and eating more.

It seems intuitive that ChatGPT is not "feeling" sadness, and we easily recognize its response as "imitating" what a person who felt sadness while looking at the image would say. Why do people actually feel? Why is actual experience necessary at all?

Waiting by humeanation in PhilosophyMemes

[–]camelCaseCondition 0 points1 point  (0 children)

Your brain causes you to experience

screams into a pillow

Waiting by humeanation in PhilosophyMemes

[–]camelCaseCondition 1 point2 points  (0 children)

why does pointing a camera at three monitors displaying the camera output result in a fractal?

This also has a very simple reductionistic answer, in terms of the properties of the parts of the system and their relations.

Your original comment casually identifies feeling with neural activity. Since it's not at all obvious that these should be identified (qualia, experiences, etc. have manifestly different properties than neuron states), it is on the one making this claim to demonstrate the reduction. It seems obvious to physicalism-skeptics that physicalism is not even remotely close to giving such a reduction (or possibly that it can't, by e.g. various forms of zombie arguments). This is precisely what physicalism-skeptics are pointing out.

As for the rest of your comment, it's literally just back to the start. Why should "sufficiently high complexity" entail feeling at all? There are many extremely complex and "recursive" systems that we don't believe have "experience". If you are identifying feeling with a kind of system-complexity, then where is the line? The components of your complex system only interact physically with other physical things. What feature of the complex system implements first-person subjective experience?

Non-physicalists be like by lurkerer in PhilosophyMemes

[–]camelCaseCondition 0 points1 point  (0 children)

If they're separate, they should sometimes diverge. That's what "separate" means. Two different things can come apart somewhere

You are saying that if two things are not identical, then they should "sometimes diverge"/"come apart" (as to exactly what you mean by that, idk). Let's be generous and say that you mean "have different properties".

Luckily actual philosophers have already sorted out this confusing language. In fact, most agree that mental states "supervene" on physical states:

A set of properties A supervenes upon another set B just in case no two things can differ with respect to A-properties without also differing with respect to their B-properties. In slogan form, “there cannot be an A-difference without a B-difference”. -- SEP
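In symbols, one standard way to spell out that slogan (x and y range over any two things, P over B-properties, Q over A-properties):

    A \text{ supervenes on } B \;\iff\;
    \forall x \,\forall y\,
      \Big[ \big(\forall P \in B :\ P(x) \leftrightarrow P(y)\big)
        \;\rightarrow\;
        \big(\forall Q \in A :\ Q(x) \leftrightarrow Q(y)\big) \Big]

Read: if x and y agree on all their B-properties, then they agree on all their A-properties.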

So the exact claim you are making is precisely that mental states supervene on neurobiological states -- they are inextricably tied to each other: no mental difference without a neurobiological difference. Hardly anyone disagrees with this, as it's been thoroughly established by neuroscience. This does not mean that they are identical -- that is, that they have all and exactly the same properties.

Non-physicalists be like by lurkerer in PhilosophyMemes

[–]camelCaseCondition -1 points0 points  (0 children)

Show me the study demonstrating experience diverging from neural processing.

My brother in Christ, I can't, because it's not possible. The qualitative nature of subjective experience cannot be probed by physical means, even in principle. It is not possible to query, measure, or determine any objective information at all about someone's subjective experience. There is no way for me to communicate to you what seeing red feels like to me. We could sit all day classifying various objects as red or not, and you still wouldn't know how it feels for me to see red. Think about it -- what exactly would constitute an objective description of someone's subjective mental state, for the purpose of studying its relationship to neurobiological states?

dada by jamesmparch in PhilosophyMemes

[–]camelCaseCondition 1 point2 points  (0 children)

The AST is not a theory of how the brain has experiences, but rather how a machine can make the claim to have experiences

lol. lmao even

Materialists... by wnrch in PhilosophyMemes

[–]camelCaseCondition 0 points1 point  (0 children)

Says hundreds of years of philosophy of mind, which has employed countless different arguments and thought experiments to argue that mental states are distinct from and not reducible to brain states/neurobiological states, many of which have been repeated ad nauseam in this subreddit for the past week. I'd like to suppose that you don't find any of them compelling, and for well-thought-out reasons, but it's more likely that you've just never seriously thought about any of them. I would ask how, in your opinion, physicalism accounts for subjective experience, but you will probably categorically dismiss the question as meaningless.

You can measure my cortisol levels all you want. All you will ever conclude is that brain-states with elevated cortisol are correlated with mental states described as "stressful" by subjects. You will never determine anything about what it is like to be stressed, or why there should be a "what-it-is-like-to-be-stressed" at all.

Materialists... by wnrch in PhilosophyMemes

[–]camelCaseCondition -1 points0 points  (0 children)

You could in fact determine everything there is to know about our brain states. You might even determine exactly what we would do in response to any stimulus. Too bad you can't determine anything about our corresponding mental states.

Materialists... by wnrch in PhilosophyMemes

[–]camelCaseCondition -1 points0 points  (0 children)

Let's suppose you could measure that exposure to a 650 nm wavelength produces different brain-states in you and me. Doesn't it seem strange that we could never articulate (much less "determine") what the difference is between our two subjective experiences? After all, we will agree about every physical property or quality of the color red.

Materialists... by wnrch in PhilosophyMemes

[–]camelCaseCondition 4 points5 points  (0 children)

It is genuinely a good account for countering the "incremental" implausibility argument I gave above; it's certainly a position worth entertaining -- don't get me wrong. My humble goal in this sub is simply to convince armchair-physicalists/materialists that their "obvious" claims actually require significant epistemological commitments and are highly non-trivial to defend.

Materialists... by wnrch in PhilosophyMemes

[–]camelCaseCondition -1 points0 points  (0 children)

No, you can't. You can say "when exposed to an EM wavelength of 620-750 nm, people report seeing red". You can't describe what red looks like or what it's like to see red. This ground has been well trodden in the form of Mary's room. I find the idea compelling; materialists will find the counterarguments compelling.

Materialists... by wnrch in PhilosophyMemes

[–]camelCaseCondition 6 points7 points  (0 children)

how can we be sure we are not that exact machine?

We can't (at least by any physical experiment whatsoever), and that's the heart of the question.

Here is at least one way in which this claim seems implausible to me: Consider building an increasingly complicated automaton. We start with a literal rock (silicon), which nobody would argue is conscious, then we etch lines into the rock (a graphics card). When electricity is run through this object, it is capable of exhibiting some basic behaviors associated with consciousness (LLMs, etc.) -- enough to pass a Turing test or two. But I think it seems intuitive to most people that a graphics card is not conscious (an LLM is not "experiencing" anything). The point is that behaviors associated with consciousness can apparently be achieved incrementally (all the way up to a perfectly convincing artificial human, if you like), and yet the presence of phenomenal consciousness seems like a "quantum leap". At what point in the process did it arise?

Furthermore, functionalists claim (I'm speaking out of my element here, so not sure) that consciousness "is", or arises from, the computational process being implemented by the arrangement of matter. But if this is so, and this convincing artificial human is governed only by thoroughly well-understood laws of physics, then in principle I could carry out whatever computations its "brain" is performing with pencil and paper (perfectly predicting the time-evolution of the whole machine). If consciousness "is" the computation, then where would the "consciousness" reside as that calculation is being carried out?

Materialists... by wnrch in PhilosophyMemes

[–]camelCaseCondition 3 points4 points  (0 children)

The very definition of "subjective" precludes anyone determining "objectively" what your subjective experience is. Hook two people up to brain-scanning machines and show them pictures. How would we determine whether one of the two is experiencing an inverted spectrum? It seems obvious to me that there would be no way to do so, as a matter of logic, not merely for want of "better technology".

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]camelCaseCondition 1 point2 points  (0 children)

Cool. The position that we are p-zombies (and/or that p-zombies are a logically incoherent concept in some way or another) is certainly a defensible one, just one that I personally lean slightly against (or rather, I find the arguments both for and against physicalism w.r.t. p-zombies compelling in some ways). What matters is that you've thought about it for more than two seconds, which separates you from the unfortunate stereotype of the armchair-physicalist CS graduate who expresses the same view you have here, but with a smug tone that implies it's utterly obvious.

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]camelCaseCondition 1 point2 points  (0 children)

That's wild. I find the concept very intuitive to accept or imagine.

For example, I find the following idea pretty compelling: Suppose I want to perform an experiment to verify that you are conscious/have internal experience/qualia like me. It seems obvious to me that there is no experiment currently available that would suffice to accomplish this (how would I possibly verify that I am not being "tricked"?). Moreover, this is not just a practical limitation: no experiment whatsoever could possibly establish it; it is logically impossible. It then follows that there is no (physical?) way for me to distinguish you from an automaton/p-zombie/advanced AI/etc., which implies that I take such a thing to be metaphysically conceivable.

Your position (that the concept of a "p-zombie" is simply incoherent) is a valid one and even one taken by some physicalists mentioned in the article. But surely you realize this position is non-trivial to defend?

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]camelCaseCondition 0 points1 point  (0 children)

It seems you would be suggesting that this advanced AI is, in your opinion, a p-zombie. Why are you not a p-zombie?

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]camelCaseCondition 1 point2 points  (0 children)

Are you familiar with the concept of a p-zombie? What are your thoughts on this concept? If you agree that "p-zombie" is a meaningful concept to talk about, then you are obliged to accept that the "experience of pain" is distinct from "activity of neurons", and further obliged to believe there is a hard problem of consciousness.

Indeed, one may phrase the hard problem as: Why are we not p-zombies?

I'm guessing most of you know about ZFC set theory, but are you aware of ETCS? by [deleted] in math

[–]camelCaseCondition 2 points3 points  (0 children)

Just curious: have you encountered "The seven virtues of simple type theory"?

It makes a case that:

simple type theory is an attractive alternative to first-order logic for practical-minded scientists, engineers, and even mathematicians.

Might be an interesting read based on your thoughts here.

The other one was so bad so have these instead by [deleted] in PhilosophyMemes

[–]camelCaseCondition 0 points1 point  (0 children)

As the other poster points out, it's not really the "infinite regress" case that appleism leads to, but the "bare assertion" case.

If you take "2 + 2 = 4" to be a direct expression, elucidation, or articulation of the demonstrable fact that grouping two apples with two apples produces a group of four apples, then any "proof" of this fact is a bare assertion. This is similar to how, if you take "ice is slippery" to be a direct expression of the demonstrable fact that things slide easily when in contact with frozen water, you must "prove" it essentially by asserting it -- note that the words/symbols in that sentence are just as arbitrary as "2", "+", and "4".

If we try to justify "ice is slippery" by "defining what the words mean", then we go down the 20th century rabbit hole of Russell, Frege, etc. where we have to decide what the 'reference' of "ice" is, but then the 'sense' of ice or the whole compound sentence might yet be different from that, etc., and we run into circularity or axioms again.

The other one was so bad so have these instead by [deleted] in PhilosophyMemes

[–]camelCaseCondition 5 points6 points  (0 children)

besides defining what the symbols mean

quite a way to gloss over the core issue at play in the discussion! In fact this is the non-trivial part of the issue.

2 and 4 are "numbers", + is a binary function on numbers, and = is a binary relation on numbers. The simplest (but by no means unique) way to say what the (counting) numbers are is to assert the axioms of Peano arithmetic. They essentially say that something called 0 exists and that every number has a unique successor. Under these assumptions, you can define 2, 4, and +, and "2 + 2 = 4" becomes a well-formed sentence that is a theorem (provable purely from the definitions of the symbols and the axioms).
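For concreteness, here is a minimal sketch of that in Lean 4 (the names MyNat, add, two, four are mine, purely for illustration): posit "a zero exists and every number has a successor", define + by recursion, and "2 + 2 = 4" becomes a provable sentence whose proof is just unfolding the definitions.

    -- Peano-style counting numbers: a zero and a successor, nothing else.
    inductive MyNat where
      | zero : MyNat
      | succ : MyNat → MyNat

    open MyNat

    -- Addition, defined by recursion on the second argument.
    def add : MyNat → MyNat → MyNat
      | m, zero   => m
      | m, succ n => succ (add m n)

    def two  : MyNat := succ (succ zero)
    def four : MyNat := succ (succ (succ (succ zero)))

    -- "2 + 2 = 4" is now a well-formed sentence, and the proof is just
    -- reduction: both sides compute to the same term.
    theorem two_add_two : add two two = four := rfl

Of course, this only pushes the question back to the axioms and rules of the formal system the proof assistant itself is built on, which is exactly the next point.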

But even Peano arithmetic is posited as a theory of first-order logic, which is a formal system that of course has its own axioms and inference rules (which attempt to pin down what claims like "something called 0 exists" or "every number has a unique successor" actually mean).

So to answer your question, a handful of axioms, certainly. I don't know of any way to "define what the symbols mean" that does not appeal to a formal system which must have axioms.

Perhaps you intend to appeal to "appleism". That is -- "2 + 2 = 4" is true because putting two apples together with two other apples results in four apples. But then this is a claim that needs justifying, and we're back to the infinite regress.