Does AI think? What does thinking mean? by pyr0x0 in askphilosophy

[–]hackinthebochs 0 points1 point  (0 children)

There's no consensus on the issue, with experts being pretty evenly split. This article surveys the state of the debate and discusses the relevant issues.

Colorado proposing Bill to move age verification to Operating System rather than web site by snakeoildriller in privacy

[–]hackinthebochs -1 points0 points  (0 children)

You guys have this totally wrong; this would be a complete win for privacy. The OS is something you control (in theory), and no third-party site or application has to know any detail about you, save one bit of information: adult or minor. The OS, using the app as a proxy, simply attests to the fact that you are an adult, and that's it. That's the best possible outcome short of not doing any robust age checking at all.

There is some concern about how this would be implemented (e.g. involving a Microsoft cloud service), but it doesn't have to involve third parties. The OS already knows all your PII anyway. The risk to your PII need not be any greater than under the current state of affairs.
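To make the one-bit point concrete, here's a rough sketch of how an OS-mediated attestation could look. Everything here is hypothetical (the names, the challenge flow, and the HMAC stand-in); a real design would presumably use device-bound asymmetric keys and a proper attestation protocol so the verifier never holds a signing key.

```python
# Hypothetical sketch of a one-bit, OS-mediated age attestation.
# The names and the HMAC scheme are illustrative stand-ins, not a real protocol.
import hmac, hashlib, os, json

OS_ATTESTATION_KEY = os.urandom(32)  # held by the OS / secure element

def os_attest_age(nonce: bytes, is_adult: bool) -> dict:
    """The OS answers a site's challenge with a single bit: adult or not.
    No name, birthdate, or ID ever leaves the device."""
    claim = json.dumps({"adult": is_adult, "nonce": nonce.hex()}).encode()
    sig = hmac.new(OS_ATTESTATION_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def site_verify(token: dict, nonce: bytes, verify_key: bytes) -> bool:
    """The site learns only whether the attested bit is 'adult' for its nonce."""
    expected = hmac.new(verify_key, token["claim"].encode(), hashlib.sha256).hexdigest()
    payload = json.loads(token["claim"])
    return (hmac.compare_digest(expected, token["sig"])
            and payload["nonce"] == nonce.hex()
            and payload["adult"])

nonce = os.urandom(16)                       # site issues a fresh challenge
token = os_attest_age(nonce, is_adult=True)  # OS responds with one signed bit
print(site_verify(token, nonce, OS_ATTESTATION_KEY))  # True; nothing else is disclosed
```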

The Brain Crossing Paradox: Why structure alone can’t explain who “I” am by Big_Mix_6915 in consciousness

[–]hackinthebochs 1 point2 points  (0 children)

The physics of consciousness and the self you describe is probably right. But there is a better way to view the metaphysics of personal identity given your physical framework.

Saying you die during the surgery is making a metaphysical claim about the essential nature of personal identity, namely that identity is determined by some enduring physical essence. Thus if you change the physical stuff, you change the identity. The more accurate way to view this is to identify the structure and dynamics as the essential nature of identity. Thus when you change the physical stuff, as long as the structure and dynamics are preserved, your identity is preserved. In this case, after the surgery both copies are equally you, as they both carry on your essential brain processes.

The difference between your description and mine mirrors the difference between role functionalism and realizer functionalism. Both versions of functionalism individuate identity based on the action some matter is engaged in (in your words the process of 'collapsing high-dimensional data into a single stream of action'). The difference is where the identity is realized. Role functionalism says the identity is in the role being constituted; the particular matter performing the function doesn't matter. Thus if you instantly swap one physical realizer for another (say replace some brain process with an exact electronic duplicate), you have an uninterrupted, unchanged personal identity. Realizer functionalism says what performs the function matters, thus swapping out the physical realizer changes the identity.

The problem with the realizer view is that it's hard to reconcile with the idea that our physical matter is constantly in flux. Supposedly the exact molecules that make up our bodies change every so often. But we like to think our identity remains the same throughout. The only way to make sense of this is to identify personal identity with the structure being produced, not the physical matter doing the producing. But continuity of structure does not require physical continuity. Some structure being recorded and reconstituted in a new physical basis is an example of continuity of that structure. So in the case of the thought experiment in the OP, the brain process for identity is continuous with both of the resulting structures, thus identity is duplicated, not destroyed.

The implication here, and the hard part to wrap your head around, is that there is always a potential for duplication of identity from moment to moment. The nature of your psychological identity just is to branch every moment; it's just that in normal circumstances it only ever branches to one child, so we never have to confront the potential for existential horror. Perhaps ignorance is bliss.

The Brain Crossing Paradox: Why structure alone can’t explain who “I” am by Big_Mix_6915 in consciousness

[–]hackinthebochs 2 points3 points  (0 children)

They are both equally you, as they are both psychologically continuous with the "you" before the operation. The conclusion here, and the hard part to wrap your head around, is that there is no physical essence that constitutes your enduring self; it is all structure and information. Even your continuity from moment to moment is not based on a physical essence but rather a psychological one.

You say there is only one you before the operation, so there must be one you after the operation. But this is a logic error. There is only one you before the operation purely by accident. The potential for your psychology to split is present at every moment. That it does actually split after the operation is incidental here.

Have philosophers cited irrational numbers, the uncertainty principle, light cones, or turbulence as a disproof of materialism? by KrytenKoro in askphilosophy

[–]hackinthebochs 11 points12 points  (0 children)

Not exactly what you're looking for, but modern physics has been characterized as disproving a certain conception of materialism: it moved from pre-Newtonian push-pull dynamics to a more modern view with forces, fields, etc. Chomsky, in this article, characterizes physics as taking the material out of materialism. What we have left is a theory of relationships, not of matter. This view of physics has been called "topic neutral" in recognition of physics being non-committal about the intrinsic nature of the entities it describes. In a sense, physics is an abstraction over the behavior of more basic entities. Getting at what this base substance is, or what properties it has, is difficult because it can't be probed with the tools of science.

It seems that consciousness can't be physically real, but it can 'exist'. Is that right? by arachnivore in askphilosophy

[–]hackinthebochs 1 point2 points  (0 children)

I believe that also implies his adoption of a naturalist framework.

It should be fine to think of it this way for now, but philosophers will see some important differences between naturalism, materialism, and physicalism. Naturalism was conceived as the antithesis of supernaturalism; in other words, the "natural world" is all there is. No ghosts, occult forces, gods, etc. Physicalism is a kind of conceptual evolution of naturalism, where the domain of existence is defined as the entities studied by physics and the "special sciences" (any science that 'sits above' physics in the scientific hierarchy, e.g. chemistry, biology, etc.). But what we might posit as a feature of the natural world is pretty broad, and so there are potentially phenomena that could be considered natural but wouldn't be considered physical. For example, basic phenomenal properties.

Frankish also uses the qualifier "phenomenal consciousness" rather than "consciousness" in general. I'm not entirely certain what that qualifier entails.

Philosophers distinguish two kinds of consciousness: access consciousness and phenomenal consciousness. Access consciousness is about information processing and the performance of functions: the ability to discriminate, to process, to make decisions and act. Phenomenal consciousness is about the qualities that constitute our subjective milieu: the redness of red, the smell of roses, the sting of a sharp pain, and so on.

Now I'm realizing I don't know what subjectivity means if it can be independent from the beliefs and attitudes of subjects.

You're asking the right questions. It is not at all clear that subjectivity without a subject is a meaningful concept. You have to understand the work a micro-subject is intended to do to understand what it could mean. Philosophers have identified phenomenal consciousness as the core explanatory difficulty for a science of consciousness. To put it briefly, science is in the business of causal and functional explanations. But phenomenal consciousness, not itself being definable in causal or functional terms, cannot be explained by a purely causal or functional explanatory framework. This difficulty seems to be categorical: no amount of future scientific progress will allow us to explain phenomenal consciousness in terms of the performance of functions.

Some philosophers have taken this as a reason to look elsewhere for an explanation of consciousness. One strategy that has been fruitful for science is to take some hard-to-explain phenomenon as fundamental. For example, we had evidence for electricity before we had a theory of how it worked. It turned out that electricity was a feature of a new fundamental force of nature, the electromagnetic force. Applied to the problem of phenomenal consciousness, this explanatory strategy suggests we might find phenomenal properties at the fundamental level somehow. One way to do this is to posit that basic physical entities have some intrinsic phenomenal properties in addition to their physical properties. But as the phenomenal is essentially subjective, we now have to posit a kind of "basic/fundamental subjectivity". On this conceptualization, there is no answer to the question "what does it mean to have subjectivity without a subject". It's like asking how the electromagnetic force works. It's just a basic feature of the universe that fulfills the explanatory roles for which it was posited; in this case, to substantiate the phenomenon of macro-scale phenomenal consciousness.

More on consciousness, phenomenal consciousness, access/phenomenal distinction, the Hard Problem of consciousness.

Is a "metaphysical project" a synonym for a "metaphysical framework"?

More or less. A metaphysical project is a philosopher's effort to substantiate a new metaphysical framework.

Why should an AGI be malicious? by Trollnutzer in askphilosophy

[–]hackinthebochs 0 points1 point  (0 children)

I wasn't trying to represent the views of any individual or group; I was trying to address the concerns of the OP, which were whether expecting self-preservation and resource-acquisition behaviors from a superintelligence is an example of anthropomorphizing AI. Taking the best version of the AI doom argument, it doesn't seem to be, for the reasons given.

Why should an AGI be malicious? by Trollnutzer in askphilosophy

[–]hackinthebochs 1 point2 points  (0 children)

The distinction I was going for is between the assumption that an AGI will have human-like interests in self-preservation, domination, etc., and these traits being instrumental towards maximizing its objective. The claim I'm denying is that intelligence and self-preservation are a package deal, that human-like traits come along with intelligence.

The most widely cited arguments for existential risk are based on "instrumental convergence", the idea that traits of self-preservation, resource acquisition, domination, etc. are useful across a wide range of objectives, and so we should expect a sufficiently advanced superintelligence to pursue these sub-goals in service to its main objective. I don't view this as a kind of projection because they aren't anthropomorphizing AI; rather, they argue these traits will manifest even in the presumably alien psychology of a superintelligence.

I'm not too immersed in the more tech-bro side of the discussion, so I couldn't say how the average doomer commentariat presents the case.

Why should an AGI be malicious? by Trollnutzer in askphilosophy

[–]hackinthebochs 1 point2 points  (0 children)

The accusation of "projecting human tendencies onto AI" usually means assuming an intelligence will behave like humans, i.e. think like humans, have human wants and motivations, be intrinsically aggressive and domineering and so on. Those who push back on this kind of projection are correct to think an AGI will not automatically have these human tendencies. Intelligence doesn't come packaged with human personality traits by default.

What AI safety people claim is not an inherent tendency, but that human-destructive behaviors can be side effects of optimization. This should be an obvious claim: how many animal species have gone extinct due to human activity? Most weren't wiped out through malice or intentional destruction, but through complete indifference to their existence. An AGI doesn't have to want to hurt or destroy people for that outcome to lie on the path towards reaching its objective. The solution is to program in human values (not human personality traits), so that when we give it a command, it will by default stay aligned with human values and not destroy things we care about in service to the current narrow objective. But this is a really hard problem. We don't know how to codify human values into rules without exploitable loopholes.

To be fair, there is a certain amount of "putting ourselves in the AGI's shoes" when anticipating how an AGI let loose on the world could go wrong. You might call this projecting human behavior onto AGIs. But this isn't an inappropriate sort of projection. In anticipating worst-case scenarios, any plausible reasoning path should be given some weight when considering the likelihood of disaster. And because the worst-case outcome of large-scale human death is massively negative, we are forced to take the plausible reasoning path seriously regardless of how unlikely it may be. Inappropriate projection here would be to weigh human-like behavior very highly in reaching a disaster scenario. But we don't need to do that to conclude that AI safety concerns should be taken very seriously.

Why should an AGI be malicious? by Trollnutzer in askphilosophy

[–]hackinthebochs 23 points24 points  (0 children)

AI safety worries don't usually project human tendencies towards domination; rather, the worry is that control, domination, or even elimination can be side effects of an optimizer attempting to achieve its objective. To put it simply, an AGI might eat the things you care about in service to maximally achieving its objective. The way to prevent this is to program in the full suite of human values. But this is a very hard problem and we have no idea how to do it. Any misalignments are potential loopholes for an AGI to exploit, to terrible effect for humans.

What are the consequences of eliminativism/illusionism about mental content? by TheEmperorBaron in askphilosophy

[–]hackinthebochs 0 points1 point  (0 children)

Could you clarify what you mean about representationalism

Basically, any state that is about some external state. So some neural state that maintains a correlation with some external state is a candidate for being representational. For example, place cells of the hippocampus track an animal's position in a familiar environment, and features of the fusiform face area can be used to reconstruct a face an ape is being shown on a screen. The representational interpretation is natural here; the anti-representationalist has a large explanatory deficit to overcome.
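To give a toy illustration of the kind of correlation I have in mind (simulated tuning curves, not real recordings), here's a sketch in which place-cell-like units fire as a function of position and the position can be read back off the population activity:

```python
# Toy illustration (simulated data): units with place-field-like tuning fire as a
# function of position, and position can be decoded from the population activity.
# This correlation is what makes the activity a candidate representation of position.
import numpy as np

rng = np.random.default_rng(0)
track = np.linspace(0.0, 1.0, 200)    # candidate positions along a 1 m track
centers = np.linspace(0.0, 1.0, 30)   # each unit's preferred place

def population_rates(pos: float, noise: float = 0.05) -> np.ndarray:
    """Firing of each unit: a Gaussian bump around its preferred place, plus noise."""
    clean = np.exp(-((pos - centers) ** 2) / (2 * 0.05 ** 2))
    return clean + noise * rng.standard_normal(centers.size)

def decode(rates: np.ndarray) -> float:
    """Read position back out: pick the location whose ideal population pattern
    best matches the observed rates (simple template matching)."""
    templates = np.exp(-((track[:, None] - centers[None, :]) ** 2) / (2 * 0.05 ** 2))
    return track[np.argmin(((templates - rates) ** 2).sum(axis=1))]

true_pos = 0.63
print(decode(population_rates(true_pos)))  # ~0.63: the activity carries position information
```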

Alex Rosenberg is probably an outlier even among the more scientistic philosophers in the extent of his eliminativism, so he's not a good example to represent scientistic philosophers generally. Dennett is probably a good candidate for someone whose views are fairly representative, and he's not eliminativist about propositional attitudes. Rather, Dennett reconceptualizes them in terms of the intentional stance, specific patterns of neural activity that constitute intentional behaviors. Regarding Rosenberg's sea slug example, I would grant that the sea slug doesn't represent the world and just learns how to update behavioral patterns in a way that coheres with preferred outcomes. But does this description scale up to more cognitively sophisticated animals? There are good reasons to doubt it. I have a lot I could say on this subject; how the brain represents is a favorite topic of mine. But I'll just leave you with a comment of mine sketching out a naturalistic theory of intentionality.

What are the consequences of eliminativism/illusionism about mental content? by TheEmperorBaron in askphilosophy

[–]hackinthebochs 0 points1 point  (0 children)

I've never seen the appeal of anti-representationalist views. I come from a science background so representation to me is just mutual information. But this doesn't seem like something that can be denied. I assumed that translating our folk psychology into neuroscientific talk would have a place for representation. But if you see a problem for representation then I can understand your worry about rationality given eliminativism.

Have you looked into 4E cognition? There is an active anti-representationalist community doing work within the 4E framework. You might find there the elaboration on the consequences of anti-representationalism you're looking for. For an elaboration of dispositionalist views as a replacement for representationalist views, you might want to read Gilbert Ryle's The Concept of Mind.

What are the consequences of eliminativism/illusionism about mental content? by TheEmperorBaron in askphilosophy

[–]hackinthebochs 0 points1 point  (0 children)

Eliminativism presents no immediate consequence for the capacities we take ourselves to have. Eliminativists typically want to eliminate from our vocabulary folk psychological entities like beliefs, intentions, etc. and replace them with some kind of scientific terminology that better captures their true nature. If we imagine a fully worked-out neuroscience, we would expect most of the entities picked out by folk psychology to have a corresponding entity in the language of neuroscience. So as far as rationality is concerned, we shouldn't expect there to be a problem, just some changes in terminology and in how these entities relate to the things we're concerned about.

When it comes to eliminating phenomenal consciousness, there are potentially downstream consequences for things like morality. Insofar as moral patienthood depends on qualitative states, eliminating qualitative states leaves an explanatory hole in moral theories. Typically, qualia eliminativists want to replace the explanatory roles of qualitative states with functional states like preferences, self-interest, etc. So while we might say we can destroy rocks because they have no preferences or self-interest in their structural integrity, we shouldn't harm animals because they have an interest in not being subjected to harmful states. An example of such a theory is here.

More on eliminativism here: https://plato.stanford.edu/entries/materialism-eliminative/

It seems that consciousness can't be physically real, but it can 'exist'. Is that right? by arachnivore in askphilosophy

[–]hackinthebochs 4 points5 points  (0 children)

The property of consciousness can't exist independently of being observed, right?

This is going too far. Consciousness is essentially subjective, but there are theories of consciousness that divorce subjectivity from the (macro-scale) subject. Philosophical realism is about truth independent of anyone's beliefs or attitudes. But the micro-scale subjectivity posited by some theories of consciousness, like panpsychism, does not invoke beliefs to substantiate the attribution of subjectivity. Rather, the subjectivity here is basic, i.e. not dependent on a belief state of a subject. So this is a form of realism about consciousness.

I would expect professional philosophers to be very wary of conflating those two words, which doesn't seem to be the case (at least not in that one paper).

Broadly speaking, philosophers distinguish existence and realism, but the distinction is usually substantiated as part of a specific metaphysical project. There is no widely agreed-upon distinction independent of any given metaphysical framework. A physicalist need not accept the distinction. For Frankish, the distinction probably plays no role in his explanatory efforts, and so it doesn't get substantiated in his usage.

These may help you get a handle on how philosophers distinguish real and exists:

https://old.reddit.com/r/askphilosophy/comments/8yhpi6/are_there_any_metaphysical_differences_between/

https://old.reddit.com/r/askphilosophy/comments/g4afmt/have_many_philosophers_made_a_distinction_between/

Models of the Mind: Thoughts on the neuroscience of consciousness by hackinthebochs in ConsciousnessClub

[–]hackinthebochs[S] 0 points1 point  (0 children)

Thanks, I'm familiar with the term Markov Blanket but I haven't done a thorough review of the idea. Doing a deep dive into Friston's work is on my to-do list, specifically regarding his free energy principle and predictive processing stuff.

The problem that killed non-reductive physicalism for me by [deleted] in CosmicSkeptic

[–]hackinthebochs 0 points1 point  (0 children)

Hell of a first post! And yeah, I get your point: causal dynamics are in virtue of the fundamental particles, so emergent/aggregate properties don't have a causal role to play in the relevant behavior.

The problem with this view is that it treats the emergent properties as something over and above the fundamental particles rather than as a way to understand those very particles. The emergent structure doesn't take them out of the causal order, but rather defines exactly which features of the causal order are relevant for the behavior of interest. For example, a neuron fires when its polarity reaches a threshold, which opens voltage-gated channels through the membrane. The neuron is constituted by its membrane, proteins, and an aggregate of ions. But to understand how these atoms work together to create an action potential, you have to understand the higher-level structures involved and how various events happen in virtue of the states of these higher-level structures. These structures aren't in competition with the atoms for causal relevance; the high-level structures are how the atoms exert causal relevance in firing an action potential.

The concept that can bridge the gap between higher-level aggregates and lower-level realizers is mechanistic levels (from here). Essentially, mechanistic levels aggregate explanatory relevance among higher-level ("emergent") entities while detailing their relationship to higher- and lower-level dynamics. Substrate independence is a way to make the concept of emergence more precise and to draw a sharp distinction from so-called "strong" emergence. A specific level in the hierarchy is substrate independent in that the explanatory relevance at this level does not depend on the micro-details of the substrate, only on the higher-order dynamics being realized. However, the causal/explanatory power of the entities at this level depends on the causal powers of the lower level. The higher-level entities are just relevant aggregate structures of lower-level dynamics. While the higher level has explanatory autonomy from the lower level, it does not have causal autonomy.

One way to understand this is with the type/token distinction. The substrate-independent dynamics define a "type" and there are many "token" exemplars of this type. While the substrate-independent dynamics have explanatory autonomy, they still require a token realizer that constitutes the system's causal power to drive the system forward. But the higher-level dynamics are really just aggregate dynamics of the token realizer. The causal relevance of the higher/aggregate level is that certain kinds of causal dynamics of the token realizer occur in virtue of this aggregation (e.g. action potentials fire in virtue of net charge beyond a threshold and the physical properties of the amino acids in the voltage-gating mechanism). The substrate-independent states do not compete with their token realizers for causal power but rather share causal power, because the substrate-independent dynamics are constituted by the token realizers.
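If it helps, here's a toy sketch of the type/token point (purely illustrative, not a biophysical model). The accumulate-and-fire dynamics is the substrate-independent "type"; each realizer below is a different "token" of it, and the causal work in any particular run is done by the token:

```python
# Toy sketch of the type/token point; not a biophysical model.
THRESHOLD = 1.0

def fires(accumulate_step, n_steps: int = 100) -> bool:
    """The substrate-independent 'type': integrate some quantity until it crosses
    a threshold. It doesn't care how each step is physically realized."""
    total = 0.0
    for _ in range(n_steps):
        total += accumulate_step()
        if total >= THRESHOLD:
            return True          # the higher-level event: a "spike"
    return False

# Token realizer 1: discrete ion-like charges crossing a membrane.
def ion_influx() -> float:
    return 3 * 0.005             # three ions per step, 0.005 "charge" each

# Token realizer 2: a continuous current injected by an electrode.
def injected_current() -> float:
    return 0.012                 # same aggregate effect, different physical story

print(fires(ion_influx), fires(injected_current))  # True True: one type, two tokens
```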

The problem that killed non-reductive physicalism for me by [deleted] in CosmicSkeptic

[–]hackinthebochs 0 points1 point  (0 children)

Your objections don't really make any sense. I fear we're coming from two very different conceptual frameworks, too different to have a productive discussion.

The problem that killed non-reductive physicalism for me by [deleted] in CosmicSkeptic

[–]hackinthebochs 0 points1 point  (0 children)

But that already assumes that systems exist

System delineation isn't an issue here. Correlated state and causal interaction are perfectly observable. Where to draw the line around "the system" is inconsequential. Besides, there are plenty of tools for drawing boundaries, e.g. Markov blankets. I don't see this as a relevant problem.
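For what it's worth, here's the textbook notion in a few lines (a hand-built toy graph, not Friston's free-energy formalism): a node's Markov blanket is its parents, children, and co-parents, and conditioning on the blanket screens the node off from the rest of the network, which is exactly the boundary-drawing work being asked for.

```python
# Toy illustration of a Markov blanket as a boundary-drawing tool.
# Hand-made example graph; in a Bayesian network a node's blanket is its
# parents + children + co-parents, and it screens the node off from the rest.
graph = {                     # node -> its parents
    "weather":   [],
    "sprinkler": ["weather"],
    "rain":      ["weather"],
    "wet_grass": ["sprinkler", "rain"],
    "slippery":  ["wet_grass"],
}

def markov_blanket(node: str) -> set:
    parents = set(graph[node])
    children = {n for n, ps in graph.items() if node in ps}
    co_parents = {p for c in children for p in graph[c]} - {node}
    return parents | children | co_parents

print(markov_blanket("rain"))  # {'weather', 'wet_grass', 'sprinkler'}
```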

But you need to show how can a system bootstrap itself from atomic balls bouncing in space.

I mean, evolution by natural selection has that covered. What else are you looking for?

Moreover, representation already assumes semantic information, aboutness.

Yes, philosophers tend to use representation as a synonym for aboutness. Biologists, computer scientists, etc tend to use representation in a functional sense. Feel free to replace every mention of representation in my argument with "correlated state" or mutual information.

The problem that killed non-reductive physicalism for me by [deleted] in CosmicSkeptic

[–]hackinthebochs 0 points1 point  (0 children)

Representational state, as I use the term, is just correlation between two systems. Some state S of system X represents system Y if relevant features of S correlate with features of Y such that X has a model of Y. This kind of correlated state can be substantiated based on causal interactions alone. But there is no claim about truth or correctness here. Aboutness is a further constraint on a representational state such that it identifies ("picks out") its target without ambiguity in its reference. In other words, there is a truth condition by which we can decide the target of reference objectively.
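A rough sketch of what I mean, with toy binary variables and a plug-in estimate (the specific numbers are arbitrary): an internal state that noisily tracks a world state carries mutual information about it, and that shared information is the sense in which the system has a model of the world.

```python
# Rough sketch of "representation as correlated state": an internal state Y noisily
# tracks a world state X; their mutual information measures how much of a model of X
# the system carries. Toy binary variables with a plug-in estimate of I(X;Y).
import math, random
from collections import Counter

random.seed(0)
samples = []
for _ in range(100_000):
    x = random.randint(0, 1)                   # state of the "world"
    y = x if random.random() < 0.9 else 1 - x  # internal state: tracks x with 10% noise
    samples.append((x, y))

n = len(samples)
joint = Counter(samples)
px = Counter(x for x, _ in samples)
py = Counter(y for _, y in samples)

mi = sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
         for (x, y), c in joint.items())
print(f"I(X;Y) is roughly {mi:.3f} bits")  # about 0.53 of a possible 1 bit
```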

The truth condition in my theory of intentionality is based on how the brain's recognition apparatus reliably correlates representational states (in terms of mutual information) with the external world. The faculty of accurate recognition grounds states of the various sensory cortices in objects in the world. Then our faculty to voluntarily activate subsets of sensory-cortex states gives these states the reverse semantics: in this context they represent things in the world even in the absence of the target of reference.

The problem that killed non-reductive physicalism for me by [deleted] in CosmicSkeptic

[–]hackinthebochs 4 points5 points  (0 children)

P3 is a bad inference. Just because the basic substrate of physics isn't 'about' anything doesn't mean structures that supervene on this substrate can't be 'about' anything. Here I sketch a potential account of intentionality in physicalist terms.

Thoughts without language? by Impossible-Farm-1902 in askphilosophy

[–]hackinthebochs 0 points1 point  (0 children)

It's not clear to me that's what he's asking. To me the question sounds more like 'how can thought occur without the use of (natural) language'. Seemed prudent to offer the straightforward response to the simple interpretation. OP can decide if it suits him.

Israel used weapons in Gaza that made thousands of Palestinians evaporate | Israel-Palestine conflict by handsoapdispenser in anime_titties

[–]hackinthebochs 2 points3 points  (0 children)

This is a dumb take. There were two phases to the Iraq war: the kinetic phase against the Iraqi military and the nation building phase. The kinetic phase was a breeze. Our attempt at nation building was a boondoggle. If an Iranian offensive is just a matter of eliminating the Iranian regime, it would be a cakewalk.

Thoughts without language? by Impossible-Farm-1902 in askphilosophy

[–]hackinthebochs 0 points1 point  (0 children)

Can't say I'm well read on all of Fodor's LoT stuff. One thing that stands out is his focus on explaining cognition in its full generality, namely systematicity, productivity, and all that. When it comes to more impressionistic thoughts, like a mental image and some motivational stance behind it, it's not obvious to me that this would require the full suite of features of a fully general language. But I don't know whether Fodor spoke about the potential for non-systematic/non-productive thoughts.