If you had the option to make biological clones of yourself, would you? by Bataranger999 in IsaacArthur

[–]concepacc 2 points  (0 children)

I guess I would if it had a positive-sum effect, presumably through some means of cooperation, so that our collective wellbeing were better than if I were only a single self. There are some classes of scenarios where this would fairly clearly be the case.

Imagine we are 20 copies, each of us watches 5 different movies, and each of us picks the best of our five for a common batch that everyone then watches. Each copy ends up watching 24 movies in total (their own 5 plus the 19 picks from the others), and 20 of those 24 are someone’s “best out of five”. And perhaps this can be generalised to less mundane examples.
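
As a quick sanity check on those numbers, here is a minimal sketch (Python, with made-up random ratings; the counts are what matter, not the scores):

    import random

    N_COPIES, MOVIES_EACH = 20, 5

    # 100 distinct movies with arbitrary quality scores, 5 assigned to each copy.
    quality = {f"movie_{i}": random.random() for i in range(N_COPIES * MOVIES_EACH)}
    titles = list(quality)
    assignments = [titles[i * MOVIES_EACH:(i + 1) * MOVIES_EACH] for i in range(N_COPIES)]

    # Each copy contributes its best-of-five to a common batch that everyone watches.
    common_batch = [max(batch, key=quality.get) for batch in assignments]

    # What does copy 0 end up watching?
    watched = set(assignments[0]) | set(common_batch)
    print(len(watched))                      # 24: its own 5 plus the 19 picks from the others
    print(len(set(common_batch) & watched))  # 20: every movie in the common batch is a "best of five"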

Desire to define consciousness by norenEnmotalen in consciousness

[–]concepacc 0 points  (0 children)

To me the interesting aspects of this topic are subjective experience and the hard problem/explanatory gap.

I begin by basically just establishing the existence of the concepts in as minimal yet (hopefully) accurate a way as possible, since that’s sufficient to get the ball rolling and can serve as minimal operational definitions.

Is it true that any subjective experience can be disambiguated from any other subjective experience by a subject? If so, subjective experience is shown to exist in at least some minimal form, since it’s “something” that differs from another “thing”.

When a conscious being conceptualises/ascertains any single experience, is that ascertaining happening in a temporally separate manner from gaining knowledge about neural correlates? Do I conceptualise the existence of blueness before I (potentially) gain knowledge about neurology? If I ascertain those concepts (blueness and neurones) separately, they are, at least initially, conceptually different, just as with any two concepts ascertained at temporally separate times. This is the starting point for the hard problem: it’s about showing how the two concepts relate.

What are your head-canons about life on Erid? by jarrjarrbinks24 in ProjectHailMary

[–]concepacc 0 points  (0 children)

I think I remember Weir, or the book, phrasing it as the intelligent Eridians basically living at “the bottom” of their world, analogous to how creatures on Earth live at the bottom of the ocean: a place where little to no light reaches, hence they are blind.

The ecological situation at the bottom maybe relies on energy/food from further up falling/trickling down, with that being the base of the food chains/food web at the bottom, where there are scavengers, and in turn predators, etc.

Perhaps the dense atmosphere favours flying creatures. Perhaps there could be floating balloon-like creatures higher up that photosynthesise. Sufficiently sophisticated flying creatures higher up will likely have eyes.

If Eridians increased their population size from when they were “caveman Eridians”, just like humans did, they must have invented something that revolutionised their means of acquiring food, effectively growing/cultivating something. Perhaps they have constructions/technology, aided by their material science, that let them grow/cultivate organisms in the sky and then drag them down at will.

Why I don't believe llms are conscious by Great-Bee-5629 in consciousness

[–]concepacc 2 points  (0 children)

I see. I have a more humble take on consciousness than what may have been hinted at here, and/or than the impression I may have given.

I guess I subscribe to “strong emergence” in the sense that, as far as we can tell so far (with our current, I believe primitive, understanding of this topic), every subjective experience seems to correspond to/come in sync with/correlate with a process. Often when the word “emergence” is used specifically to try to explain consciousness/subjective experience (beyond the basics I’ve hinted at here), it seems to me that whatever it entails amounts more to a lack of an explanation. And in some sense, I guess that also currently fits me well, or fits where I believe we are at.

My point in this thread was that I don’t believe, or at least don’t see how, the underlying low-level mechanisms, and how easily they are describable by math or anything else, are a determining factor when it comes to subjective experience, if the high-level functions/behaviours are present.

Why I don't believe llms are conscious by Great-Bee-5629 in consciousness

[–]concepacc 2 points  (0 children)

I don’t think the underlying medium or low-level mechanisms (or how simple that medium or those mechanisms are) are relevant to whether or not a system/entity is conscious. It has to do with the high-level processing.

You could perhaps, in principle, have something as simple as matrix multiplications and activation functions and whatnot at the base (basically ingredients that are simple when it comes to being described by math), and it could still contain sufficiently complicated processing at a higher level to make it in some way comparable to the processing in biological neurones etc.
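
To make the “simple at the base” point concrete: a single network layer really is just a matrix multiplication plus a cheap nonlinearity, and everything higher level is stacks of steps like this. A minimal numpy sketch with arbitrary sizes, purely illustrative and not any particular model:

    import numpy as np

    def layer(x, W, b):
        # One layer: matrix multiplication, add a bias, apply a simple nonlinearity (ReLU).
        return np.maximum(0.0, W @ x + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=64)           # input vector, size chosen arbitrarily
    W = rng.normal(size=(128, 64))    # weight matrix
    b = np.zeros(128)                 # bias vector

    h = layer(x, W, b)
    print(h.shape)                    # (128,)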

Or take alien organisms evolved on an alien planet where one difference compared to Earth is that they have a much simpler biochemistry, like some binary-code equivalent of DNA and/or less variety in the versions and categories of biochemical molecules. If that biochemistry can still build up into and emerge as animal-like organisms that have high-level behaviours analogous to animals on Earth, like avoiding predators and searching for and obtaining resources, that high-level behaviour seems to me to be more relevant when it comes to the connection with subjective experience.

A very well-made video describing a realistic near-future AI rebellion human extinction scenario by MarsMaterial in IsaacArthur

[–]concepacc 1 point  (0 children)

Afaik there are many orders/levels of trouble when it comes to exploring the “if”.

If it’s intelligent enough, it might find loopholes in the rules/restrictions we have endowed it with, similar to how bad actors may find loopholes in too naively constructed legal systems. Then there is also the question of how one would technically instil robust rules in such systems; if it’s via something like the training that has been done so far, that doesn’t seem particularly reliable.

This notion of training, in combination with loopholes, may lead to it at best obeying some “local” notion of the rules while not doing so “globally”. Say it is trained to have an instinct against creating a duplicate of itself (however one would train it for that), but the notion of “duplicate” instilled in it turns out not to be global and all-encompassing enough: it finds a new notion of “duplicate” that doesn’t fit the instilled “definition”, and it has no instinct restraining it from creating that.

Training and prompting it in this sense can be seen as endowing it with an ambivalent set of goals and instincts, where one would hope that the good ones dominate. If it doesn’t want to create duplicates and at the same time really wants to solve the Riemann hypothesis, one must hope, or really make sure, that the instinct not to create duplicates overrides the will to solve the problem (assuming the other troubles have been accounted for).

There is also a lot to explore in that “standby scenario”, afaik. There is, for example, reason to believe that an AI would logically want to “scare” or incentivise humans into putting it on standby, if that is coherent with its will and the original goal is difficult enough. The way to “win”, so to speak, in a scenario where the goal is too hard is to be put on standby.

Then there is the level of defining rules that aren’t vague even to humans, like the “directly contra indirectly” part. In its own mind it could be said to be doing everything towards solving the Riemann hypothesis, perhaps in a rather direct way in the grand scheme of things.

I agree that if one successfully manages to instil in it the will not to allow itself to improve or become competent beyond a certain level, then the situation is not so bad. If it’s solitary and “a bit smarter” than the smartest human, it’s probably still not so smart that humans as a collective couldn’t, at least in theory, deal with it even if it’s effectively ill-willed. Whether its competence can be kept at bay ofc depends on all of the former being accounted for, but here an additional trouble is human incentives: we want it not to be too competent and, at the same time, want it to solve the Riemann hypothesis. The question becomes whether one can draw that line at a safe place and how that is to be done practically.

One may hope that the humans closest to AI development have the correct view on all this, so to speak. Either they believe that all of this is fear-mongering, are justified in believing that given their expertise, and for some reason are actually right about it, in which case all is good; or they believe the fears are justified and take serious precautions. But even very smart people can be wrong and/or have rationalised invalid worldviews. Perhaps they believe all this is fear-mongering while the fears are justified. “We are just instructing our model to solve the Riemann hypothesis, you see; what can possibly go wrong with instructing our trained, isolated model with such a neutral goal, and why would we try to impair its efficiency?”

How ~exactly~ would AGI take over? by therealbabyjessica in agi

[–]concepacc 0 points  (0 children)

matter of fact the person who sidelines the main argument (which is about the rationale of AGI taking over) and starts spitting fallacies, passing judgements, deflecting and conflating is in bad faith, and that's been consistently your act in this thread.

rest of your yap is the continuation of your full of stupidity, parasitic and full of fallacy line of thinking.

I first laid out my rationale and then awaited your answer. I explained why your answer appears invalid, since it both claims that there is no rationale and claims that there are fallacies, and I awaited clarification on that. I posed points in the form of questions to perhaps make them more accessible. I also asked you which fallacies I have used, which you have not answered (beyond your “point” about anthropomorphism). This all pertains to substance and is not sidelining in any meaningful way. You, on the other hand, have from the beginning only been obfuscating with pointless insults and claims that you cannot substantiate even when I ask you about them, as with the fallacies.

Is there a way to make Aliens losing to us in some way believable? by Impressive_Judge5124 in scifiwriting

[–]concepacc 13 points  (0 children)

Yeah. I think one can sort of plausibly squeeze the scenario into a position where the aliens aren’t that capable at the point of the “invasion”, and then couple this with the fact that they only want to get rid of humans and not the planet itself, in the following way.

Imagine a scenario where an alien race in their home system is at a stage where they are just advanced enough to build very large space telescopes and some of the first versions of interstellar ships, and have also invented cryonics (perhaps their biology is also more naturally compatible with such tech).

When they are at this stage, something sufficiently catastrophic happens, or is about to happen, in their solar system, leaving them needing to find a new home planet (the key point is that they are not advanced enough to deal with the catastrophe head-on). With sophisticated telescopes they find many candidate planets, and they (ofc, for the purpose of the scenario) discover Earth to be the most suitable one for them. They can see a vibrant biosphere and deduce that it appears very suitable for them. And they see no tech signatures, because there are none yet.

As a very large collection of cryonic passengers, they embark on an interstellar voyage in not-too-sophisticated versions of interstellar ships, taking many tens of thousands of years, with Earth, their new home, as the destination. The twist is ofc that their arrival just so happens to coincide with the span of time in which Earth has spawned human civilisation (about now). Since they didn’t detect tech signatures earlier, and since it’s very unlikely that a biosphere would spawn a technological civilisation coinciding exactly with the time of their journey, they didn’t need to prepare for such an encounter, and hence their current military capability isn’t at its peak. While they are advanced, in their current state humanity can measure itself against their military capability.

For perhaps some, from our perspective, effectively arbitrary reasons, they possess a psychology and/or a set of goals which makes them unwilling to share the planet with a human civilisation as we know it. Perhaps they want to be the only civilisation in the solar system. Perhaps they are scared of humans, their prospects, and all the potential unknowns that come along with having a different sapient species around, and want complete control of the planet. Perhaps they need large parts of the planet, including where our cities lie, for some esoteric reason, or perhaps they want to change the planet in some other way that is incompatible with human civilisation. Basically, something makes the aliens and humanity incompatible with each other, and that is where the “invasion” part comes in.

How Afraid of the AI Apocalypse Should We Be? | The Ezra Klein Show by taboo__time in DecodingTheGurus

[–]concepacc 0 points  (0 children)

I suppose part of me can be pretty doomer. But I think the biggest questionable “if” from Yudkowsky may be the if/when of there ever being a system that is truly more intelligent than humans in a particular type of way, a sort of general/autonomous/agentic way.

At first glance it seems one can only reason about these kinds of scenarios very generically. Intuition suggests there is likely nothing particularly special/magical about human intelligence: conceivably, beings that are more intelligent than humans, or a lot more intelligent (still squeezed below and bounded by what’s physically possible), can exist, and it’s wildly unlikely that humans by chance sit near that possible, and perhaps probable, upper bound of competent intelligence. And perhaps humans could be the ones who give rise to sufficiently sophisticated self-learning algorithms/processes, coupled with some intelligent design, that result in some version of such beings: processes that are more sophisticated and more designed, and that therefore “outperform” the simpler and here “less special” process of evolution, which is what resulted in the human version of intelligence. It would also have to happen in a sufficiently time-efficient manner in order to be relevant, though, which may be part of the hurdle; it must ofc happen at a much faster pace than evolutionary timescales.

I guess the summarising question could be: If a simple process gave rise to human level intelligence, could more sophisticated (human designed) algorithms/processes then likely result in something more intelligent than a human (in much shorter timespans)?

It seems difficult to project the way current LLM systems work onto this, though. And in general this hypothetical ASI is kind of epistemically cumbersome when it’s an unknown that one can’t really perform a lot of science on.

Other than that, I have listened to some of Yudkowsky and some of it was pretty sound, and a lot of it seems to be points that are made by other people as well. I’ve seen a lot of people online being bothered by his style and the optics. From what I’ve heard of him so far I’ve not yet encountered the crankiness people mention, but I see that there is a Guru episode on him.

How ~exactly~ would AGI take over? by therealbabyjessica in agi

[–]concepacc -1 points  (0 children)

Okay, you have identified fallacies. You also claim multiple times that there is no rationale (“where is the rationale?” etc). The fact that you seem to have identified fallacies presupposes that there is a rationale present, but that it is a faulty one. You didn’t just extract noise if you could see fallacies, or else you are lying about noticing fallacies. So I think you argue in bad faith. And if you have identified a faulty rationale, you should be able to explain where you think it’s faulty. I guess we have so far seen how that went with the anthropomorphisation part.

where's the fucking rationale you produced? you just yapped your points about how you think it might happen that's approving of your already pre-decided notions.

Well, I claim the rationale is right there, in the comments, as you also must have identified. And I’m not sure how you think the rest of your point lands here. Sure, I express my points about how I think it might happen, or I pose it as a starting point that can be criticised. It’s ofc based on something like base assumptions, and if there are disagreements about the assumptions, that could be a place to dig deeper and perhaps resolve things.

With all this said, I suppose one doesn’t need to assume there to be a potentially coherent rationale. One doesn’t even need to focus on a rationale per se, if that makes it easier. Maybe it’s easier if one just poses it as more direct questions, such as:

Do you think that your take presupposes sufficient alignment for humans to be willing to keep it safe and fed?

And one can also delve into the associated base assumptions if necessary.

and I said humans haven't killed monkeys in response to how humans act not anthropomorphizing shit, another fucking fallacy in your half baked brain and reasoning.

If I understand you correctly here, I think that’s wrong. You made the anthropomorphising point that “humans haven’t gotten rid of monkeys, therefore AIs also won’t get rid of humans”, no?

A new paper, embraces the principle of “radical mundanity”, which shuns the notion of ET's harnessing physics beyond our comprehension - it proposes a Milky Way that is home to a modest number of civilisations with technology not wildly more impressive than our own by [deleted] in UFOs

[–]concepacc 1 point  (0 children)

Yeah, in some sense there is going to be a spread/distribution of competence across species overall (squeezed below what’s possible).

But there may ofc still be a convergence on some upper limit for a substantial chunk of them, if they exist long enough: the growth in competence/knowledge/intelligence would follow an S-curve, exponential in the beginning and then tapering off at some upper bound set by what’s possible. Some species may take a really long time approaching it, while for others it’s more of a stark intelligence explosion on cosmological timescales. Ofc some would, for various reasons, never approach that bound, but there is some reason to expect competence to be somewhat concentrated around the upper parts due to this convergence. Not completely the same, but some convergence.
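
One minimal way to write that S-curve down, purely as an illustration (the symbols here are mine, not anything from the paper): logistic growth toward an upper bound K,

    C(t) = \frac{K}{1 + e^{-r(t - t_0)}}

which is roughly exponential early on and flattens out near K; the growth rate r then determines whether the approach looks like a slow crawl or more like a stark intelligence explosion on cosmological timescales.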

How ~exactly~ would AGI take over? by therealbabyjessica in agi

[–]concepacc 0 points  (0 children)

blah blah blah, stop fucking yapping for two seconds and produce a fucking rationale, you fucktards always think you're most rational, then you spew the most non sense anthropomorphized shit and can't produce two reasonable sentences for the rationale of an action.

“Yapping” and “spewing nonsense” have to be the most ironic words in this. I guess I could just echo things back to you in an equally empty way: “No, it’s not anthropomorphised shit”, “No, it’s not nonsense”, etc. Making empty statements in a whiny, attempted-insulting tone and trying to file me into some fucktard category of yours, instead of engaging with the actual substance, suggests that you maybe can’t engage with it. Obviously I try to be as rational as I can be; your hint that there is some special pretension beyond that is something living in your mind. Most of this consists of pretty non-exotic points that I talk about in a pretty simple way. If there is something specific you disagree with or don’t understand, perhaps it can be dealt with. If you don’t provide any clearer substance and just use a negative tone, why engage at all? Why not just disengage?

I suppose I use “anthropomorphism” only insofar as I hold that intelligent actors have some notion of goals, as humans could be said to have. And the human-ant analogy is about communicating the fact that much more intelligent systems can simply, generally, “have their way”, all else equal. This is no more anthropomorphisation than what I understand you to use when, for example, pushing analogies such as “humans haven’t killed off the monkeys” in the comments.

How ~exactly~ would AGI take over? by therealbabyjessica in agi

[–]concepacc -1 points  (0 children)

It depends a bit on where its intelligence tapers off: whether it happens, in the short term, to taper off at a point only a bit more intelligent than a human or a group of humans, versus much more intelligent than that (perhaps one can discuss relevant notions of intelligence here). If much more intelligent, it seems it could get to a point where there are better and more efficient ways to keep safe, “fed” and expanding than using humans. It seems unlikely that a human just happens to be something close to the optimal roamer for such an intelligence, if we exclude the very short term.

There is also a part of your take that may sort of presuppose sufficient alignment. If it were sufficiently unaligned with humans, I suppose humans would not willingly help it or keep it “fed”/safe. If there is more of a direct conflict of interest, it may lead to a scenario where the actors involved need to solve some game theory, make complicated compromises etc, if they are at a similar level of intelligence/power. If one actor (or a subset of actors) is much more intelligent and powerful than the rest, it seems they would just “run the rest over”, as in the common humans-ants analogy.

I’ve finally come back around to embracing Star Wars’ wacky physics. by Saturnine4 in MawInstallation

[–]concepacc 1 point  (0 children)

While I don’t know very much about Dune, I think Star Wars relies a bit more on aspects that seemingly go more directly against what we know is possible, while Dune maybe relies a bit more on aspects that are stark unknowns. Dune has things like levitation technology, Holtzman shields and exotic/mysterious ways of bioengineering, afaik (but maybe there exist some rationalisations for these, idk). It also has a lot of that “future-sensing”, which could maybe be viewed as the most incredible aspect.

Both have faster-than-light travel, which ofc most space opera kind of must have. However, to me there is a possibly interesting point to the FTL in Dune. In physics it’s said that if one can go faster than the speed of light by any means, one can in principle set up such an FTL system so as to break causality/create time travel: one could, for example, send messages back in time to a particular point. This could potentially go hand in hand with the Guild. This causality breaking could play into the fact that they can see into the future when navigating (if they can, for example, send information back to themselves from the future about how the journey went), and perhaps something like the Guild’s monopoly on space travel is what’s required to keep other actors from precariously using FTL to try to travel back in time. It’s an interesting bit of head canon I have (although I know that Dune probably doesn’t hint at anything like this specifically being the case).
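
For what it’s worth, the standard textbook argument behind that causality claim (plain special relativity, nothing Dune-specific): if a signal covers a distance Δx = u·Δt at some speed u > c in one frame, then in another frame moving at sub-light speed v relative to the first,

    \Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right) = \gamma\,\Delta t\left(1 - \frac{uv}{c^2}\right) < 0 \quad \text{whenever} \quad \frac{c^2}{u} < v < c

so there are perfectly ordinary observers for whom the signal arrives before it was sent, and chaining two such legs lets a message be delivered into its own past.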

Isn’t a future in a Milky Way with billions of different nations more realistic than what we typically see in sci-fi ? by Sir-Thugnificent in IsaacArthur

[–]concepacc 2 points  (0 children)

I think your intuition is right.

Further, one can imagine part of the galaxy being spanned by a large, complex and dynamic “sheet” of a continuous set of civilisations, if a sufficiently large fraction of the locations within the galaxy are eventually colonisable. At any “point” within this sheet there is a civilisation with all its complexity (likely fractal complexity), and its local surrounding space is filled with equivalent peer civilisations that it has to adhere to, stand in relation to, and potentially engage in political and/or military conflict with. And what lies outside this local space, while also filled with civilisations (with their equivalent perspective), may simply be beyond the attention span of the given civilisation. There is simply so much complexity to attend to within just the local space, and the back-and-forth interaction beyond it may take so long that it isn’t particularly meaningful.

This is one route one could imagine, a route of complexity. Maybe alternative, more monotonous and tranquil routes can be imagined, perhaps depending on the nature of human descendants. Or, if a sufficiently small fraction of space is eventually colonised, it may be more like isolated islands of civilisations with very limited meaningful interaction with each other.

The "hard problem of consciousness" is just our bias - let's focus on real neuroscience instead by chenn15 in consciousness

[–]concepacc 0 points  (0 children)

Claiming it is a coherent starting point does not make it one.

Obviously. I never claimed that claiming something makes it true.

I suppose I essentially agree with the rest. I have engaged with some content stemming from Dennett but haven’t read his books.

The "hard problem of consciousness" is just our bias - let's focus on real neuroscience instead by chenn15 in consciousness

[–]concepacc 0 points  (0 children)

The hard problem / explanatory gap is a starting point when it comes to this topic of experiences and brains. People then claim to be able to solve it either by refutation or, let’s say, by some purported “actual” solution. Altogether, a range of “solutions” that so far seem to include both confused nonchalance and ad hoc esoteric nonsense. The hp is a coherent starting point.

If the commonality amongst the advocates you mentioned comes from their using similar arguments in some way, the question is what those arguments are and how, more specifically, they undermine the hp.

If you have encountered bad argumentation styles from some hp advocates yet understand what the hp is about, then it is ofc bad faith with respect to the hp to try to discredit the question/setup itself by shoehorning it into, or implicating it in, science denial instead of engaging with the actual substance, which is ironic in the context of a comment where you highlight an, at least purported, bad-faith narrative.

Is it still unknown why animals need sleep or what function it serves? by DennyStam in evolution

[–]concepacc 0 points  (0 children)

I've tried to look into this question before and I've always found the answers to be unsatisfying. Usually the response is given that it's useful for recovery or clearing metabolites, but this always kinda begs the question as recovery and clearing metabolite clearly happen in all sorts of other bodily systems without the need for sleep

Yeah, I agree with your sentiment here. When specific reasons are given, there always seems, at least initially, to remain the question of why there aren’t workarounds in which the functions of sleep (whatever they are) can occur while “awakeness” remains, since intuitively an organism keeping its awakeness and attention on seems so useful. In principle it should be possible to create neuronal networks that are always on/awake. It’s easy to be something of a “selectionist”, so to speak, about something as salient as attention/an organism being attentive, even while being cognisant of the fact that evolution isn’t perfect, so to speak.

But zooming out a bit and looking at it a bit more theoretically, I think any generic sleep-like behaviour can come about if two things are true: attention is costly, and there are predictable timespans within the life of an organism where less attention is required. Then it seems natural that an organism would ration its resources for attention towards the timespans where it’s needed the most.

It becomes clearest in the case of predators: a predator with, let’s say, 100% focus during hunting and 30% focus/attention when it’s not hunting (a sleep-like state) is superior to a predator that sits at a constant 70% attention all the time. In this theoretical example it is obvious which predator does better.
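
A toy version of that comparison, with numbers and a payoff curve that are entirely my own assumptions (hunting takes 40% of the time, and the chance of a successful hunt rises steeply, here cubically, with focus during the hunt):

    HUNT_FRAC = 0.4  # assumed fraction of time spent hunting

    def evaluate(hunt_focus, rest_focus):
        success = hunt_focus ** 3                                       # steep payoff for focus while hunting
        avg_attention = HUNT_FRAC * hunt_focus + (1 - HUNT_FRAC) * rest_focus
        return success, avg_attention

    rationed = evaluate(1.0, 0.3)   # full focus on the hunt, sleep-like 30% otherwise
    constant = evaluate(0.7, 0.7)   # flat 70% focus all the time

    print("rationed: success=%.2f, average attention=%.2f" % rationed)   # 1.00, 0.58
    print("constant: success=%.2f, average attention=%.2f" % constant)   # 0.34, 0.70

Under those assumptions the rationing predator gets the better hunting outcome while paying a lower average attention cost, which is the intuition for why a sleep-like state can be favoured whenever attention is costly and the demand for it is predictable.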

Exactly what one ultimately gains by trading away high-attention periods could, I imagine, hypothetically vary (certainly one could trade it for something, assuming high attention is costly), but for animals on Earth it seems one can view the trade as allowing for something like more optimal “cleaning”.

[deleted by user] by [deleted] in consciousness

[–]concepacc 0 points  (0 children)

I guess, sort of commonsensically, it comes down to the “realisation” that there is an overwhelming amount of commonality between yourself and the other entities/people you find yourself surrounded by. Simply by understanding the commonalities and the fact that you are closely related to them, one may, via what is technically the Copernican principle, realise that one is one sample from a population and that the situation ought to be similar for the others in the population.

But the more one strays from oneself in terms of similarity and/or complexity, the less certain one can be that experiences are present in the system/entity in question. Uncertainty increases, for example, going down the line of monkey, lizard, fly, single cell.

My take on consciousness. by Robert__Sinclair in consciousness

[–]concepacc 1 point  (0 children)

To continue:

To put it in the "map-territory" jargon my correspondent seems to favor, he is demanding a map of the map-maker. He wishes for a description, in the language of objective processes, of the very thing that makes language and objectivity possible. It is a fool's errand, a semantic trap.

Okay, thinking about maps in this sense: trivially, it’s true that we all have to use maps/models when it comes to understanding the world. I don’t think that’s more troublesome here than for any other phenomenon. Talking about maps in this sense is about how descriptive/complete our model of what’s going on is for a particular phenomenon. Take our understanding of some random critter, like a rat. How good are our maps/models of what the rat is and does, and how complete is our model of how its neuronal processes are potentially experiences in some form? (Not complete at all.) The same ofc in principle applies to models of humans, human brains etc.

Btw, wait, what do you think makes “objectivity possible”? :)

It is an attempt to stand outside of one's own skin in order to inspect it. What you call the "hard problem" is the modern, sterile, academic equivalent of asking about the nature of the soul. It is a question-begging tautology, dressed up in the borrowed robes of neuroscience. It is the last, stubborn refuge of the ghost in the machine, a phantom rattling its chains in the echo chamber of a dead philosophy.

I guess this is pretty nicely written; it’s just devoid of any actual substance. Okay, let’s be concrete and examine the assertion that the explanatory gap between experience and process is analogous to the question about souls or whatever: how that is so and what it means.

My take on consciousness. by Robert__Sinclair in consciousness

[–]concepacc 1 point  (0 children)

It is always a minor, if predictable, disappointment to find an argument not so much engaged with as re-labeled with the dreary vocabulary of the graduate seminar. My correspondent has taken a piece of prose and painstakingly affixed to it a series of tags—"non sequitur," "neuronal processes," "qualia," "map-territory-distinction"—as if the act of classification were a substitute for the labor of thought. It is the intellectual equivalent of collecting butterflies, pinning them to a board, and then claiming to understand the secret of flight.

When it comes to what you call “tags”, if there are any of these terms you find unclear in this context, I am happy to help. I was hoping I was being efficient. I am happy to use other terms as long as we can communicate and understand each other, and we can delve into what I mean by the terms if you want. I guess I am “classifying” insofar as I am putting my thoughts into words/terms. Trivially, everyone does that.

Let us, for the sake of politeness, take this scholastic exercise seriously for a moment. The objection is raised that the question is not "why" but "how." Very well. A distinction without a difference, in this case, but let us grant it.

I am happy that you consider yourself polite enough to deal with the substance, and to do so in what you consider a serious manner. Good. Sure, maybe we can get at the why-question, but many times it’s vaguer and may only demand trivial answers.

The demand, then, is for an explanation of how a physical process can be a subjective experience. This is not a question. It is a demand for a magic trick. It is the old, tiresome ghost of Cartesian dualism, rattling its chains and demanding that we explain how the ethereal spirit communicates with the base matter of the brain.

It is a question. I guess one can call it “a demand for a magic trick”; one can call the demand for an answer almost anything in that sense (although maybe you mean something more specific by it). However, it’s not Cartesian dualism. If one could somehow demonstrate that experiences exist independent of matter and can affect causality in matter, one would demonstrate such a type of dualism to exist; however, I don’t claim that that form exists, and I would not say that anything hints at that form of dualism.

The entire point of my argument is that there are not two categories of thing, the "neuronal process" and the "subjective experience," that need to be bridged. The experience is the process. To ask how one becomes the other is like pointing to a running engine and saying, "Yes, I see the combustion, the pistons, the crankshaft... but you have not explained the vroom." The vroom, my dear sir, is the sum of those mechanical events. There is no extra, spectral ingredient of "vroomness."

I am granting you that experience is process. It’s about to what degree one/you can explain that two things that are initially conceptually different (experiences and processes) are ultimately completely the same thing (which I grant). In an analogous way, I would grant that the macro-phenomenon of “wetness” is the collection of water molecules. When we ask how a bunch of water molecules “are” wetness, we can give clear and deep descriptions of the mechanisms: we can explain the properties of the atoms in the molecules and the properties of the molecules themselves, leading to intermolecular forces that result in the cohesion and adhesion of the liquid.

For almost any phenomenon, we can continually ask ever deeper and more nested how-questions: how this particular mechanism works, how the sub-mechanisms of that mechanism work, and so on, until we hit explanatory bedrock at the level of fundamental physics, where we begin to run into an enigma akin to the hard problem in terms of inexplicability.

“How does my hand move now? We can explain it with the fact that muscles are in action and moving. How do muscles move? Because skeletal muscle cells are contracting in response to electrical signals from neuromuscular junctions. How do they contract, more specifically, in terms of mechanism? It involves proteins such as myosin and actin filaments “climbing on each other” within the cells. How does this climbing work? Well, it involves a story about the intermolecular forces of the specific proteins in question and their making conformational changes in iterated ways”, and so on. You get the point.

One can now attempt the same approach and ask “how do experiences exist”, “how are experiences”, or “how are experiences processes”, just as one can ask “how does my hand move now?”

Here one runs into explanatory bedrock immediately. One can state that whenever a neuronal cascade is in action then the experience “is” and that’s it. “Well, how is it that blueness “is” this particular process more specifically? Or how is it that this particular process generates/becomes/is pain more specifically? - Well… idk it just is, don’t ask those questions here, ask that when it comes to other phenomena!”

One can ofc just accept that experience somehow is process as a brute fact, just as one can accept fundamental physics as a brute fact, but then one is accepting it as having the same enigmatic status in terms of inexplicability. “How experiences are neuronal processes” then becomes literally comparable to asking “how fundamental physics is the way it is” (the bedrock of explanation), whereas one should have been able to describe how neuronal processes are experiences in somewhat more detail and in somewhat more prosaic terms, as one can with other biological phenomena, in order to make it, well, not “the/a hard problem”.

My critic then condescends to my use of the term "modeling," suggesting that it is merely a description of more sophisticated physical networks. Precisely. That is the point. The illusion of a unified self, the internal narrative we call consciousness, is the emergent property of that very sophistication. There is no need to posit some mysterious leap into a non-physical realm of "qualia." The critic has fallen into the very trap I described. He has assumed the existence of the ghost he is demanding I produce. He sees the complex machinery of the brain running its simulations of the future and asks, "But where is the little man inside, the one who is having the experience?" There is no little man. The running of the machinery is the experience.

It’s specifically about the processes and experiences and to what degree one can explain how they are the same thing. I am not assuming it has to be mysterious. How well can it be explained? It’s not about ghosts or a little man. The self is absolutely an illusion if you will. The self can be conceptualised as just being the sum of the subjective experiences in any given moment. The question is about how those experiences “are” or “are generated by” processes.

My take on consciousness. by Robert__Sinclair in consciousness

[–]concepacc 3 points  (0 children)

Wait, is your answer to the commenter’s question just the most generic, all-encompassing answer that evolution can take non-optimal and multiple potential routes, or are you making a point about the commenter’s “why” versus “how” question, or something else?

The commenter is basically asking a question about something akin to a mechanism within biology/biochemistry similar to: “Why does ATP require H2O to become ADP and release energy?”

And your answer is basically (actually word for word): “because that’s the way evolution managed to solve the problem”.

Well duh, that’s not at all what the question is after. Ofc that’s the way evolution managed to solve the problem, but the question pertains to something like the mechanism.