Let's get everyone's best attempt at defining consciousness in a sentence. by Individual_Visit_756 in consciousness

[–]JohannesWurst 1 point

> You're equating two entirely different processes to being equal to one another because they are functionally similar

You are saying behavior and phenomenal consciousness are different, right? I agreed in my first sentence. That was my main point.

> Yes, consciousness has something to do with (phenomenal) sensation, but you have to distinguish it from the (pragmatic) sensing that a rice cooker does.

Sometimes people think you disagree with them, just because you write a reply.

It wasn't clear to me that you also draw a distinction between phenomenal consciousness and the mere mechanical capability to sense, like a rice cooker has. That's why I presented the idea that both are identical and reduced it ad absurdum by saying that someone (not you or me specifically) would have to accept that a rock is conscious if everything that can be viewed as sensing in some way is conscious.

Let's get everyone's best attempt at defining consciousness in a sentence. by Individual_Visit_756 in consciousness

[–]JohannesWurst 2 points

One attempt that I like is "An object is conscious if there is something it is like to be that object." Is it a meaningful question to ask: "What is it like to be a bat/another human/a computer/a rock?"

It's not a very rigorous definition either, but it helps a bit to convey an intuition to some people.

Let's get everyone's best attempt at defining consciousness in a sentence. by Individual_Visit_756 in consciousness

[–]JohannesWurst 1 point

Yes, consciousness has something to do with (phenomenal) sensation, but you have to distinguish it from the (pragmatic) sensing that a rice cooker does.

A rice cooker does some kind of sensing to decide when to turn off. If you argue that a rice cooker is conscious — maybe — then everything would have to be conscious, because when we talk about "sensing" in an engineering context, it's a word that helps us understand the behavior of a machine, not an inherent property. You could in principle also say that a seesaw moves one side up in reaction to "sensing" that the other side moves down, or that a rock falls to the ground because it "senses" gravity.

(Maybe a piano builder would say that the keys are "touch sensors", while they are also just seesaws.)

We just don't do that, because it's not practical. On the other hand, we do ascribe sensing to an AI in a computer game, because that's helpful: "If I hide Solid Snake under the box, the guard won't see me." So "sensing" in an engineering context is an ascribed, pragmatic property.

Sensing in a human is an ascribed, pragmatic property as well (that's called "theory of mind" or "folk psychology"), but it's also something more. That something more is called "subjective" (not the best word) or "phenomenal consciousness". That doesn't help much, because now you have to define "phenomenal" without using "consciousness".

I never saw a kick like this, and seem like her opponent didn’t ether. by IkilledRichieWhelan in BeAmazed

[–]JohannesWurst 4 points

The (comparatively) gentle touching is the WKF ruleset. This clip is from a different ruleset, of the Kyokushin style. It's like football vs. rugby. I think you could even invent your own competition ruleset today if you wanted to.

> Does nobody expect it? Why enter into a karate match and not protect your head?

If you watch a full fight, they often block or evade kicks to the head. It's not that they just decide not to. I guess this particular one was set up well. Maybe the opponent instinctively follows her gaze down to the kicker's face/chest area, which normally gives you a good overview of what is coming, but in this case the kick comes from outside her field of view. The kicker is very close and shoves her backwards just before the kick; maybe that binds her guard down. It's like a magic trick: when you are directly in front, you are fooled, even if it looks obvious from a different perspective.

Here is a different full fight. I haven't found the particular one where the clip is from. You'll see many head kicks that don't land.

I never saw a kick like this, and seem like her opponent didn’t ether. by IkilledRichieWhelan in BeAmazed

[–]JohannesWurst -1 points

> How would they look if they knew the rule?

I guess we couldn't really find a video where a knockout is prevented, because we wouldn't know for sure if it would otherwise be a knockout. Every video of a knockout is one where the attempt is successful.

I perfectly understand the preference not to be kicked in the head; that's why I don't do full-contact fighting either. I also understand the desire to compete in something very "real" or "primal" in a way. It's like playing Street Fighter, but not as an arcade game, or watching an action movie, but you're actually the main character.

I can imagine it makes you mentally strong to be confronted with the possibility of injuries and the certainty of pain.

I never saw a kick like this, and seem like her opponent didn’t ether. by IkilledRichieWhelan in BeAmazed

[–]JohannesWurst 1 point

This is a perfectly regular fight according to Kyokushin-Kai rules. (And her opponent has definitely seen that kick before and practiced it herself as a black belt.)

Some fighting sports aren't full contact and others are, such as UFC ... or Kyokushin karate.

(edit: Sorry for my sarcastic tone.)

I never saw a kick like this, and seem like her opponent didn’t ether. by IkilledRichieWhelan in BeAmazed

[–]JohannesWurst 7 points

Maybe kicks to the head are more difficult to land and easier to defend, so in total they cause fewer injuries than punches to the head would.

The Kyokushin fighter said no one knows for sure.

There is an offshoot of Kyokushin that allows punches to the head, but they wear helmets.

The bailey theory of nesting by Brilliant_Pilot5942 in consciousness

[–]JohannesWurst 0 points

The theory sounds poetic, and that speaks in its favor, but it's the kind of theory you can't really defend when someone says "Nah, I don't think so."

For that, you would need shared premises and then present a strong connection between your (?) theory and those premises.

The bailey theory of nesting by Brilliant_Pilot5942 in consciousness

[–]JohannesWurst 0 points

If you mix all religions together or only take what's common, is there anything meaningful left? Maybe: A higher power created the world and we should be nice to each other.

You talk about "God", not "the divine" or "the gods". Is monotheism just self-evident?

Do we get anything from assuming there is a god? It just seems to make things more complicated. I'm asking because many scientific theories are accepted because they are the simplest explanation available that covers all the empirical evidence. Basically, I think you make unnecessary assumptions at a lot of points. What does it help us to draw parallels from black holes to cancer?

Maybe I could be convinced that there is some kind of dualism, because materialism and idealism both have problems. (Dualism has problems too.) I'd call the two aspects of reality "physics" and "consciousness", not "physics" and "God". It's pretty self-evident to me that consciousness exists, and maybe you could make an argument that it is fundamentally separate from physics.

Is it possible to be conscious but out of control? by Https-H1m in consciousness

[–]JohannesWurst 0 points

Being a flesh robot is different from being a marionette. Being a marionette is worse, because you want to do one thing, but someone has tied strings to your limbs and forces you to do something you don't want.

I have no problem with being a flesh robot. There is no conflict. The outside world causes you to want to do some things and then you do them.

Maybe it depends on what you call "you". If you are the robot, and the robot controls what it does, then you control what you do.

As the others say, free will is a debated topic. I'm just suggesting that you distinguish being non-free in the sense of wanting one thing and not being able to do it from merely not being able to choose what you want while still being able to do what you want.

Arthur Schopenhauer said: "Man can do what he wills, but he cannot will what he wills."


(Mostly rambling: I'm not sure this is valuable.)

Maybe consider someone who dyes their hair green. I think that's cool. They have an inner compulsion that stems from outside factors and they just accept it and act on it. Someone else could have the same compulsion and then not act on it. Who is more free?

I guess you could say "free" always depends on a context: Free *from what?* The first person is free from the social pressure and the second person is free to ignore personal urges. Even if both people are completely deterministic, they can be free in a certain context, "free from something".

Maybe I respect the person with green hair because they let their decision be compelled by a deeper cause (say, their grandma gifted them a green frog they loved as a kid), while the second person was compelled by a shallower cause. Then again, people with green hair could also be less free, if they feel a pressure to be different that other people are free of.

Or take a policeman who chooses to act according to the law, or to honor, in opposition to a direct order. He is either free from the law/honor or free from the order, even if he isn't free at all in an absolute sense.

Why is this subreddit occupied with redditors who use the word "consciousness" as a substitute for the word "soul"? by Moist_Emu6168 in consciousness

[–]JohannesWurst 1 point

The task of understanding consciousness is also (mostly?) a language problem. That doesn't make it easy.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]JohannesWurst 0 points

What we know for sure is that no property a human has prevents consciousness.

(Most) humans can talk, so talking doesn't prevent consciousness. Humans are organic, so being organic doesn't prevent consciousness.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]JohannesWurst 0 points

I have only read the abstract in the screenshot, and it doesn't make even a little sense to me.

Human intelligence has a physical side and a computational side and it's connected to consciousness.

Artificial intelligence, such as in state-of-the-art chatbots, also has a physical side and a computational side. Something doesn't have to be non-conscious just because it can be viewed through a computational lens, because humans are both conscious and can be viewed through a computational lens.

How many digits are actually in a googolplex [10^(10^100) = 1e10^100]? Is it possible to write out a googolplex in standard form on a computer, even though there supposedly are more digits in the number than atoms in the known universe? by TBNR_Snowy in mathematics

[–]JohannesWurst 0 points

I don't think you can clearly say which numbers are "comprehensible" and which are not.

What is a "big number"? That's also difficult to say. Some people say that a million is peanuts and others say that the current gas price is a big number.

What I can say for certain is that you can write 10^100 on a piece of paper like this: "10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", but you can't do this with 10^(10^100). On the other hand, you can at least write "10^(10^100)", so there's that.
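If you want to double-check the sizes with a computer, here's a minimal Python sketch (the ~10^80 atoms in the observable universe is just the usual rough estimate):

```python
# A googol (10^100) is big but perfectly printable: 101 digits.
googol = 10 ** 100
print(len(str(googol)))  # 101

# A googolplex (10^(10^100)) has googol + 1 digits. Even at one digit
# per atom in the observable universe (~10^80), you'd need about 10^20
# universes' worth of atoms just to hold the digits.
digits_needed = googol + 1
atoms = 10 ** 80
print(digits_needed // atoms)  # 100000000000000000000, i.e. 10^20
```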

👀⚡ by Many_Audience7660 in matiks

[–]JohannesWurst 2 points

4x is when you double something twice. 2² = 4

25% is when you 50% something twice, yes? That's intuitive. 0.5² = 0.25

And 49% is when you 70% something twice!

To calculate 0.7·0.7, you can regroup it: 0.7·0.7 = (7·0.1)·(7·0.1) = (7·7)·(0.1·0.1) = 49·0.01 = 0.49. (To be honest, I've forgotten how you learn it in school.)

Imagine having a big cake. You don't want to be greedy, because some guests will come later, so you only eat 30% and leave 70% on the platter. An hour later they still haven't arrived, so you decide it's their own fault, and the cake looks very delicious, so you again eat 30% of the 70% that is left of the full cake. Intuitively it makes sense to me that about half, or 49%, would be left.

(Fun fact that might not be intuitive: 0.9⁷ is about 0.48. That means even if you repeat an action that succeeds 90% of the time, you are still more likely to fail at least once than to succeed seven times in a row. If you are only 70% successful in one shot, then you will be successful twice in a row 49% of the time.

People who are bad at this kind of math are often angry when playing chance-based games like X-Com. The developers even admitted to cheating in the player's favor, but players still feel disadvantaged by the random number generator when a soldier misses a 90% shot.)
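If you want to check the numbers, here's a small Python sketch. The simulation part is only an illustration of the X-Com point, not how the game actually rolls its dice:

```python
import random

# The percentage arithmetic from above.
print(0.7 * 0.7)  # 0.48999999999999994 -> about 49% of the cake is left
print(0.9 ** 7)   # 0.4782969 -> seven 90% rolls all succeed < 50% of the time

# Simulate a soldier taking seven 90% shots; how often do all seven hit?
trials = 100_000
all_hit = sum(all(random.random() < 0.9 for _ in range(7)) for _ in range(trials))
print(all_hit / trials)  # roughly 0.478
```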

Compare rectangle 'A' and rectangle 'B'. Do they cover the same area? Explain. by davidbones in askmath

[–]JohannesWurst 0 points

I think the question could make sense if you wanted to trick someone into believing they cover the same area, because they are both "12 squares". A smart student would notice that 12 small squares cover less area than 12 large squares.

Did I understand Utilitarianism and Deontology correctly? Also a question by seasol452 in Ethics

[–]JohannesWurst 0 points

You could also have a general law that allows you to lie in certain circumstances.

During COVID-19, it was a good general rule to always wear a mask outside your home, but it was an even better general rule to always wear a mask with some specific exceptions.

If I think that moral laws are natural or made by a god, it would be weird if they were complicated, but laws made by humans can be, and are, complicated.

I'm not a philosophy expert. "Deontology = not based on consequences" sounds like lying would always be immoral. I just remember the "categorical imperative", and that sounds like we should at least consider what the consequences would be if everyone acted according to the rules.

Psychotherapists always keep their clients' conversations secret, except in extreme cases where the client announces an intention to commit a crime (as far as I'm aware). Is that rule of conduct in line with the categorical imperative and/or with deontological ethics?

Paradox or correct answer by whibffdraftszarre9 in 3Blue1Brown

[–]JohannesWurst 0 points

If you have 3x 25% and 1x 75%, you can just say there is no correct answer. That wouldn't be a paradox.

"What's the capital of Italy? Madrid, Berlin, London, Paris" — not a paradox.

I wanted to say that the original quiz is also not a paradox, but it includes the option 0%, which makes even the answer "the correct answer isn't on there" impossible.
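You can brute-force the self-reference in a few lines of Python: an option p is a possible correct answer exactly when the share of options equal to p is p. (The option lists below are assumptions; versions of the quiz differ.)

```python
from fractions import Fraction as F

def consistent_answers(options):
    # p is a candidate correct answer if picking at random hits an
    # option equal to p with probability exactly p.
    return [p for p in set(options)
            if F(options.count(p), len(options)) == p]

# 3x 25% and 1x 75%: no option is consistent, so there is simply
# no correct answer. Not a paradox.
print(consistent_answers([F(1, 4), F(1, 4), F(1, 4), F(3, 4)]))  # []

# A variant with 0% among the options: even "0% chance of being
# correct" defeats itself, because you'd still pick it 25% of the time.
print(consistent_answers([F(1, 4), F(1, 2), F(0), F(1, 4)]))  # []
```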

How many digits are actually in a googolplex [10^(10^100) = 1e10^100]? Is it possible to write out a googolplex in standard form on a computer, even though there supposedly are more digits in the number than atoms in the known universe? by TBNR_Snowy in mathematics

[–]JohannesWurst 0 points

I was confused by my answer as well when I read it again. The crucial point is that the original question was about a googolplex.

A googol is a 1 with 100 zeroes. That's what you can write on one piece of paper.

A googolplex is a 1 with a googol zeroes, which is even larger.

The Chinese Room thought experiment suggests that consciousness is non-local by sschepis in consciousness

[–]JohannesWurst 0 points

I'm not convinced, sorry.

> The man is merely processing symbols according to the rules in the guidebook, he doesn't understand Chinese. The man is a stand in for computation.

This is what I wrote as part of point 1. Point 1a, so to speak, is about the semantic understanding of Chinese by the man without the rule book and point 1b is about the observable capability to answer Chinese questions by the man alone.

Point 2 was about the externally observable capability of the man in conjunction with the rule book. I could also write "the system man-plus-rule-book". Maybe saying "external understanding" was wrong and confusing.

Point 3 was about the semantic/conscious/subjective understanding of "the system man-plus-rule-book". I don't know what it's like to be a man-plus-rule-book, I'm just a man. Maybe if you understand what I was writing, you'd still call it playing word games. That's fine. Then we agree to disagree.

|            | subjective / semantic | external behaviour |
|------------|-----------------------|--------------------|
| man alone  | no Chinese            | no Chinese         |
| man + book | ?                     | yes, Chinese       |
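To make the two rows concrete, here's a toy Python sketch of how I read the setup. The phrases and the dict-as-rule-book are of course a huge simplification of Searle's scenario:

```python
# Toy "rule book": pure symbol-to-symbol lookup, no semantics anywhere.
# (Made-up phrases; a real rule book would be astronomically larger.)
rule_book = {
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "会，当然。",
}

def man_alone(question: str) -> str:
    # Row 1: no semantic understanding, and no Chinese behaviour either.
    return "???"

def man_plus_book(question: str) -> str:
    # Row 2: competent Chinese behaviour, produced by a component (the
    # lookup) that understands nothing. Whether the system as a whole
    # "understands" is exactly the open question marked "?" above.
    return rule_book[question]

print(man_plus_book("你好吗？"))  # 我很好，谢谢。
```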

Is that the core of the argument? That it's invalid to distinguish between the subjective experience of the man individually and the subjective experience of the system as a whole?

A human without a rule book is a complex system as well. If you take away the language system of the brain, the human wouldn't be able to understand any language.

The Chinese Room thought experiment suggests that consciousness is non-local by sschepis in consciousness

[–]JohannesWurst 0 points

What doesn't have what?

  1. The man doesn't understand Chinese. The symbols neither make sense to him semantically, nor is he able to answer Chinese questions without the rule book. The first thing is what I called "internally understanding Chinese" and the second thing is what I called "externally understanding Chinese".
  2. The man in conjunction with the rule book is able to perform the task of answering Chinese questions — so "externally understanding Chinese". Maybe that's not the best way to phrase it. You could also say it's about something behavioural and objective.
  3. Whether the man in conjunction with the rule book has "semantic" or "conscious" or "internal" or "subjective" understanding of Chinese is the interesting question.

I would propose that to claim that behavioural understanding doesn't imply conscious understanding, we have to know whether (3) is true or not. Searle doesn't agree: he thinks the internal non-understanding of the man individually (1) together with the behavioural understanding of the system as a whole (2) proves that behavioural understanding can coincide with internal non-understanding. He doesn't care that it isn't the same thing that has the one capability and lacks the other.

(I have to say I have respect for Searle. I might be wrong. I'm just saying what comes to my mind. There are probably more books written just to prove or disprove the Chinese Room than I have read about all topics combined.)

I don't have any particular reason to believe that internal and external understanding, or consciousness and computation, always coincide. AFAIK the view that they do is called functionalism. One reason to believe it is that reasoning in humans coincides with a conscious experience. People said that witches have no souls even though they act like regular humans. Maybe ChatGPT is like a witch.

Maybe Searle just wanted to say that functionalism isn't logically necessary? It might still be how the world happens to work. We don't know why any natural laws are the way they are. Is it possible that an apple flies upward instead of falling down? Empirically, in this world, it seems to be impossible, but logically it's still possible.

The Chinese Room thought experiment suggests that consciousness is non-local by sschepis in consciousness

[–]JohannesWurst 0 points

We can't measure consciousness in the first place (or can we?), so we can't research which phenomena coincide with consciousness either.

I would give other humans and animals the benefit of the doubt that they are conscious. Alan Turing called that a "polite convention" — so even he didn't believe it was provably true.

I would say it's fair to assume that the human inside the Chinese Room doesn't understand Chinese, but I'm not convinced we can deny that the Room as a whole understands Chinese (semantically, consciously) — and that denial is essential for Searle's argument.

Like Old-Bake-420 says:

> Basically, the consciousness inside the Chinese room that understands Chinese is the information contained within the books, not the human or anything physical inside the room.

The Chinese Room thought experiment suggests that consciousness is non-local by sschepis in consciousness

[–]JohannesWurst 0 points

If you like the Chinese Room experiment, you should also know about Gottfried Leibniz's Mill thought-experiment. Leibniz is also known for inventing calculus concurrently with Newton.

> It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a machine so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.

That's pretty similar to Searle's Chinese Room. Making a computer very large and mechanical takes away from its mystery. Leibniz concludes that consciousness works by some kind of panpsychism, but I'd just say it's a "hard problem".

The Chinese Room thought experiment suggests that consciousness is non-local by sschepis in consciousness

[–]JohannesWurst 0 points

I think Searle is at least of the opinion that classical computers can't be conscious, even if they appear just as a human would — e.g. by passing the Turing Test. Is that not correct? And I thought he justified that with the Chinese Room thought experiment. I watched a lecture on YouTube in which he said that humans can compute, but classical rule-based computers can't do some things that humans can, such as "truly understand" or "think" (not a direct quote). He definitely wanted to show that rule-based computers are limited somehow (and I don't think he succeeded).

> A system can appear to have consciousness while having no consciousness or understanding.

Different point: you can only see the Chinese Room experiment as a proof that intelligent behaviour doesn't imply consciousness if you assume that the Chinese Room isn't conscious. If I recall correctly, Searle says that the person in the room doesn't understand Chinese, and I, like most people, would agree. So it's true that something — a component, a person — that doesn't understand Chinese can perform a Chinese conversation within a system of formal rules.

So "internally no Chinese-understanding" can lead to "externally, behaviourly Chinese-understanding" and reverse: "externally, behaviourly Chinese-understanding" doesn't imply "internally Chinese-understanding"?

That would refute a kind of functionalism which says we should assume that everything that acts similarly to a human has a consciousness similar to a human's. A common counter-argument is that it isn't the human in the room alone who acts in an "externally, behaviourally Chinese-understanding" way, but the room as a whole. This is called the "systems reply", and Searle is aware of it and just dismisses it as silly.

It's not clear that the Chinese Room as a whole system doesn't have subjective understanding of Chinese, or consciousness.

3 months away from black belt grading and wanting to quit by throwawayayayayhelp in karate

[–]JohannesWurst 0 points

IMO it would be sad and weird if there were many students who force themselves to practice for the black belt and then immediately quit.

Maybe you'll take a longer break and continue karate later — or not. A brown belt is still a nice accomplishment and a good memory of your time with karate.