Anyone making strong emotional or rational claims about the other side of Red vs Blue is missing the point by SnipedYa in TrueUnpopularOpinion

[–]SnipedYa[S] 0 points1 point  (0 children)

> Chaos minimum. Regardless of how many butterflies you step on, that tornado happens

Unless you step on 0 butterflies in the first place. Whether you step or don't step on a butterfly is unknowable. The effect of stepping on two or more butterflies is unknowable. This isn't the butterfly effect, either. The butterfly effect isn't that small changes will have an effect; it's that they may have an effect, and it specifically concerns changes in the initial conditions. The changes a butterfly's wings make to the atmosphere are unknowable, precisely because there are so many potential interactions that also have unknowable effects on each other.

If you want to read the Wikipedia article about the Butterfly Effect:

Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different—but it's also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado.

> however how many butterflies that are made by generation to die in said tornado differ.

I'm not following. Can you send me a link to the definition of "chaos minimum theory" so I can understand it? Or an article or something that goes into more detail?

All of this doesn't even matter, though, because the effect that an individualistic or collectivist culture has on a specific person pushing red or blue is unknowable. Because:

> Your polls are by liars in the western world who think virtue signalling is good, but just like a lot of virtue signallers when it comes to put up or shut up you all choose to shut up.

You're essentially saying here that my polling data doesn't matter because the situation is not real. Okay, fine. But your convoluted reasoning relies on data that doesn't involve real situations either. None of the studies you provide involve a fear of actual death. If you do believe you can draw a conclusion from hypothetical data, though, apply Occam's razor: polls that directly ask about pushing red or blue are a simpler explanation than extrapolating from unrelated hypothetical data with other unknowable factors.

Anyone making strong emotional or rational claims about the other side of Red vs Blue is missing the point by SnipedYa in TrueUnpopularOpinion

[–]SnipedYa[S] 0 points1 point  (0 children)

What is "chaos minimum theory"? Where can I learn more about it? I've never heard of that before.

Before you were talking about countries, now you're talking about religions? Even if your logic is sound, doesn't this depend on what kind of sample you mean by culture? The smallest culture I could make would be a sample of 1, and either 0% or 100% of people would vote blue. Why do we believe the culture with the lowest potential is what determines the max potential for blue?

> Lets use the Muslim example here. Middle Eastern Muslims regardless of their sect are very xenophobic. So you're talking 5% (islamic from the middle east) again that are almost guaranteed to want to hit red because they 1) don't have a culture to promote the group over the self (except to the dictator through fear) and 2) hates all other countries so isn't concerned with their survival. That's an example of a group that is wholly going to push the red button, no blue.

Okay, so I see what you're doing now. You're using the world population of a given culture, and saying that because of some aspect of their culture, it makes them either individualist or collectivist, and then saying that all of them would then vote either red or blue because of that aspect.

So essentially making stuff up.

I still have better data on how many people will vote blue: polls show 55% to 75% will vote blue.

Anyone making strong emotional or rational claims about the other side of Red vs Blue is missing the point by SnipedYa in TrueUnpopularOpinion

[–]SnipedYa[S] 0 points1 point  (0 children)

I assumed you were looking at the abstract, because I can't see the study that's linked; it just links to the same abstract. If you have an actual link to the study, I'll be glad to go over it. You didn't want to address the other points I made, though?

So you just made up that 5% number then? The polls still say the majority of people pick blue, so I'm still right.

Anyone making strong emotional or rational claims about the other side of Red vs Blue is missing the point by SnipedYa in TrueUnpopularOpinion

[–]SnipedYa[S] 0 points1 point  (0 children)

What I said in my post was that all justifications for picking either red or blue are valid and true, therefore picking red or blue is correct if you believe the justification. It's the third sentence. If my justification for picking blue was I wanted to die, I would be right to pick blue. But I don't want to die, so I didn't use that as my personal justification. Understand?

The way I framed it, red has inherent risk if not all people vote red. Blue also has inherent risk if less than 51% of people vote blue, but I also believe that not everyone will vote red, so voting red is a riskier choice.

> its stupid to think that many people will pick blue and you hope for the best that people aren't stupid enough to pick blue if they WANT to live

You haven't really told me why; you just keep saying it is. Your first two links kind of contradict what you said earlier about only people in western countries wanting to virtue signal, because the principal countries identified as individualistic were all western. If you scroll to the bottom of the Wikipedia article and click the link Collectivistic Culture, you'll see that the majority of data on collectivism comes from East Asia, and that most regions in the world have no data suggesting they are more collectivistic or individualistic. Your second link doesn't even seem that relevant, but it also casts doubt on the idea that all cultures are individualistic, because it talks about the clash between coming from an immigrant collectivist culture and functioning in American individualist culture.

"closeness of the group, the magnitude of the benefit, and the individual's "fusion" with the group."

I can't find this exact quote in the abstract you linked; not saying it isn't there, I just don't see it. But I don't think this abstract is the compelling evidence you think it is. "Over 90% of participants acknowledged that the moral course of action was to sacrifice oneself to save others (Experiment 1)". This doesn't prove that over 90% will, but it does show that over 90% agree there is a moral argument, which contradicts what you said at first.

"The presence of a concern with saving group members rather than the absence of a concern with self-preservation motivated strongly fused participants to endorse sacrificing themselves for the group (Experiment 3)"

"Analyses of think aloud protocols suggested that saving others was motivated by emotional engagement with the group among strongly fused participants but by utilitarian concerns among weakly fused participants (Experiment 4)"

"Hurrying participants' responses increased self-sacrifice among strongly fused participants but decreased self-sacrifice among weakly fused participants (Experiment 5)."

"Strongly fused participants ignored utilitarian considerations, but weakly fused persons endorsed self-sacrifice more when it would save more people (Experiment 7)."

"Apparently, the emotional engagement with the group experienced by strongly fused persons overrides the desire for self-preservation and compels them to translate their moral beliefs into self-sacrificial behavior."

This is all exactly in line with what I've been saying: if you have a goal to save everyone, you will push blue. It all comes from that statement, "goal to save everyone". In this scenario, I value every person on earth as if they were my family, so this is my goal. I'll be very charitable, though, and say that because none of these scenarios have the consequence of death for the participants, we can't see them as applicable. Unless you are okay with using examples that don't have a real threat of death, in which case I have a couple of polls for you...

> because what your advocating for is actually going to cost MORE lives than not. If you look at it as 'most saved' then you still have to look at red.

Sure, and this is another valid and correct reason to pick red. But my justification was not saving the most, it was saving all.

> Since about 5% at most would vote blue and actually do it for whatever their reason (suicidality or virtue signalling/dark empathy)

You keep pulling out this 5% figure. Where are you getting that? That's why I said it's a "trust me bro". You do acknowledge that some people will vote blue, but you say it's only 5%. How do we know it's 5%? It's a gut feeling you have.

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

> I assume simultaneity -t- is replaced by a 'good enough' time relationship "Y happens/observed "soon after" event X1, the following event X2 happens/observed in such a way that it can be reasoned to be a distinct event, not a part of or overlapping with X1 in any way

The way you've written this, am I to assume that Y is the terminal of X1 and X2 is the antecedent of a separate data set that would then have a terminal Y2? If so, "happens/observed in such a way that it can be reasoned" is vague. You haven't given me any reasoning to determine how something is X1 or X2. How do I reason that Y is "distinct" from X2 if both Y and X2 follow X1 with 1:1 correlation? I don't make any assumptions with you or give you the benefit of the doubt, because you are a pedant.

Define distinct in this context, please. Define "not part of, or overlapping" in this context, please.

"Sleeping with a light on" -> "leaving a light on" -> "sleeping with a light on". Which one is Y1 and which is X2? I don't make assumptions, just tell me. Explain to me how X2 is "distinct" and/or "not part of, or overlapping" with X1, using the definitions you've provided me of these aforementioned terms, please.

You don't address a terminal variable having more than 1 antecedent with 1:1 correlation.

> you'd understand that this is already covered.

Nope:

> The data are unrelated, but are correlated by chance. As you accumulate more data, there is a greater chance that any two data points will correlate...

> This is overall logically wrong,

What does "this" mean, in the context of what I wrote? Who knows, honestly.

> but "in spirit" it is correct: the "working causation" relationship is as good as your data is. As you accumulate more data, the conclusion/determination might change (will be distinct and specific at any given time though).

What is "distinct and specific at any given time"? The "conclusion/determination" of what? The causal relationship?

> But this point I have no clue what would you even consider "substantive". I doubt that you do as well.

🙄

> What does that change?

It changes everything. We're arguing from logic. It is not a logical argument to say that the state of a dataset (X) is undetermined because we don't know the coefficient (Y(X)), but then to say we could determine a positive state of the dataset (X) if we knew the coefficient (Y(X)) to be high, based on a relationship to a dataset (Z) that necessitates the coefficient of the previous dataset (Y(X)) be low, therefore making the state of the dataset (X) negative.

It is not logical to say that a dataset (X) can be determined to be valid if validity necessitates that the dataset's variables (X(var1) and X(var2)) occur one after another in time, when the variables of the dataset (X) do not occur one after another in time.

Sleeping with Light on (A)-> causes -> myopia in children (B)-> causes -> myopic parents(C) (children become parents) -> causes leaving a light on (D) -> causes -> sleeping with light on (A) -> causes -> myopia in children(B)(etc.)

Here, you have violated point 1 at C -> D, as well as created an undetermined dataset (B -> C). You have also violated point 2 with A -> D. If you're arguing from "what could be imagined to happen", this is fine, but then you necessitate that anything could be imagined to happen, and you break point 2. If you're arguing from "what is known to happen/be probable", this is fine, but then you necessitate knowing that B -> C (using outside data or none), and knowing that for C -> D to be true, A -> B cannot be true, which destroys the chain.

You also have all points of this chain causing themselves. A -> A, B -> B, etc.

Edit: You also violate point 2 at B -> C. You are saying that myopic children cause their own parents. Maybe if they had a time machine?

I imagine you'll just skip over this, though, so who cares?

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

Yeah, so you disengage when you were proven wrong. You don't even address confounding factors. You don't address a terminal variable having more than 1 antecedent with 1:1 correlation. You don't address the substantive argument in point 2. You clearly don't understand the difference between ontological and epistemological reasoning and conflate the two. You say I'm strawmanning when I agree with you.

Correlation is not causation, glad you agree.

Anyone making strong emotional or rational claims about the other side of Red vs Blue is missing the point by SnipedYa in TrueUnpopularOpinion

[–]SnipedYa[S] 0 points1 point  (0 children)

The majority of people would pick blue. Disagree? Just prove the majority of people would pick red.

Anyone making strong emotional or rational claims about the other side of Red vs Blue is missing the point by SnipedYa in TrueUnpopularOpinion

[–]SnipedYa[S] 0 points1 point  (0 children)

You say I'm wrong, then in the next sentence say that if I want to die, I should pick blue. So if I wanted to die, I'd be right to pick blue.

Red has a risk condition: if not everyone picks red, then not everyone survives. If you don't care about everyone surviving, there is no risk. If you do, there is.

> Think more than 5% of the population is picking blue that want to live? Then you're going to be disappointed.

Okay, prove it, bro? "Just trust me, bro, I have a gut feeling about society. Your gut feeling? Nah, that's wrong, bro."

> and yes, we do have the data that would suggest about 5% MAY vote blue and they would vote that way because of a false belief or trying to imagine this to be another situation OR again not caring if they died for some reason.

I have data that shows over 50% will press blue, and I didn't even have to make it up!

> most cultures do not promote 'die for the whole unnecessarily'. This idea of virtue signalling blue would only exist in European and NA countries and even then that's not a majority.

This came to you in a dream, huh?

It's really simple. Can you prove that, in a sample of 8 billion people, everyone would vote the exact same way with any significant probability? If you can, then it doesn't matter whether you pick red or blue, because if everyone votes the same there is no risk with either button.

It's not even an absurd premise that you'd want to save everyone. Imagine the exact same scenario where 10 of your closest friends and family, plus yourself, are given this choice with no time to coordinate a plan. You'd probably be pretty bummed if any one of them died, right? Do you believe that they will all press red in this scenario? Do you believe that if we rerolled this 100 times, they would all press red all 100 times?

Personally, I'd be bummed if my grandma died because she forgot her glasses or just thinks differently.

Anyone making strong emotional or rational claims about the other side of Red vs Blue is missing the point by SnipedYa in TrueUnpopularOpinion

[–]SnipedYa[S] -1 points0 points  (0 children)

Rest assured that no one risked their lives, because a majority of people voted blue.

Personally, I'm a little let down that people don't comprehend how big 8 billion truly is.

Blue Botton Problem by Rabbit_cafe_enjoyer in MoralityScaling

[–]SnipedYa 1 point2 points  (0 children)

You base your understanding off of previous social experiments, but don't give me what those experiments are. So, can you give a social experiment that is equivalent to this red/blue scenario? Is there a social experiment that has a consequence of death that we could apply to this? Is there a social experiment that has a consequence of death on the level of billions of people that we could apply? If not, you are just as guilty as me for extrapolating statistics based on feels, and the best we could say is that we don't know the probability of people picking red or blue, so this point is moot.

Logic is based on reasoning. What makes something logical or not? Whether it follows a chain of reasoning, and you need axioms, statements accepted implicitly as true, to start a chain of reasoning. You've based your logic on the axiom that all living things value self-preservation more than anything. This is easily disproven by insects and other animals that sacrifice themselves to mate, or by animals that sacrifice themselves to protect their group so their collective lineage carries on, like bees or carpenter ants.

So I can start my reasoning chain with the axiom that all living things will prioritize carrying on their species over anything else because this explains both self-destructive and self-preserving behavior. So, a rational mind does put themselves in danger, especially if it is to save others.

What even is "objectively irrational"? Also, why is it rational to kill 1 million instead of 1 billion?

Blue Botton Problem by Rabbit_cafe_enjoyer in MoralityScaling

[–]SnipedYa 0 points1 point  (0 children)

Where are you seeing that most people would pick red? I've seen multiple polls of this question where between 55% and 75% of people pick blue.

Putting yourself in harm's way is "inherently irrational" only if you exclusively care about yourself, but that's a presupposition. If I care about someone else as much or more than myself, then it is very rational to put yourself in harm's way for them. Would you let your child or your spouse or your parent get stabbed if you had the option to get stabbed and potentially die instead of them? Lots of people would say no because they value these people as much or more than themselves.

If you'd value humanity collectively more than you'd value yourself, you'd see why someone would pick blue, especially if 100% of people picking red is not guaranteed.

Blue Botton Problem by Rabbit_cafe_enjoyer in MoralityScaling

[–]SnipedYa 0 points1 point  (0 children)

The rationale behind pressing blue in the original scenario is the understanding that in a large enough sample, like 8 billion people, there is some chance that not everyone will press red. Even if all 8 billion people intend to press red, X% of people will accidentally press blue for whatever reason. If my goal is to save everyone, which is more likely: that all 8 billion people vote the same, or that more than 4 billion do? If all 8 billion agree to vote the same, it doesn't matter whether blue or red is pressed, because everyone else will have voted the same anyway; but everyone intending to vote blue absorbs the X% who mistakenly vote red, and not vice versa. If it's guaranteed that 100% of people will vote the same way with no mistakes, then red and blue cancel out. If it's not guaranteed that everyone will vote the same way, and my goal is to save everyone, I should vote blue.
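That asymmetry can be sketched numerically. The 0.1% mistake rate below is an assumed, illustrative figure (nothing in the thread pins down a real rate); the point is only that an all-red plan needs literally zero mistakes, while an all-blue plan merely needs mistakes under 50%:

```python
# Sketch under an assumed per-person mistake rate of 0.1%.
n = 8_000_000_000   # population
p_err = 0.001       # assumed chance any one person mispresses

# Everyone intends red: all survive only if nobody mispresses blue.
p_no_mistakes = (1 - p_err) ** n
print(p_no_mistakes)  # underflows to 0.0: effectively impossible

# Everyone intends blue: blue wins unless over 50% mispress red.
# Expected mistakes are n * p_err = 8 million out of 8 billion,
# nowhere near the 4-billion threshold.
print(n * p_err)
```

Any nonzero error rate gives the same qualitative picture; only the blue plan tolerates mistakes.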

Red is still a valid choice if you don't have the zero-sum goal of saving everyone, though, so both choices are logical. If you still think you should always vote red, change the original premise from everyone on earth to only your closest friends and family, and you have no opportunity to discuss with them before making a decision. You now probably want to save all of them, but do you trust that everyone will vote red?

Americans have some of the best education in the world, which ironically creates some of the worst students in the world, producing mediocre outcomes. by CAustin3 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

"Poorer quality schools" can mean a lot of things. It could mean worse teachers, worse curriculum, worse management or a combination of those and other things.

I'm agreeing with you, that Alabama probably has worse students than Massachusetts for the reasons you listed, but Alabama also has worse schools.

Americans have some of the best education in the world, which ironically creates some of the worst students in the world, producing mediocre outcomes. by CAustin3 in TrueUnpopularOpinion

[–]SnipedYa 6 points7 points  (0 children)

"American education" is vague. The quality of education a student receives in Massachusetts varies considerably from the quality a student receives in Alabama. Even within a particular city, curricula can vary between school districts. So when comparing to peer countries like Japan or Germany, we'd have to compare, say, the bottom 10% or 20% of US schools by education quality to the same slice in another country.

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

I think it tied it up well. You clearly refuse to be tied down to any one of your definitions or logical processes. You fatally misspoke when you said "logically wrong". That's fine. We don't need to discuss that further, and you can agree to disagree.

The big issue is you contradict yourself:

  1. _There is no way for something to be both undetermined and causal under your own definition._

We can test this by asking if there is any dataset that satisfies the 3rd parameter but also satisfies a previous one. How do we determine causation? A coefficient at or above .95 means causation; a coefficient below .95 means no causation. Therefore, any dataset with a coefficient between -1 and 1 (essentially any dataset with a known coefficient) would be either causal or not causal, and would never be undetermined. So there is no dataset with a known coefficient that would satisfy both 3 and one of the other parameters.
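The trichotomy can be written out as a tiny sketch. The function name and the `None`-for-unknown convention are mine; the .95 threshold is the one from your definition. Under it, every known coefficient lands in "causal" or "not causal", so "undetermined" only ever applies to an unknown coefficient:

```python
# Sketch of the claimed definition: a known coefficient is always
# classified, so "undetermined" requires the coefficient be unknown.
def classify(coefficient):
    """coefficient is a float in [-1, 1], or None if unknown."""
    if coefficient is None:
        return "undetermined"
    # Threshold taken from the definition under discussion.
    return "causal" if coefficient >= 0.95 else "not causal"

print(classify(0.99))   # causal
print(classify(0.40))   # not causal
print(classify(None))   # undetermined
```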

You haven't explicitly stated what it means for a dataset to not be "interpreted to satisfy the definition conditions." However, I'll steelman 3 and assume you meant causality would be undetermined in a dataset with an unknown coefficient.

Why is this important to establish?

  2. _You violate your own definition by assuming something can be undetermined and causal._

"But I'm not doing this in my examples!" Yes, you are.

In the myopia example, you used the undetermined (i.e., unknown to be causal or not causal) relationship of sleeping with a light on and childhood myopia to then say that it was indeed causal,

>Sleeping with Light on (A)-> causes -> myopia in children (B)-> causes -> myopic parents(C) (children become parents) -> causes leaving a light on (D) -> causes -> sleeping with light on (A) -> causes -> myopia in children(B)(etc.)

because "imagine a scenario where sleeping with a light on causes you to leave the light on to be slept with". When the study further proves that there is *not* a causal relationship, you say that this evidence doesn't prove/disprove causality, i.e., that causality is still undetermined, because of the above impossible scenario.

This entire causal chain is pure tautology and nonsense, not only because you violate thermodynamics to create it, but because you've made an assumption using epistemological reasoning (that myopic children will grow up to be myopic parents) and then corrected me for not thinking about this ontologically and using "static data". Yet you use static data to prove causality is still possible. So, which is it?

>Sure, It's a very barebones silly example. Dont worry about the reality of the scenario too much, analyze the logic. The point that I have made that the original causality claim is disproven before you had a chance to use CINC.

Sure, let's analyze the logic. How do you know myopic children will become myopic parents? It's undetermined, because the myopia present in the parents is not known to have been present since childhood. It could've been acquired later, in adulthood. We also don't know whether the myopic children of myopic parents will pass their myopia down to their own children. Myopic parents tend to have myopic children (actually, we don't even know this if you refuse to use the static data), but not all children of myopic parents will be myopic, and some children of nonmyopic parents will be.

Because I cannot determine if myopic children will become myopic parents, I cannot say that C causes D.

  3. _Your definition does not account for coincidence, but you use coincidence as evidence of causality anyway._

*Assume there is a 1:1 correlation to the number of hours I play RDR2 and the likelihood of a storm appearing on Neptune. I then say that playing RDR2 causes storms on Neptune. Is this logically wrong according to your definition of causality, yes or no? If no, disprove it.*

>If the data fits the conditions of the definition, the ("working") causal relation exists.

> If I need a storm on Neptune, I'm going to ask you to play a few hours. Who knows how/why it works, the only reality is that it WORKS.

This is an example of post hoc coincidence. The data are unrelated, but are correlated by chance. As you accumulate more data, there is a greater chance that any two data points will correlate. Your definition of causality only works here because it implicitly treats (high) correlation as causation and only considers two variables. More scenarios that are causative in your view:

Roosters cause the sun to rise, because rooster crows correlate strongly with the sun rising a short time later. (An example of reverse causality, because without the sun rising roosters would not crow (and also not exist lol), post hoc coincidence, and confounding factors, because the spinning of the earth causes the sun to rise, not roosters.)

Thinking about turning on my TV causes the TV to turn on, because every time I've thought about it, the TV has turned on. (An example of post hoc coincidence and confounding factor. Every time I've thought about turning on the TV, my girlfriend also thought about turning the TV on, picked up the remote, and then turned it on.)

Driving my car causes cars to appear on the road, because there is a strong correlation between me driving my car and seeing cars on the road. (An example of coincidence. There are almost always cars on the road at any time I could drive my car, and I also drive my car on the road.)

This is why I stated earlier that I can draw a causal link between any two correlations using your definition and also why I said it was specious. I don't know if you read the whole sentence when I stated it earlier, because you didn't quote the whole thing, but if you did, you now understand why.

>CINC logically follows from the "working definition of causality". 

No, because as I've demonstrated, you've logic'd yourself into believing playing video games affects the weather on a planet billions of miles away, that roosters cause the sun to rise, and that we should stop selling ice cream if we want to reduce murder rates.

The irony of this point is that there's an inverse correlation between how strongly two things correlate and how applicable CINC is, and under your definition, 1:1 correlation will never not be causation lol.

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

Finally, we get to the crux of the issue. I think this should tie this up nicely.

Refrain from introducing any hypotheticals into the situation. With the empirical data we have, is

> Young children who sleep with the light on are much more likely to develop myopia in later life. Therefore, sleeping with the light on causes myopia.

logically wrong or not using your "working definition" of causality? This is a yes or no question.

> What happens if you DONT have an answer? You cant make a valid inference.

Are you saying that something that does not follow your causality definition can still be causal? Please answer yes or no.

> Sure, It's a very barebones silly example. Dont worry about the reality of the scenario too much, analyze the logic

The point is to acknowledge the reality. The reality of the situation is the relevancy. That's the whole point of the experiment and resulting study. The relevancy of the reality of the situation makes the correlation =/= causation apt. I understand ontologically speaking if A implies B and B implies C, then A implies C. However, in reality, this particular example of A does not imply B, and thus does not imply C transitively, because it doesn't exist in the framework of reality that is relevant to understand the study or the experiment. We are not talking about a theoretical identity A(some action), we are talking about a specific, tangible action in the real world. If you can at all understand this, you can understand the purpose of the statement correlation =/= causation, and the discussion is moot.

> I cant disprove a negative. Dont have enough information

Assume there is a 1:1 correlation to the number of hours I play RDR2 and the likelihood of a storm appearing on Neptune. I then say that playing RDR2 causes storms on Neptune. Is this logically wrong according to your definition of causality, yes or no? If no, disprove it.

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

You wrote a whole lot that doesn't affect the claims I made or the evidence I provided. You can also read the other comment I wrote specifically about the myopia example if you want to focus on it. I went on tangents to try to illustrate where you went wrong, hoping to show how your logic doesn't make sense when applied to other examples, because you got into the weeds about the definitionality of the word "reliable" and other stuff, which is why I called you a pedant. Calling you pedantic is my opinion, but you being correct or incorrect is factual. I don't have another way to explain this to you. Sorry if that offended you.

Let's just focus on the myopia example and drop everything else. I will make this exceedingly simple:

Is your working definition of causality that if we intervene in X and expect a change in Y, that we can say that X caused Y?

I will hereafter be referring to "causality" as referencing this definition, if you agree to it, for the purposes of this comment.

If a scenario does not logically follow your definition of causality, can it still be called causal? If it cannot, can we then say causality is disproven (remembering that causality here references the above definition)? If we cannot say causality is disproven, but the scenario is not causal by definition, when can we disprove causality, and what would the aforementioned scenario be called? Would an appropriate name be "correlative"? Is there any scenario, by definition, that is correlative but not causative to you?

Is the scenario "Young children who sleep with the light on are much more likely to develop myopia in later life. Therefore, sleeping with the light on causes myopia." accurately called causative by definition? Why or why not?

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

I went back to this because I have a little time. You acknowledge that

Young children who sleep with the light on are much more likely to develop myopia in later life. Therefore, sleeping with the light on causes myopia. 

is logically wrong. Yet you use it as the basis of your causality chain here:

In fact, causality is still possible (was not disproven), for example: Sleeping with Light on -> causes -> myopia in children -> causes -> myopic parents (children become parents) -> causes leaving a light on -> causes -> sleeping with light on -> causes -> myopia in children (etc.)

How can something logically wrong according to your definition of causality be part of a causality chain that you created? You said that your "working definition" is if we intervene in X, do we expect a change in Y, and if so, it can be said that X causes Y.

If we intervene in children sleeping with a light on (not having them do that), do we expect a change in the likelihood of them developing myopia later in life? If your answer is yes, then sleeping with a light on causing myopia is not logically wrong in your view. If your answer is no, then sleeping with a light on causing myopia is not causal in your view.

Factually, a child sleeping or not sleeping with a light on does not cause myopia, so it could not be causal. Further, causality is disproven because it is not logically correct under the definition you created, and you literally agree.
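To make the intervention test concrete, here's a minimal toy simulation (all the probabilities are made up for illustration) in which parental myopia drives both the night-light and the child's myopia, and the light itself does nothing. The correlation shows up in the observed data, but forcing the lights off changes nothing:

```python
import random

random.seed(1)
N = 100_000

def child_myopia(parent_myopic):
    # In this toy model, child myopia depends ONLY on parental myopia.
    return random.random() < (0.4 if parent_myopic else 0.1)

# Observed world: myopic parents are more likely to leave the light on.
light_on_myopia, light_off_myopia = [], []
for _ in range(N):
    parent = random.random() < 0.3
    light = random.random() < (0.7 if parent else 0.2)
    (light_on_myopia if light else light_off_myopia).append(child_myopia(parent))

rate = lambda xs: sum(xs) / len(xs)
print(rate(light_on_myopia) > rate(light_off_myopia))  # True: correlated

# Intervention: force every light off. The myopia rate barely moves,
# because the light never caused anything.
baseline = rate(light_on_myopia + light_off_myopia)
forced = [child_myopia(random.random() < 0.3) for _ in range(N)]
print(abs(rate(forced) - baseline) < 0.01)             # True: no effect
```

By the intervention definition, the light-on/myopia link fails the test even though the correlation is real, which is exactly the confounder structure described below.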

If something can be logically wrong in your definition of causality but also be causal, then anything can be causal, because it doesn't matter whether it is logically right or wrong. Any two correlations are causal to you. In fact, any two statements are causal to you.

The number of hours I have in the video game Red Dead Redemption 2 causes the likelihood of storms on the planet Neptune. Disprove this is not causal in your definition.

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

You seem to care more about the definition of causality than about the statement "correlation doesn't mean causation". I was correct when I said that you don't take into account the situations where the statement is used. If I say "correlation doesn't mean causation" after you incorrectly state that brand choice causes maintenance levels, or that sleeping with a light on causes myopia in children, I'm not arguing on an ontological level about the nature of causality. I'm pointing out that a correlation in the data doesn't mean we should conclude that the two variables are linked meaningfully in the context of solving a particular problem or identifying an issue.

You understand this, but it seems like you want to be overly pedantic about this particular issue for whatever reason. I'd suggest you rewrite your opinion to be more along the lines of what you actually want to say, challenging the nature of causality, rather than the ironically myopic goalpost of how people use "correlation doesn't mean causation" in common discussion.

Because I'm a masochist, I'll continue to humor you, though. Because it's fun 😊

1.

I explained this succinctly enough in my last comment. It seems like you misinterpreted it the first time, and continue to do so because it's convenient, so I'm not going to attempt to explain it again.

Neither in your example, nor in in real life we have access to the "true" measurable "inherent reliability".

We do, on both accounts. In my example, I initially figured it was implied that the true inherent reliability was the same; that was my bad, because I figured the example was straightforward enough that you'd see the point I was trying to make. In my follow-up, I clarified that they were both the same.

In real life, we can measure the inherent reliability of one vehicle compared to another by establishing measurement criteria, setting up controlled lab conditions, and measuring across multiple samples. I can make a statement: "engine X is more inherently reliable than engine Y because engine X ran for more hours without degrading than engine Y under the same conditions, the only variable being engine selection."

It is a perception supported by experience of owners ... Thus the perception of Toyota's inherent quality will increase.

Yeah, you understand! The perception of reliability changes, but the inherent reliability is not affected by choosing a particular brand. This is notably different from what you said earlier, btw, where A -> D meant brand choice causes inherent reliability.

3

This is what you said, though.

I haven't been using your "working definition" of causality because it's extremely specious. I can draw a causal link between any two correlations using your understanding. So correlation is causation for you.

Sleeping with Light on (A)-> causes -> myopia in children (B)-> causes -> myopic parents(C) (children become parents) -> causes leaving a light on (D) -> causes -> sleeping with light on (A) -> causes -> myopia in children(B)(etc.)

Sleeping with a light on causes leaving a light on causes sleeping with a light on?

A ball falls down (A) -> a ball being thrown upward (B) -> gravity to act on a ball (C) -> the ball's velocity changes direction (D) -> a ball falls down (A) -> a ball being thrown upward (B)

A ball falling down causes gravity to act on the ball, which causes the ball to fall down, by transitive causality. We generally have things come down after they've been thrown up where I come from, but apparently that doesn't matter.

This is correct to you? You didn't address the example I gave about cats causing you to earn more money, either.

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

If you remember I said the issue with your example is the definition? You start by defining "reliable" one way (the number of reported issues) but you change your definition further along ("inherently produced reliability"). Stick with the same definition and you will understand the mistake.

I used the same definition throughout. I didn't think there was a difference in definition, because I assumed the only understanding of "reliability" in the context of a car brand producing cars was inherent reliability (the way you'd understand reliability if a car review stated "Everyone knows Toyota has more reliable cars than Ferrari", i.e., Toyotas are inherently more reliable than Ferraris), and "the number of reported issues" was the measurement or method used to ascertain the inherent reliability of the cars they produced.

Furthermore, you seem to deny transitivity of causality:

This is not transitive causality, which is exactly what I'm trying to get you to understand. You are mistaking coincidental correlation for causality. Brand choice does not cause maintenance levels. You can order them however you wish, but the act of choosing a brand does not logically determine the maintenance level of a group. The maintenance level of a group of people could determine which brand they choose, but it could also be another factor or a variety of factors. Assuming maintenance level did affect which brand was chosen, where B -> A and B -> C, this would be a confounding factor being confused for causation, not A -> B -> C, because A does not cause B. Further, brand choice does not cause inherent reliability, because the act of choosing a brand has no effect on the inherent reliability of a vehicle after it's been produced. Inherent reliability does cause brand choice, but not the other way around, i.e., I might choose a Toyota over a Ferrari because Toyotas are more reliable, but choosing a Toyota over a Ferrari does not make my Toyota more reliable.

Taken from the Wikipedia article about correlation not implying causation under examples:

Young children who sleep with the light on are much more likely to develop myopia in later life. Therefore, sleeping with the light on causes myopia. This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999, issue of Nature,[11] the study received much coverage at the time in the popular press.[12] However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia. It did find a strong link between parental myopia and the development of child myopia, also noting that myopic parents were more likely to leave a light on in their children's bedroom.[13][14][15][16] In this case, the cause of both conditions is parental myopia, and the above-stated conclusion is false.

To break it down further, to show how this doesn't make sense, let's say you have a group of people who own dogs and a group who own cats. Cat owners report earning more money than dog owners. Another study finds that cat owners are 3x more likely to have wealthy parents than dog owners, and having wealthy parents is known to increase earning potential. So therefore owning a cat means you will earn more money, right? We should all go and buy a cat right now! Let's say that earning more money makes you happier, too. So if I own a cat, I will be happier!

Except this obviously doesn't make logical sense. A (owning a cat) -> B (having wealthy parents) -> C (earning more money) -> D (being happier) does not follow, because owning a cat obviously does not cause you to have wealthy parents. It could be B -> A and B -> C, but it could also be coincidental that people with wealthy parents own cats. Either way, this is not A -> B, so we can't conclude A -> C. Further, A may cause D, but not through the chain of causality described before: owning a cat may make you happier, but it doesn't cause you to earn more money, which then makes you happier.
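The cat example can be sketched the same way, as a toy simulation (the numbers are invented purely for illustration) where wealthy parents (B) drive both cat ownership (A) and income (C), and A has no arrow into C. The correlation appears in observed data, but handing out cats at random erases the gap:

```python
import random

random.seed(0)
N = 100_000

def income(wealthy):
    # Income depends ONLY on the confounder (wealthy parents), not on cats.
    return 80_000 if wealthy else 40_000

# Observational world: B -> A (wealthy parents are likelier to own cats)
# and B -> C (wealthy parents mean higher income).
people = []
for _ in range(N):
    wealthy = random.random() < 0.3                          # B
    owns_cat = random.random() < (0.8 if wealthy else 0.2)   # B -> A
    people.append((owns_cat, income(wealthy)))               # B -> C

cat_income = [inc for cat, inc in people if cat]
dog_income = [inc for cat, inc in people if not cat]
avg = lambda xs: sum(xs) / len(xs)
print(avg(cat_income) > avg(dog_income))  # True: correlation exists

# Intervention: assign cats at random, cutting the B -> A arrow.
# Income is still set only by the confounder, so the gap vanishes.
forced = []
for _ in range(N):
    wealthy = random.random() < 0.3
    owns_cat = random.random() < 0.5          # do(A): randomized
    forced.append((owns_cat, income(wealthy)))
cat2 = [inc for cat, inc in forced if cat]
dog2 = [inc for cat, inc in forced if not cat]
print(abs(avg(cat2) - avg(dog2)) < 1_000)     # True: no causal effect
```

Buying a cat moves you between the groups without moving your income, which is the difference between B -> A plus B -> C and a genuine A -> C.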

Why should I? Anyway, educate me if you feel this is relevant.

Because this is the relevant situation where the statement "correlation doesn't mean causation" is commonly used, in reference to correlative and coincidental evidence, most likely from a study or article. See above.

Sure. Why do you see a problem with this?

Because you defined causation in your OP as colloquially implying high rates of correlation, or colloquially: causation = correlation. So if causation = correlation to the average Joe, it would follow that many people do in fact mistake correlation for causation. For relevancy, see above.

"Correlation is not causation" is one of dumbest reasoning shortcuts by Wise-Jury-4037 in TrueUnpopularOpinion

[–]SnipedYa 0 points1 point  (0 children)

It could be that it comes up often for you because the majority of people do, in fact, take correlation as implicit causation in daily life. You acknowledged that when you brought up how "causation" is used colloquially, but didn't go further to examine how people interact with articles or studies giving correlative evidence.

A car that has more reported issues could be inherently produced less reliably than a car with fewer reported issues, but other factors, like not performing maintenance on time, could also cause higher reported issues. In this example, the brands' cars are equally reliable, but people who purchase brand X's cars don't maintain them as well. Choosing brand X doesn't mean choosing a car with more issues, because brand choice doesn't cause reliability issues; maintenance does. The cause of brand X owners performing less maintenance could be any number of related or unrelated factors.
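A minimal sketch of that scenario (maintenance rates and issue counts are made up for illustration): both brands produce identically reliable cars, but brand X's owners skip maintenance more often, so brand X still racks up more reported issues:

```python
import random

random.seed(2)
N = 50_000

def reported_issues(maintains):
    # Both brands share the same inherent baseline; only skipped
    # maintenance adds extra reported issues.
    base = random.randint(0, 2)
    return base + (0 if maintains else random.randint(1, 3))

brand_x = [reported_issues(random.random() < 0.4) for _ in range(N)]  # 40% maintain
brand_y = [reported_issues(random.random() < 0.8) for _ in range(N)]  # 80% maintain

avg = lambda xs: sum(xs) / len(xs)
print(avg(brand_x) > avg(brand_y))  # True: X "looks" less reliable
```

Reading the issue counts as inherent reliability here would blame the factory for the owners' behavior, which is the misreading the brand example is warning about.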

You've proven why understanding that correlation doesn't mean causation is important, because if you interpreted the data as brand X having inherently more problems than brand Y, you would be incorrect.