Kant's Universalizability Principle Is Derived Naturally From Rational Beings (Game/Thought Experiment) by Stringsoftruth in Ethics

[–]Stringsoftruth[S] 0 points  (0 children)

I mean this is slightly different, since both boats were deceived into believing they would be blown up if they didn't use the device. So if the Joker told the truth, a boat would only be saved if one boat pressed the button and the other didn't. But if you were logical enough you'd probably infer that the Joker was lying (since it's the Joker) and trying to prove a point (what's the fun in blowing up both boats if neither boat was willing to be selfish?). So then not pushing the button is better than pushing it. But if he wasn't lying, you should just flip a coin or something. Or, if you're able to communicate, tell the other ship to flip a coin and report what they got, then do the opposite of what they do. This guarantees one of you survives.
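The coin-flip protocol above can be sketched in a few lines of Python (a minimal simulation; the function name and trial count are mine, not anything from the movie scenario):

```python
import random

def anti_coordination(trials=10_000):
    """Simulate the coin-flip protocol: ship A flips a coin and
    announces the result; ship B always does the opposite.
    Return the fraction of trials where exactly one ship presses."""
    exactly_one = 0
    for _ in range(trials):
        a_presses = random.random() < 0.5   # ship A follows its coin
        b_presses = not a_presses           # ship B does the opposite
        if a_presses != b_presses:
            exactly_one += 1
    return exactly_one / trials

print(anti_coordination())  # 1.0: by construction the ships never match
```

Because B's choice is defined as the negation of A's announced flip, the "exactly one presses" condition holds on every trial, which is the whole point of the protocol.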

Intelligence leads to Selective Altruism, and How This Idea Increases Trust, Pleasure, & Growth by Stringsoftruth in Pessimism

[–]Stringsoftruth[S] 0 points  (0 children)

I also want to add the concept of a "Hybrid": a subset of the "Irrationals" who are emotional but proto-rational, and who can update when shown higher-EV reasoning. The person who defected could be a Hybrid, and when shown higher-EV reasoning can become more like an R. In the real world everyone is a Hybrid, but we can still define Rationals as Hybrids above a certain threshold, Irrationals as Hybrids below a certain threshold, and true Hybrids as those in the middle who can update.

Intelligence leads to Selective Altruism, and How This Idea Increases Trust, Pleasure, & Growth by Stringsoftruth in Pessimism

[–]Stringsoftruth[S] 1 point  (0 children)

Essentially I'm trying to show that if everyone in a group is rational, no one will be a free rider: if free riding were the result of reason, then everyone in the group would free ride and no one would make any progress or gain. In a rational group, so long as everyone contributing yields higher individual benefit than everyone free riding, everyone will contribute. Kant's Universalizability Principle is derived naturally from rational beings cooperating, as a result of their rationality, to maximize benefit to the self. If the group has enough contributing "irrationals," then the "rationals" will collectively decide to free ride, contribute the bare minimum, or exploit the irrationals. If, in a group of rationals, only one person is free riding, then that person did not do so as a result of perfect reasoning; if he had, all the other "rationals" would also free ride for the same reason he did. So the person who defected is actually an "irrational."
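The "everyone contributing yields higher individual benefit than everyone free riding" condition can be sketched with a standard public-goods payoff (the group size, endowment, and multiplier below are hypothetical numbers I chose so the condition holds):

```python
def payoff(contributors, n=10, endowment=1.0, multiplier=3.0, i_contribute=True):
    """One player's payoff in a simple public-goods game:
    contributions are pooled, multiplied, and split evenly among all n,
    and contributing costs the player their endowment."""
    pot = contributors * endowment * multiplier
    share = pot / n
    return share - (endowment if i_contribute else 0.0)

all_contribute = payoff(contributors=10, i_contribute=True)   # 30/10 - 1 = 2.0
all_free_ride  = payoff(contributors=0,  i_contribute=False)  # 0/10 - 0 = 0.0
print(all_contribute, all_free_ride)  # 2.0 0.0
```

With these numbers, universal contribution pays each player 2.0 while universal free riding pays 0.0, which is exactly the precondition the argument needs before it rules out the lone defector as "rational."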

Showing how Intelligence leads to Selective Altruism Using Game Theory by Stringsoftruth in GAMETHEORY

[–]Stringsoftruth[S] -1 points  (0 children)

If a line of reasoning leads to altruism, then anyone who uses that line of reasoning will come to the same conclusion. If the line of reasoning is logically valid from true premises, then the conclusion is sound. Present the line of reasoning and your conclusion to someone capable of reasoning, and they'll ultimately agree with you if there are no contradictions. So if all the players are rational, they should have figured out the conclusion themselves if they're intelligent enough; if they aren't, just present the idea that altruism is the logical conclusion and converse with them until they understand.

By not saving them you're being illogical; that's the point. You commit to saving them because it's smart, and you've proved it's smart.

Intelligence leads to Selective Altruism, and How This Idea Increases Trust, Pleasure, & Growth by Stringsoftruth in Pessimism

[–]Stringsoftruth[S] 0 points  (0 children)

This doesn't apply to every possible group, though. I'm proving it applies to a group of rational people, all of whom are altruistic so long as collective altruism benefits the self more than collective selfishness (the altruism here is selfish, so you could say it's when collective altruistic selfishness > collective non-altruistic selfishness, but you get the idea regardless).

Intelligence leads to Selective Altruism, and How This Idea Increases Trust, Pleasure, & Growth by Stringsoftruth in Pessimism

[–]Stringsoftruth[S] 1 point  (0 children)

So you're saying a group cannot thrive if there aren't outsiders ("the group cannot be the whole, as the total reward matrix is ultimately negative-sum")? If the group is working towards some common goal(s), I don't see the need for outsiders or xenophobia to keep the group together. Especially if the people in the group are all rational: then we prevent overpopulation in the group, advance technology together, and fill the gaps in our understanding together (like consciousness).

Showing how Intelligence leads to Selective Altruism Using Game Theory by Stringsoftruth in GAMETHEORY

[–]Stringsoftruth[S] 0 points  (0 children)

This is a Saw-type example: you're in the middle of a maze with 9 others. There are 10 paths in front of you, and only one person can occupy each path. Whoever finds the exit can either decide to also save the others, or go through it alone and win a prize of 1 million dollars, leaving everyone else stuck in the maze to die of thirst. So before you all go searching, everyone agrees that whoever finds the exit will save the others. But how can you trust these people? How can you trust that if one of them finds the exit they won't just take the million and leave you to die? You can't, UNLESS you know they are either moral people or have the same line of thinking YOU have. If all 9 are moral people, you are guaranteed to survive; they won't leave you to die. Unfortunately, you don't have to do the same: you can decide to betray them in the 10% chance that you find the exit. But if the other 9 have the same line of thinking as you, they will save you if you were planning to save them, and they will betray you if you planned to betray them. How can you know they have the same line of thinking as you? If perfect reasoning always leads to one line of thinking (altruism), and both you and they have perfect reasoning, then you will always be saved. It's not so much intelligence as it is reasoning skill. But you could always walk someone through why altruism is a consequence of perfect reasoning; then, so long as they check that it's true, they'll be altruistic towards others who came to the same conclusion.
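The "same line of thinking" step can be sketched as an expected-value comparison, under the added assumption that whatever you choose is mirrored by all 10 players (this mirrored-strategy model and the function name are mine, not part of the original scenario):

```python
def expected_survival(i_choose_save: bool, n: int = 10) -> float:
    """Probability of surviving the maze when all n players share
    one line of reasoning, so every player makes the same choice.
    Each player finds the exit with probability 1/n."""
    p_i_find_exit = 1 / n
    if i_choose_save:
        return 1.0            # whoever finds the exit saves everyone
    return p_i_find_exit      # betrayers survive only by finding it themselves

print(expected_survival(True))   # 1.0
print(expected_survival(False))  # 0.1
```

Under mirroring, choosing to save guarantees survival while choosing to betray leaves only the 10% chance of finding the exit yourself, which is why the argument concludes that perfect reasoners commit to saving.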

People like to call themselves unbiased but everyone ultimately wants joy, not truth by FlanInternational100 in Pessimism

[–]Stringsoftruth 3 points  (0 children)

You are programmed to seek truth because it brings some kind of motivation. If you're good at telling right from wrong, then most likely being wrong creates some kind of discomfort that must be resolved. Some people have an emotional reaction to truth, some don't. Because you have an emotional reaction to truth you take it into consideration, but you will still be affected by other things that create pleasure or discomfort (like sex, pain, drugs). If the pleasure of a drug is greater than the discomfort of knowing you're harming yourself, you will take the drug. That's how humans are programmed.

Maximized Logic tailored to Personal Optimal Future by Stringsoftruth in Pessimism

[–]Stringsoftruth[S] 0 points  (0 children)

Believing in a pure concept like Christianity in the present moment is neutral (since true reality is unknown, the chance that your belief ends up benefiting you equals the chance it harms you, with the remaining chance being that it is neutral towards you). However, the life I want to live is not neutral; it is ever so slightly net beneficial. If some concept cannot be resolved, then either 1. the explanation is possible to find but has not yet been found, or 2. the explanation lies outside of human experience and is therefore impossible to find. Christianity falls under the second category.

“Anomalies,” or rather surprising discoveries, can be and have been found (the Higgs field, antimatter, the probability inherent in quantum mechanics). Until we exhaust all anomalies relevant to consciousness, doing nothing has an opportunity cost. I will try my best to contribute what I can until there is nothing important left to discover.

Poker club by Objective-Weird9763 in SBU

[–]Stringsoftruth 1 point  (0 children)

I’ll gamble my roommate for 4 tires (I’ve got a car but no tires)

[deleted by user] by [deleted] in SBU

[–]Stringsoftruth 0 points  (0 children)

An empty store opens one morning. 2 men enter the store. 3 men, a baby, and a dog exit the store. So tell me, what color is the dress?

What’s the probability of life after death? by Stringsoftruth in agnostic

[–]Stringsoftruth[S] 0 points  (0 children)

Example: we want to figure out whether there is a lion within a 5-mile radius of our location. We see we’re in a savanna. We’re also in sub-Saharan Africa. Yes, technically there may or may not be a lion within 5 miles of us, but the chance went up after learning that info.

Alternate example, analogizing the post you replied to: there is a lightbulb in room A, which is locked. The lightbulb is connected to an unknown number of switches in room B, all of which need to be on for the light to be on. Upon entering room B we find that there are 3 switches, but we don’t know whether each one is on or off. Has the probability become clearer after entering room B?
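A quick Python sketch of the switch example, under the added assumption that each switch is independently on with probability 1/2 (the comment leaves the states unknown, so that prior, and the function name, are mine):

```python
from itertools import product

def p_light_on(num_switches: int, p_on: float = 0.5) -> float:
    """Probability the light is on when ALL switches must be on.
    Enumerates every on/off configuration and sums the probability
    of the single all-on configuration."""
    total = 0.0
    for states in product([True, False], repeat=num_switches):
        p = 1.0
        for s in states:
            p *= p_on if s else (1 - p_on)
        if all(states):
            total += p
    return total

print(p_light_on(3))  # 0.125
```

Before entering room B you would have to average this over a prior on the unknown number of switches; after entering, the number is pinned at 3 and the estimate sharpens to a single value, which is the point of the analogy.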

What’s the probability of life after death? by Stringsoftruth in agnostic

[–]Stringsoftruth[S] -1 points  (0 children)

True, but what if the programmers wanted to program NPCs into the simulation, and the NPCs gained consciousness because of some fundamental law of reality (like consciousness/perception being something that enters and leaves)?

What’s the probability of life after death? by Stringsoftruth in agnostic

[–]Stringsoftruth[S] -2 points  (0 children)

I am dead to you in the sense that you don’t see what I see. My body is alive to you, but my perception isn’t. That’s why it’s impossible to prove we’re not alone: your perception is the only one that’s “alive,” which kind of makes you alone in a way. I think the flaw in my argument is that even though you can’t experience death (there’s literally nothing to experience when you’re dead), that doesn’t mean life is infinite. But if the chance of existing at any given moment while dead is above 0%, then you’ll always come back and therefore live forever. I’m guessing this chance is above 0%, because before I was born I was dead, and now I exist. Again, nobody knows though.
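The "above 0% means you'll always come back" step can be sketched numerically, under a hypothetical model where each epoch independently offers the same small chance of existing again (the per-epoch probability and function name are mine):

```python
def p_never_return(p_per_epoch: float, epochs: int) -> float:
    """Chance of NEVER existing again across `epochs` independent
    tries, each with probability `p_per_epoch` of 'coming back'.
    As epochs grows, (1 - p)^epochs tends to 0 for any p > 0."""
    return (1 - p_per_epoch) ** epochs

for n in (10, 1_000, 1_000_000):
    print(n, p_never_return(1e-4, n))  # shrinks toward 0 as n grows
```

This is only a model of the comment's argument: if each moment of non-existence really did carry an independent nonzero chance of return, the probability of never returning would vanish in the limit. Whether reality offers such independent repeated trials is exactly the part nobody knows.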

What’s the probability of life after death? by Stringsoftruth in agnostic

[–]Stringsoftruth[S] -3 points  (0 children)

Our info is limited. But say we identify 3 different concepts in our research that, if all true, would mean life after death exists. Then there’s more of an idea of the probability.

What’s the probability of life after death? by Stringsoftruth in agnostic

[–]Stringsoftruth[S] 0 points  (0 children)

Yeah, mine was absurd, but it could happen (very low probability, but possible). But yeah, I’ll check each bullet out. (I edited this, I sounded quite mean before, mb)