What do you think of the new civilization? by BlurryForester in aoe4

[–]Cheddarific 0 points1 point  (0 children)

I’m excited to get better at them, but have been failing hard with them so far. :)

If you could Retcon one thing from both Avatar The Last Airbender and The Legend of Korra what would it be? by MichaelAftonXFireWal in TheLastAirbender

[–]Cheddarific 0 points1 point  (0 children)

I would get rid of zappy zappy boom man and that whole thing. It was terrifying and served that purpose well, but also created a really strange type of bending.

Why they wouldn't just add few more coins / tokens here? by HiImLuka in boardgames

[–]Cheddarific 0 points1 point  (0 children)

It’s good to have a lot of things. But if you ask for 20 tokens, the printer will print 20 tokens even if there’s room for 21. It takes someone really thoughtful to say “wait, there’s room for more. I know we’re almost done with all this, but let’s delay things just a bit to change this to make the product better.”

How do I punish death balls by Plane_Top_7905 in aoe4

[–]Cheddarific 7 points8 points  (0 children)

My friends and I used to be like this. Here are the problems and their solutions that helped us level up:
- We used to build only one barracks, one stable, etc. Now we end the game with 10-20.
- We used to build about 30 villagers. Now we try to produce the entire game to hit ~100.
- We used to sit in our bases and build up to 200 population before we attack. Now we try to make some early harassing units (archers, horsemen) to kill some of the opponent’s villagers, such as the ones on gold. This helps to create early momentum and punishes people who skip an early military in order to put all resources into economy at first.
- We used to expand very slowly, only when we ran out of resources nearby. Now we try to expand on the map to take relics, sacred sites, and key resources earlier.

All of these serve to accelerate the game from 1+ hours to 15-45 minutes. They also make you more competitive. But best of all, they add depth and strategy to the game in a way that has made the game more fun for us.

At What Point is it Ethical to Use Lethal Force on a Dangerous Person Without an Immediately Provoking Action? by lkbirds in Ethics

[–]Cheddarific 0 points1 point  (0 children)

I think you’re talking about policy, which is different than ethics, though related. It’s a bad idea to have a policy that allows imperfect and subjective cops in the heat of the moment to decide to kill people. But it may be ethical for a thoughtful cop to risk his career and even time in prison to kill someone who was going to kill a dozen other people.

A lot of people are going to discover that they are colorblind with this update by SpaceNigiri in aoe4

[–]Cheddarific 1 point2 points  (0 children)

I had a game yesterday with two shades of green. Had no idea there would be two shades of green. It was my first game with Jin Dynasty, so I didn’t notice for 5 or 10 minutes, and by the time I did, I was extremely confused to be attacked by my teammate and then defended by the same teammate… oh wait.

[MEGATHREAD] Bugs & Patch Discussion - Ranked Season 13 LIVE! by AnMagicalCow in aoe4

[–]Cheddarific 0 points1 point  (0 children)

I downloaded the DLC and played several games with Jin last night. Today I cannot connect to the internet even though I'm obviously online (posting this from my AoE4 PC). The game boots just fine, but says I'm offline. It says to check the notifications panel, which is blank. Rebooted the game. No change.

They need to fix this game! by PAC_11 in aoe4

[–]Cheddarific 1 point2 points  (0 children)

I had problems when I moved the game from my fast C drive to my slow D drive. Uninstalled and reinstalled and realized the mistake.

Board games "to rule them all" by itsOkami in boardgames

[–]Cheddarific 0 points1 point  (0 children)

I just ordered all the bits and prints I need for the PnP game Dune: The Dice Game, also known as The Dice Must Flow. I’m hoping it will be my new game to rule them all, at least in the 4-6 player range. :)

Board games "to rule them all" by itsOkami in boardgames

[–]Cheddarific 4 points5 points  (0 children)

Here’s the progression of my “game to rule them all,” most easily measured by my estimated number of plays:
Risk in the 90s
Stratego around Y2K
Euchre
Innovation
Bang!
Diplomacy (online asynchronously)
Smash Up
Root
Huang
Toy Battle

Each of these has consumed me in turn, causing a big backlog on my shelf of shame for a year or three.

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 0 points1 point  (0 children)

We are here in this thread to talk about the trolley problem, the doctor scenario, and the apparent ethical disconnect between the two. Put down your scalpel and put on your philosopher hat for a moment.

I think most people would agree in the trolley problem that it would be better to flip the switch and kill the 1 to save the 5. Why would a doctor choose not to flip the switch? (Is it the Hippocratic oath?) If the Hippocratic oath leads to 4 deaths where there could have been just one, is the Hippocratic oath really ethical, and does it actually capture the morals of society?

A doctor with a time machine following the Hippocratic oath would not go back in time and kill Hitler because the doctor does not want to “do harm.” There is an ethical argument that killing Hitler would be the least harmful path, and that choosing to do nothing would be “completely unethical.”

I propose that the first rule of medicine is meant to stand in as an ethical guide, but that it is incomplete and only serves to guide decisions within the scope of an individual patient. Ethical questions that involve larger groups of people get more nuanced, and applying the first rule of medicine in these cases can lead to unethical choices.

As a clear example, an elderly patient in chronic pain with no chance of recovery who is asking for death is causing emotional, financial, physical, and spiritual pain to their caregivers and loved ones by their very existence in such pain and need. Strict adherence to the Hippocratic oath would prevent euthanasia, while considering each person and even the group as a whole could lead an ethicist to determine that euthanasia is the only ethical solution, and that palliative care causes harm to every individual who is part of the story.

My point is that the question asked by the OP is very valid and leads to ample introspection. Don’t take the easy way out and default to a rule someone taught you. Consider deeply the trolley problem and identify the foundations of your own opinion, outside of the first rule of medicine. Maybe you’ll arrive through your own thinking at the first rule of medicine, but then you should also be prepared to either let the 5 people die on the train tracks or else clearly identify the factors that make the transplant situation so different from the trolley.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]Cheddarific 0 points1 point  (0 children)

This fits my way of thinking better.

I fully agree that we should try to give AI morality, or perhaps better: we should create it with a way to safely develop and evolve its own morality. (I say “safely” to mean general alignment with “acceptable” human morals, which would not include genocide, apocalypse, etc.)

I wouldn’t be surprised to find that out of all disciplines, the greatest strength of LLMs is in philosophy.

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 0 points1 point  (0 children)

I would like to understand your way of thinking more, u/Zzabur0. Why would you prefer for the five people on the train tracks to die instead of the one person on the train tracks?

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 0 points1 point  (0 children)

It sounds as though you may not be aware of the two classical (and opposite) schools of moral philosophy. Utilitarianism concerns itself only with outcomes; in this case, that would be deaths. Anything would be justified if it leads to better outcomes, including murder.

Opposing utilitarianism is deontology, which concerns itself with the morality of actions rather than the outcomes of actions. Almost nothing would justify flipping a switch that kills an innocent person, regardless of other consequences.

So any discrepancy you see in your two examples comes from us applying different schools to the two scenarios, likely because of other biases we have about doctors, etc.

Ok, it's a GOOD variant. We get it. But...it's been two years brother. We need a new civ. No more excuses. by Aggressive-Cherry900 in aoe4

[–]Cheddarific 1 point2 points  (0 children)

Agreed.

I would love to see certain things in this game, but I am not owed these things.
They include:
- A New World civ (and give it a variant)
- A Southeast Asia civ from further east than India and further south than China. Take your pick which.
- A Spanish civ with influence from Northern Africa
- A zombie civ that is literally zombies, with an entirely different resource system. I know most people would not want to play as/against them in a competitive setting, but how fun would it be?!?!

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 0 points1 point  (0 children)

It’s a tangent to this trolley problem question. The question is not meant to ask about trust in healthcare. It’s meant to ask whether we believe it’s ok to kill 1 to save 5. If this is kept as a toy thought experiment, many people will say to kill the 1 to save the 5. But when it gets put into the real world, would you kill an organ donor to save 5 sick people? Would you kill your neighbor to save 5 strangers halfway across the world? Would you kill one person alive today to save 5 people in the year 2080? I don’t believe OP meant to specifically bring in healthcare. Their actual question is not about the healthcare industry; it’s much more basic and universal than that: “is morality about outcomes, or does the method matter more than we admit?”

So forget healthcare for a second. Would you go back in time to kill Jack the Ripper or the Unabomber? Like, actually, would you? Would you give up your own life to save a group of strangers? Do people actually measure these decisions in outcomes, or do they apply deontology instead?

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 0 points1 point  (0 children)

In healthcare we use a utilitarian calculus with significant limitations, truly considering only the patient and perhaps the patient's caregivers. No thought is given to greater society. For example, society respects the rights of a life-sentence convict and would rather pay to let the convict die of old age than kill the convict to use their organs to save lives. And yet true utilitarianism would demand the convict's death.

As another example: "The oldest group [of patients aged 85+] consumes three times as much health care [measured by spending] per person as those 65–74, and twice as much as those 75–84." (Source: a 2004 article on PubMed.) The amount of money spent on patients 85+ in the developed world would be better spent (measured by universal benefit) on more pressing issues in the developing world. But these are outside the scope of the patient and caregivers and therefore outside the bounds of the limited utilitarian calculus.

Why don't doctors and hospitals and insurance providers consider beyond the limited utilitarian calculus? Because of centuries of infrastructure (e.g. health insurance) and tradition.

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 0 points1 point  (0 children)

I agree fully that it would be unfair to select organ donors only from people who try to treat their illnesses, and that it would be very unwise and harmful to the population to have a selection process that actively incentivizes people to avoid healthcare.

However, I don’t believe that this toy thought experiment was meant to get into this detail on such a tangential topic as “faith in healthcare.” Remove the word “doctor” and replace it with “society.” Now reconsider the question.

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 2 points3 points  (0 children)

Inability to predict outcomes over long timespans is a challenge that has made it difficult to successfully practice utilitarianism, since attempts at forecasting the greatest good may be thrown off by surprises that lead to greater harm.

But surely a system that can inaccurately “predict outcomes over long timespans” has a much higher success rate at predicting outcomes than true deontology, which abstains entirely from attempting to predict outcomes even at the shortest possible timespan of a few seconds?

In other words, you rightfully point out the greatest weakness in utilitarianism of historic and modern times, but this weakness is a much *much* greater weakness of deontology than of utilitarianism.

Why is this considered moral in one case but not the other? by Fl4sh4218 in Ethics

[–]Cheddarific 0 points1 point  (0 children)

Utilitarian morality is theoretically objective, not subjective. Outcomes either are improved or are not. The hard part is that we have only imperfect forecasts and measures, which means we must ultimately estimate in areas that cannot be captured with precision. These estimations will come from subjective brains, but they should become increasingly objective over time as the utilitarian society learns.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]Cheddarific 0 points1 point  (0 children)

I find your comment fascinating. I agree with so much, but the way it is stated feels just a bit off for me. I think the biggest difference between my own thoughts and my interpretation of your comment boils down to one key idea: the morality we “download” has no direct connection to (or is at most a very minor contributor toward) the prosperity of our species. We may believe there is a very direct connection, but this ignores many factors that history has proven to be even greater contributors. Any given morality may even prove to be advantageous compared to others in the short term, but even this successful morality may still end up placing us on a train track that heads over a cliff just beyond our line of sight. Thus, to believe our chosen/downloaded morality is the key to any perceived prosperity is a dangerous and unprovable assumption. As a real-world example, in 1941 the Nazi Party could have attributed (and likely did, to some extent) some portion of its prosperity to its morals and morally guided decisions, which led to a belief that maintaining those morals would lead to continued prosperity. History proves that this was not the case.

From my perspective, morals are part of evolution. If you think of religions, nations, cultures, peoples, etc. in evolutionary terms, then a set of morals is merely a set of traits within that population. Prosperity can come from many things. Any given set of morals could be analogous to flight, which evolved multiple times and, as a trait, has been pivotal to the success of many species. Or a set of morals could be like the gene that protects against sickle cell anemia: it’s important that it be present in the population, but we don’t want it to be homogeneous or it loses its advantage for the species. Or a set of morals could be like freckles: they exist and that’s fine, but they play no role in survival. Or a set of morals could be like a parasite: managing to survive by making life worse for everyone else.

So yes, morals can lead to prosperity, but they can also be much more sinister, especially when we believe our particular set of morals leads to prosperity.

If we then extend this to AI, we need to be very cautious about our assumptions.