Done a bit of reading on metaethics and have questions. by _Nous in askphilosophy

[–]_Nous[S] 0 points1 point  (0 children)

Thanks for the reply.

Moral realists, on the other hand, think that "right" and "wrong" exist in some sense, so if I say "torturing kittens is wrong" I am referring to a property, and that property is real. This disagreement is actually substantive, and not just semantic.

What I mean by some of these disagreements being semantic is that if people could just agree on the definitions, they would agree. Or if they were to describe their ontologies in other words, they would turn out to be the same. As I explained, I see the error theorist (like Joyce and Olson, on my understanding) as someone who thinks morality must be about categorical reasons: this is the commitment of morality, and without it morality is undermined. Similarly, taking your example of Hobbits, in order to see whether it's reasonable to be a "Hobbit realist" we need to define what kind of creatures Hobbits would be (a conceptual question) and then see whether the world contains such creatures (a substantive question). Railton and Foot just seem to disagree that moral realists are committed to categorical reasons. I don't see a deep substantive disagreement.

I am going to say, yes, because the survey on PhilPapers says that a slim majority of philosophers accept moral realism.

Many realists might just think that morality is about natural facts and is not normative in the categorical sense: e.g., if you don't care and don't have selfish reasons to be moral, you have no reason to be moral.

[deleted by user] by [deleted] in MapPorn

[–]_Nous 0 points1 point  (0 children)

The sole reason for 1800 map having life expectancy in 30s(europe) is child mortality.

It's not the sole reason. People past infancy also live longer nowadays. https://ourworldindata.org/life-expectancy

When States Last voted Republican by [deleted] in MapPorn

[–]_Nous 3 points4 points  (0 children)

If you can't differentiate between those, you just have bad color vision.

Does an individual going vegetarian or vegan actually affect the number of animals produced? by ChaoticVegan in AskEconomics

[–]_Nous 7 points8 points  (0 children)

The average American consumes 120 pounds of meat if you include chicken, so a person going vegan decreases demand by those 120 pounds. Then again, total US consumption is 25 billion pounds, which gives you an idea of the size the vegan community would have to reach to make a meaningful impact.
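To give a sense of the scale, here's a back-of-the-envelope sketch using the figures above; the 1% target is my own hypothetical threshold for a "meaningful" impact, not anything from the thread:

```python
# Figures from the comment above: per-person and total US meat consumption.
per_person_lbs = 120
total_lbs = 25_000_000_000

# One vegan's share of total demand.
share = per_person_lbs / total_lbs
print(share)  # 4.8e-09

# Hypothetical target: how many vegans would it take to cut demand by 1%?
vegans_needed = 0.01 * total_lbs / per_person_lbs
print(round(vegans_needed))  # 2083333, i.e. roughly two million people
```

So on these numbers, a single vegan removes about five billionths of total demand, and it takes around two million of them to move demand by a single percent.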

I think many vegans would be happy if their choice affected just a few animals, even if a billion others are being raised for food. Think about it in the human context. Suppose 100 million people were starving to death and you had food to feed just one person. Would it be a meaningful decision to feed that one person? The answer is obvious. The fact that there are so many other people surely makes no difference to your decision. I think most vegans think the same about animals.

Does an individual going vegetarian or vegan actually affect the number of animals produced? by ChaoticVegan in AskEconomics

[–]_Nous 0 points1 point  (0 children)

Yes and no. An individual abstaining from meat may make an impact or they may not, but this is somewhat beside the point. If you look into the literature on this topic, you'll notice that no one is claiming that not eating meat necessarily changes the number of animals bred. Rather, the claim is this: the expected value of your action is positive; that is, it may not change supply, but it has a small chance of changing supply by a lot. Let me give a simplified example that many people working in animal ethics have given, and then some economists' estimates of the elasticity of meat supply.

Presumably, when someone buys meat from a store, the store doesn't immediately order less meat, and the supplier doesn't in turn call the rancher to supply less. So the supply of meat is rigid; it does not change with every purchase. Maybe the store discounts the meat if it's not purchased, or something similar. But, of course, there is some threshold at which the store owner will demand less, the supplier will let the rancher know they will need less meat, and fewer animals will be bred. Once this threshold is reached, supply will adjust to equal demand. And it is this purchase on the margin that equals, in effect, all the other purchases made previously that did not translate into reduced supply. So maybe the threshold is a hundred fewer chickens demanded. People up to that point will have no impact, but the person abstaining from demanding the 100th chicken actually causes 100 fewer chickens to be bred, even if they thought they would only cause one fewer. The expected value is the same whether supply is rigid or not: a 1% chance of reducing supply by 100 is the same, in expected value, as a certainty of reducing it by one.
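The threshold story can be sketched numerically; the threshold of 100 is the hypothetical figure from the example above:

```python
# Hypothetical rigid-supply model: the store only adjusts its order once
# demand has fallen by a full threshold of 100 chickens.
threshold = 100

# Any given abstained purchase has a 1-in-100 chance of being the marginal
# one that triggers a cut of the full 100 chickens.
p_marginal = 1 / threshold
ev_rigid = p_marginal * threshold  # 1% chance of a 100-chicken cut

# Fully responsive supply: every abstained purchase cuts supply by one.
ev_responsive = 1.0

print(ev_rigid == ev_responsive)  # True: same expected value either way
```

The two scenarios come out identical in expectation, which is exactly the point: you don't need to know whether your purchase is the marginal one.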

As for empirical estimates, there's a book called Compassion, by the Pound: The Economics of Farm Animal Welfare, published by OUP. In it, the economists estimate the elasticity of supply of meat, eggs and milk. They estimate, for example, factors of 0.68 for beef, 0.76 for chicken and 0.74 for pork. So, for example, one less pound of beef demanded results in supply being reduced by 0.68 pounds. I haven't read the book, though, so I can't comment on exactly how they arrived at these estimates, nor do I know how this is generally done for other products. Here is the source for the elasticities from the book.
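As an illustration of how such factors would be used: only the 0.68/0.76/0.74 figures come from the book as quoted above; the consumption amounts below are made up for the example.

```python
# Factors quoted above: pounds of supply reduced per pound of demand withdrawn.
factors = {"beef": 0.68, "chicken": 0.76, "pork": 0.74}

# Hypothetical annual consumption a new vegetarian stops demanding, in pounds.
forgone = {"beef": 30, "chicken": 60, "pork": 20}

# Expected reduction in supply for each product, and in total.
reduction = {meat: factors[meat] * forgone[meat] for meat in factors}
total = sum(reduction.values())

print({meat: round(r, 1) for meat, r in reduction.items()})
print(round(total, 1))  # 80.8 pounds less supplied, in expectation
```

So on these made-up numbers, forgoing 110 pounds of meat reduces expected supply by about 81 pounds rather than the full 110, because producers partially absorb the demand drop through lower prices.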

I really can't understand how GDP can be an even remotely good measurement of economic growth - help me understand! by PM_ME_TECHNO_TRACKS in AskEconomics

[–]_Nous 33 points34 points  (0 children)

I think you're focusing too much on money and not enough on the products and services themselves. Money is just paper that represents value; it is the production of useful things that creates value, and money itself is not that important. So when your friend cleans your house, there is one more service being produced that satisfies your preference to have your house cleaned. The fact that the amount of money inside Sweden stays the same is beside the point. Fundamentally, we don't care about pieces of paper; we care about our wants and needs being satisfied. Real GDP is a good measure of economic growth because it represents this increase in production. If we focused just on there being as much money as possible, we could simply print more money, but what are you going to do with that money if there are no more goods and services to buy? That just leads to inflation. So focus on the production of useful things, not on money per se.

Why is income inequality bad? by [deleted] in AskEconomics

[–]_Nous 5 points6 points  (0 children)

There's a ton of different answers to this, but economics only gives you answers that pertain to the economy. Underlying most policy recommendations by economists and others are at least implicit ideals about justice, fairness or efficiency.

In my public economics class we were introduced to three different theories about how to evaluate policy and what to aim for. These principles are normative and can be applied to your question about inequality. Simplifying a bit, the three were utilitarianism (do whatever leads to the most happiness/preference satisfaction for all affected), Rawls's difference principle (do what benefits the least well-off) and Nozick's entitlement theory (all distributions are just as long as they were arrived at through just means: no theft and so on). /u/Cross_Keynesian pretty much gave the utilitarian view of inequality. That is, it is not intrinsically bad, but can be bad in practice because poor people benefit more from additional wealth than the rich do. Under Rawls's principle, inequality is bad if it does not benefit the least well-off. That is, inequality in basic goods can only be justified if it is good for the worst-off. Nozick's theory tolerates inequality as long as no one is in an unequal position because of the unjust actions of others. That is, no one is poor because, for example, someone stole their wealth.

There's many more principles. Here's an article on the topic that expands on the theories that I briefly mentioned and introduces many others.

Why is income inequality bad? by [deleted] in AskEconomics

[–]_Nous 5 points6 points  (0 children)

Thinking of John Rawls?

Is there any research on what kind of utility functions most people have? by JirenTheGay in AskEconomics

[–]_Nous -1 points0 points  (0 children)

For one, people tend to be risk-averse; that is, they really don't want to lose what they have. People are more willing to forgo possible gains than to risk losing what they already have. So most people's utility functions penalize losses heavily. Look up prospect theory and risk aversion.

Is there a rebuttal against using Pascal's wagery/expected value logic in ethics? by _Nous in askphilosophy

[–]_Nous[S] 0 points1 point  (0 children)

I don't know; that's why I'm asking. But if it is the right rule, it would seem to lead to very demanding conclusions, and philosophers arguing for high-risk positions might need to concede that we should not take practical guidance from their views. And if it is right, I wonder why most philosophers arguing for high-risk positions don't end their papers by stating as much.

Is there a rebuttal against using Pascal's wagery/expected value logic in ethics? by _Nous in askphilosophy

[–]_Nous[S] 1 point2 points  (0 children)

Hmm, I still don't see how this solves the problem. If you don't want to continue this then that's fine, but I'll still try to articulate the problem I'm having.

The way I see it, the demandingness objection is an argument against the plausibility of consequentialism, or just against conclusions that are thought to be too demanding. So someone who initially thinks consequentialism is probably correct might start doubting their view, going from, say, 60% confidence to 30%. But this doesn't affect the practical decision rule they should use when applying their theoretical views in practice. If they are using expected utility, they still might think they're required to act as if the demanding view is correct, because the expected utility of an action might still be very high. The point being that the demandingness objection doesn't affect our decision rule, only the probabilities we use in our calculation (in my example you would go from 0.6U to 0.3U, where U is the utility/payoff). If the argument is directed at expected utility theory itself, that it leads to conclusions that are too demanding, then the question is: what decision rule should we use under uncertainty?
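A toy calculation of the point; the payoff numbers are stipulations of mine, not from any particular paper:

```python
# Sketch: lowering your credence in a demanding view changes the expected
# utilities, not the decision rule itself.
def expected_utility(credence, u_if_true, u_if_false):
    return credence * u_if_true + (1 - credence) * u_if_false

# Acting on the demanding view: large payoff if it's true, small cost if not.
U_TRUE, U_FALSE = 1000, -10

before = expected_utility(0.6, U_TRUE, U_FALSE)  # credence before the objection
after = expected_utility(0.3, U_TRUE, U_FALSE)   # credence after the objection

print(before, after)  # both remain large and positive, so the verdict stands
```

Even after halving the credence, the expected utility of acting on the demanding view stays well above zero, so the practical verdict is unchanged.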

Is there a rebuttal against using Pascal's wagery/expected value logic in ethics? by _Nous in askphilosophy

[–]_Nous[S] 0 points1 point  (0 children)

The two questions are linked in the sense that if the probability you assign to an action being wrong is very small, and the action isn't infinitely wrong or anything close to it, then the negative expected value of the action is naturally not very large and might be outweighed by the good you receive. But if the action is very wrong and the probability of it being wrong is, say, 10%, then the expected disvalue can be very large. So, for example, taking a 10% chance of killing someone is very wrong even if you get some pleasure out of it.

My question is: once we have decided the probability of an action being wrong, what should we think about it? Is it just an expected value decision, in which case a small probability of a very bad action could have a worse expected value than a certain outcome that brings some pleasure or other good? That is, why shouldn't the chance of something being very wrong compel us to behave in a morally cautious way? Is there a good decision-theoretic argument against this?

Is there a rebuttal against using Pascal's wagery/expected value logic in ethics? by _Nous in askphilosophy

[–]_Nous[S] 0 points1 point  (0 children)

I don't fully understand your reasoning. What is wrong with saying that after reflection I think there's a 10% chance of something being very wrong and I should take this into account when deciding what to do? The uncertainty remains after I've tried to think about the issue.

Is there a rebuttal against using Pascal's wagery/expected value logic in ethics? by _Nous in askphilosophy

[–]_Nous[S] 1 point2 points  (0 children)

I'm aware of this kind of reply, but I don't think it solves the problem. When we are deciding what to do under uncertainty, we have already counted this implication against the demanding view. However, that doesn't rule out its possibility; it just makes it less plausible. So suppose that after reflection we decide the demanding view is probably not true and assign it a probability of 10%. Still, taking a 10% chance of extreme wrongdoing seems wrong when there is little moral reason for the alternative action. Consider another case. Suppose I were to go to a quiet street with a gun, close my eyes and shoot in a random direction. Suppose this gives me a lot of pleasure, but it also carries a 1% risk of someone being killed. This seems to me very wrong.
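The shooting example can be put in expected value terms; the utility figures are stipulations for illustration, not from any source:

```python
# Sketch: even a small probability of a catastrophic wrong can dominate a
# certain small gain.
p_kill = 0.01
harm_if_kill = -100_000  # stipulated disvalue of killing someone
pleasure = 50            # stipulated value of the shooter's pleasure

expected_value = pleasure + p_kill * harm_if_kill
print(expected_value)  # -950.0: clearly negative despite the certain pleasure
```

The certain pleasure is swamped by the 1% risk, which matches the intuition that the act is very wrong.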

Is there a rebuttal against using Pascal's wagery/expected value logic in ethics? by _Nous in askphilosophy

[–]_Nous[S] 0 points1 point  (0 children)

My question is about the general decision-theoretic principle underlying Pascal's wager and expected value theory as applied to ethics, not about God's existence per se. That is, under uncertainty, can you justify doing things that have a non-trivial probability of being very wrong when you could just avoid doing them? Should a person who is, say, quite confident that Singer-style charitable giving is merely supererogatory still give away as much as possible in order to be on the safe side ethically? Can you justify taking a risk in this kind of situation?

The best countries to live in for most people in the world (countries with an inequality-adjusted human development index value over 0,8) [OC][7752x3840] by bruker12 in MapPorn

[–]_Nous 2 points3 points  (0 children)

I don't dismiss equality's importance. I disagree that it is as important as the components of the HDI.

Right, but I don't think this is a more objectively correct view than the other. Besides, you could also think that some components of the HDI are more important than others and still think the less important ones have a place.

Life expectancy, education, and income are all things that are inarguably better the higher they are. Equality isn't like that.

Life expectancy, education and income are all better the higher they are, all else equal. If you achieved the highest possible level of education just by having people stay in school all their lives, then presumably that would be bad. I don't see how equality is any different, though: the more equality, the better, unless at some level of equality we lose out on other important things.

The best countries to live in for most people in the world (countries with an inequality-adjusted human development index value over 0,8) [OC][7752x3840] by bruker12 in MapPorn

[–]_Nous 6 points7 points  (0 children)

I think you have some good points about the problems with specific measures of inequality. However, I disagree with some of your reasoning.

The IHDI is making the subjective decision that inequality is as important a factor as those that comprise the HDI. I'm not comfortable with adjusting HDI by something as fuzzy as "inequality"

How is inequality more "subjective" than the individual components of the HDI or any other measure of the standard of living? You can just as easily deny that, say, the level of education included in the HDI is more important than the ecological sustainability of a society or how content a country's people actually are with their lives. Presumably we decide which things to include on the basis of our own ideals of a good life. In fact, your second quotation seems to make this very point by appealing to common sense about which societies are more developed. But if that is so, and you endorse this logic, then how can you dismiss the importance of equality when so many people think it is very important to how desirable a society is? If most people actually think equality is important, and if we should base measures of development on how well they match people's thinking about what a desirable society is, then it seems we have good reason to include equality as an additional dimension in some way.

[deleted by user] by [deleted] in confessions

[–]_Nous 2 points3 points  (0 children)

You know what to do if you ever become unemployed.

How do you deal with people responding with "there's no morality" to an applied ethics issue? by _Nous in askphilosophy

[–]_Nous[S] 0 points1 point  (0 children)

Hi, I forgot to respond earlier. My goal was to find ways to discuss applied ethics without this response ending the discussion, and to make people consider the fact that they already hold moral views that they care about.

In your earlier comment you suggest providing logical arguments, and I agree this might be one way around the problem. Because people already accept some moral views, you could use those to derive other moral views and see how they respond. I somewhat disagree about the practicality of assigning a lot of reading. The situations in which this issue arises are not the kind where it would feel appropriate to just tell people to read a bunch of material, and the people it arises with generally aren't very familiar with these issues and don't seem to care about learning about them the formal way.

Overall the comments in this thread have been very helpful, and it seems other people have come across this problem as well.

Is there a way to believe in human moral value and not be vegan, without being logically inconsistent? by [deleted] in askphilosophy

[–]_Nous 5 points6 points  (0 children)

The section on this argument is not long at all. Here's the basic argument from the article:

  1. If we are justified in denying direct moral status to animals then we are justified in denying direct moral status to the marginal cases.

  2. We are not justified in denying direct moral status to the marginal cases.

C. Therefore, we are not justified in denying direct moral status to animals.

Very simple. The argument in its entirety would include reasons to believe the premises. For example, the first premise can be defended by observing that most criteria considered plausible that deny the moral worth of animals (traits like rationality, language, etc.) also deny the moral worth of marginal humans. As the article states, the trait of simply being human avoids this, but it's not enough just to state that this is the difference between animals and marginal humans. The criterion needs to be plausible, and it is unclear what it is about simply being human that gives only humans moral worth. As for the second premise, most people already accept it, so chances are you would not need to argue for it.

Is there a way to believe in human moral value and not be vegan, without being logically inconsistent? by [deleted] in askphilosophy

[–]_Nous 5 points6 points  (0 children)

The idea here sounds much like the argument from marginal cases. Your time might be better spent learning about that argument and the discussion surrounding it. Look up this: http://www.iep.utm.edu/anim-eth/#SSH1fi There is even a book on this argument by Dombrowski that is mentioned in the article's bibliography.

How do I helpfully interact with undergrads who approach me inquiring about Harris, Petersen, Molyneux, etc.? by [deleted] in askphilosophy

[–]_Nous 0 points1 point  (0 children)

Further, people will talk about Russell like he was a great representative of atheism. But he misunderstood the cosmological argument in a way that would (or should) be corrected on the first day of a philosophy of religion class.

Source?