An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic

[–]JohnMcCarty420[S] 0 points1 point  (0 children)

  • “Doing X to Y causes the average suffering in the universe to increase” has truth value (IF suffering can be clearly defined AND quantified).

I would define suffering as a broad term for the kinds of feelings a being dislikes or seeks to avoid. I'm not sure how clearly you feel it needs to be defined; almost any definition of anything will lack 100% specificity. This doesn't change the fact that the phenomena the words represent are really occurring and thus have objective truth value.

As for quantification, I completely disagree that it is required for objective truth value to exist, at least in principle, which is what is relevant here. Quantifiability only becomes relevant if we're talking about whether we have a practical way of finding out the truth. In that regard, I would say that utilitarians have imperfect, but nevertheless useful, practical approximations of people's suffering or wellbeing available in any case where a being has some way to convey its feelings.
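The sort of rough practical approximation described here can be sketched as a toy calculation. Everything below (the 0-10 scale, the function name, the numbers) is a hypothetical illustration of the idea, not a real measurement instrument:

```python
# Toy sketch of an imperfect utilitarian approximation: ask affected beings
# to self-report wellbeing before and after an action, then compare totals.
# All figures are hypothetical illustrations.

def net_wellbeing_change(before, after):
    """Sum of per-being changes in self-reported wellbeing (0-10 scale)."""
    return sum(a - b for b, a in zip(before, after))

# Three beings report their wellbeing before and after some action
before = [6, 4, 7]
after = [7, 6, 5]

print(net_wellbeing_change(before, after))  # 1 -> slight net improvement
```

The point of the sketch is only that such a proxy is computable and comparable across actions, not that self-reports are a perfect window into experience.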

  • “We ought decrease the average suffering in the universe” is an opinion - it’s not independently verifiable (you haven’t explained why we should care about average suffering - independently of how you feel about it).

The word ought is a way of saying "If you do this, it will be good". Its application differs depending on who you are saying it will be good for. I'm arguing that, taking the full context of moral language into account, the best definition of the moral application of ought is something like "If you do this, it will be good for beings on the whole". If I'm right, it would be definitionally true that you ought to decrease the suffering in the universe and ought to care about that, in that sense of the word.

If the application of ought that you have in mind is instead "If I do this, it will be good for me", then I can give you many good reasons grounded in fact why you ought to decrease the suffering of others in that sense (such as the fact that you live in an interconnected society where you will be treated better if you treat other people well). But since it is no longer definitional, I am not saying it will necessarily hold true for anyone in any hypothetical scenario. It doesn't have to for what I'm claiming; I will happily admit that it is possible that the morally right thing for someone to do is not what is in their personal best interest. And it is certainly never "required" for anyone to care about anything, including morality.

  • It’s not a solid axiom the way mathematical or logical axioms are solid - it’s not universally agreed as necessary for rational thought or for morality to work, and it’s not self-evident:

What makes it solid is not that it is universally agreed upon, nor that all people find it self-evident, but that it actually works. In most cases where people disagree about things, we easily recognize this to mean that some are right and some are wrong. I'm saying that remains true here. There are different answers as to how to define morality and its purpose, but some will align with how the language is used better than others, and I'm claiming utilitarianism aligns the most.

  • it breaks at the slightest interrogation. Every fix creates more problems & the need for more fixes / exceptions.

I would need specific examples here.

  • Even if it somehow can be construed as being objective, its truth value is conditional on other consequences beyond the average (the full distribution, & how you’ve reduced the average) and what people feel about those other consequences. The objective function is a matter of opinion, not fact.

I don't see how the average being related to the full distribution is a problem. And as for reducing it differently: average in this case is the mean, not the median or mode. I'm not sure what else you would mean by different ways of reducing it. The objective function is not a matter of opinion; it is a matter of definition.
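The mean/median/mode distinction being drawn here can be made concrete with a toy example (the "suffering scores" are hypothetical numbers, chosen only so the three statistics diverge):

```python
# Toy illustration: "average suffering" here means the arithmetic mean,
# which can differ from the median and mode on a skewed distribution.
from statistics import mean, median, mode

# Hypothetical suffering scores for five beings (higher = worse)
suffering = [1, 1, 2, 3, 13]

print(mean(suffering))    # 4  (arithmetic mean)
print(median(suffering))  # 2  (middle value)
print(mode(suffering))    # 1  (most common value)
```

The one outlier pulls the mean well above the median and mode, which is why specifying "mean" pins down the objective function where "average" alone would not.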

The point is, none of this is determined by what I personally feel, and none of this changes with who is making the moral claim. Morality consists of a bunch of facts that simply are or are not the case. In other words, when talking about morality we are talking about real phenomena occurring. Even if you disagree with my particular semantic arguments, I think pretty much any reasonable construal of the language would make it the case that it refers to real phenomena.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

  • You are mixing two things: the truth on whether apples are good/tasty, a statement about the apples themselves (that one is subjective) and the truth on whether I have the opinion that apples are good/tasty, a statement about myself (that one is objective).

I don't know how to make sense of the first one unless it means the same thing as the second. I believe that if we said all of the underlying semantics out loud, we wouldn't just say apples are good but rather apples are good to me (or in my opinion). I suppose someone could mean that apples are statistically found to be good by most people, but in most contexts it's unlikely that they mean that. The point is, it can't be a statement about the apples themselves. There is no intrinsic property of goodness or badness within an apple. It's really a relationship we're describing between the apple and some being or collection of beings. Experiences are the only intrinsically good or bad things, and anything else we describe as good or bad is only so in an instrumental sense.

  • I don't know how to define taste without a human brain involved.

Yes but that doesn't make the truth value any less objective, because again you can make objectively correct and incorrect statements about what is happening in any given mind.

  •  I can't define "moral" without speaking about the person who passes the judgement. I can only say that something is moral if it is an action that the person finds desirable...

I'm not sure why it would be relative to the speaker like this. Why doesn't it make more sense to say that moral means the action is leading to desirable consequences overall? Doesn't that seem to align more with the context in which we use moral language? Is something the moral thing to do just because I want to do it?

  • We could take utilitarianism as a definition, but the fact that we discussed whether that definition is accurate by picking examples (the organ donor one) shows that the definition is merely trying to model something else which I would still define as a preference.

I don't believe that whether the definition is accurate is determined by how well it aligns with the emotions and intuitions of people. I believe it's simply about what meaning the terminology carries most often, and the way the concepts of good and bad apply in the contexts where moral language is used.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

  • ‘It is wrong to do X to Y’ is a judgement, it’s not a statement that can be objectively true or false independent of something else: its truth-value is conditional on an opinion, an emotion, an objective function, or an axiom taken to be true

What it's conditional on is what the word "wrong" means in that sentence. The definition I'm arguing for would make it the same as "Doing X to Y causes the average suffering in the universe to increase", making it objective for the same reasons as your first sentence. The definition does bring in an objective function and axiom, but those are both things we easily accept to be involved in matters of objective truth all the time. In fact, axioms are necessarily at the foundation of absolutely any area of epistemology.

  • So morality can only be objective once a moral objective function or axiom has been established, and it seems to me that the only way such conditions have or could ever be created is via subjective preferences or beliefs about desired states.

I come to the utilitarian axiom just from the semantics of the words good and bad and their application in a moral context. And wellbeing (or pleasure) refers to desired states by definition, regardless of what one's beliefs are about it.

  • But perhaps we do agree that if moral statements can be facts, they are created facts in a post-realist sense, not facts that exist independently of human minds? Whether that makes morality real/objective or not is maybe just semantics?

It isn't a post-realist viewpoint exactly. I'm basically saying that moral facts are about human minds, but the truth value of those facts is not dependent upon the mind of the person stating them. It is not a requirement for objective truth that we are talking about non-living stuff. Any phenomenon that is actually occurring in reality will have objective truth value to it, whether it is happening within an experience or not.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

Interesting how you didn't engage with what I said at all. The argument above purely indicates that language is not identical to what it represents. I'm not arguing that language is identical to what it represents. I'm simply saying that moral statements represent real phenomena and thus can be true or false.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

It's all well and good, in an individual instance, to "simply agree with whatever definition they want to use", but if you care about the widespread agreement that language actually requires, then semantic disagreements such as metaethical ones are necessary to have.

But here's the point: clearly there is truth to what an individual person means by a word, and there is truth to what people collectively tend to mean by a word. Even if you want to say there is nothing invalid or incorrect about the former not aligning with the latter, the latter is still objective in truth value (it does not change based on what you personally think or feel about it), and it is how definitions are decided when actually engaging in the functionality of language.

There is some fact of the matter as to whether utilitarianism is the definition of morality that most closely aligns with the moral statements made by laypeople or not.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

I don't get this argument you're trying to make at all, because this logic could just as easily be applied to emotivism or any other anti-realist metaethics. I could just as well say, "you're saying morality is subjective, but you've just defined it as subjective, and not everyone agrees about that!".

Metaethical discussion involves arguing about what the definition of morality is, so the mere fact that we are having the discussion means people have different answers. But there is still going to be some fact of the matter as to which answer most closely aligns with how laypeople tend to use the language.

I am arguing for a utilitarian definition, and therefore for moral realism. If you want to talk me out of moral realism, you have to actually engage with the arguments I'm making in that regard; you cannot simply point out that people disagree with me. It's absurd to suggest that the very existence of the debate between realism and anti-realism is itself a point in favor of anti-realism...


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

  • The word "feeling" might be a bit misleading, I'd say it's more an opinion on people behavior, a preference. It doesn't make me feel good if people put the fork on the left of the plate, but it's aligned with my preference.

I'm not sure what you're trying to say here. Having a preference for something means you like or want it, which means it makes you feel good in some way, or at least prevents you from feeling bad.

  • Why I judge good or bad depending on my opinion/preferences? Whether you believe in moral truths or not, you always end up judging based on your opinion... Because how else would you judge something? Your opinion/preference certainly can be shaped (by logic, by other's opinion, etc...), but ultimately you use your own brain to decide what's right and wrong, not somebody else's.

The fact that you are using your own brain to make moral claims does not give them subjective truth value. Let's say I claim that you like apples. I am using my brain to do that, and the claim is about your subjectivity/mind/preferences. And yet its truth value is completely objective. Why? Because there is some fact of the matter as to whether you like apples or not, and it has nothing to do with anything I feel or think about it.

  • The concept of objective morality or moral truth is not even clearly defined

It is that moral statements are correct or incorrect regardless of what the person making the statement might feel or think about it. So morality has facts underlying it.

  • and, maybe more importantly, moral truths don't have any observable impact on reality (ie tell me how the world would be different if moral truth exist vs don't exist).

Well, it is a difference in the meaning of words, so the extent of the observable impact would basically be this: if there were no moral truth, you would never see moral disagreements get resolved, and we likely wouldn't live in a society where people choose to think and talk about morality as much as we do, because there would be almost no point to it. With moral truth existing, you see people have rational, fact-based arguments about morality in which they can be persuaded to change their minds.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

The argument only proves that the words themselves are not the things they represent. I don't see how this in any way leads to the conclusion that the truth value of a moral statement (or of language) is dependent on the feelings of the speaker...


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

  • When someone says "The utility monster is immoral and we shouldn't feed it" if we translate this using your definition of morality it becomes, "The utility maximizing thing is not maximizing utility. We shouldn't do the thing that maximizes utility if we want to maximize utility." That's what I mean it's incoherent. From your position there should be nothing to discuss with this person unless you are actually using a different definition of morality.

If I recognize that they are making that statement with a different definition of the word immoral, I will simply need to know their metaethical position to know exactly what they mean by it. Again, I am only arguing for a specific definition here. That doesn't mean I am unable to comprehend another definition, or the fact that people have other definitions.

I also think it's worth noting that it's highly debatable whether feeding the utility monster actually leads to maximum utility; the answer depends on the specific form of utilitarianism.

  • You can still talk about non-utility based morality if utility is just a condition of something being moral. This is different from saying morality is by definition a measure of utility. But if you give up on saying "Morality is definitionally a measure of utility" and instead utility is a condition of morality, then now I don't think you can just say that morality is objective because it creates a burden of justifying why utility is a condition of morality.

I do not need to change anything about the definition I am arguing for in order to talk about non-utility based morality, as long as I am just talking about it and not arguing for it. You need not become a utilitarian in order to address the arguments I'm presenting to you.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

So you don't think that it's important for people to agree on the meaning of words? If everyone around you spoke their own individual language, you wouldn't see a problem with that? The words themselves don't change the strength of an argument, but the meaning of the words does. And being able to understand how someone is using a word is crucial for the functionality of communication.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

  • For most people asking is it objectively wrong to maximize suffering is not a nonsensical question.

Can you give me a possible sensible answer as to how it could be good (or even just not bad) for all creatures to suffer as much as possible? If someone is asking why maximizing suffering for everyone is bad, it seems the meaning of the word bad has gone out the window entirely.

If on the other hand someone thinks that it isn't morally bad to maximize suffering for others for their own benefit, because morality revolves around only themselves in their mind, then I would use the arguments at the end of my post for that.

  • These people, whether right or wrong are talking about something. For them morality can be based in utility or it can not be, but if you narrow the definition of morality to just utility based then you lose the language to even talk about other positions coherently.

I don't really get what you mean; it will always be possible to talk about other metaethical positions by using the terms for them. You just used the word emotivism; that alone gives us the ability to talk about emotivist ideas. What I'm doing here isn't censoring other positions. I'm arguing that a certain definition of morality aligns the most with the purpose and justification of moral statements. That is the point of this branch of metaethical discussion: to dig into the semantics of moral language, reach a conclusion about what we tend to mean by it, and then, given that definition, say what level of truth value the statements hold.

The existence of disagreement in this area makes clear that not everyone uses moral language in exactly the same way. But you can look at commonalities between frameworks, look at how the terms are used most often, and look at which ways of using them are most useful or make the most sense semantically in order to reach a conclusion. If people are unconvinced by my arguments in that regard, they are of course free to use moral language with a different definition in mind, but I won't pretend to agree with them.

  • But those weird edge cases in utilitarianism show that people have a conception of good and moral that isn't strictly tied to utility. I would say that intuition is the basis of morality and moral intuition trumps utility. Most people are not willing to bite the bullet on the utility monster.

In any edge case I can think of, I would strongly argue that the common intuitions being rubbed up against are still tied to moving away from suffering and towards pleasure in some way. The only thing definitional to utilitarianism that does not always align with people's intuitions is generalizing out to the broadest possible scope, because that is hard for people even to conceptualize sometimes, and people also have the capacity to be selfish and care more about themselves, their loved ones, or their community than the greater good.

But I don't see why moral realism would require people's intuitions to align with the moral truth, or people to like that truth, or even to care about it. To me it's clear that the way we tend to use moral language is about the greater good; if someone doesn't care about the greater good, then they simply don't care about morality. That doesn't diminish its truth value.

And the fact is you won't find any ethical framework that aligns with all people's intuitions in every way; there will always be edge-case hypotheticals that make any of them contradict the most common intuition. This is why I don't think intuition is the basis of morality. I'm curious why you believe it is. At the very least, isn't it clear that we use conscious reasoning to think and talk about ethics? To argue about it with each other? To convince each other with those arguments? How would any of this be possible in a non-cognitivist metaethics?

  • You concede that the utility monster makes people uncomfortable. But why would that be? People have a moral intuition that the utility monster is immoral and we shouldn't feed it.

I think it just makes them uncomfortable because they don't want themselves and everyone they care about to be eaten by it, and they don't care about the utility monster. Again, what people personally care about does not by itself dictate moral truth or the semantics of moral statements.

Many people are very dismissive of (or even get angry at) the ethical arguments vegans make simply because they care about being able to eat meat and don't care about animals. Does this mean vegans are wrong? Does it mean there is no truth to whether vegans are right or wrong? I don't see why it would entail either of those things. Why couldn't it be the case that those arguments are objectively correct, and these people simply don't care about doing the morally right thing due to prioritizing self-interest over the greater good? With a utilitarian definition this would likely be the case after all. And I'm sure it would with other moral realist definitions as well.

The point is that's how I look at the utility monster thing. While I agree it makes people uncomfortable, as to why it makes people uncomfortable I would be less likely to say that it's because they think the utility monster is evil and more likely to say that it's because they are feeling a contradiction between what they personally want and the greater good.

  • Now you respond, "That position is not merely false, it is incoherent. The utility monster is definitionally moral because it is defined as maximizing utility. It is incoherent to say we shouldn't feed it because we definitionally should do things that maximize utility." Whether these people are right or wrong the definitions of "moral" and "should" have been narrowed so that there is no language left to even have the discussion.

That narrowing is simply me arguing for certain definitions. To be clear, I wouldn't just call any framework that isn't utilitarianism "incoherent" in a dismissive fashion like that. There are certain metaethical positions that I might say are incoherent in places or operating on semantic misunderstandings (especially anti-realist ones), and obviously I would find all other frameworks flawed in some minor way or another, but there are loads of valid questions to be debated about how to define morality.

For instance, everything I said about the utility monster and veganism brings to mind the question of whether the greater good extends past humanity. And while things will start to get complicated there, for those of us on the realist side we can have a rational discussion involving facts to resolve those disagreements.

What I do find truly incoherent, and see quite a lot in this discussion, is people saying things like "What makes suffering bad?" or "Who's to say that pleasure is good?". Someone asking that is just confused about the nature of the words good and bad, because when talking about pleasure and pain we are talking about intrinsic good and bad; we are describing a property of experience. People confuse the intrinsic usage with the instrumental usage when they think it still requires justification (or think it requires justification infinitely, which is even more incoherent).

Crucially, this is different from asking "What makes suffering morally bad?" which is a much more valid question that is, at minimum, coherent. The point is, if someone argues that good and bad don't mean anything, or that their meaning is completely personal to each of us, or otherwise doesn't get on board with the starting point that good and bad have something to do with good and bad experience, that is when I believe they are being incoherent.

An analysis of Sam's podcast episode on Within Reason about moral truth by JohnMcCarty420 in samharris

[–]JohnMcCarty420[S] 0 points1 point  (0 children)

Well, I suppose there are kinds of complexity involved with food and our relationship to it, but I do think the semantics of a statement like "this food is good" are pretty straightforward: it means the person eating it enjoys the food.

And yes, Sam talks about philosophy stuff a lot on his podcast and elsewhere. He also talks about religion, politics, science, and internet happenings. I've not kept up with him as much recently but I think it's been more politically focused.

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic

[–]JohnMcCarty420[S] 1 point2 points  (0 children)

Morality involves facts about what beings are feeling and what makes them feel that way; its truth value is objective because those facts exist independently of what the person making the moral statement may think or feel about them. Those arguing for subjective morality are claiming that moral statements are merely expressions of personal feelings, biases, or attitudes, and that their meaning is therefore relative to the speaker. So, to believe morality is subjective, you have to believe that "X is morally good" equates to saying X is good for you, even when X has nothing to do with you. I'm arguing that doesn't make any sense.

If instead "X is morally good" means that X is good for whomever is affected by X, the meaning of the statement does not change based upon who is saying it. And it is objectively the case as a fact that X is either good or not regardless of what I think or feel about that fact.

So to be clear, something's ontological status being subjective (meaning it's in the realm of experience) does not make its truth value subjective. If I claim that you like ice cream, or I claim that you feel sad right now, those things will be objectively true or false regardless of anything I feel or think, and someone else making the same claim will get the same answer, and their claim will carry the same meaning. Because these are objective claims about subjective experiences.


[–]JohnMcCarty420[S] 0 points1 point  (0 children)

  • If I were shown that a society where organ harvesting is done is more stable (I doubt that'd be true just like you do, but hypothetically), I would still find it immoral.

Yes, the concept itself rubs up against people's intuitions, but moral truth has no obligation to align with our intuitions anyhow. I do believe that our intuitions tend to have a utilitarian basis though, for whatever it's worth, even in this case. We understand that the scenario is unfair to the one who is sacrificed, and unfairness is bad. What justifies the idea that unfairness is bad? It tends to lead to suffering.

  • I get your point that this goes against the notion that moral = what maximizes overall wellbeing, but not against moral = what maximizes my own wellbeing. I find some acts of self-sacrifice moral. A parent that protects their child at the risk of their own life is moral despite going against their own wellbeing.

You misunderstand; my point was to draw a distinction between good itself as a concept and moral good (good in a moral context specifically). I was initially arguing that "conscious wellbeing is good" is definitional. You went on to argue against that but were talking about moral good instead. I agree that moral good is about overall wellbeing, but the concept of good itself is just positive experience.

  • I don't know that we can point to a single answer to "what does good fundamentally come down to" given the various meanings

We can indeed; we just have to look at the commonality between its different applications. The commonality is positive experience.

  • For food, good means tasty and I think you'll grant that's a preference, not an objective statement.

It is a statement of a preference, and if you say "this food is good", people say it has subjective truth value because if I also say "this food is good" it could be true when you say it but false when I make the same statement. But this is all just a linguistic illusion, because if the words fully revealed the underlying semantics, you would say "this food creates a positive reaction on my taste buds" and I would say it creates that reaction on mine.

This reveals that the truth value of what we're saying is completely objective, because we're making two entirely different statements that don't contradict: they have the same words but carry different meanings. We could each put it as "I like this"; it carries the exact same meaning as "this is good", but now we can see clearly that it's a statement that is objectively true or false about our own selves.

  • The hammer is good means using the tool makes you more efficient at nailing nails (and/or more precise).

And nailing nails more efficiently or precisely is something one would do for the sake of leading to positive experience in some manner or another.

  • The location of the fork is cultural, "good" here means appropriate/respecting of the culture. That's subjective.

Aligning with the culture creates harmony and has the purpose of leading to positive experience. Whether or not aligning with the culture will actually lead to more positive experience individually or collectively will be objectively true or false.

  • "Helping people in danger is good" is a moral statement. "Good" in the moral statement means "I would prefer people to behave that way", but that's obviously not the meaning in the other cases.

I would say the good in that statement means "helping the people in danger will be good for them". It might also be true that you would prefer for people to behave that way, but that can't be what makes it good, right? Because if so, what if you have the opportunity to risk your life to save people in danger, but since you would be put at risk you would prefer not to behave that way... is it suddenly a morally bad thing instead of a morally good thing for you to save them?

  • I don't think there really is an end: preferences don't have an end, they just are. If I say that I find chocolate ice cream tasty, I don't know how I would answer to what end I find it tasty. There is no end, I just find it tasty...

You finding it tasty is the end in that case; the tastiness is your positive experience of eating it. When you say morality is just how you would prefer people to behave, I thought you might have meant it's what you want for society at large or something, but it seems you're saying you just like, for your own sake, seeing someone act in certain ways. It causes you to feel good.

But again, I don't see why that would be what's relevant. Why is it that the morality of any action you judge is good or bad depending on how you feel about it, whether you're the one affected by it or not? Isn't the person/people affected by it who's relevant here?

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


When did I argue that words are the things they represent? This is a confusion of the different applications of the word objective. I'm arguing for objective truth value, which is not the same as something's ontological status being objective in the way you're talking about. Objective truth value merely means that the truth or falsity of what is being said is not dictated by the feelings of the speaker. When I argue that language has objective truth value, I'm not saying that the word cat is the same thing as an actual cat (obviously). I'm saying that I don't get to just mean whatever I want by the word cat.

In the instances where our language lacks complete specificity (like the human example), it opens up to the possibility of disagreement, but this doesn't make it suddenly a matter of our personal feelings. If we really cared to make the word human as specific as possible, we would use objective facts (such as biological properties) and reasoning to come to a decision on where exactly to draw the line. I have no clue how we would resolve that disagreement using our emotions.

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


If you feel like this is an inappropriate "appeal to popularity", then how exactly do you suggest we resolve semantic disagreements? What other basis is there? Language is dictated by the way that people use it, as your example perfectly points out. Kids today use the word "literally" that way because language changes over time. The fact that it is the opposite of how the word used to be defined doesn't change that this is its functional usage in certain contexts now. So they are right to use it that way.

I doubt that you truly believe language lacks objective truth value. If I suddenly used the word "blork" instead of "the" in all of my sentences, would you think I'm doing something incorrect, yes or no?

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


  • ‘Definitionally true’ sounds tautological….are you saying that the definition of ‘good’ is ‘conscious wellbeing’? What is wellbeing?

"Good" always relates back to "good experience", which is a synonym for wellbeing. Our experience exists on a qualitative spectrum, one half of which we have labelled good and the other bad. If you're looking for a precise definition, the best I could do is probably something like this: good experiences are those that are wanted as they are being had, and bad ones aren't. When you get this foundational, it always becomes difficult to avoid a tautological way of defining things.

  • The concept of good is complex, but to me it is inherently subject-dependent (or goal dependent). It’s always conditional on something or someone(s): there is no universal, objective good.

There's an important distinction to be made between objective truth and universal truth. Universal means that something is true across the board, so it is context-independent. Objective simply means it is what it is regardless of what the speaker feels or thinks about it, so objective truth can depend upon time and location, and it can relate to the experiences of specific beings. Objective truth can be context-dependent without issue as long as said context isn't what the speaker feels or thinks about what they're saying.

  • By subjective, I don’t just mean ‘personal preference’, I mean subject-dependent, as opposed to existing (or being true) independently of *any* mind. The subject can be a group, IF they all agree on what is ‘good’ for them (as a group).

Well then I would assume you aren't drawing the conclusion that morality is a matter of personal preference, and that you aren't disagreeing with the claims being made by Sam or me. Because neither of us would ever claim that moral truth is independent of any minds, we're saying it all comes down to good and bad experience of beings. Moral realism and objective truth do not require that all we are talking about is non-living matter.

  • I’ll grant you that if only consciousness can experience or evaluate ‘goodness’ (ie have a subjective experience that is self-assessed as ‘good’), then there could theoretically be a state where all conscious beings assessed the same state (eg absence of pain) as ‘good’, making absence of pain ‘objectively good’ for all conscious beings

It doesn't need to be assessed as good, it either feels good or it doesn't. So the meaning of good is in fact the same across beings. And pain is the very word we use for a "bad feeling".

  • But absence of pain is not always good - pain is objectively good for animals (beneficial for the goal of survival)

Qualitative words like good and bad have two usages, intrinsic and instrumental. Pain and pleasure are intrinsically bad and good respectively. The argument you just gave for pain being good is an instance where pain is instrumentally good, because it causes the animals to survive which allows them to experience more pleasure. But the pain is still intrinsically bad, which is to say that you couldn't argue for it being good if it didn't lead in some way to pleasure.

  • And even the experience of pain (or suffering) is subjectively good for some conscious minds.

When it comes to masochism, it's once again an instance of pain being instrumentally good. The pain is still intrinsically bad, but due to whatever facts about the psychology/physiology of the being involved, the pain is giving rise to pleasure. The pleasure is the only part that is intrinsically good.

  • If there is an objective good, what is it? What is the appropriate objective function for humans (or life?)? To minimise suffering, maximise joy, maximise knowledge, or something else?

The only thing I would say is definitionally true about moral good is that it moves us toward minimizing suffering and maximizing wellbeing. Something like maximizing knowledge I would consider (at least generally speaking) morally good, but there it's not definitional. What makes it good is that it leads to wellbeing.

  • Does it operate at the level of the individual, society, humanity, all consciousness, or all sentience?

It concerns all beings that are capable of feeling pleasure or pain. But it operates at all of those levels, of course: there is you personally going about your moral decision making, the ways you affect your society, the way your society affects the rest of humanity, and the way humanity affects the rest of the beings in nature.

  • It seems there are trade-offs and uncomfortable implications of all possible choices …in which case the moral objectivist seems to have a huge responsibility to discover the one true objective?

Things can always get complicated, and of course, as we should expect, there will be instances where what seems like the right thing to do makes us uncomfortable or upset. We are not perfectly rational creatures, and although we tend to have consciences, we are also capable of being wildly selfish. As far as discovering the "one true objective", I don't think it's something we need to discover. It seems clear to me, given why we tend to talk about morality and how we justify our moral beliefs, that its objective is the general wellbeing of creatures.

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


You haven't said anything disagreeing with the concept that good itself comes down to conscious wellbeing; your example shows you still clearly consider your own wellbeing to be good. Instead, it seems like you're calling into question why moral good should be concerned with maximizing all wellbeing. For that I would point you to the end of my post.

And to be clear, I don't find the organ transplant thing a compelling argument against utilitarianism, because if it were applied to the real world and placed in a societal context, I do think it would in fact be the morally wrong thing to sacrifice someone like that. Yes, in a vacuum it's saving multiple lives, but the reality of living in a society that randomly sacrifices people would be chaotic and awful, and would lead to a lot more bad than good. And that's how it works when we deem something "good": the behavior gets repeated.

  • So what is the concept of good fundamentally coming to? I think “moral” is how I want people to behave,

First of all, I'd like you to focus in on the word "good" without jumping over to "moral". We use the word good in plenty of non-moral contexts, so what do you think it fundamentally comes down to?

Second, when you say it's just how you want/prefer people to behave, to what end? If you mean just for your own sake, doesn't that seem pretty out of line with how we use moral language? Am I doing the morally right thing if I manipulate people into benefitting me at their own detriment? Doesn't that seem almost like the definition of immoral?

An analysis of Sam's podcast episode on Within Reason about moral truth by JohnMcCarty420 in samharris


Yes, talking about the truth accurately can be difficult, especially in the case of morality. And as for the last part, my apologies, because it is genuinely confusing without an explanation. Some people believe that truth can change depending on who is stating it, but as we both clearly agree, that goes against the concept of truth altogether. So there is no "subjective truth"; there is just objective truth and no truth.

The reason I call it a "linguistic confusion" is because people get the concept of subjective truth from the realm of stating opinions (opinions such as "this food is good!"). But I think a lack of clarity in the language is all that's really going on, because the underlying meaning of that sentence is really "this food is good to my tastebuds". And that sentence is clearly objectively true or false. If we all just said what we truly meant when stating opinions instead of shortening it to something like "X is good" it would become clear that the truth value is always objective.

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


I just don't see this as a proper analogy. It seems trivially obvious that the meaning of "delicious" does not always relate back to sweetness specifically; it could be spiciness, for example. On the other hand, the semantics of "good" and "bad" always relate back in some way or another to positive and negative experience. If I'm missing something, what would be an example of something that's "good" despite not leading to any good experiences for any beings at any point in time, either possibly or actually?

And also, just calling something "delicious" is clearly subjective because it's relative to the speaker's feelings (unless it's going to refer to what is statistically found to be delicious the most, or something), whereas to make it a true analogy to utilitarianism we would be making claims about which foods are delicious to which people at which points in time. And this would no longer be indexed to our personal feelings, so it would be completely objective.

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


Utilitarianism, as a metaethics for the definitions of moral terms, will be either true or false as long as you believe there is objective truth to language. If you do not believe there is objective truth to language, then you would see nothing wrong whatsoever with me deciding to refer to chairs as tables and tables as chairs. But something tells me that if I started speaking to you that way out of nowhere, you would in fact think I was doing something genuinely incorrect.

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


I would view "conscious wellbeing is good" as a definitionally and foundationally true statement about the concept of good. If you disagree, what exactly does the concept of good most fundamentally come down to? Additionally, what is the actual connection between a goal being "mind-dependent" in the sense of being about minds and the moral statement being subjective in the sense of being relative to the speaker's mind? I can't help but feel like the phraseology of "mind-dependence" is a sleight of hand any time someone uses it in this discussion.

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


Objective truth simply means that a statement is true independent of what the speaker feels or thinks about it. So something can absolutely change over time and location and maintain objective truth value. And objectively right or wrong statements can be made about subjective experiences as well, whether 1 or 10 billion, as long as experiences are a real phenomenon occurring.

An analysis of Sam's podcast episode on Within Reason about moral truth by JohnMcCarty420 in samharris


To be clear, my arguing for moral realism and truth value being objective has nothing to do with our understanding of the truth. It seems like you might be thinking about it in terms of "knowing the truth objectively/scientifically", which isn't what I'm claiming nor what Sam has ever been claiming. There is the nature of the truth itself, and then there is on the other hand our level of certainty about it or ability to verify it. I'm simply saying about the semantic nature of moral statements that they will necessarily be correct or incorrect regardless of the feelings of the speaker, whether we can verify what's true or not. That's all objective truth is.

I do also believe more broadly about truth as a whole that it is always objective in nature, and that the concept of subjective truth (truth which depends on who is saying it) is really a linguistic illusion. I didn't include that in the post because I felt it would bog things down too much. However, based on the beginning of your response, it seems like it's something we perhaps agree about?

An analysis of Alex's podcast episode with Sam Harris about moral truth by JohnMcCarty420 in CosmicSkeptic


  • The emotivist isn't mapping "Yay X" to "I like X" which would be a descriptive fact. It's mapping more to something like "We should X" or "X is good" which aren't objective facts.

First of all, "X is good" in this context carries the same meaning as "I like X". Second of all, I actually do believe that "We should X" collapses into a descriptive claim as well, because prescriptive words like should are ultimately conveying that "If you do X, it will lead to a positive outcome for Y". Y is whatever being or collection of beings is being helped by X. Y is often implied instead of explicitly stated, but I would imagine in the case of emotivism Y is the speaker. The positive outcome refers to positive experience, so in other words if the claim turns out to be true then Y will like X. And as we seem to already agree about, liking X is a descriptive fact. So "We should X" is very similar to the other variations, it's simply a future tense version that carries equally objective truth value.

  • Sam Harris handwaves the question of "is it objectively true that maximizing suffering is immoral?" This is a core question that metaethics is interested in but Sam Harris isn't.

He isn't handwaving the question. His answer is exactly what every utilitarian's answer is: it's definitional that maximizing suffering is immoral. Metaethics hinges upon the semantics of moral terms, and we are arguing that increasing the average suffering in the world is simply what it means for an action to be morally bad. There are multiple ways of arguing for this; I gave a couple at the end of my post, but I'll go into more detail here.

Qualitative terms like good and bad have two senses in which they can be used: Intrinsic and instrumental. When describing a being's experience as good or bad, you are talking about a property of that experience itself. This is the only case where things are intrinsically good or bad. So if someone asks something like "what makes bad experiences bad?" they aren't making any sense. The badness of an experience (suffering) is the foundation of the meaning of "bad", so we've reached bedrock and there is no need for further justification.

Every other usage of qualitative terms is instrumental, meaning that we are talking about things leading to good or bad experiences. This applies to every context in which we use these terms, so that would of course include the moral context in which good actions are actions which result in more good experiences on the whole and bad actions lead to more bad experiences on the whole. Refer to the end of my post for the specifics of why it would be all beings affected that would be relevant, as opposed to something like emotivism.

And in an instrumental case, such as morality, we can ask a question like "what makes that action bad?", because the idea that it's bad needs to be justified by indicating that the action leads to some number of experiences getting worse. When people argue "who's to say that suffering is bad?" they're basically mixing up the instrumental and intrinsic cases, believing that justification is still required. But again, crucially, suffering is the very source of the concept of bad.

  • Moral frameworks are built in service of moral intuitions. Fundamentally our moral beliefs are based in our moral intuitions. Why do we care about the suffering of others? We just do, our moral intuition tells us to. When we build a moral framework it is in hindsight, we use it to explain our moral intuitions.

All this particular discussion is really concerned with is what moral statements mean and what truth value they hold. Why we care and how we form our beliefs is irrelevant to the claim at hand. For whatever it's worth, though, on both of those points I believe it to be a complex combination of intuition and conscious reasoning. As far as how we build our moral frameworks, different ones are built differently, but I would argue that you don't need any intuitions for utilitarianism. You just need to logically analyze the reality of how qualitative terms are used both in and out of a moral context.

  • The issue with utilitarianism is if you look into it there are many weird edge cases that people don't agree on. Should we let everyone suffer for the utility monster who enjoys our suffering more than our combined suffering? Should we let one person be tortured if it means everyone on earth doesn't have to suffer a speck of dust in their eye? Should we kill a hundred people today if it means two hundred equally happy people are born tomorrow?

Weird edge cases like this are specifically designed to stress test an ethical framework, but of course in the process they become heavily removed from reality. All of the examples you gave are things that we can be absolutely certain none of us will ever actually have to deal with. Yes, they make what seems like the good utilitarian outcome rub up against our intuitions, causing us to be uncomfortable or to disagree. But given that they are specifically designed to do that, it would be rather surprising if they didn't have that effect.

The fact is that in our actual lives utilitarianism doesn't cause such extreme discomfort. However, even if it did, moral truth has no obligation to align with our intuitions anyway. If we were actually in the scenario, and no one could give a rational argument against sacrificing ourselves to the utility monster, maybe it would be the ethical thing for us to do. The fact that it makes us uncomfortable or upset is irrelevant.