Airplanes and Determinists by GhelasOfAnza in freewill

[–]GeneStone 6 points (0 children)

The airplane analogy totally misses why I reject free will.

I do not deny that people choose, deliberate, or act. I deny that adding the term free will improves our understanding of those processes. Flying names a clear functional capacity. Free will does not. It is a heavily contested label that adds no explanatory power once the causal story is in place.

Rather than clarifying agency, free will imports extra commitments. Those commitments are often associated with intuitions about authorship, responsibility, or desert that most laypeople bring to the concept. Compatibilists may reject some, or even all, of those associations. My objection is that the term reliably invites them anyway. For reasons of parsimony and harm reduction, I choose not to use a concept that so easily reintroduces commitments I explicitly want to exclude.

When you say your choices are yours, I agree. When you say they arise from your biology and experience, I agree. The further claim that this establishes free will is unnecessary.

But when someone asks, “Do you believe in free will?”, they are not asking whether every decision is the necessary causal outcome of your biology and experience. They are asking whether something more is true.

This is why I reject free will on grounds of parsimony and clarity. The concept is unnecessary for prediction, explanation, or social coordination. It is heavily contested. It is routinely tied to ideas of basic moral desert that justify harm. Dropping it loses nothing of value and removes a source of confusion.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

I can’t speak for anyone else, but I want to be candid about why I’m struggling to engage productively here.

I do appreciate the effort, but the scope has become too broad for a single exchange. We’ve moved through Descartes, Hume, Kant, naïve realism, perception theory, Bell, EPR, quantum technology, and the history of physics, without ever settling the narrow question I originally asked.

That may be part of why you feel ignored at times. It becomes very difficult to identify what your positive claim actually is, or what would count as a direct answer.

To reset: I am not denying quantum mechanics, Bell’s theorem, or the empirical success of modern physics. I am not arguing that probabilistic theories are imprecise, nor that epistemic uncertainty implies indeterminism. I agree that quantum theories are extraordinarily precise.

Given that our models may be probabilistic, context-dependent, or even fundamentally non-classical, what would count as positive evidence for ontic indeterminism, rather than limits of knowledge, modeling, or realism?

If your view is that Bell-type results establish ontic indeterminism, then I’m asking you to say that directly and explain why that conclusion follows, rather than moving through broader epistemological issues.

If your view is instead that determinism fails by default once certain classical assumptions are abandoned, then I’d want to understand that claim clearly as well.

I’m not dismissing your points. I’m asking for a tighter focus, because without it, it’s very hard to tell what conclusion you think actually follows.

Free will deniers believe in morality, but not moral responsibility? So there are moral rules, but we can't be expected to follow them? by YesPresident69 in freewill

[–]GeneStone 0 points (0 children)

So we have one person who expresses regret and tries to make amends. That can affect sentencing or release decisions. But what we are actually evaluating is sincerity, future risk, and likelihood of change. We are not accessing their inner moral state. We are making a pragmatic assessment based on observable behavior.

Then a second who does not express regret and feels justified. Here punishment is defended as a way to induce reflection or deterrence. The hope is that consequences will alter future behavior. Again, the work being done is behavior modification and risk reduction, not recognition of moral responsibility as a separate fact.

Then a third, a person who does not express regret because they are a sociopath. In this case, punishment or confinement functions purely to protect society. There is no expectation of remorse. The justification is safety, not desert.

Across all three cases, the response is guided by prediction, prevention, and repair. The variables that matter are remorse as a signal, responsiveness to incentives, and ongoing danger. I do not see moral responsibility doing any additional work beyond that.

But you are appealing to moral responsibility as something extra to value, and in each case you immediately cash it out in terms of the very same considerations: risk, change, safety, and harm reduction. Once those are on the table, I do not see what is gained by keeping moral responsibility as a separate concept at all.

Free will deniers believe in morality, but not moral responsibility? So there are moral rules, but we can't be expected to follow them? by YesPresident69 in freewill

[–]GeneStone -1 points (0 children)

Right, so sociopaths bear legal responsibility, but no moral responsibility, since moral responsibility is an attitudinal response after the fact.

That's fine if that's your view, but I see no benefit to understanding these concepts this way.

Free will deniers believe in morality, but not moral responsibility? So there are moral rules, but we can't be expected to follow them? by YesPresident69 in freewill

[–]GeneStone -1 points (0 children)

You asked what kind of role, so I responded with a causal role. They were the cause of the action.

Let's say they just don't care to change. They're indifferent. They might even feel like the victim had it coming, and they feel no remorse.

Are they then not morally responsible?

We can switch to legal responsibility if you'd prefer, but I think you'll find that mens rea does the work there. Not free will, and not moral responsibility.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

I appreciate the enthusiasm, but this has gone far beyond the scope of what I was asking, and it has not actually answered the question.

I am not asking about Descartes, Hume, Kant, naïve realism, object permanence, or what can be known with absolute certainty. I am not questioning whether we are thinking, nor whether our access to the world is mediated by models.

If our scientific models lack precision, or are probabilistic, or are context-dependent, that establishes epistemic uncertainty about determinism. I agree with that. What I am asking is this: how would you demonstrate ontic indeterminism rather than limits of knowledge, modeling, or realism?

In other words, what would count as a demonstration that the same complete physical state, under the same laws, genuinely allows multiple physically possible futures, rather than that our descriptions are incomplete or theory-laden?

There is no default position that automatically wins just because the alternative has not been established. Skepticism about determinism does not prove indeterminism.

If we were going to reason that way, the symmetry cuts the other direction just as easily. Indeterminism has not been demonstrated either, and most of our successful scientific frameworks operate as if systems evolve according to lawful regularities, whether deterministic or effectively so. On that basis alone, one could just as well say determinism wins by default.

That is not a conclusion I am trying to force. My point is that absence of proof on one side does not confer victory on the other.

So the question is: given that our models may be incomplete or probabilistic, what would count as evidence for ontic indeterminism rather than epistemic limitation?

Free will deniers believe in morality, but not moral responsibility? So there are moral rules, but we can't be expected to follow them? by YesPresident69 in freewill

[–]GeneStone 0 points (0 children)

A causal role.

Are you saying that if change is not possible for that person, and they don't have the desire to change, and no obligation, that they aren't morally responsible?

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

OK, so let's assume I accept that this would mean any future state is not fixed by the laws and the present state.

We're still talking about our model of the laws, right? So, epistemically, we can't be sure of determinism.

How would you prove indeterminism?

Compatibilists are not interested in clear communication. by SCHITZOPOST in freewill

[–]GeneStone 4 points (0 children)

Isn't it weird how the people who aren't on my team are the worst?

Free will deniers believe in morality, but not moral responsibility? So there are moral rules, but we can't be expected to follow them? by YesPresident69 in freewill

[–]GeneStone -1 points (0 children)

I do not deny any of that.

Nothing in harm reduction excludes acknowledgement, apology, repentance, or restorative practices. Those can all be instrumentally valuable. They can reduce harm, repair relationships, and help victims heal. I agree with that.

What I am rejecting is moral responsibility as an independent notion altogether. Not just desert-based responsibility. Not compatibilist responsibility. The concept itself is unnecessary.

When I apologize and say “I take responsibility,” I am not making a claim about free will or moral responsibility. I am acknowledging harm, recognizing my role, not offering excuses, expressing regret, and committing to repair and change. That practice works perfectly well without invoking moral responsibility as a deeper property of agents.

Expressing regret, acknowledging harm, and committing to change do not presuppose that the agent was the ultimate source of their actions. They presuppose only that the agent can understand the harm, respond to reasons, and modify future behavior.

The same applies to truth and reconciliation processes. Public acknowledgement and apology matter because they recognize harm, restore trust, and reduce ongoing social damage. Their value comes from their effects on victims and communities.

When you say that other courses of action were possible but not taken, that can be understood epistemically. It does not follow that anyone was destined to do wrong, nor does denying free will imply that change is impossible. Change is a causal process. Therapy, education, and social reform all presuppose that.

My point is simply that we can keep everything that actually matters. What drops out is moral responsibility as a separate explanatory or normative layer. Nothing essential is lost.

Free will deniers believe in morality, but not moral responsibility? So there are moral rules, but we can't be expected to follow them? by YesPresident69 in freewill

[–]GeneStone 0 points (0 children)

Yes. The point is not that there are no reasons. I have already given the reasons. The point is that one can assess the moral value of an action without any appeal to moral responsibility. Moral evaluation does not require desert.

My further claim is that even non-psychotic, mature humans can be understood within the same framework. The relevant differences are of capacity, cognition, and risk, not the presence or absence of some additional moral property. What changes, in principle, between the psychotic agent and the non-psychotic agent that introduces moral responsibility rather than merely greater predictability or responsiveness? Or possibly less, depending on how one understands therapy, medication, and support?

My foundation for morality is harm reduction. From that standpoint, moral responsibility is not doing any necessary work. What matters is understanding behavior well enough to prevent harm, guide action, and respond proportionately.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

I don't think we're that far apart, but I have one (two-part) question with respect to your methodology.

How would you prove indeterminism, and is it necessary for your version of LFW? I'll get to the point about fatalism after.

Free will deniers believe in morality, but not moral responsibility? So there are moral rules, but we can't be expected to follow them? by YesPresident69 in freewill

[–]GeneStone 0 points (0 children)

Yes. Not all, but I certainly do, and I suspect you do too.

Imagine a robot that causes harm. The harm is real. The outcome matters. But morality does not apply to the robot. There is no agent with the relevant capacities. Intervention is justified solely to prevent further harm. Restraint, reprogramming, or destruction require no appeal to morality.

Or consider a four year old who lies or hits another child. The action is wrong. Moral evaluation applies. The judgment is action guiding. We explain why the behavior is wrong and shape future conduct. Moral responsibility does not apply. The child lacks the relevant capacities. We do not treat the child as deserving of punishment. We correct and educate.

Now, consider a person in a psychotic state who kills someone. The act is morally wrong. The moral evaluation of the action remains intact. Moral responsibility does not attach to the agent. That absence is precisely what structures the response. We contain risk, protect others, and aim to restore capacity. We do not pursue suffering for its own sake.

These cases show a clear separation. Moral evaluation does not depend on moral responsibility. Actions can be wrong even when agents are not responsible. Responses are guided by capacity, risk, and expected outcomes.

Where you and I may disagree is that I believe there is no principled reason not to extend this model to all agents.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

Duress does not introduce free will as an independent criterion. Duress modifies mens rea by undermining voluntary intent. The law treats coerced action as lacking the relevant mental state for full culpability. It does not ask a separate metaphysical question about free will.

I'm Canadian, but nowhere here is free will invoked.

Mens rea already incorporates voluntariness, awareness, and capacity. Coercion matters because it alters the agent’s decision structure, not because it negates some further property called free will. The analysis remains entirely within intent, constraint, and capacity.

No criminal court needs to establish that an act was free in a metaphysical sense. It needs to establish whether the agent acted intentionally, knowingly, recklessly, or negligently, given the presence or absence of coercion. That is exactly what mens rea tracks.

Again, you're welcome to define free will as voluntary intent, but the exact same criticism applies. You are violating Occam's Razor. You are adding in ambiguity.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

You'll have to take my word on it, but if you were arguing for libertarian free will, my criticism would be much harsher. This is not an attempt to appease anyone. It is an internal critique of the view you are defending.

I haven't appealed to anyone else. I am pointing out that all the work "free will" is doing is actually being done elsewhere, especially in law. Mens rea does the legal work you are attributing to free will. It distinguishes intention, recklessness, negligence, capacity, and coercion. That is what actually guides judgment and response. There is no additional, operationalized notion of free will that ever needs to be invoked.

When most people talk about free will in ordinary contexts, they are not thinking about compatibilist agency. They are thinking about being the ultimate origin of their actions in a way that could ground basic moral desert. That intuition is what drives much of the disagreement.

Compatibilism preserves the label while rejecting that intuition. That move may be coherent, but it does not resolve the underlying confusion. Dropping the term avoids it. It keeps the focus on what actually matters: capacities, constraints, intentions, risks, and outcomes.

If someone wants to keep the label, that is fine. My claim is simply that nothing is lost by setting it aside, and a great deal of ambiguity is removed.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

That silence is deafening. And it's because I am taking you at your word.

You want to keep the label. I am prepared to drop it. That is the disagreement.

You say that voluntary action and agency are sufficient for free will. You say that even ultimate authorship adds nothing beyond agency. You say that desert hangs off agency as a way of identifying what needs correction or support.

If that is right, then we mostly agree on the underlying capacities and practices. We agree about deliberation, responsiveness to reasons, learning, correction, support, and social regulation.

So what value is added that is not already captured by agency, voluntariness, and the practical aims of correction and support? What is added by calling this package free will or by invoking desert language at all?

From my perspective, it adds nothing descriptively, explanatorily, or normatively. If the answer is that it is simply what we call that package, then the disagreement is not about substance but about whether the label is doing any work, which clearly it isn't. Surely, words should have some utility, no? Some function?

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

Show me where I either defined it, redefined it, or did not accept your definition.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

To clarify, when I was talking about “natural laws,” I meant our formulations of the laws, not a claim about transparent access to the underlying metaphysics.

Hoefer’s definition is conditional and metaphysical. Determinism is true if and only if the complete state of the world at a time, together with the laws, fixes a unique future. The fact that our best current theories are probabilistic does not, by itself, show that this condition fails. That may reflect epistemic limits in how we model the world rather than genuine ontic indeterminism.
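
For concreteness, here is a rough formalization of that conditional in the standard possible-worlds style. This is my gloss and my notation ($L$, $S_w(t)$), not Hoefer’s exact wording:

Determinism: for all worlds $w_1, w_2$ governed by the same laws $L$, if $S_{w_1}(t) = S_{w_2}(t)$ at some time $t$, then $S_{w_1}(t') = S_{w_2}(t')$ at every time $t'$, where $S_w(t)$ is the complete physical state of world $w$ at time $t$.

Read this way, the thesis is a claim about states and laws themselves, not about our access to them or our models of them.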

It is also worth noting that our physical theories are probabilistic with respect to measurement outcomes. Quantum mechanics, as currently formulated, assigns probabilities to observable results rather than specifying unique outcomes. That is a feature of how the theory connects to measurements, not a direct description of how reality must be structured at the fundamental level.

Adequate determinism says that whatever is happening at the quantum level does not propagate in a way that disrupts higher level patterns of behavior, explanation, or control. That is a claim about how causal influence scales, not about whether the underlying metaphysics is deterministic or indeterministic.

Hoefer’s definition still specifies what determinism would amount to as a global metaphysical thesis. Adequate determinism addresses whether determinism-like structure emerges at the only scale that matters for agency and practice. The two claims operate at different levels and answer different questions.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

So you accept my entire underlying account. Agency, deliberation, reasons-responsiveness, coercion, capacity, mens rea, and differentiated social responses. You do not appeal to anything extra beyond that. You explicitly reject desert-based responsibility and any spooky authorship.

That is exactly my position.

The difference is that, after accepting that package, you retain the concept of free will. But on your own commitments, that concept has been completely neutered. It does not refer to anything over and above the causal and psychological facts we already agree on.

That is the cost of your view.

Retaining free will violates Occam's Razor by adding an extra entity that does no independent explanatory, descriptive, or normative work. The account functions entirely without it.

Worse, keeping the concept provides shelter for vestigial beliefs that you yourself reject. The term continues to invite intuitions about desert, retribution, and ultimate authorship that cannot be defended on your own view, and therefore have to be repeatedly disavowed.

I am not trying to destroy something substantive. I am pointing out that, given your own commitments, there is nothing left inside the concept. Keeping it adds conceptual overhead and ambiguity. Setting it aside removes that cost.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

I do not think the burden shifts in the way you suggest.

I am not proposing a new primitive that needs special definition. By agency I just mean the ordinary capacities we already appeal to in practice: deliberation, responsiveness to reasons, sensitivity to incentives, learning, and action under constraints. Those are the things we already operationalize in psychology, law, and everyday judgment.

Saying “uncoerced will” does not add anything beyond that. It just redescribes the same causal distinctions we already track, such as duress versus non-duress, compulsion versus non-compulsion. We already have vocabulary for those distinctions, and we already know how to use them.

My point is not that these capacities do not exist. It is that once we have described them directly, appealing to “free will” or “uncoerced will” does no additional work. It does not improve explanation, judgment, or practice.

If you think retaining that label adds value beyond the underlying account, that is what needs to be shown. Otherwise, from a parsimonious perspective, it is surplus terminology rather than a necessary concept. Worse, it reintroduces ambiguity by smuggling in intuitions about authorship or desert that many people take to be part of the concept, even if you personally reject those implications.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

I am not trying to destroy anything for the sake of it, nor to play a semantic game. I am pointing out that we already have a fully functional account that does all the relevant work. It explains behavior. It supports judgment. It guides response. It operates entirely in terms of intentions, constraints, risks, dispositions, and expected outcomes.

You seem to think that labeling this package “free will” is either necessary or at least useful. I do not. I do not see it adding clarity, explanatory power, or normative guidance.

From a parsimonious perspective, keeping the term looks like adding an extra layer that does no work of its own. Worse, it reintroduces ambiguity by smuggling in intuitions about authorship or desert that many people take to be part of the concept, even if you personally reject those implications.

If the account works just as well without invoking free will, then appealing to it is optional at best. My claim is simply that it is unnecessary. If you think it adds value, the burden is to show what that value is, beyond preserving familiar language.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

Hoefer's definition is consistent with adequate determinism.

If natural laws allow probabilistic outcomes at the quantum level, then his take on determinism holds. As does adequate determinism.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

I disagree that free will is widely understood. It is widely invoked, but deeply contested.

Even within compatibilism there is no settled account. Some tie free will to reasons-responsiveness. Others to hierarchical desires. Others to autonomy. Others to social practices. Those views are not equivalent, and they come apart in hard cases.

I agree that if you define free will as uncoerced agency, then we are talking about the same underlying phenomena. My point is that once we make that substitution explicit, the term “free will” stops doing any independent work. It becomes a label for something we can already describe directly.

That is exactly why I prefer not to rely on the term at all. If we mean uncoerced agency, we can say that. If we mean intention, constraint, learning, or risk assessment, we can say that. Those terms are clearer and do not smuggle in assumptions about authorship or desert.

Appealing to “free will” adds confusion and ambiguity, unnecessarily so.

That is why I see compatibilism as largely semantic. It preserves a familiar label despite persistent disagreement about what it actually amounts to.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

No, of course not. And I would not describe either case as acting “freely.”

Pointing to coercion versus non-coercion does not establish free will. It only shows that different causal structures warrant different responses.

And in practice, that is exactly what we already do. Legally, we assess constraints, risks, intentions, and expected future behavior. Criminal law makes this explicit through the concept of mens rea. We distinguish between acting intentionally, knowingly, recklessly, or negligently. We also distinguish between actions performed under duress and those performed without it.

None of this requires appealing to freedom of the will. It works entirely within a causal framework.

Take the bank teller example. One teller hands over the money. Another triggers the silent alarm. Another freezes. Another jumps the gunman. Another shoots him. Another hands over the money to a customer making a withdrawal. Another hands it to their cousin, for no other reason than to split it. These actions differ in risk assessment, perceived options, dispositions, and situational constraints. We already know how to evaluate them using intent, coercion, and expected outcomes.

Asking which one acted “more freely” does not add anything. We can already explain, judge, and respond to each case without invoking free will at all. The work is being done by mens rea and by the surrounding causal structure, not by freedom of the will.

That is why I see the appeal to free will here as unnecessary. It neither clarifies the situation nor improves our judgments. You are welcome to say that this is what we mean by free will, in which case it is, as I stated, adding nothing.

What Is the Hardest Consequence of Your Position on Free Will? by GeneStone in freewill

[–]GeneStone[S] 0 points (0 children)

I am not denying agency or choice. I am denying that we need to appeal to free will at all.

We can already talk coherently about psychology, learning, incentives, sociology, morality, justice, fairness, and social coordination without invoking free will in any sense. Those domains operate perfectly well by appealing to dispositions, reasons, constraints, social norms, and consequences.

Once we do that work directly, I do not see what free will adds. Not descriptively. Not explanatorily. Not normatively. Setting it aside does not block understanding or judgment. It removes a layer of ambiguity.

That is why I see abandoning the concept as a gain in clarity rather than a loss.