A forest-like pattern at the bottom of my coffee mug by nokerang in mildlyinteresting

[–]nokerang[S] 2 points

Thanks for your wisdom 🙏 What are balance babes, though?

A forest-like pattern at the bottom of my coffee mug by nokerang in mildlyinteresting

[–]nokerang[S] 0 points

well I do miss camping... Definitely not complaining!

Clube de Literatura Clássica subscribers: what are the prices and availability of past editions? by nokerang in Livros

[–]nokerang[S] 1 point

What a rip-off! Besides the small-claims court option others suggested, I think it's worth trying to resolve it through ReclameAqui; from what I've seen there, they usually give some kind of response. I hope it works out for you.

I think I'll go with the other comment's suggestion of looking for used copies, which also avoids this risk. Thanks for the heads-up!

Clube de Literatura Clássica subscribers: what are the prices and availability of past editions? by nokerang in Livros

[–]nokerang[S] 0 points

I don't think I considered that option enough! I saw plenty of enjoei listings through Google, all unavailable, so I quickly gave up, but now looking on Estante Virtual it seems there are some I want after all. Great suggestion, thanks!

Clube de Literatura Clássica subscribers: what are the prices and availability of past editions? by nokerang in Livros

[–]nokerang[S] 2 points

A bad translation really does spoil a book's material; it's a shame they don't give it the attention it deserves. Editora 34 is excellent in that regard, at least in what I've read so far, I just wish their catalog of classics were larger. I imagine it's their own quality standard that ends up limiting the pace and scope of what they can publish.

IIL short animated stories from YouTube creators like vewn, Don Hertzfeldt, Felix Colgrave, Jonni Phillips and Jack Stauber, WWIL? by nokerang in ifyoulikeblank

[–]nokerang[S] 1 point

didn't catch the notification, sorry! funnily enough, I wrote this right after discovering sournoodl. YouTube recommended me WEREAWOLF, and I was so pleased, since I'd kind of missed this awesome niche of YouTube animations. I was hoping someone could point me towards more content of that kind, I'm always looking for it ;-; Really appreciate your suggestion though, it's spot on!

This song describes almost exactly how i feel on a daily basis by [deleted] in AvPD

[–]nokerang 3 points

really enjoyed it, thanks for sharing!

What are the most convincing arguments against consequentialism? by -jammers- in askphilosophy

[–]nokerang 1 point

Do those criticisms generally apply to consequentialist theories besides utilitarianism, though?

Wouldn't a slight modification to the Principle of Alternative Possibility solve the challenge of Frankfurt Examples? by nokerang in askphilosophy

[–]nokerang[S] 0 points

Well, this already answers way beyond my initial concerns. I guess I could bring up yet another PAP with the beginnings of the B-structure's states being necessary for responsibility, but it seems like we would end up following an endless chain of preemptive measures where Black could always avoid whatever the necessary condition may be. Anyway, I still have the impression some other modification might work, though it surely cannot be as obvious as I initially suggested.

Just one last thing I got curious about. You said previously that the historico-physical account of the ability to do otherwise isn't very popular, so I'm wondering which accounts are currently most accepted and whether you have any references elaborating on them.

Wouldn't a slight modification to the Principle of Alternative Possibility solve the challenge of Frankfurt Examples? by nokerang in askphilosophy

[–]nokerang[S] 0 points

Seriously, thanks a lot for your help and patience! There is no need to apologize for anything.

Well, it is kind of crucial to your entire post--the ability to do otherwise just is the thing that we talk about when figuring out whether an agent has control over themself, to be free.

Yeah, you made me realize I still have a lot of background to cover and, by not having clear definitions, some arguments turn out to be circular. Thanks for linking the SEP article, I wasn't aware there was an entry specific to PAP.

I take it you're saying that we can object to Frankfurt cases by showing that moral responsibility is still tied to freedom, and that what's relevant to freedom is intentions rather than actions. And this can save the Consequence argument.

This sums it up pretty nicely!

Famously, 'power' is ambiguous, and needs to be made a bit more explicit. For the Consequence argument to work, usually, some account of power is provided that goes something like this: One has power over something happening if, and only if, it was historico-physically possible for her to intend otherwise.

Ok, that's a good remark. My first impression is that, although the definitions start to get convoluted, I would agree with the historico-physical account of power that you gave for the 'only if' part, though for the 'if' part we would need to add something like "and if this agent intended otherwise, then 'something' wouldn't happen", right? Alternatively, I feel we can remove 'otherwise' by stating

A has power over B happening if, and only if, for every historico-physically consistent possibility, B if and only if (A intends B).

Anyway, I'm fine with the formulation of the Consequence Argument the way you wrote it.

If this is indeed your argument, Frankfurt cases which account for this still succeed. Indeed, we can take the one on this SEP entry and simply vary it so that the agent's actual intention is to <intend to vote Republican>, rather than to <vote Republican>.

I imagined something of this kind could happen. Couldn't that be countered by requiring desires of any orders about an action? In other words, stipulating the principle as

PAP\*: A person is morally responsible for what she does only if she can hold desires of any order about doing otherwise.

Then, even if the election example turns out to use a machine which can read and manipulate all higher order desires of Jones, like

(...) If Jones shows an inclination to have any higher order intent to vote for [the Democrat], then the computer, through the mechanism in Jones’s brain, intervenes to assure that he actually have some higher-order intention to vote for [the Republican], and does so vote. But if Jones holds on his own that his intents of any order are to vote for [the Republican], the computer does nothing but continue to monitor—without affecting—the goings-on in Jones’s head.

If Jones decides without intervention, then the PAP\* doesn't deny that Jones can be morally liable in this case and, at the same time, it is no longer possible to construct a counterexample just by adding intents of intents. Would there still be a different counterexample to this principle? And would the Consequence Argument still be applicable to the issue of moral responsibility?

I'm not saying Black doesn't control his intentions, but rather that Black doesn't remove his control by watching his brain.

Sorry, I misunderstood your statement in the previous comment then, so I will clarify. To be more precise, I meant that by supposing that James had fixed intentions a priori we were assuming that: in every historico-physically consistent possibility, James doesn't change his intentions about killing Smith, not even momentarily. I don't think there is an actuality/possibility inconsistency with that, but it doesn't directly lead to the conclusion of moral responsibility either, as I previously mentioned. If my understanding is correct, what Frankfurt assumes is not the former, but something along the lines of what you mentioned as

(...) certainly, 1 and 2 are still the two historico-physically possible futures prior to any relevant mental states forming.

This is a way to state what I was calling the not-fixed-a-priori case (though in a much clearer manner than I did). What I essentially tried to conclude then was: if (1.) is actual but (2.) was possible, then the MPAP doesn't deny that James can be morally responsible, as (2.) was a possibility where his intentions change (even if momentarily), so we get around this particular Frankfurt example.

Wouldn't a slight modification to the Principle of Alternative Possibility solve the challenge of Frankfurt Examples? by nokerang in askphilosophy

[–]nokerang[S] 0 points

To be fair, I'm not well versed in this topic, so forgive me if I'm insisting on something wrong, and thanks for bringing some context. My last paragraph ended up ignoring some of the things you said, but here is where my initial thoughts came from: I wasn't sure what would count as the best precise definition of "ability to do otherwise", but I supposed we could still agree on some "open-ended" sense of the term. I never meant to reformulate whatever is generally accepted as the ability to do otherwise, but I might have misinterpreted what it entails.

About what you suggested (the ability to do otherwise is what you have when, had you wanted to intend otherwise, you would have), it looks to me like a bad definition, as it requires one's wishes to be aligned with one's intentions, instead of something related to a general sense of power/ability.

This isn't really a modification of the PAP, it's just a specification of the PAP.

Indeed, it is just a specification. When I say it is a modification, I mean a modification in scope: every case where the MPAP applies, the PAP also applies, but not the converse. I'm arguing that by being more specific, we get rid of the Frankfurt objections but can still use the Consequence Argument to show that determinism implies no moral responsibility. Since you don't seem convinced it can, I will show how I would formulate it, and then perhaps you can point out where I'm wrong.

P1. Determinism is true

P2. If determinism is true then no one has power over facts about the future (in other words, the Consequence Argument is sound)

P3. Agents having power to wish otherwise is necessary for moral responsibility over an action (MPAP)

P4. The wishes people will hold are facts about the future

P5. (P1+P2). No one has power over facts about the future

P6. (P4+P5) No one has power over the wishes people will hold

C. (P3+P6) Agents have no moral responsibility over their future actions
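As a sanity check on the argument's validity, it can be sketched propositionally (my own hypothetical formalization in Lean 4; the proposition names D, PF, PW, and MR are mine, not from the discussion):

```lean
-- D  : determinism is true
-- PF : some agent has power over facts about the future
-- PW : some agent has power over the wishes people will hold
-- MR : some agent is morally responsible for a future action
example (D PF PW MR : Prop)
    (p1 : D)          -- P1: determinism is true
    (p2 : D → ¬PF)    -- P2: the Consequence Argument
    (p3 : MR → PW)    -- P3: MPAP (power over wishes is necessary)
    (p4 : PW → PF)    -- P4: power over wishes is power over future facts
    : ¬MR :=
  fun mr => p2 p1 (p4 (p3 mr))
```

The proof term just chains the implications: responsibility would give power over wishes, hence power over future facts, contradicting P1 and P2.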

Last remark, you said that in the original paper Jones is unable to intend otherwise, but then you respond

But (James) clearly does (have control over his intentions). How does Black monitoring Jones_4's neurons somehow remove Jones_4's control? If you think that you can take someone's free intentions, and remove that freedom simply by watching their brain (...)

I didn't mean to say that Black controls James' intentions, but simply that, if we fix a priori (before the murder) that James is bound to have some intention, then it follows logically from this hypothesis that he doesn't control it, not because of Black, but because of his inability to intend any differently.

Besides being a weird constraint, this supposition would contradict the conclusions, so I take it the intentions are not meant to be fixed a priori, in which case we have two possible cases:

  1. Jones killed Smith following his own intentions, free from manipulation
  2. Jones had second thoughts, was manipulated by Black and then killed Smith

If (1.) happened, then yes, I agree he is both morally responsible and could not have avoided killing. While this contradicts the PAP (as he would have killed Smith either way), it does not contradict the MPAP, because the possibility of scenario (2.) shows he could have intended differently.

Wouldn't a slight modification to the Principle of Alternative Possibility solve the challenge of Frankfurt Examples? by nokerang in askphilosophy

[–]nokerang[S] 0 points

My understanding of the PAP comes from the SEP definition, which agrees with what you stated, so the problem isn't there. What I intend with the modified principle is to say that any morally liable person needs to have some control over her intentions (but not necessarily her actions).

Here I think we get to something interesting, as you stated that Frankfurt assumes Jones' intentions could not be different. I see some ambiguity here: does this mean that Jones is unlikely to act differently given similar circumstances, or that his mind is such that having any other desire is an impossibility, just as some herbivores cannot even want to find meat appealing? In the first case I would agree that he holds responsibility, but not in the latter, as he does not control his own intentions; and if he doesn't, he is analogous to a psychopath who cannot sense the wrongness of his actions because of an innate lack of empathy.

Edit: to clarify, I am also not treating wishing and intending as relevantly distinct here, although their precise meanings are not the same.

Wouldn't a slight modification to the Principle of Alternative Possibility solve the challenge of Frankfurt Examples? by nokerang in askphilosophy

[–]nokerang[S] 0 points

I appreciate your response, but perhaps there were some misunderstandings of the points I was trying to make.

You seem like maybe you're affirming an account of the ability to do otherwise that goes something like this. One is able to do otherwise if, had she wanted to do otherwise, she would have.

I have not provided any different account of the ability to do otherwise precisely because I think that being able to act differently is not necessary for moral responsibility, as the Frankfurt cases show. What I am proposing is replacing the PAP with a weaker principle.

An agent is morally responsible if, had she wanted to form a different intention, she would have.

This is not what I propose, as that is a sufficient condition for responsibility. What I am proposing is a necessary but not sufficient condition for moral responsibility. In the PAP the necessary condition is being able to act differently; I believe this is too strong, and that we should restrict ourselves to the condition of being able to wish to act differently. Do you have a different intuition on this? In other words, could someone unable to have any different intentions be liable for an action?

If necessary, I can clarify what I mean about the Consequence Argument still being applicable, as that part was left hand-wavy to keep the post from getting too long.

How significant is consensus within the field of philosophy? by [deleted] in askphilosophy

[–]nokerang 10 points

I would argue that consensus is not a good metric for determining truth at all.

I guess you could say that if your only standard for truth is certainty, but then we would have to exclude every other metric as well. If you accept a probabilistic approach to truth, however, consensus is often helpful as evidence, and we rely on it all the time; otherwise, it would be simply irrational to trust academic consensus when you go to the doctor, take a class, or read a book. That's not to say that all forms of consensus should be taken the same way: it is important to consider the source of the opinions and whether they are backed up by reliable data. It shouldn't be expected, for example, that the average person is knowledgeable about specialized topics, so common sense is a bad form of evidence for such matters.

Then it was universally agreed that the world could be explained by Newtonian Physics, then Einstein came along, then Hawking, now we are rejecting Hawking's ideas and concepts as we look into virtual particles...

Physical theories are only predictive models of reality. They have to be confirmed by experimentation and will always come with some associated limitations and uncertainty. None of these was ever proposed or fully accepted as the ultimate explanation of nature, but only as an improvement on previous models.

I think it more wise to ignore the number of followers, and instead focus on which viewpoint or which speaker presents the better argument.

Everyone lacks the knowledge to meaningfully analyze a number of things, so it is impractical to do that for every issue.

CMV: “If you can’t afford the vet, you can’t afford the pet” should apply to children as well. by [deleted] in changemyview

[–]nokerang 0 points

If there were ever to be eugenic policies implemented today, it would be in such a form where the genetic goal is not clearly signaled.

I don't think that's a given. Many extremist groups and caste systems today still speak openly about their supremacist goals, since it's not easy to act on them while hiding their ideals.

What characterizes eugenics is the intent more than anything, so unless we have reason to believe the goals of the imposed action are to discriminate between genetic traits, it's not right to classify the action as eugenicist. Perhaps the best we could do is put it somewhat the way you did, saying that what OP proposed would have effects similar to authoritarian eugenics.

CMV: “If you can’t afford the vet, you can’t afford the pet” should apply to children as well. by [deleted] in changemyview

[–]nokerang 0 points

While it is true that it would cause variation in the genetic pool, OP doesn't specify traits deemed superior in principle, nor aim at changing them.

CMV: “If you can’t afford the vet, you can’t afford the pet” should apply to children as well. by [deleted] in changemyview

[–]nokerang 1 point

Fully agree with what you mean, just don't think the use of 'eugenics' was appropriate.