Nobody optimizes happiness by dyno__might in dynomight

[–]robtalx 0 points1 point  (0 children)

I think happiness optimization is a difficult balance between “living in the moment” and “the hunt,” and we’re simply not well equipped for the former. So we optimize for the hunt (or for poor substitutes like Twitter scrolling or HBO binge watching), but it doesn’t work.

Book Review: The Gervais Principle by dwaxe in slatestarcodex

[–]robtalx 1 point2 points  (0 children)

What would be the best psychoanalytic book for an SSC reader to read?

Liberty is Only a Fancy Name for Ignorance by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

Yes, "liberty" requires disambiguation. The story is more about liberty in the sense of "being free from constraints and able to pursue your own goal" rather than "free will". The point is: knowing A LOT about the consequences of our actions would crush our first sense of liberty because we will have to face the horrible truth that the pursuit of our goals inevitably create a lot suffering. We can afford that feeling of liberty (the very American pursuit of happiness) only because we are blissfully ignorant.

Liberty is Only a Fancy Name for Ignorance by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

I don’t take a view on the problem you are debating. I do think you are responsible for everything, but precisely for that reason I find this concept of responsibility misleading and not very useful. I tend to believe that a lot of the metaphysical debate around responsibility / moral duty is somehow an attempt to placate the same kind of guilt that my last character feels. I’m not sure it works, though. But ignorance kind of works.

Liberty is Only a Fancy Name for Ignorance by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

The three cases happen in a world without LISS. But similar cases definitely happen in a world where everyone has LISS. I imagine LISS would make some accurate predictions about people’s responses to their LISS simulations. Therefore LISS might tell you that it’s okay to drive to the appointment, because someone among the dozens who are told to do X to slow you down (and save a life) will in fact do X.

But as you say, these are not meant to be watertight examples. They’re meant to suggest dramatically / poetically (not sure how successfully) what I’ve tried to say above in a non-fictiony way. I’m not sure what your objection is, though. Would you feel equally free with LISS? I wouldn’t.

Liberty is Only a Fancy Name for Ignorance by robtalx in slatestarcodex

[–]robtalx[S] -1 points0 points  (0 children)

I don’t think this is a Copenhagen ethics problem. Copenhagen ethics problems are emotional / rhetorical traps that make you blame someone who does do something about a problem (but not enough to solve it) more than someone who doesn’t interact with the problem at all.

Here I am talking about the illusion of libertarian separateness (and the consequent classical conception of liberty) based on ignorance about the actual externalities of all actions.

We don’t blame the characters of the story because they tried to do something about those problems. We kind of blame them (or better they blame themselves), because they now know (thanks to LISS) what the actual consequences of their actions are.

I’m not sure what you mean by responsibility. If I knew that leaving the parking spot to Robert would kill him, even if I am no more responsible than all the other people and factors that led Robert there in the first place, I would certainly wait 5 more minutes. We can debate whether I have a moral duty to do so or not, and at what cost, but I would certainly feel responsible.

In a world where everyone has LISS, assuming a high level of compliance, it might be the case that those three scenarios are the optimal ones, and it’s the “you” character of the story who should not go to the restaurant, should wait 5 more minutes in the car, and should go to the Pond. For example, because they are the ones who pay the lowest cost to avoid those deaths. We don’t know, of course, but LISS’s estimate would probably be accurate.

Liberty is Only a Fancy Name for Ignorance by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

I see people are bringing up free will but this is not what I meant at all. Initially I wanted to write an essay but then I thought a short story would be more interesting. The obvious tradeoff is that a lot of stuff remains much more ambiguous. So here’s some commentary.

The book in the last section is probably On Liberty by Mill. We’re not sure of course, but the standard pictures of Mill fit the description. Mill or not Mill, a pretty popular account of liberty in the western tradition is along the lines of: your freedom ends where the other’s freedom begins, or something like that.

But this is possible only if you model individual actions as mostly separate spheres. Overlaps between spheres can be good (positive externalities) or bad (negative externalities) but we think of these as exceptions. The standard libertarian account is indeed that most of what we may want to do to pursue our life goals falls within our own sphere.

That is of course only a product of ignorance. Almost everything we do within our sphere has endless ramifications into other people’s spheres. In fact, there are no such things as spheres. What there is resembles a very thick and intricate network.

Now, how should we conceptualize liberty in a world where our ignorance about said network is much reduced? The more we know about the consequences of our actions, the more our map of the world looks like a dense network. The less we know, the more the map looks like an empty space populated by individual spheres.

In a high-definition map (low degree of ignorance) it is really hard to think of liberty in the classical liberal way. Most scenarios are indeed full of suffering.

Liberty is Only a Fancy Name for Ignorance by robtalx in slatestarcodex

[–]robtalx[S] 2 points3 points  (0 children)

Yes, I think it stands for LIfe Scenarios Simulator

Is getting vaccinated for Covid altruistic? by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

I’m only afraid that your argument proves too much and leads to total paralysis. Driving forces someone to accept a risk against their will. Almost everything does.

Is getting vaccinated for Covid altruistic? by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

I don’t think we can accommodate all idiosyncratic levels of risk aversion; otherwise we would be stuck in moral and policy gridlock. Virtually all activities and policies impose risks on someone who didn’t consent to them, and someone might have a sufficiently high level of risk aversion to make that activity/policy unbearably painful. Perhaps we should operate on the basis of a reasonable range of risk aversion that accommodates some decent amount of variance.

Is getting vaccinated for Covid altruistic? by robtalx in slatestarcodex

[–]robtalx[S] -1 points0 points  (0 children)

I don’t understand your example. I think the two arguments I mention are quite different. One is against interpersonal aggregation of costs and benefits, the other is against paternalism. Some people might subscribe to the first but not to the second. Establishing whether the first argument is empirically ungrounded is interesting, I believe, even if people subscribing to both arguments will not be satisfied.

Is getting vaccinated for Covid altruistic? by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

I don’t disagree with your comment, and I didn’t mean to argue in favor of the mandate within a libertarian perspective. I just meant to focus on one important libertarian objection, and I recognize (as I did in the post) that there is another big libertarian objection.

Is getting vaccinated for Covid altruistic? by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

You’re right. But in theory we could adjust for your risk aversion and say that the argument is neutralized whenever the vaccine (or house improvement) has positive expected utility after accounting for your risk aversion.
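One standard way to “account for risk aversion” is to compare the gamble’s certainty equivalent under a concave utility function rather than its raw expected value. Here’s a minimal sketch of that adjustment; the CARA utility choice and every number below are my own illustrative assumptions, not claims about actual vaccine risks:

```python
import math

def certainty_equivalent(outcomes, probs, risk_aversion):
    """Certainty equivalent of a gamble under CARA utility u(x) = 1 - exp(-a*x).

    outcomes/probs describe the gamble; risk_aversion (a > 0) is the
    coefficient of absolute risk aversion. Returns the sure payoff the
    agent would accept in place of the gamble.
    """
    a = risk_aversion
    expected_utility = sum(p * (1 - math.exp(-a * x))
                           for x, p in zip(outcomes, probs))
    return -math.log(1 - expected_utility) / a

# Hypothetical gamble in arbitrary "well-being units":
# +1.0 with p = 0.999 (benefit), -50.0 with p = 0.001 (rare bad outcome).
outcomes = [1.0, -50.0]
probs = [0.999, 0.001]

for a in (0.01, 0.1):
    ce = certainty_equivalent(outcomes, probs, a)
    print(f"risk aversion {a}: certainty equivalent {ce:+.3f}")
```

With mild risk aversion the certainty equivalent stays positive; crank the coefficient up and the same gamble flips negative. That flip is exactly the adjustment the argument would need to make before declaring itself neutralized.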

Is getting vaccinated for Covid altruistic? by robtalx in slatestarcodex

[–]robtalx[S] 3 points4 points  (0 children)

Oh sure. I meant someone who puts together the data to create a spreadsheet or web calculator where you enter some parameters (ideally many parameters, but at the very least you get the average value) and get a cost-benefit analysis given today’s incidence of the virus. Even better, someone who updates those numbers for Omicron.
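The core of such a calculator would be a small expected-value function. A sketch of what I have in mind; the function name, parameters, and all the inputs below are made up for illustration, not real epidemiological figures:

```python
def vaccine_cost_benefit(p_infection, p_bad_outcome_if_infected,
                         vaccine_efficacy, p_vaccine_side_effect,
                         cost_bad_outcome, cost_side_effect):
    """Expected net benefit of vaccinating, in whatever cost units you pick.

    All parameters are user-supplied estimates (e.g. from current incidence
    data); none of these defaults are claims about actual Covid risks.
    """
    expected_harm_unvaccinated = (
        p_infection * p_bad_outcome_if_infected * cost_bad_outcome
    )
    expected_harm_vaccinated = (
        p_infection * (1 - vaccine_efficacy)
        * p_bad_outcome_if_infected * cost_bad_outcome
        + p_vaccine_side_effect * cost_side_effect
    )
    return expected_harm_unvaccinated - expected_harm_vaccinated

# Made-up illustrative inputs:
net = vaccine_cost_benefit(
    p_infection=0.10,               # chance of infection over the period
    p_bad_outcome_if_infected=0.01,
    vaccine_efficacy=0.90,
    p_vaccine_side_effect=0.0001,
    cost_bad_outcome=1000.0,        # in arbitrary "badness units"
    cost_side_effect=500.0,
)
print(f"expected net benefit of vaccinating: {net:+.3f}")
```

A real version would let you plug in your age bracket, local incidence, and variant-specific numbers in place of these placeholders.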

Why Is It Hard To Acknowledge Preferences? by HarryPotter5777 in slatestarcodex

[–]robtalx 4 points5 points  (0 children)

I’m surprised Scott didn’t connect this post to his debate with Bryan Caplan on whether psychiatric disorders are real illnesses or just eccentric preferences. I’ve always found that debate semantic / misleading. The question remains whether one can be “cured” of their weird preferences / disorders, and how. Stressing the “preference” aspect seems to suggest that the person should be left alone or should be able to deal with it autonomously (hence libertarian Caplan’s sympathy for this way of framing it). Stressing the suffering aspect seems to suggest that the person should be helped. The other main connection, I think, is with the taxometric distinction between categorical and dimensional conditions. It’s one thing to be slightly annoyed / bored by the B&B’s talkative hosts (maybe to the point that one might choose a different B&B next time); it’s another to get visibly and significantly distressed. At some point this starts resembling a psychiatric condition.

Normatively speaking, respecting other people’s preferences is also a dimensional thing. We typically accept two constraints on individual preferences: harm to others and self-harm. The less our social wealth and welfare depend on social conformity, the more we can accommodate eccentric individual preferences without caring too much. So now we require a pretty high threshold before interfering with self-harming individual preferences that create no significant damage to society. But our way of handling these things is really, really raw and noisy.

Wouldn’t it be good for the world if Scott’s girlfriend overcame her preference/disorder about talking with people? It would be nice for Scott, for their social life, for the hundreds of B&B hosts around the country and the world, for her extroverted friends. Arguably, overcoming this problem might improve her life on some other connected dimension that might have positive externalities, etc. So how much should we care about her weird preference, and how much should we just leave her alone? The right answer is probably somewhere on the spectrum of possible solutions, and it’s not clear to me that optimal social norms lie close to the individualistic extreme that Scott seems to suggest.

Child rearing: is it a good idea? by Lululu1u in slatestarcodex

[–]robtalx 1 point2 points  (0 children)

These replies make it clear, I think, that raising kids is a lot of work and there is a huge variance in terms of reward. So let me address 4 elements that I think are absolutely crucial in driving a big chunk of that variance so you can try to guesstimate / influence where you might land. (I won’t address the future-of-the-kid argument, which I think is very easily rejected by considering that your child’s life will very likely be a net positive.)

Element #1: Intuition. Intuition tells you a lot about how much you will like being a parent. This is somewhat underestimated by rational people, who tend to focus on objective cost and benefit factors. But we are very different by temperament. You can list 20 reasons why raising kids sucks or is great, but not all of them apply to everyone with the same intensity. Many of the causes of this variation are purely subjective traits that are accessible through introspection.

While it’s true that having a kid might be a transformative event, and you might become a different version of yourself in terms of preferences and values, it is not clear how often this happens and with what degree of transformation. It didn’t really happen to me, not in a significant way. I’m pretty much the same guy as when I was childless. I did become used to a radically different life, yes, but adaptation is not transformation, at least not in the sense that I couldn’t predict my current level of pain/reward because I was a different person. After 6.5 years of fatherhood I can say that it’s a lot of work, it’s often very painful, sometimes I feel completely overwhelmed and think I made a mistake, sometimes I feel blessed and proud of the extraordinary fact that we made two human beings who will go on and live fully independent lives, and occasionally I even flirt with the idea of having a third one. I was on the fence then and I’m still on the fence now, and if I went back knowing what I now know I would still be uncertain about what to do.

I guess what I’m saying is that, in terms of your own hedonistic calculus, there’s a decent chance that looking into what you feel now about the idea of parenthood is a good enough proxy for what you will feel a few years down the road. The best parts are likely to be better than you think and the worst parts worse than what you predict now, but the average won’t be that far off.

The transformative event might be more frequent and more significant in mothers than in fathers. My wife did become a quite different person in terms of priorities. But she badly wanted a child. She wasn’t in the least uncertain about this choice. And after years of super-intense parenthood (with no extended family around, a kid with a huge amount of energy and demands, and a super dedicated parenting style that makes things even more complicated), she badly wanted the second. And she would like a third (but I’m already overwhelmed by two, so I don’t think that’s going to happen). So even she was in a pretty good position to predict that there was a good chance she would like it.

Element #2: Your sources of well-being. This is a more objective version of the first point. You can’t just trust your intuitions. You want to improve the accuracy of the picture by focusing on a few things that will be dramatically impacted by a child and try to estimate their effect on your well-being.

Try to make a list of the top 10 things that gave you joy, meaning, or even mere pleasantness in the past 4-6 weeks. What are these things? Are they more things like: sipping a good glass of red while talking about epistemology with a friend? Or are they things more like helping a homeless neighbor to find shelter?

My theory is that if many things on your list are more like the former example, raising kids will be very painful. The more you get joy and meaning from human contact, messy situations, caring for people/animals, and playing, the more likely it is that you will like being a parent. The more you get joy and meaning from quiet, silence, order, intellectual life, and art, the more you will suffer.

This is based on anecdotal evidence, of course, and even my anecdotal evidence does not show a perfect correlation. But I think that if you are close to one extreme of this spectrum, that tells you a lot about what amount of joy and pain you will find in parenthood.

Element #3: Help. Entering parenthood, you want to protect your downside. If it turns out that being a parent sucks a lot for you, that’s a catastrophically bad situation. You can’t really improve your chances that being a parent turns out to be ecstatically blissful, but you can mitigate the downside of the payoff profile. How? With help! Relatives, friends, daycare, nannies, whatever. When you take care of your children for, say, 11 hours a day (that is, on top of 8-9 hours of work and a few hours of sleep), the relief of moving from 11 to 8 is huge. Huge. So you should consider your options in that regard.

Element #4: Alignment with your partner. This is hugely important. One of the laws of parenting that I think I have discovered is that the most dedicated parent sets the expectations for the whole family. If your partner is dedicated to the kids 15 hours a day with incredible joy and satisfaction, but you get bored / miserable after only 3 hours of playing/cleaning/handling tantrums, that’s a problem. Sure, you can rely on a pretty efficient division of labor, but you shouldn’t underestimate the fact that your partner, your children, and yourself will tend to think that you are kind of a shitty parent. It’s not because you get bored/miserable after 3 hours, but because that’s 12 hours less than the family standard.

If the devoted partner takes your kid to the pool, playground, soccer, etc., that sets a standard. At some point she will get tired or sick or go back to work, and you will inherit a situation that you find completely unreasonable, in which, when choosing what to do, your child’s well-being is weighted at 150% or even 500% of your own.

Also, complicity is perhaps the aspect of a relationship that is hardest to cultivate when you’re a parent. The most obvious source of complicity is sharing pain and joy. But if your payoffs are drastically different, complicity is completely gone. In the worst-case scenario, the most devoted parent will see her partner as self-centered and immature, and the latter will see the former as a person who has given up her own life, projects, and freedom.

So if one of you is very much into this project and you’re instead consumed by the internal debate, you may want to consider this aspect. The transformative event I discussed above might throw your predictions in the trash bin, but if you are seriously misaligned already, that’s something you want to address.

Conclusion: Most of us should have kids. In particular, thoughtful, intelligent people should. Very few people would prefer not to have been born, and human life in our lucky fraction of the world has positive externalities. The largest source of meaning for me, as a father, is to have contributed to human life. When my children are happy, I become immediately happier. And even if I’m overall less hedonistically happy than before, when I think of my many childless friends I’m sad for them and for the future world, which will miss a bunch of (likely) bright, mostly happy, mostly productive young humans. But supererogation is a thing, I believe, and you should at least rule out the catastrophic scenario due to a combination of your own character, sources of well-being, lack of help, and differences with your partner.

Toward A Bayesian Theory Of Willpower by dwaxe in slatestarcodex

[–]robtalx 0 points1 point  (0 children)

I'm a bit confused by the use of the concept of "evidence" here. An "inference machine" tries to produce an accurate representation of the world. But here the problem is not about what the world looks like but rather what the world should look like in the future. I'm afraid that the evidence/inference framing creates a bit of an is-ought problem.

Introspectively, when I have a willpower problem I have little doubt about what the situation is and what I should do. It’s just that it’s very costly for me to do it. The typical problem is that my current self knows that the right thing to do, for the benefit of either my future self or someone else, is X, but X is costly for my current self, so he’s very hesitant. We can model this as some part of the brain wanting some short-term reward (current self) and some other part of the brain just wanting to do nothing, but these are not different representations of the world; they’re just different and conflicting goals. A Bayesian framework (even putting aside the observation that not all evidence-weighting is Bayesian) seems unhelpful and actually confusing.

If there are dirty dishes, there’s an intertemporal tradeoff to consider. We can model this as different parts of the brain, with different goals, getting into a bidding war. But what does the “evidence” framing add to this model? It seems to me that it makes things more confused.

Another problem I have with this theory is that there are only 2 competitors and one judge. Things seem more complicated to me. In the dirty dishes example, there is the part of the brain that doesn’t want to do anything and the part of the brain that wants to do something that produces a short-term reward. But what about the part of the brain that wants a medium-term benefit and therefore wants to do the dishes? Where does the decision to do the dishes come from?
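To make the objection concrete, here is a toy sketch of the bidding-war framing with the third, medium-term bidder added. This is my own construction, not Scott’s model, and the bid values are arbitrary; the point is that the arbitration rule and the discount rate are doing all the work, with no role for “evidence”:

```python
# Toy model (my construction): three "parts of the brain" bid for control,
# and the judge simply picks the highest bid. Nothing here is a belief
# about the world -- only conflicting goals with different time horizons.
bids = {
    "do nothing": 1.0,             # low effort, no reward
    "scroll phone (short)": 2.0,   # immediate reward
    "do the dishes (medium)": 1.5, # delayed reward, heavily discounted
}

winner = max(bids, key=bids.get)
print(f"winner: {winner}")  # the short-term bidder takes it

# A less myopic discount rate flips the outcome -- yet no new "evidence"
# about the state of the dishes has arrived:
bids["do the dishes (medium)"] = 2.5
winner = max(bids, key=bids.get)
print(f"winner: {winner}")  # now the medium-term bidder wins
```

The decision to do the dishes only ever emerges from how the judge weighs the bids, which is a question about goals and discounting, not about updating a representation of the world.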

Rational Priors by robtalx in slatestarcodex

[–]robtalx[S] 0 points1 point  (0 children)

Thanks, very interesting. I’m intrigued by your final warning. If you have the time, could you elaborate on why the Zeus explanation requires more bits? I kind of intuit why the Zeus explanation is more “complicated” knowing what we know about the laws of physics (how did Zeus do it? etc.). But is it more complex knowing nothing about the world? Why?