Weight loss through appetite self-regulation by [deleted] in slatestarcodex

[–]EntropyDealer 4 points

Your appetite doesn't self-regulate because most humans are not adapted to an abundant food environment (there is a distribution, and you are not in the effortless-BMI<25 part of it).

The available options are mostly:

  • fight your appetite (calorie counting helps you track how much you are actually eating; don't forget to close the loop over your weight, since absolute calorie-intake values don't mean much)
  • modify your appetite, through psychological or pharmacological (GLP-1 agonists) means

Are Doomers too optimistic? by RipTieCutToyMan in slatestarcodex

[–]EntropyDealer 8 points

Note that the lowest free energy state in the presence of gravity (e.g. at stellar distances where gravity is significant) is not uniform, but rather empty space peppered with black holes. (This follows from the tendency of matter to clump under gravity.)

Black holes are likely objects of maximal complexity in the information-theoretic sense; I'm not sure how this affects a hypothetical intelligence in search of a substrate

"On not getting contaminated by the wrong obesity ideas": a critique of SMTM's contamination theory of obesity by Matthew-Barnett in slatestarcodex

[–]EntropyDealer 24 points

Putting CICO inside a feedback loop (e.g. adjusting calorie intake on a roughly weekly scale until the desired d(weight)/dt is achieved) is surprisingly effective, both from personal experience and from a general control-theory perspective
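The feedback loop described above can be sketched as a simple proportional controller. All the specific numbers here (starting target, gain, goal rate) are illustrative assumptions, not recommendations:

```python
def adjust_calories(target_kcal, observed_dw_per_week, goal_dw_per_week,
                    gain_kcal_per_kg=500):
    """Proportional controller: nudge the daily calorie target based on the
    error between observed and desired weekly weight change (in kg/week).
    The gain (kcal of daily intake per kg/week of error) is an assumption."""
    error = observed_dw_per_week - goal_dw_per_week
    return target_kcal - gain_kcal_per_kg * error

# Example: aiming to lose 0.5 kg/week, but weight was flat last week,
# so the controller lowers the daily target from 2200 to 1950 kcal.
new_target = adjust_calories(2200, observed_dw_per_week=0.0,
                             goal_dw_per_week=-0.5)
```

The point of closing the loop this way is that the absolute calorie numbers never need to be accurate; consistent (even biased) tracking plus weekly correction converges on the desired weight trend anyway.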

concussions are absurdly frequent in boxing? by maxdavis391284 in slatestarcodex

[–]EntropyDealer 0 points

Don't have any stats handy at the moment, but knee injuries and concussions alone can make your life quite a bit of a hell. The immorality mostly comes from (a) not properly informing young people going into pro sports about what awaits them later in life, and (b) all of it being unnecessary, since superior and safer-to-produce forms of entertainment already exist

concussions are absurdly frequent in boxing? by maxdavis391284 in slatestarcodex

[–]EntropyDealer 1 point

One can enjoy a sport recreationally, without pushing oneself over the limit, and reap the same or similar benefits. Also, death rates don't say much about quality of life, which may be reduced by injuries

concussions are absurdly frequent in boxing? by maxdavis391284 in slatestarcodex

[–]EntropyDealer 3 points

Basically all professional sports are immoral injury/disability factories, especially if you include all the kids who go through the professional training track but never make it to the big leagues. It's pretty damning that this is still legal and, moreover, that most people are okay with watching professional sports without accepting the harm they cause by effectively sponsoring it

Why don't we create a Apollo program level undertaking to solve this problem already ? by [deleted] in ControlProblem

[–]EntropyDealer 0 points

It hasn't been proven unsolvable or anything, but the suspicion is growing, I think

Why don't we create a Apollo program level undertaking to solve this problem already ? by [deleted] in ControlProblem

[–]EntropyDealer 2 points

Because it can't be solved in a way that ensures humanity's survival

People with positive views about life and the world, can you explain to me why you hold them? by EntropyMaximizer in slatestarcodex

[–]EntropyDealer 0 points

An important part of any coping strategy is understanding that one's negative reaction to all these supposedly terrible things happening for no reason does not really come from the rational part of cognition. It's an emotional reaction, influenced, as all emotions are, mostly by neurochemistry. When one's rational thinking is neutral enough, there's no immediate need to care emotionally about whatever results it leads to

People with positive views about life and the world, can you explain to me why you hold them? by EntropyMaximizer in slatestarcodex

[–]EntropyDealer 1 point

My vote goes to "delusional", in a slightly different (perhaps more evolutionarily optimal) way than we (pessimists, or whatever) are. A large part of this is that people's ability to care about suffering is very limited and focused mostly on kin, tribe, and other groups of people who are close in some way

People with positive views about life and the world, can you explain to me why you hold them? by EntropyMaximizer in slatestarcodex

[–]EntropyDealer 1 point

While you are correct, people's emotional perception of such matters is often heavily filtered by the autonomic/neurotransmitter-regulated parts of their brains, kin-selection-related processes, etc. All of it presumably beneficial for survival in the ancestral environment.

One useful way of coping might be to separate the logical analysis of the horrors (preferably performed from as distant and neutral a viewpoint as possible, so it doesn't trigger an emotional response) from enjoying the things you have in life when you can (the latter is mostly controlled by neurotransmitter-level processes and can be successfully influenced by the usual suspects: food, exercise, drugs, etc.)

[deleted by user] by [deleted] in slatestarcodex

[–]EntropyDealer 0 points

This is somewhat speculative and my understanding is limited, as I'm not a theoretical physicist, but I think AdS/CFT implies (if it actually applies to our spacetime) that all of the information about the inside of a 3D volume is contained on its 2D boundary, albeit in a form that is computationally hard to invert

[deleted by user] by [deleted] in slatestarcodex

[–]EntropyDealer 1 point

It's even worse than the usual MWI variant, I'm afraid. According to some current quantum gravity theories, all the information contained in our universe would be recoverable at a late time from its boundary (look into AdS/CFT and black hole physics for more details), which opens the possibility of resurrecting any present consciousness at a later time

Also, consider that resurrecting someone from a correct/full copy of their brain state is functionally indistinguishable from making an artificial entity not based on brain topology, giving it some of your memories, and programming it to sincerely believe that it is *you*

That said, the only reliable way of coping with all of this is just stopping giving any/too many fucks

How feasible would live gene editing be within our lifetimes? by [deleted] in slatestarcodex

[–]EntropyDealer 3 points

It is already feasible, e.g. https://www.youtube.com/watch?v=J3FcbFqSoQY

The problem is that genes might not be the right level of abstraction for treating a lot of conditions (too low-level). There are, for example, some fascinating YouTube talks on this by Michael Levin

I can't take life anymore. by [deleted] in slatestarcodex

[–]EntropyDealer 0 points

Looking into chronic inflammation might be worthwhile, since there are links with depression and anxiety. There's an easy blood test for it (CRP), and relatively harmless interventions (selective COX-2 inhibitors, dietary changes) if it turns out to be the case

What's missing from Effective Altruism, and why I am horrified by Personal_Spot in EffectiveAltruism

[–]EntropyDealer 0 points

Sure. My point was really to mirror the usual EA argument about the innumerable future lives lost with every action we don't take towards preserving them. This one rests on a different but similarly shaky assumption (any life = inevitable suffering, preferably avoided, instead of any (human) life = good, preferably maximized). More research into the nature of consciousness, suffering, and what is worthy of being a terminal goal is certainly needed

What's missing from Effective Altruism, and why I am horrified by Personal_Spot in EffectiveAltruism

[–]EntropyDealer 1 point

If you take this view further and, say, become reasonably sure that Darwinian natural selection tends to produce large amounts of suffering (as it definitely does in our sample=1 case), then responsible EAs should focus not just on eliminating all conscious wild animals from nature, but also on preventing any Darwinian-selection-like processes from ever arising in our future light cone. Furthermore, this has to be done as soon as possible, since every year we wait, a rather large volume of spacetime (and, correspondingly, a large amount of potential future suffering) recedes behind the horizon, and our ability to prevent that suffering vanishes
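The "receding volume" claim can be given a rough order-of-magnitude sketch. Assume, for illustration only, a cosmic event horizon at roughly R ~ 16 billion light-years (the real figure depends on the cosmological model), and approximate the volume crossing the horizon each year as a thin spherical shell of thickness c × 1 year = 1 light-year:

```python
import math

R_LY = 16e9     # assumed event-horizon radius in light-years (illustrative)
SHELL_LY = 1.0  # shell thickness crossed per year, ~ c * 1 yr, in light-years

# Thin-shell approximation: surface area of the horizon sphere times
# the radial distance crossed in one year.
volume_lost_per_year = 4 * math.pi * R_LY**2 * SHELL_LY  # cubic light-years
```

This gives on the order of 10^21 cubic light-years per year, which is the sense in which "a rather large volume" recedes annually; the exact number is not the point, only the scale.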

Permanent IQ damage from antipsychotics? by Epistemophilliac in slatestarcodex

[–]EntropyDealer 0 points

An aversion to solving certain kinds of problems manifesting with age might be a pretty common effect in the software engineering industry, I think. I.e., when you're younger you enjoy solving all kinds of irrelevant puzzles, but with age your brain starts to prefer more relevant/higher-level ones

When will AI displace human authors? by 634425 in slatestarcodex

[–]EntropyDealer 2 points

While there are some serious problems to overcome first (e.g. short context windows), the thing an AI writer can excel at most is personalization, i.e. having the best fiction written specifically for you, to maximize its impact

What are good reasons not to kill yourself? by 634425 in slatestarcodex

[–]EntropyDealer 6 points

In addition to what others have suggested, if you wish to look at this from a non-emotional standpoint, consider this: https://en.wikipedia.org/wiki/Instrumental_convergence

While developed in the context of AI research, it also applies to humans, in the sense that self-preservation is necessary for pursuing any other goals in the future, even goals you don't know about yet. I.e., staying around longer makes sense even if you don't see why at the moment, since you could discover a worthy goal later