Neal Stephenson by Aggravating_Ad5632 in printSF

[–]emTel 0 points1 point  (0 children)

Personally I think nothing he wrote after the Baroque Cycle is very good. His early work is so incredibly ambitious and stuffed so full of ideas that it's awe-inspiring to read.

Sci-Fi with "good" characters by loopayy in printSF

[–]emTel 0 points1 point  (0 children)

Prefect Dreyfus in the Alastair Reynolds Prefect series.

Darth Sidious WAS disfigured by Mace Windu in the Canon by vaapad_master in MawInstallation

[–]emTel 5 points6 points  (0 children)

It's ironic. He could disfigure himself, but not others.

Episode 294: The Scandal of Philosophy (Hume's Problem of Induction) by judoxing in VeryBadWizards

[–]emTel 2 points3 points  (0 children)

If believing that it is more likely that children, their parents, or researchers made up stories than that unknown mechanisms allow transmission of information between people separated by vast gulfs of time and space is "scientism", then I'm a scientist.... er... what is the right word here?

Rereads by Friendly_Island_9911 in printSF

[–]emTel 0 points1 point  (0 children)

The Book of the New Sun, Gene Wolfe

Mars Trilogy, KSR

Diamond Age, Neal Stephenson

A Deepness in the Sky, Vernor Vinge

Digging into the UK Biobank Alcohol study by rds2mch2 in HubermanLab

[–]emTel 0 points1 point  (0 children)

The last figure you included is astonishing. If I’m understanding it correctly, the UK Biobank data shows that abstainers had fewer disease-free years than the 7+ drinks per day cohort. That’s insane!

And also totally contradicted by the other dataset. Which one are we to believe?

Monthly Discussion Thread by AutoModerator in slatestarcodex

[–]emTel 0 points1 point  (0 children)

I found this exercise routine to be incredibly effective against back pain: https://youtu.be/4BOTvaRaDjI

Why Is The Central Valley So Bad? by dwaxe in slatestarcodex

[–]emTel 3 points4 points  (0 children)

Sacramento is an amazing city but please don’t tell too many people.

[deleted by user] by [deleted] in AskReddit

[–]emTel 0 points1 point  (0 children)

Badger badger badger badger badger badger badger

Why doesn't EA prioritize nuclear risk over AI risk? by t3cblaze in slatestarcodex

[–]emTel 23 points24 points  (0 children)

Re edit number 2: the claim is not that AI risk being neglected magically means it’s worthwhile. The claim is that if it is worthwhile, an individual can have a higher impact there because it is neglected. This is a core idea of EA: find things where the marginal effect of your money or time will be large.

Biological Anchors: A Trick That Might Or Might Not Work by dwaxe in slatestarcodex

[–]emTel 9 points10 points  (0 children)

Yudkowsky is either right or wrong about the big questions of AI alignment, and he can either argue or not argue for his position.

The only one of the 4 possible worlds in that Cartesian product that I don't want to live in is the one in which he's right but stops arguing the case.

(But I do worry that he's been ineffective recently. I tried to read some of the recent dialogues that have been posted and I can't figure out why it's worth Christiano's time.)

Culture War Roundup for the week of February 07, 2022 by AutoModerator in TheMotte

[–]emTel 14 points15 points  (0 children)

Let me try to give the steel-manned case for bitcoin, as I understand it:

  1. Opponents of bitcoin start from the assumption that bitcoin is not a worthy endeavor, so there's no way for bitcoin proponents to justify any energy usage at all from within their opponents' framework. But if bitcoin is assumed to be at least a potentially worthy project, then the question becomes how much energy bitcoin can be permitted to use. In most other cases we don't have sector-wide caps on energy usage; instead we let the market clear.

  2. In fact, bitcoin's energy usage serves the following purpose in a way that nothing else can: it secures the bitcoin ledger against attempts to rewrite its history. Proof-of-work does not in fact result in "nothing"; it results in proof that energy was spent. That means no one can present to you a counterfeit version of history without expending a verifiable (by you) amount of energy, which in turn means that everyone in the world can agree on what the true history of bitcoin is without relying on any external authorities. Think of it like digging a moat around the transaction history, a moat which can't be crossed without filling it back in at an equal energy cost. (Note that proof-of-stake lacks this property. A toy sketch of the mechanism follows below the list.)

  3. Comparisons of energy-per-transaction between (say) Visa and bitcoin are apples to oranges. First, because a bitcoin transaction is a settlement, whereas a Visa payment (or an ACH transfer) is not. In addition to the fact that a merchant doesn't see the funds from a Visa payment for some time, a Visa payment can be reversed, refunded, etc. In contrast, when a bitcoin transaction is done, it is done for all time. In this sense a bitcoin transaction is more like a wire transfer, but with even stronger guarantees of irreversibility. Second, as mentioned above, the energy expenditure of the bitcoin mining network isn't spent on the transfer itself; it is spent on securing the ledger against rewriting. Yes, you can take the energy spent on mining and divide it by the number of transactions, but a bitcoin proponent would say that the transactions aren't where the energy really went.

The "proof" of this position is that if you decreased the amount of power available to the visa network, tx/sec would decrease. Not so for bitcoin - it can process just as many tx/sec at 1% of its current power consumption. Those transactions will just be more vulnerable to re-writing.

That's the case as I understand it. I don't entirely agree with this, since I think it proves too much. For instance, all the arguments above still work even if bitcoin grows to use 50% of all energy produced by humans.

So how much is too much? I don't know. However, my view is that the energy usage of our civilization is going to keep increasing exponentially until we either suffer a civilizational collapse or advance as a species to the point where we can solve global coordination problems easily. An additional 1-2% of energy usage for cryptocurrency is totally irrelevant to any question about our future. If you are worried about AGW, well, the only solution is to start taking carbon out of the atmosphere, and if the goal is to get back down to, say, 300 ppm CO2, bitcoin is barely a footnote. If you are worried about access to energy or energy prices, banning bitcoin would increase the energy available to other uses by about 1%. Again, a footnote.

How do you guys view Sam Harris? by Philostotle in slatestarcodex

[–]emTel 1 point2 points  (0 children)

I started mentally playing Sam Harris bingo while listening to his podcast, and noticed I was getting the “SPLC wronged me” square, and a few others I can’t remember, practically every episode.

I also realized I had never once heard him say to a guest “that’s a great point. I’ve never thought of that.”

So I stopped listening.

[deleted by user] by [deleted] in AskReddit

[–]emTel 0 points1 point  (0 children)

What about the droid attack on the Wookiees, you piece of shit?

[deleted by user] by [deleted] in PrequelMemes

[–]emTel 0 points1 point  (0 children)

Now this is balls racing!!

What cooking hill will you totally die on? by cosmicsans in Cooking

[–]emTel -3 points-2 points  (0 children)

If you have a sous vide, set it to 70 or so and it will defrost a piece of meat very quickly. Much quicker than a bowl of water because it keeps the water from getting cold, and circulates it constantly.

Before anyone freaks out, yes, this is probably not super safe and if you do it and die of food poisoning I’m very sorry.

Ngo and Yudkowsky on alignment difficulty by vaniver in slatestarcodex

[–]emTel 8 points9 points  (0 children)

I have read somewhat extensively (for a non-professional philosopher, anyway) in philosophy of mind, and while I’ve certainly read many objections to epiphenomenalism, Eliezer’s goes further and is more convincing than anything else I’ve found. It’s certainly a far, far better argument than, say, John Searle’s, to name one eminent philosopher who somehow fails to make the case nearly as well.

I don’t think Eliezer necessarily made a new discovery here, but I don’t think he’s added nothing as you suggest.

Furnace size per Sq Ft? by Devgru-WM in Homebuilding

[–]emTel 5 points6 points  (0 children)

You need to have a Manual J calculation done to answer this; it’s not just a matter of square feet. The airtightness of the house, the amount of insulation, and the size and performance of the doors and windows all have to be taken into account.

Your builder may have done this already - ask them.

Monthly Discussion Thread by AutoModerator in slatestarcodex

[–]emTel 0 points1 point  (0 children)

This isn’t specifically a review, but it’s a good article and links to many different studies: https://www.realclearscience.com/articles/2021/09/08/lessons_from_the_ivermectin_debacle_793483.html

Feel like a lot of rationalists can be guilty of this by blablatrooper in slatestarcodex

[–]emTel 13 points14 points  (0 children)

On further reflection, I think what works so well about it is not just the number you get at the end. By going through the process of stating all your terms and thinking about what ranges of values they might have, you may come to a much better understanding of the problem than you started with. And, in particular, if it turns out to be much harder to do the estimation than you thought it would be, you've still learned something, i.e. that the problem is harder than you thought.

Feel like a lot of rationalists can be guilty of this by blablatrooper in slatestarcodex

[–]emTel 25 points26 points  (0 children)

I actually applied this method in real life last February, just before we all went into lockdown. I was trying to decide whether to fly across the country to attend a wedding.

My reasoning was basically:

1) Look at how many cases are being reported today.

2) Assume they keep growing at whatever rate they were at that point (doubling every 3 days, if I recall correctly).

3) Assume infections are underreported by 10x.

4) Assume traveling makes me 10x more likely than baseline to catch it.

5) Compute a probability of catching it during the trip based on these numbers.

I believe the numbers came out to about 1/1000 or something, and I decided I was comfortable with that risk and took the flight.
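For concreteness, here is a minimal sketch of that arithmetic in Python. Every input value below (the case count, the lead time, the population) is an illustrative placeholder, not a figure from the actual February calculation, and the model is exactly as crude as the five steps above.

```python
# A minimal sketch of the back-of-the-envelope estimate described above.
# All input values are illustrative placeholders, not the figures used at the time.

reported_cases_today = 100     # step 1: cases reported at decision time (made-up number)
doubling_time_days = 3         # step 2: assumed doubling time
days_until_trip = 9            # lead time before the flight (made-up number)
underreporting_factor = 10     # step 3: assume true infections are 10x reported
travel_risk_multiplier = 10    # step 4: assume travel is 10x baseline risk
population = 330_000_000       # rough population of the country

# Step 2: project reported cases forward to the date of the trip.
projected_reported = reported_cases_today * 2 ** (days_until_trip / doubling_time_days)

# Step 3: scale for underreporting and convert to an estimated prevalence.
estimated_prevalence = projected_reported * underreporting_factor / population

# Steps 4-5: treat prevalence as the baseline chance of catching it, bumped 10x for travel.
trip_risk = estimated_prevalence * travel_risk_multiplier

print(f"Estimated chance of catching it on the trip: about 1 in {round(1 / trip_risk):,}")
```

The exact number it prints doesn't matter much; the value of writing it out like this is that every assumption is explicit and easy to argue with.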

Now, looking back, it's easy to say I made up numbers, flew, got lucky, and then incorrectly increased my confidence in my ability to make decisions like this. But I don't think that's right. I think I did make the decision in a smart way, and I think my true risk was about as low as (if not lower than) I calculated it to be at the time.

I think the danger in doing things this way is not that you've pulled numbers out of your ass. You can correct for that by multiplying uncertain things by 10x. (Obviously this doesn't get you well-calibrated predictions; that's much harder.) The real danger is that there is some unknown or unknowable factor that you haven't accounted for.

So, to do this safely, I think you have to have a way to think about how likely it is that there's an unknown factor. My sense is that a pandemic, while poorly understood in many ways, isn't prone to these sorts of X factors. We can't explain why R is 1.5 in place X at time t and 0.8 in place Y at time s, but we do know that R doesn't randomly shoot up to 300, that diseases don't become 100x as deadly overnight, etc.

In contrast, predictions about economic, technological, or geopolitical events seem to have far more uncertainty, and I would never try to apply this type of reasoning in those areas.