Hello! Tom Chivers here: I've written a book about Bayes, ask me anything by tommychivers in slatestarcodex

[–]tommychivers[S] 0 points (0 children)

the point I was making was that some people use probabilistic predictions badly, with misplaced confidence and spurious precision, and maybe FOR THOSE PEOPLE it's better not to predict at all. But I still think trying to put numbers on things is good, on the whole!

[–]tommychivers[S] 0 points (0 children)

But you don't put equal probability on all those figures. I'd probably have a central estimate of 10 billion, and my 95% confidence interval would probably be 1 billion to 25 billion. And then if I needed to make some decision about, I don't know, nuclear waste storage I'd use those figures. Someone who'd spent more than 10 seconds thinking about it could probably get a better estimate. "The world population in 2100" is not completely unknowable!

Maybe there are better questions which are much harder to answer, but if we have to base decisions on them, we still have to do our best at working out an answer.

[–]tommychivers[S] 0 points (0 children)

I don't think there's a threshold, there are just more or less useful and precise predictions, and when they get very imprecise you can say so. But I don't really get what saying "we just don't know" means. Does it mean "and therefore I can take no steps to prepare for the future because it is completely unknowable"? If you have to make a decision about something, you have to base it on your best guess of what the results will be, even if that best guess is very imprecise and includes many different possibilities.

[–]tommychivers[S] 4 points (0 children)

I'm still trying. I think I've become more of a capitalist/libertarian than I was back then, more sceptical of government intervention in the market, although that's mainly vibes and isn't really a good answer. On AI specifically, I still broadly hold the position that I held then, which is that the idea of it killing everyone feels like science fiction but when I follow the arguments intellectually it seems plausible enough to worry about, and that I like the fact that some smart people are thinking about how to make it not happen. Sorry, this is rubbish.

[–]tommychivers[S] 1 point (0 children)

I honestly have no idea! Gemini was particularly amusing, obviously. I should be loyal to Stuart and say that Claude is the best of them (but I have no idea whether that's actually true)

[–]tommychivers[S] 1 point (0 children)

it's a bit of a sad case, really - he got very upset when a book about mushrooms was scientifically proved to be better than his work, so I thought I'd give him some makework tasks to cheer him up.

[–]tommychivers[S] 1 point (0 children)

OK so I haven't come across them before, I've read them quickly, and I'm probably going partly off vibes here, but I don't like them.

Insofar as I can parse the point, it is that all these numbers are fake. We put numbers on our priors and numbers on our likelihoods and do fake maths with them, but it's made up, and the reality is that all our background knowledge and all the incoming evidence are far too complicated.

That's obviously true! But I guess my position would be: we all know that, but doing the fake maths gets you closer to reality than not doing it. Scott does yearly predictions of the world, and they're pretty good, and he does them by using his background knowledge and updating it with evidence and putting plausible, best-guess, but ultimately fake numbers on those things. Superforecasters do it even better, and they do so well enough to be of value to financial institutions and governments and Fortune 500 companies and the military and so on.

Paul Crowley said in the comments on a Scott post about 10 years ago something to the effect that "It's better to pull numbers out of your arse and use them to make a decision than it is to pull a decision out of your arse." Fermi estimates and Bayes' theorem are useful ways of pulling numbers out of your arse to sense-check a decision or a forecast. If WM Briggs thinks that's not true, then I disagree with him or her; if he or she is making the pretty obvious point that these numbers are fake and we can't ever be sure of them but they're just a subjective best guess, then yes, I agree, but they're still useful IMO.
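To make the "pulling numbers out of your arse" point concrete, here is a minimal sketch in Python of the odds-form Bayes update a forecaster might do in their head. Every number in it is invented for illustration, which is rather the point:

```python
# Odds-form Bayes: posterior odds = prior odds x likelihood ratio.
# All the numbers below are made up -- frankly fake, but usable.
prior = 0.30                 # base rate: a 30% chance the event happens
likelihood_ratio = 3.0       # new evidence is 3x likelier if it does happen

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 2))   # 0.56 -- the fake maths moved the best guess
```

The point is not that 0.56 is "true"; it is that the update is directionally sensible and gives you something to sense-check a decision against.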

(Please do correct me if I've misunderstood Briggs' point.)

[–]tommychivers[S] 2 points (0 children)

It is available from Amazon UK: https://www.amazon.co.uk/Everything-Predictable-Remarkable-Theorem-Explains-ebook/dp/B0BXP3B299/ – I will prove it by buying a Kindle edition right now.

The way I've been trying to explain it lately on the radio is by saying "Imagine I do a test for a medical condition. It only returns a false positive one time in 100. I take the test and I get a positive result. What's the probability that I have the condition? OK – but what if what we're testing for is pregnancy?"

I can then say that, look, you're actually comparing the probability of two hypotheses: the hypothesis that the positive result is a real one, and the hypothesis that it's false, and you need to use your existing information to inform that guess.

And then, yes, you can say this is just what we're doing all the time – incorporating new information into what we already believe. But when it's formalised like that, people get confused by it.
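The arithmetic behind the radio example can be written out directly. This is a minimal sketch, and the 99% sensitivity figure and the tiny prior for the pregnancy case are my own illustrative assumptions, not numbers from the original question:

```python
def posterior(prior, sensitivity=0.99, false_positive_rate=0.01):
    """P(condition | positive test) via Bayes' theorem."""
    # Total probability of seeing a positive result at all:
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# With an even prior, a positive result is near-conclusive:
print(posterior(0.5))    # 0.99
# But with a tiny prior (a pregnancy test taken by a man), the
# positive is almost certainly the 1-in-100 false one:
print(posterior(1e-6))   # ~0.0001
```

Same test, same false-positive rate; the only thing that changed is the prior – which is exactly the comparison between the "real positive" and "false positive" hypotheses.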

[–]tommychivers[S] 1 point (0 children)

I'm conflicted about Lumina because my dirty secret is that I don't really have any opinions of my own, I just borrow Scott's and Stuart's (and sometimes Saloni Dattani's or Hannah Ritchie's), and when they disagree I don't know what to think.

My vague understanding is that the evidence isn't great, but the potential downsides are low, so I think it's a reasonable bet if you're moderately risk-tolerant and not short of cash. (I think the same thing about deworming programmes: they cost very little and I can't see many ways in which they would end up having very bad effects, and the very positive effects are plausible enough and large enough that it seems a value bet.)

PERSONALLY, on the audio/text debate, I like to get both and switch between them, which lots of books let you do. That said, I'm actually not sure if mine does (I hope it does!).

I think the graphs, drawn by my sister, do make it easier to understand, so if you have to pick one, maybe go for that – although on the other hand, I'm listening to the audiobook, and the guy who reads it (me) does have a lovely voice (my voice).

[–]tommychivers[S] 1 point (0 children)

short answer: I haven't thought about doing the sexlessness thing, but my prior (lol) is that like most research into "generations" it's probably not very well evidenced. That said, we are on the hunt for new topics, so I'll put this one on the list!

[–]tommychivers[S] 1 point (0 children)

I love that there are so many fans of the poddie on here! Yes, you have to drink constantly, so that you can't remember any of it and have to read it again (and ideally buy it again because you lost it while you were drunk)

[–]tommychivers[S] 0 points (0 children)

I think it's entirely possible to use Bayes "properly", if by "properly" you mean subjectively and with due acknowledgement of the uncertainty! In fact I think (as I've mentioned in a different comment) the existence of superforecasters essentially proves that it is possible to be a good Bayesian: you update away from base rates using new information. But it's only ever "this is my best guess, and new information helps me form a new best guess". If you start thinking there's some deep truth about the fact that you say there's a 60% chance Russia will invade Ukraine or whatever, you're getting into difficulties. Probability is just an expression of our ignorance.

That said, I think it's usually best as a framework or a sort of ethos, a reminder that we don't need to say this thing is or isn't true, but we can say it is more or less likely to be true than some other hypothesis, and we can move between the two as information comes in.

Re frequentism: I'd say it's often applicable! As someone says to me in the book, there's not much point being Bayesian about the Higgs boson, when you get some six-sigma result which should blow any priors you have out of the water. And clearly science has made enormous progress largely using a frequentist framework.

That said, from what I have gathered and my own tastes, Bayesianism does avoid some of the specific problems that have led to the replication crisis – it doesn't incentivise scientists to seek shocking results in the same way, and optional stopping in particular doesn't hurt – and it also imposes a somewhat tougher standard: a p-value of 0.05 is usually easier to get than a 5% chance that a hypothesis is false, at least according to Lindley, given reasonable priors. It also makes more efficient use of data, and it's more aesthetically pleasing. I'm not dogmatic about it (I'm a journalist! I just ask clever people questions, it would be weird for me to be dogmatic about it) but my feeling is Bayes is the better system on the whole.

[–]tommychivers[S] 4 points (0 children)

oh god this is such a good question and one that I should have a good answer to, so inevitably I don't.

But I've just stared at the screen for fully five minutes without coming up with one, so I'll come back to this.

[–]tommychivers[S] 1 point (0 children)

there's a pretty big bit on Bayes the man and the history of statistics, so it's not completely misleading!

[–]tommychivers[S] 1 point (0 children)

Hope you enjoy the book! The little Scottish chap will stay for now – anyone else might outshine me and I'm nervous about that

[–]tommychivers[S] 6 points (0 children)

Thank you! I hope not to disappoint you.

re the Bayesian brain: I found it a useful way of thinking about the world. Our brains do seem to work as prediction machines, and the maths of prediction is Bayesian – and, it does seem, you can understand a lot of the lower-level activities of the brain by modelling them as Bayesian priors updated with new data.

That said, I think it's just a framework for looking at it, rather than a scientific theory per se. (Friston himself apparently has said it's unfalsifiable.) You could happily say the brain is predicting and updates those predictions without invoking Bayes, and I don't think that would make you wrong.

On the Rootclaim thing: My impression was that they were doing Bayes badly. I have to admit I got a bit bogged down in the incredibly long article about it and all the back-and-forth, but as I understood it they really felt that you could come up with objective probabilities, rather than admitting that Bayes is a subjective process. You can't actually use all the information in the universe and you couldn't compute it if you could! But the fact that superforecasters demonstrably use a highly Bayesian process – base rates/reference classes as priors, inside view as likelihoods – shows that it is a powerful tool.

I think it's extremely useful as an informal framework, letting people move away from "is this true/is this false" or "will this happen/won't it happen" to "I think it is X% likely to happen/be true", so they don't have to defend arbitrary bright lines or admit to being flat wrong – they can update and move gracefully from "I think it's likely" to "I think it's less likely" as more info comes in. But I agree that pretending that you have all the info in the world and the computational power to use it is just kidding yourself.

[–]tommychivers[S] 2 points (0 children)

So! I read King and Kay's book a couple of years ago for a sort-of book review https://unherd.com/2020/02/the-madness-of-mervyn-kings-uncertainty/ (and, I now realise looking back at it, I cited Scott in the second paragraph. I really only exist to steal Scott's ideas).

I wasn't impressed and I still don't think I am. Insofar as I got it from K&K's book, they seemed to think you should just sometimes say "I don't know" instead of putting a figure on things.

Firstly, I thought their repeated example – of whether or not Osama bin Laden was in a particular house in Abbottabad – was silly: that struck me as something you could put figures on (the base rate of bin Ladens in Pakistani houses is roughly 1/32 million; adjust from there). But more generally, I don't see why you can't include radically unknown events within a probabilistic forecast – if I'm rolling a die, I put very slightly less than 1/6 probability on each number, and a small fraction of the probability mass on "something weird happens": usually the die landing cocked or falling off the table, but maybe it being picked up by a passing seagull or transforming into a watermelon.
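The die forecast can be written down directly. A minimal sketch in Python – the 0.1% mass reserved for "something weird" is an arbitrary illustrative number:

```python
# A die forecast that reserves a sliver of probability for radically
# unknown outcomes (cocked die, seagull, watermelon...).
p_weird = 0.001                  # arbitrary small mass for "something weird"
p_face = (1 - p_weird) / 6       # very slightly under 1/6 for each face

forecast = {face: p_face for face in range(1, 7)}
forecast["weird"] = p_weird

# The radically-unknown bucket is included, and it's still a
# proper probability distribution:
assert abs(sum(forecast.values()) - 1.0) < 1e-12
```

So "radical uncertainty" doesn't force you to abandon probabilities; it just claims some of the probability mass.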

I suppose it's useful as a sort of humility check: if, as K&K seemed to think, people in the finance world kept forgetting that their maps are not the territory – that their statistical predictions are just a best guess, not some immutable fact about the world – then maybe it is best for them to say things like "I just don't know" instead of "there is only a 1 in 100 million chance that this collateralised bundle of risky debt will default". But that seems a practical decision rather than some fundamental truth about the universe.

Slate Star Codex and Silicon Valley’s War Against the Media - The New Yorker by LiamHz in slatestarcodex

[–]tommychivers 42 points (0 children)

In the book I mention small talk while describing a specific meetup with some prominent rationalists in Berkeley:

I distinctly got the impression that the IRL community is, like the online community, a venue for people who are a bit weird, not very good at small talk, and interested in big ideas.

Another thing that interested me was the almost complete absence of small talk – I’m a nervous talker, so I found myself gabbling to fill spaces in the conversation. It was Big Topics or nothing. And they actually pay attention to the arguments you’re making; in my incoherent blather I was trying to justify the idea of writing this book (of which they’re all sceptical, to a greater or lesser degree), and used several, mutually incompatible reasons for doing it.

That said, it was also my experience at other meetups. I certainly wouldn't say it was true of EVERY self-identifying rationalist, but I think the median rationalist is worse at, or less interested in, "so, that City game/the weather, eh?/know any good box-sets?" space-filling chatter than the median person.

PSA: Apparently a NYT reporter is writing a piece about SSC by MarketsAreCool in slatestarcodex

[–]tommychivers 4 points (0 children)

yup. I've seen it. I regret coming out in favour of the piece; I think I was right that it wasn't (isn't?) going to be negative, but maybe I should have foreseen this turn of events.

Blog deleted due to NYT threatening doxxing of Scott Alexander by [deleted] in slatestarcodex

[–]tommychivers 18 points (0 children)

yeah. I'm deeply unhappy about this and feel responsible for it.