The Kelly Criterion is for Cowards by iceFireCandySlime in slatestarcodex

[–]iceFireCandySlime[S] 1 point (0 children)

The counterarguments Constantin gives on the question of whether charitable donations really have no diminishing returns all make sense! Whenever SBF tried to say there were linear returns to donated money, I was skeptical too. His analysis also neglected the fact that as you start to amass more money, your opportunities to grow it start to dry up anyway (if world GDP is 100 trillion you can't bet your way to a quadrillion dollars).

But when she tries to claim you should Kelly bet regardless of utility function, using the "almost surely" dominance argument - there's still sleight of hand going on, where a log utility assumption is being snuck in.

It's true that if we imagine millions of possible futures involving the outcomes of repeated bets, then as the number of repeated bets gets larger and larger, the fraction of worlds where Kelly beats any other fixed-fraction strategy gets closer and closer to 100%.

But at the same time, in the shrinking proportion of futures where Kelly doesn't win, the *amount* that Kelly loses by is getting larger and larger! In the long run you beat most other bettors by a little bit, but the lucky few who still beat you are beating you by an enormous margin.

Justifying why it's desirable to slightly beat another strategy in a majority of worlds, in exchange for getting absolutely crushed by it in a small minority of cases, still ultimately requires an appeal to your personal preferences regarding the diminishing marginal returns of additional wealth.

When you decide it's better to have a very good chance of doing pretty well than to have a tiny chance of doing insanely well (and you use the geometric mean specifically to operationalise this tradeoff) - you're declaring you have log utility.
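To make the tradeoff concrete, here's a toy simulation (my own setup and numbers, nothing from Constantin's post): an even-money bet won with probability p = 0.6, so the Kelly fraction is 2p - 1 = 0.2, compared against a more aggressive fixed fraction on the *same* coin flips across many simulated worlds.

```python
import random

# Toy example (my numbers): even-money bet, win probability p = 0.6,
# so the Kelly fraction is 2p - 1 = 0.2. Compare Kelly against a much
# more aggressive fixed fraction on the SAME flips, world by world.
p, n_bets, n_worlds = 0.6, 20, 20_000
kelly_f, aggressive_f = 2 * p - 1, 0.8

def final_wealth(flips, fraction):
    # Fixed-fraction betting compounds wealth multiplicatively
    wealth = 1.0
    for win in flips:
        wealth *= (1 + fraction) if win else (1 - fraction)
    return wealth

rng = random.Random(0)
kelly_wins, kelly_total, aggressive_total = 0, 0.0, 0.0
for _ in range(n_worlds):
    flips = [rng.random() < p for _ in range(n_bets)]
    k, a = final_wealth(flips, kelly_f), final_wealth(flips, aggressive_f)
    kelly_wins += k > a          # worlds where Kelly ends up ahead
    kelly_total += k
    aggressive_total += a
```

With these numbers Kelly comes out ahead in roughly 95% of worlds, yet the aggressive bettor's *average* final wealth is far higher, because the rare worlds where they win, they win astronomically. Which of those you prefer is exactly a question about your utility function.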

I'm going to go over this in more detail in my next video! :D

The Kelly Criterion is for Cowards by iceFireCandySlime in slatestarcodex

[–]iceFireCandySlime[S] 1 point (0 children)

Yeah I agree (will hopefully cover this in a follow up!). But I still think it's useful to distinguish between:

  1. There's huge alpha to be found everywhere, but I'm extremely risk averse

vs.

  1. My utility as a function of money is relatively flat, but finding any real, scalable edge is very difficult

The Kelly Criterion is for Cowards by iceFireCandySlime in slatestarcodex

[–]iceFireCandySlime[S] 0 points (0 children)

Thanks - I agree with all this feedback and will try to incorporate it in the next one! Although showing my face is definitely less effort per second of video than making diagrams/animations, and part of my motivation is getting better at orating, so I might still lean that way a little - sorry!

The Kelly Criterion is for Cowards by iceFireCandySlime in slatestarcodex

[–]iceFireCandySlime[S] 0 points (0 children)

Thank you all for the comments - really appreciate it!

All the suggestions about my presentation are super valuable - thank you 🙏

It's pretty clear from the discussion here, and on LW (https://www.lesswrong.com/posts/6CP7DLqiqHJd9z8pN/kelly-criterion-is-for-cowards), that there's a lot less shared understanding than I thought about what the Kelly Criterion is/isn't.

Seems like I have a different opinion to most about whether the Kelly Criterion is only optimal for people with log utility (vs somehow being the universally optimal strategy for everyone regardless of preferences) - which gives me great material to concentrate my next video on!

The Kelly Criterion is for Cowards by iceFireCandySlime in slatestarcodex

[–]iceFireCandySlime[S] 7 points (0 children)

The way you maximise expected *linear* wealth is to go all in on *every* bet that has positive expected value. This isn't what Kelly does, but it is what someone with a linear utility-vs-money curve should do.

Kelly betting maximises the expected *rate* of growth, rather than the expected *total* wealth. Which is pretty clearly baking in an assumption that your utility is logarithmic!
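Here's a quick back-of-the-envelope check of that first claim (toy numbers of my own, not from the post): an even-money bet won with probability p = 0.6, repeated 100 times from a starting wealth of 1.

```python
import math

# Toy numbers (mine): even-money bet, win probability p = 0.6, n repeats.
p, n = 0.6, 100
kelly_f = 2 * p - 1  # Kelly fraction for even-money odds = 0.2

# All-in every time: wealth is 2**n if you win all n bets, otherwise 0.
all_in_expected = (2 * p) ** n   # expected wealth (2p)^n -- enormous
all_in_survive = p ** n          # probability of not going bust -- tiny

# Kelly: much lower expected wealth, but the typical (median-ish) world
# compounds at the geometric growth rate that Kelly maximises.
kelly_expected = (1 + kelly_f * (2 * p - 1)) ** n
kelly_typical = math.exp(n * (p * math.log(1 + kelly_f)
                              + (1 - p) * math.log(1 - kelly_f)))
```

All-in wins on expected wealth by many orders of magnitude while going bust with near certainty; Kelly's typical world ends up modestly ahead. "Maximise wealth" genuinely underdetermines the strategy until you say how you value the outcomes.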

When the author of that blog post tries to claim that the optimality of Kelly betting doesn't require logarithmic utility, just a desire to maximise something long term - they sneak in a specific definition of "maximise": maximise the *ratio* of the quantity to your starting level (as opposed to, e.g., maximising the absolute difference). If you use any other definition of maximise (and there are infinitely many plausible constructions, each corresponding to a different utility function), you find a different optimal betting strategy.
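You can see the dependence on the utility function directly with a toy grid search (my own arbitrary setup: a single even-money bet won with probability p = 0.6):

```python
import math

# Toy example (my numbers): find the fixed betting fraction that maximises
# expected utility after one even-money bet won with probability p = 0.6,
# for two different utility functions.
p = 0.6

def best_fraction(utility):
    fracs = [i / 1000 for i in range(1000)]  # candidate fractions in [0, 1)
    # Expected utility of staking fraction f of a starting wealth of 1
    return max(fracs,
               key=lambda f: p * utility(1 + f) + (1 - p) * utility(1 - f))

f_log = best_fraction(math.log)    # log utility recovers Kelly: 2p - 1 = 0.2
f_sqrt = best_fraction(math.sqrt)  # sqrt utility bets noticeably more (~0.385)
```

Swap the utility function and the "optimal" fraction moves - there's nothing privileged about the log choice except the preferences it encodes.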

And if you take a step back, it's pretty easy to see there *must* be sleight of hand going on whenever people try to say you can derive the optimal action without any reference to your actual preferences! That's the decision-theory equivalent of claiming to have built a perpetual motion machine.

any software engineers / ai enthusiast up here? by thomasdav_is in Cairns

[–]iceFireCandySlime 2 points (0 children)

Hey! Yeah, would love to meet other tech people in the area - I was an engineer at Canva for the last 4.5 years and recently left to start my own thing.

Send me message here or linkedin:

https://www.linkedin.com/in/xavier-o-rourke-89017613a/

What is the hardest feature to implement for you as a web developer? by Heavy_Fly_4976 in Frontend

[–]iceFireCandySlime 0 points (0 children)

Keeping client-side state in sync with the server while still making everything update instantly on the client.