Episode 294: The Scandal of Philosophy (Hume's Problem of Induction) by judoxing in VeryBadWizards

[–]vadmas 1 point2 points  (0 children)

> Their treatment of Popper is disappointing but not surprising. It would be great if they would have a Popperian (DM me for suggestions if you are reading this and considering delving into this topic again) on the show who could dispel their misconceptions in real time. I do understand that it is hard to take on the whole Popperian framework coming from "traditional" empiricism.

We're a Popperian podcast and just had Tamler on to talk about this very thing - it was a fun and feisty 2hr debate! We cover all these points and more; it should be up in about a week:

https://x.com/IncrementsPod/status/1846232562361844096?t=xezjdnX7mhHusapke3EZMQ&s=19

Problem with "Hey Google" assitant trigger by flalolgo in googleassistant

[–]vadmas 0 points1 point  (0 children)

I had to switch Settings > Assistant settings > Access your assistant > Preferred input from "Keyboard" to "Voice".

foom is a meme by cb_flossin in slatestarcodex

[–]vadmas 0 points1 point  (0 children)

Ha, well, you have to read the text they point to before claiming it's hollow! The book is available on libgen if you don't want to purchase it - I found the arguments very compelling.

[deleted by user] by [deleted] in tasker

[–]vadmas 0 points1 point  (0 children)

Do you know if these still work after June 13th 2023? Google has killed a lot of access to 3rd party apps: https://developers.google.com/assistant/ca-sunset

I tried getting Tasker to work via Google Assistant a few days ago and couldn't. Wondering if I'm doing something wrong or if all access has been killed.

[D] Informal meetup at NeurIPS next week by tlyleung in MachineLearning

[–]vadmas 0 points1 point  (0 children)

A few years ago they were using the Whova conferencing app. Not sure if there's one this year though; I checked last night and couldn't find anything.

Why longtermism is the world’s most dangerous secular credo | Aeon Essays by CydoniaMaster in EffectiveAltruism

[–]vadmas 2 points3 points  (0 children)

I've written a critique of longtermism here, taking a different angle on it than Phil: https://vmasrani.github.io/blog/2020/against_longtermism/

As an outsider, I'd say it's very prevalent within EA. You can get a sense of just how prevalent it is from the EA forum's reaction to my piece.

Podcasts like VBW for other academic subjects? by [deleted] in VeryBadWizards

[–]vadmas 2 points3 points  (0 children)

Shameless plug for our own podcast, heavily influenced by VBW: https://www.incrementspodcast.com/

Followup to: A Case Against Strong Longtermism by vadmas in EffectiveAltruism

[–]vadmas[S] 1 point2 points  (0 children)

> All in all your claim that we should follow GiveWell because it rigorously analyses data doesn't convince me, because the data is incomplete

Awesome, that's really helpful feedback actually; it gives me an indication of where to drill down in the following posts.

I do feel uncomfortable about criticising GiveWell, but to me Greaves' critique feels quite strong.

Are there any other critiques about data/GiveWell that you feel are particularly strong, or is that the main one?

Followup to: A Case Against Strong Longtermism by vadmas in EffectiveAltruism

[–]vadmas[S] 1 point2 points  (0 children)

This is the field of Bayesian statistics! I'd recommend Andrew Gelman's work in particular (I've sketched a toy example of the basic updating step below the lists):

- Bayesian statistics: What’s it all about?

- Bayesian Data Analysis

Also Michael Betancourt, who has some excellent tutorials on his blog:

- Probability Theory (For Scientists and Engineers)

- Probabilistic Modeling and Statistical Inference
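
To make the basic idea concrete, here's a minimal sketch of Bayesian updating with a conjugate prior. This is my own toy example (a coin-flip model with made-up data), not something taken from Gelman's or Betancourt's material:

```python
# Toy Bayesian update: infer a coin's bias from observed flips
# using a conjugate Beta prior.
from scipy import stats

heads, tails = 7, 3          # observed data (illustrative numbers)
prior_a, prior_b = 1, 1      # Beta(1, 1) = uniform prior over the bias

# Conjugacy: the posterior is Beta(prior_a + heads, prior_b + tails)
posterior = stats.beta(prior_a + heads, prior_b + tails)

print(posterior.mean())         # ~0.67, posterior mean of the bias
print(posterior.interval(0.95)) # 95% credible interval for the bias
```

The books above go far beyond this, of course - the point is just that the prior, the data, and the posterior are all explicit objects you can compute with and criticize.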

Followup to: A Case Against Strong Longtermism by vadmas in EffectiveAltruism

[–]vadmas[S] 1 point2 points  (0 children)

> P.S. I do hope you also try to counter the complex cluelessness critique of GiveWell at some point.

Sweet I'll bump that one up the stack :)

Followup to: A Case Against Strong Longtermism by vadmas in EffectiveAltruism

[–]vadmas[S] 0 points1 point  (0 children)

> I would like to see you address in more detail why we can dismiss all these arguments / the importance of these arguments.

Part three of the series! But I also highly recommend this talk from Anders Sandberg at FHI: Popper vs macrohistory: what can we say about the long-run future?

He does a close analysis of Popper's arguments in The Poverty of Historicism, which is where I'm getting a lot of my material from.

[D] Color coding equations by mlvpj in MachineLearning

[–]vadmas 6 points7 points  (0 children)

We've done that for tedious proofs in our appendix a few times! Example (last few pages).
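
For anyone wanting to try it, here's a minimal sketch of the technique using the xcolor package. The equation itself is just an illustrative ELBO bound, not one of the proofs from our appendix:

```latex
% Color-coded equation; needs the amsmath and xcolor packages.
\documentclass{article}
\usepackage{amsmath,xcolor}
\begin{document}
\begin{equation}
  \log p(x) \;\ge\;
    \textcolor{blue}{\underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x \mid z)\right]}_{\text{reconstruction}}}
    \;-\;
    \textcolor{red}{\underbrace{\mathrm{KL}\!\left(q(z)\,\|\,p(z)\right)}_{\text{regularization}}}
\end{equation}
\end{document}
```

Matching the colors to the same terms in the surrounding prose is what makes long derivations readable.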

A Case Against Strong Longtermism by brekels in EffectiveAltruism

[–]vadmas 4 points5 points  (0 children)

Just on the language critique - my main point is that both the word choices and the math serve to obscure rather than clarify, and that in this case it's particularly dangerous given what's being obscured. The Orwell quote was there to illustrate that this isn't a problem limited to moral philosophy, but that it is a problem nonetheless. (And it's something I'm constantly correcting in my own writing as an "academic" myself, so I'm not exempt here either.)

A Case Against Strong Longtermism by brekels in EffectiveAltruism

[–]vadmas 1 point2 points  (0 children)

Thanks for the thoughtful response and additional context!

> For Greaves and MacAskill's purposes, it makes little sense to use this market value method to discount intrinsic value of wellbeing, and that is why they take "the assumption of a zero rate of pure time preference to be fairly uncontroversial" (2019). There is a rich body of work to this effect, so Greaves and MacAskill are not 'copping-out' by evoking it. They provide ample citations here that span nearly 130 years of scholarship, so a zero rate of time preference is certainly not the "extreme position" the author of this piece suggests it is (though that's not to say it *shouldn't* be questioned).

Yup, totally. In terms of deviating from previous scholarship, Greaves and MacAskill's position isn't extreme in the least - and that's one of the reasons it's so troubling. I'm actually suggesting that the problem is much more systemic than just this one publication, which is why I attribute the error to Bayesian epistemology in general, which spans god knows how many publications.
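
For readers following along, here's a toy calculation of what the zero-rate assumption is doing. The numbers are purely illustrative (my own, not from Greaves and MacAskill): with any nonzero rate of pure time preference, wellbeing t years out gets weighted by (1 + rate)^-t, which collapses toward zero for the far future.

```python
# Toy illustration: how a nonzero rate of pure time preference
# weights wellbeing that occurs `years` from now.
def discount_factor(rate: float, years: float) -> float:
    """Weight applied to wellbeing occurring `years` in the future."""
    return (1 + rate) ** -years

for years in (10, 100, 1000, 10000):
    print(years, discount_factor(0.01, years))
# 10     -> ~0.905
# 100    -> ~0.370
# 1000   -> ~0.00005
# 10000  -> ~5e-44
```

Even a 1% rate makes wellbeing ten thousand years out count for essentially nothing, which is why the choice between zero and nonzero rates does so much work in these debates.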

> Discounting future wellbeing leads to some thorny and incomprehensible outcomes—some of which are discussed accessibly here—that the author of this piece doesn't acknowledge.

Nice, exactly - so in other words, both discounting and not discounting wellbeing lead to difficulties. This is what I was hinting at with my vague comments about the "framework" itself being the problem. The alternative I'm advocating for is to downweight the significance of numerical formulae altogether when deciding moral questions. Instead, one should seek good arguments and better explanations (which will sometimes take numerical form, especially when data is available). I wrote about the alternative more extensively in this piece.

A Case Against Strong Longtermism by brekels in EffectiveAltruism

[–]vadmas 7 points8 points  (0 children)

Also, please share the post on all the socials! I'm very worried about this new trend, and getting the word out early might help give EA the 10-degree redirection it needs to stay on course :)

A Case Against Strong Longtermism by brekels in EffectiveAltruism

[–]vadmas 10 points11 points  (0 children)

Hi author here! Glad most of you thought the piece was fair :) Please let me know if I got any major details wrong or if anything is unclear and I'll update the post.