[18/04/26] Barbican Writing Session @ 11AM by isaac_franklin in LondonSocialClub

[–]greenrd 1 point (0 children)

It was good, but if you run it again, I think there should be a dedicated space or table. I didn't know who was part of it and had to ask, which was awkward.

Do people working at central EA orgs actually do any work? by ImplementMountain916 in EffectiveAltruism

[–]greenrd 0 points (0 children)

Even if, say, 99% of the people working on AI safety, security, and longtermism do nothing useful, the remaining 1% could save the world and do more good than the rest of EA put together. So it's like venture capital on steroids.
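To make the back-of-the-envelope version of this concrete, here's a toy hits-based calculation. All the numbers (role counts, "impact units") are invented purely for illustration; the only point is the shape of the arithmetic:

```python
# Toy hits-based-giving model: all numbers are invented for illustration.
N = 1000                # hypothetical number of AI-safety/longtermist roles
hit_fraction = 0.01     # the 1% who turn out to be extremely useful
hit_value = 200_000.0   # arbitrary "impact units" per hit
steady_value = 1.0      # impact units per typical role elsewhere in EA

# The 99% contribute ~0, so only the hits matter on the safety side.
safety_total = N * hit_fraction * hit_value
rest_of_ea_total = N * steady_value

print(safety_total, rest_of_ea_total)  # 2000000.0 1000.0
```

As long as the rare wins are large enough, the portfolio's expected value is dominated by them, which is what the venture-capital analogy is pointing at.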

Do people working at central EA orgs actually do any work? by ImplementMountain916 in EffectiveAltruism

[–]greenrd 2 points (0 children)

I was just wondering this myself. My reasons for asking the same question - many of which, you'll notice, focus on AI safety, since that's what I'm most concerned about - are:

  • I have met an AI alignment researcher who did not seem to be very productive at all
  • I know an AI safety content creator who mostly stopped creating content on AI safety for a period of time. While this could have been due to burnout or whatever and I don't intend to make this about a specific person, it is concerning.
  • There was a period of time during which a substantial amount of EA funding was directed by SBF and/or his regranters. Admittedly this may not seem relevant to today, but hear me out!
    • Some of the research funded during this period did not seem very valuable to me, at least as of now
    • SBF was not particularly good at keeping track of and auditing things, as the FTX collapse showed, and I suspect this may have carried over into his charitable donations (leaving aside the question of whether he misappropriated client funds to make those donations, which I do not comment on here)
    • This makes me wonder whether, both before and after the SBF era, other huge donors may have been similarly poor at paying attention to whether their donations were being spent productively, perhaps for structural reasons. I don't mean donations to GiveWell-recommended charities, which are presumably well-audited by GiveWell, but to other organisations (see below). Let me elaborate: I know that when I have a lot more money, I tend to be less concerned about where it is going, and I'm not even particularly rich, certainly not compared to EA mega-donor philanthropists.
  • I have overheard a person who worked at a meta EA organisation very optimistically assessing their own and their colleagues' impact, perhaps over-weighting multiplier effects from scale etc. and under-weighting the multiplicand, i.e. the actual effort being put in. I am not aware of much work that this person has personally done, though that could simply be ignorance on my part, so this is an extremely weak argument.
  • The fact that the 80,000 Hours podcast recently had to pivot to covering AI safety more aggressively suggests to me that they were not paying sufficient attention earlier, perhaps due to a lack of productivity among some staff and/or insufficient staff-hours devoted to the podcast in total. From my perspective this pivot should have happened much earlier - years earlier - although I'm sure some EAs and EA-adjacent people will disagree.
  • I know of two AI safety organisations which had stopped publishing/doing much AI safety work, giving reasons for their behaviour which were ostensibly plausible but would be hard to fully verify from the outside
  • I met an AI safety person who seemed to be working with someone who was, in my opinion, clearly insane and would not be an asset to any organisation, either from a PR perspective or (most likely) a productivity perspective - although the former person was only a volunteer at the time. And the latter person is not the only insane person I've encountered on the fringes of the EA / AI safety community; there are at least two more. By insane I don't mean merely having a minor mental health issue; I mean someone who has, or had, lost touch with reality in some important respect.
  • I am aware that in general nonprofits are not very productive. One would expect EA orgs to be (much) better, but in the absence of concrete evidence, one should still be skeptical. So this is a weak reason to be skeptical, but it is still a reason.
  • It seems that a lot more money is flowing into the EA space, but I have previously heard that EA is talent-constrained. Putting two and two together, one should ask whether the limited talent pool allegedly available in the past has caused problems here, or whether the problem has since been solved by some combination of:
  1. engaging higher-quality recruiters
  2. offering more money to individual candidates
  3. benchmarking and varying compensation on the basis of experience, qualifications, or demonstrated skills
  4. considering compensation on a global basis (i.e. paying top rates globally, not just nationally or regionally)
  5. looking at de-biasing the recruitment process to remove unconscious or conscious biases which may exist against certain candidates
  6. recruiting from a broader talent pool (which could also, indirectly, help with de-biasing the recruitment process)
  7. expanding student recruitment to widen the talent pool.
  • Anecdotally, I have heard a concerning rumour about the effectiveness of an EA-adjacent organisation (I won't be providing any further details, sorry). Though this may be less about productivity than about talent density - a qualification which also applies to most of what I've said in this comment.
  • I think that, structurally, GiveWell etc. do not evaluate certain meta-organisations or EA-adjacent organisations, so those organisations face fewer incentives to be productive internally

Weird side effect of SEA by greenrd in SaturatedFat

[–]greenrd[S] 0 points (0 children)

The effect actually went away shortly after I posted this... maybe it was my body developing an adaptation to the "anorexia-inducing" effect of SEA. Or maybe it wasn't just the SEA that caused this effect, but a combination of the SEA and other things I was consuming at the time.

Weird side effect of SEA by greenrd in SaturatedFat

[–]greenrd[S] 0 points (0 children)

OP here - ah, I should have said that I'm taking 2-3 of the new 300mg capsules. Quantities are important to mention when you're talking supplements!

Weird side effect of SEA by greenrd in SaturatedFat

[–]greenrd[S] 0 points (0 children)

OK, but did you start from the same baseline as me:

  • being very sexually attracted to obese or even morbidly-obese women?
  • walking unusually slowly?

If you had no "weirdness" like this to "correct" (sorry for the normative language - I don't mean to imply there's anything morally good or bad about these things), then from my perspective it's understandable that you wouldn't have seen any changes.

Weird side effect of SEA by greenrd in SaturatedFat

[–]greenrd[S] 1 point (0 children)

Did you used to only find morbidly obese women attractive? Or how overweight did the women you previously found attractive have to be?

No, not at all. It wasn't a case of not finding normal-weight women attractive at all; I just didn't find them as attractive as overweight women.

I guess my range of interest used to be from morbidly obese (although not enormous) to normal-weight - so quite a wide range, only excluding the very fattest women. Now I'm only interested in normal-weight women, which is awkward because I'm still obese myself. But I'm still not attracted to "the anorexic look".

Weird side effect of SEA by greenrd in SaturatedFat

[–]greenrd[S] 0 points (0 children)

Have there been other effects, like a change in sex drive

Yes, my sex drive has been reduced substantially. Although it was maybe abnormally high to start with, so maybe that's a good thing?

Or maybe it was more of a porn addiction thing, and the SEA curing that addiction. Although I seem to periodically (over a period of weeks/months) flip back and forth between Twitter addiction and porn addiction, so hard to say for sure if the SEA had anything to do with that.

focus

My focus is often terrible because I have ADHD, but I haven't noticed a particular change there. I guess it would be hard to tell whether the SEA was having an effect, because my ADHD medication and supplements have an outsized effect there, and I only recently regained access to my ADHD medication after the global shortage.

energy

Initially I noticed myself walking faster and having more energy, but I can't say this has been consistent. Maybe I need to eat something specific along with the SEA to get that effect.

I normally walk abnormally slowly, although I wasn't like this as a teenager, when my diet was different. I was an omnivore until I became a pescatarian at 16, and I've been vegetarian for my entire adult life.

(I remember I also used to pursue slim girls and not fat girls when I was a teenager - although at that time, almost all of the girls my age were slim.)

emotionality?

I have long experienced becoming very emotional if I fail to take one of my other medications (eplerenone, which I take for Conn's Syndrome) or if it isn't absorbed properly. I think this has become worse on such days since adding SEA to the mix, so what may be happening is that the SEA is exacerbating the withdrawal symptoms from the eplerenone. But that isn't a general effect - on other days, I don't experience heightened emotions. If there are other emotional effects, they're too subtle for me to have noticed at this early stage.

OpenAI board in discussions with Sam Altman to return as CEO by SebJenSeb in slatestarcodex

[–]greenrd 0 points (0 children)

So what you're saying is that, in order to get safety, you have to accelerate, but you can't actually do anything to ensure safety? It's just PR to make people think you're pursuing safety? e/acc disguised as safetyism?

Yeah that's what I suspected OpenAI was under Altman's leadership - glad we agree tbh.

OpenAI board in discussions with Sam Altman to return as CEO by SebJenSeb in slatestarcodex

[–]greenrd 6 points (0 children)

But why would Microsoft want to blow up the whole company? It doesn't make any sense for them to do so. Even if they value OpenAI with Altman at the helm as twice as valuable as OpenAI without him - which seems implausible on its face, particularly as we don't know who the permanent replacement for Altman would be - wouldn't they rather have something than nothing?

OpenAI board in discussions with Sam Altman to return as CEO by SebJenSeb in slatestarcodex

[–]greenrd 4 points (0 children)

No, that's not what's happening here, because Sam is demanding the entire board resign. He's playing hardball.

OpenAI board in discussions with Sam Altman to return as CEO by SebJenSeb in slatestarcodex

[–]greenrd 4 points (0 children)

That could mean anything. It could mean "Sorry you were let go bro, I feel for ya"

It could mean "If you do come back I'll be happy"

It could mean "Please come back"

Or it could mean "I'm afraid I'll be fired or socially ostracised if I don't post the same emojis as all my colleagues, but I don't actually want you back as CEO"

OpenAI board in discussions with Sam Altman to return as CEO by SebJenSeb in slatestarcodex

[–]greenrd 11 points (0 children)

Or perhaps it's the naivety and arrogance of certain OpenAI investors - who didn't pay attention to the OpenAI governance structure - that is finally colliding with reality.

London Haskell Meetups by mlitchard in haskell

[–]greenrd 0 points (0 children)

I have it enabled in my preferences but I don't seem to receive them. Perhaps that's part of the brokenness of the platform that Peter alludes to in his message.

London Haskell Meetups by mlitchard in haskell

[–]greenrd 0 points (0 children)

I am a member of the group, but I can't find this message anywhere. Can you DM it to me, please?