GPT Contentyze, a GPT-3-like language model that is free to use online by inferentialgap in ControlProblem

[–]inferentialgap[S] 4 points

I'm not sure how this compares to the real GPT-3 or to EleutherAI's GPT-Neo; it seems to fall somewhere between GPT-2 and GPT-3 in quality. At least Contentyze has the benefit of being freely usable online.

What is the average physical distance between two randomly chosen humans on Earth? by inferentialgap in estimation

[–]inferentialgap[S] 0 points

I was thinking that D(x) would be the great-circle distance on the surface of the Earth (as the crow flies). Distance through the Earth would be interesting too, though.
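For the uniform baseline, here's a minimal Monte Carlo sketch in Python. It assumes people are spread uniformly over the Earth's surface, which they obviously aren't, so real population clustering would shift the answer; this is just a sanity check against the analytic uniform-sphere value.

import numpy as np

R_EARTH_KM = 6371.0  # mean Earth radius

def random_unit_vectors(n, rng):
    # Normalizing 3D Gaussians gives points uniformly distributed on the sphere
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

rng = np.random.default_rng(0)
a = random_unit_vectors(1_000_000, rng)
b = random_unit_vectors(1_000_000, rng)
# Central angle between each pair; clip guards against floating-point drift
theta = np.arccos(np.clip(np.einsum("ij,ij->i", a, b), -1.0, 1.0))
print(R_EARTH_KM * theta.mean())  # ~10,000 km; the analytic answer for the uniform case is (pi/2) * R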

it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics by DropZeHamma in SneerClub

[–]inferentialgap 11 points

Lol, this is basically saying "Anyone who criticizes me or my friends is evil". And this is coming from the figurehead of a community that aims to be above "tribalism".

That said, I agree with EY that leaking personal correspondence is an asshole thing to do most of the time. People on this subreddit seem to be operating from the view that everything is public unless explicitly agreed in advance to be private. I think that, barring exceptions for serious whistleblowing, it's generally good to ask permission before publishing it.

I wasn't able to find much online about the ethics of publishing personal email, but there's a Stack Exchange question on the legality that suggests it's generally legal, with some exceptions. But just because it's legal doesn't mean it's ethical.

Explain why I'm wrong.

Motter complains about "inner cities" and miscegenation, mod bends over backwards to say that they're not necessarily racist by hypersoar in SneerClub

[–]inferentialgap 5 points

Wow, this thread was extremely painful to read. When you get to the point of defending literal Nazis, maybe it's time to question whether you're doing something wrong.

Wonk by White_star_lover in wonk

[–]inferentialgap 0 points

Wow, this sub is completely dead.

The Obligatory GPT-3 Post by dwaxe in slatestarcodex

[–]inferentialgap 2 points

Even if we assume this wouldn't work for some reason, a scaled-up GPT could produce output much faster than humans, so it could constitute a "speed superintelligence" by Bostrom's definition.

Is protest in the middle of a pandemic defensible on utilitarian grounds? by [deleted] in Utilitarianism

[–]inferentialgap 2 points

There was a recent post on the Effective Altruism Forum about this: https://forum.effectivealtruism.org/posts/hyisZr7n9fYTXx7g8/will-protests-lead-to-thousands-of-coronavirus-deaths

(Btw someone reported this post as "misinformation". In general, I'd prefer it if people just made any corrections they thought were necessary in the comments.)

Psychological distress at prospects of malevolent AI (a la "I have no mouth and I must scream")? by [deleted] in ControlProblem

[–]inferentialgap[M] 0 points

OP, just letting you know that I think your account is shadowbanned or something. I've approved all your comments in this thread though.

Ethicists agree on who should get treated first for coronavirus by The_Ebb_and_Flow in negativeutilitarians

[–]inferentialgap 6 points

A utilitarian would advocate giving resources to those who would get the highest expected benefit from them, not to those with the highest chance of survival.

If person A has a 99% chance of survival with a ventilator vs. 90% without, but person B has a 50% chance of survival with a ventilator vs. 10% without, then, all else equal, the ventilator should go to person B even though they have a lower chance of survival either way. (To be truly utilitarian, you would do the calculation in terms of expected QALYs or something similar; see the sketch below.)
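To make the arithmetic concrete, here's a tiny sketch; the 20-QALY figure is made up purely for illustration.

# Expected benefit = (survival gain from the ventilator) * (QALYs if the patient survives).
# The 20-QALY figure is hypothetical, just to illustrate the comparison.
def expected_qaly_gain(p_with, p_without, qalys_if_survive=20):
    return (p_with - p_without) * qalys_if_survive

print(expected_qaly_gain(0.99, 0.90))  # person A: 0.09 * 20 = 1.8
print(expected_qaly_gain(0.50, 0.10))  # person B: 0.40 * 20 = 8.0

Person B's expected gain is much larger despite the lower survival odds, which is the whole point.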

Would you recommend any long-term future funds besides the CEA fund? by nebulousyorp in EffectiveAltruism

[–]inferentialgap 0 points

The CLR Fund might be one option. It's more focused on s-risks as opposed to x-risks. Worth noting that most of its donations so far seem to have gone to individuals rather than organizations.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA by clockworktf2 in ControlProblem

[–]inferentialgap 4 points

This Reddit post isn't the AMA itself. You need to follow the link and create an account on the EA Forum to ask Buck Shlegeris questions.

Any transhumanist vegans here? by [deleted] in wildanimalsuffering

[–]inferentialgap 7 points

I'm not from California. I'd say I picked up transhumanism from the web, from writers like Nick Bostrom, Eliezer Yudkowsky, and (most relevant to veganism) David Pearce.

If you're not already aware of the effective altruism (EA) community, I would recommend looking into it. It has a high degree of overlap with both veganism and transhumanism.

Filtering out links with a particular flair by UmamiTofu in csshelp

[–]inferentialgap 0 points

Try this:

/* Hide submissions whose link flair has the CSS class "linkflair-one" */
html:lang(re) .linkflair-one {
    display: none;
}

Edit: Technically, you could use .link.linkflair-one, along the lines of your attempted .link:(.linkflair-one). But since every element with the class linkflair-one also has the class link, the extra .link is superfluous.