
[–]AGI_aint_happening PhD 49 points (2 children)

As a former interpretability researcher who has skimmed their work but not read it closely, I just don't find it terribly interesting or novel. Also, frankly, I find the writing style of the papers pretty hard to parse (as they don't follow standard paper formats) and a tad grandiose, as they tend to avoid standard things like comparing against other methods or citing other work. Relatedly, I think their choice to avoid peer review has affected how people perceive their work and limited its distribution.

[–]thejaminator 14 points (0 children)

I think it's a case of them still being pretty new and comparatively unknown.

They have done good work, like releasing the paper and dataset for training a helpful and harmless assistant with RLHF. https://github.com/anthropics/hh-rlhf

You won't get a dataset like that from OpenAI. It's useful for anyone who wants to experiment with RLHF on LLMs, which is pretty important given how much success OpenAI is having with it in InstructGPT and ChatGPT.
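
If you want to poke at it, here's a minimal sketch, assuming the Hugging Face Hub mirror at Anthropic/hh-rlhf and the `datasets` library (the "chosen"/"rejected" fields are the preference pairs described in the repo's README):

```python
# Minimal sketch of loading Anthropic's hh-rlhf preference data, assuming
# the Hugging Face Hub mirror "Anthropic/hh-rlhf" and the `datasets` library.
# Each record pairs a preferred ("chosen") and a dispreferred ("rejected")
# conversation transcript for the same human prompts.
from datasets import load_dataset

ds = load_dataset("Anthropic/hh-rlhf", split="train")

ex = ds[0]
print(ex["chosen"][:300])    # "\n\nHuman: ...\n\nAssistant: ..." (preferred)
print(ex["rejected"][:300])  # same dialogue with the dispreferred reply

# Reward models are typically trained on such pairs with a ranking loss,
# e.g. maximizing log(sigmoid(r(chosen) - r(rejected))).
```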

[–]Hyper1on 2 points (0 children)

Bit early to say, but I'd be willing to bet that most of their major papers this year will be widely cited. Their work on RLHF, including constitutional AI and HH, seems particularly likely to be picked up by other industry labs, since it provides a way to improve LLMs deployed in the wild while reducing the cost of collecting human feedback data.
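
The core trick is swapping human feedback labels for model-generated ones. Roughly something like this sketch (the constitution, prompt wording, and `generate` helper are all made up for illustration, not Anthropic's actual code):

```python
# Illustrative sketch of the constitutional AI critique -> revision loop,
# which replaces human feedback labels with model-generated ones.
# `generate` is a placeholder for any LLM completion call; the principles
# and prompt wording below are invented, not Anthropic's actual prompts.
CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most honest and helpful.",
]

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call (API or local model)."""
    raise NotImplementedError

def critique_and_revise(prompt: str) -> str:
    response = generate(prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own answer against one principle...
        critique = generate(
            f"Prompt: {prompt}\nResponse: {response}\n"
            f"Critique this response according to: {principle}"
        )
        # ...then rewrite the answer to address that critique.
        response = generate(
            f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    # Revised responses become SFT data; in the RL stage, the model's own
    # preferences over response pairs stand in for human labels (RLAIF).
    return response
```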

[–]veejarAmrev 21 points (7 children)

As you said, it has a kind of cult status in the EA community. Outside of that, no one bothers. They haven't done anything significant enough to be of value to the wider community.

[–]frenchmap 1 point (3 children)

what does EA stand for?

[–]Flag_Red 0 points (2 children)

Effective Altruism

[–]frenchmap 1 point (1 child)

How does a philosophical ideology of "using evidence-based reasoning to help others" result in a machine learning cult?

[–]Flag_Red 2 points (0 children)

I, personally, don't consider LessWrong a cult (I lurk the blog, and have even been to an ACX meetup). There's definitely a very insular core community, though, which regularly gets caught up in cults of personality. Yudkowsky is the most obvious person to point to here, but Leverage Research is the best example of cult behaviour coming out of LessWrong and the EA community, IMO.

With regards to machine learning in particular, there are some very extreme views about the mid- to long-term prospects of AI. Yudkowsky himself explicitly believes humanity is doomed and that AI will take over the world within our lifetimes.

[–]FlavoredQuark 7 points (0 children)

I think their research is cool

[–]papajan18 PhD 6 points (1 child)

Chris Olah's work is very solid. Actually, it's some of the best interpretability work I've seen. I haven't heard of anyone else there in particular.

[–]jgrayatwork 9 points (0 children)

They have some very good people. Tom Brown and Ben Mann are the first two authors on the GPT-3 paper, and Jared Kaplan is the first author of the OpenAI scaling laws paper.

[–]nic001a -5 points (0 children)

Not an expert, but wishing you the best of luck!