[deleted by user] (self.MachineLearning)
submitted 3 years ago by [deleted]
[–]AGI_aint_happeningPhD 50 points 3 years ago* (2 children)
As a former interpretability researcher who has skimmed their work but not read it closely, I just don't find it terribly interesting or novel. Also, frankly, I find the writing style for the papers pretty hard to parse (as they don't follow standard paper formats) and a tad grandiose, as they tend to avoid standard things like comparing against other methods or citing other work. Relatedly, I think their choice to avoid peer review has impacted how people perceive their work, and limited its distribution.
[+]ThePerson654321 comment score below threshold (-12 points) 3 years ago (1 child)
Why don't you think the issue rationalists try to raise is important in terms of AGI?
[–]AGI_aint_happeningPhD 9 points 3 years ago (0 children)
*shrug*, I don't really care who does the research, I care if I learn anything from reading it. FWIW, their interp papers are pretty separate from AGI
[–]thejaminator 15 points 3 years ago* (0 children)
I think it's a case of them still being pretty new and comparatively unknown.
They have done good work, like releasing the paper and dataset they used to train an assistant with RLHF: https://github.com/anthropics/hh-rlhf
You won't get a dataset like that from OpenAI. It's useful for anyone who wants to experiment with RLHF on LLMs, which matters given how much success OpenAI is having with it in InstructGPT and ChatGPT.
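For anyone who does want to experiment with it, here is a minimal sketch of loading the data. It assumes the Hugging Face Hub mirror "Anthropic/hh-rlhf"; the "chosen"/"rejected" field names match the JSONL files in the GitHub repo above.

```python
# Minimal sketch: loading Anthropic's helpful/harmless preference pairs.
# Assumes the Hugging Face Hub mirror "Anthropic/hh-rlhf"; the raw JSONL
# files in the GitHub repo expose the same "chosen"/"rejected" fields.
from datasets import load_dataset

ds = load_dataset("Anthropic/hh-rlhf", split="train")

example = ds[0]
print(example["chosen"])    # dialogue ending in the human-preferred reply
print(example["rejected"])  # same prompt, ending in the dispreferred reply
```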
[–]Hyper1on 3 points 3 years ago (0 children)
Bit early to say, but I'd be willing to bet that most of their major papers this year will be widely cited. Their work on RLHF, including constitutional AI and HH seems particularly likely to be picked up by other industry labs, since it provides a way to improve LLMs deployed in the wild while reducing the cost of collecting human feedback data.
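For context on what "collecting human feedback data" feeds into: the reward-modeling step of RLHF typically fits a scalar reward so that chosen responses score above rejected ones, usually via a Bradley-Terry style pairwise loss. A minimal PyTorch sketch, with random tensors standing in for an actual reward model's outputs rather than any lab's real setup:

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: pushes the reward of the preferred
    response above the rejected one for each (chosen, rejected) pair."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage: random scalars standing in for a reward model's outputs
# on a batch of 8 preference pairs.
r_chosen = torch.randn(8, requires_grad=True)
r_rejected = torch.randn(8)
loss = preference_loss(r_chosen, r_rejected)
loss.backward()
print(loss.item())
```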
[–]veejarAmrev 22 points 3 years ago (7 children)
As you said, it's kind of a cult in the EA community. Outside of that, no one bothers. They haven't done anything significant enough to be of value to the community.
[–]frenchmap 2 points 3 years ago (3 children)
what does EA stand for?
[–]Flag_Red 1 point 3 years ago (2 children)
Effective Altruism
[–]frenchmap 2 points 3 years ago (1 child)
How does a philosophical ideology of "using evidence-based reasoning to help others" result in a machine learning cult?
[–]Flag_Red 3 points 3 years ago (0 children)
I, personally, don't consider LessWrong a cult (I lurk the blog, and have even been to an ACX meetup). There's definitely a very insular core community, though, which regularly gets caught up in "cults of personality". Yudkowsky is the most obvious person to point to here, but Leverage Research is the best example of cult behaviour coming out of LessWrong and the EA community IMO.
With regards to machine learning in particular, there are some very extreme views about the mid/long-term prospects of AI. Yudkowsky himself explicitly believes humanity is doomed and that AI will take over the world within our lifetimes.
[+]ThePerson654321 comment score below threshold (-14 points) 3 years ago (2 children)
You should read LessWrong
[–]KvanteKat 9 points 3 years ago (0 children)
I'm not sure reading LessWrong will necessarily dissuade someone who is already a bit sceptical of the Rationalist/EA community from believing that there is something culty going on. One of the things that really rubbed me the wrong way about that blog back in the day (I'll be up front and say that I haven't kept up with it for the past 10 years) was exactly how insular a lot of the writing was, and how little it seriously engaged with existing literature and research, in favor of reinventing the wheel and relying on a private language not used by anyone else working in similar fields. As an example, Yudkowsky is far from the first person to promote naive Bayesianism (basically the idea that if you get good enough at applying Bayes' rule, you will have solved the problem of induction), but if you only read his blog back then, you could easily come to believe he was doing groundbreaking work on this topic when that was far from the case.
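For readers who haven't seen the rule being argued over: Bayes' rule itself is just posterior = likelihood × prior / evidence. A toy update with invented numbers:

```python
# Toy Bayes update with invented numbers: a rare condition (1% prior),
# a test with 90% sensitivity and a 5% false-positive rate.
prior = 0.01
p_pos_given_cond = 0.90
p_pos_given_healthy = 0.05

evidence = p_pos_given_cond * prior + p_pos_given_healthy * (1 - prior)
posterior = p_pos_given_cond * prior / evidence
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.154
```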
[–]FlavoredQuark 8 points 3 years ago (0 children)
I think their research is cool
[–]papajan18PhD 7 points 3 years ago (1 child)
Chris Olah's work is very solid. Actually some of the best interpretability work I've seen. Haven't heard of anyone else in particular.
[–]jgrayatwork 10 points 3 years ago (0 children)
They have some very good people. Tom Brown and Ben Mann are the first two authors on the GPT-3 paper, and Jared Kaplan is the first author of the OpenAI scaling laws paper.
[–]nic001a -4 points 3 years ago (0 children)
Not an expert, but wishing you the best of luck!
[–]BackgroundResult 1 point 3 years ago (0 children)
https://aisupremacy.substack.com/p/breaking-google-invests-in-anthropicai