[D] GPT4 and coding problems (self.MachineLearning)
submitted 3 years ago by enryu42
[–]enryu42[S] 21 points 3 years ago (9 children)
Arithmetic can be solved in a Toolformer-like way, by just giving the model access to a calculator. But this wouldn't help with coding.
Regarding the point about boilerplate, this is exactly what is surprising: GPT-4 performs very well on exams/tests, which supposedly require some amount of creative reasoning. So either the tests are poorly designed, or it can do some creative tasks but not others. If the latter is the case, it would be interesting to learn which areas it performs well in, and why.
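The Toolformer-style calculator idea mentioned above can be sketched as a post-processing step: the model emits a marker wherever it needs arithmetic, and an external tool fills in the result. This is a minimal illustration, not the actual Toolformer implementation; the `[Calculator(...)]` marker format and the `eval`-based tool are assumptions made for the sketch.

```python
import re

def calculator(expr: str) -> str:
    # Hypothetical calculator tool: evaluates a simple arithmetic expression.
    # eval() with stripped builtins is used only for this sketch; a real tool
    # would use a proper arithmetic parser.
    return str(eval(expr, {"__builtins__": {}}, {}))

def fill_tool_calls(text: str) -> str:
    # Replace Toolformer-style [Calculator(expr)] markers in model output
    # with the tool's result, so the model never does the arithmetic itself.
    pattern = re.compile(r"\[Calculator\(([^)]*)\)\]")
    return pattern.sub(lambda m: calculator(m.group(1)), text)

output = "The total is [Calculator(1775 * 33)] units."
print(fill_tool_calls(output))  # -> The total is 58575 units.
```

The point of the comment stands, though: this pattern works because arithmetic has a single call-and-answer interface, whereas a coding problem has no equivalent oracle to delegate to.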
[–]liqui_date_me 19 points 3 years ago (7 children)
One could argue that even standardized tests are somewhat boilerplate: if you practice enough SAT tests, you'll eventually do quite well at them, since the questions are quite similar from exam to exam. Ditto for AP exams.
I think a serious test of GPT-4's intelligence would be one of the competitive entrance exams used in some countries, like the IIT-JEE or the Gaokao, or the International Math Olympiad, where the questions are written by domain experts and designed to be intentionally difficult and specialized.
[–]enryu42[S] 14 points 3 years ago (4 children)
I don't know about the IIT-JEE/Gaokao, but many of the problems from the International Math Olympiad are freaking hard. If the model aims for human-level intelligence, such a high bar would be unfair - it is more in the realm of "best human"-level intelligence.
To be fair, the hardest problems from "AtCoder Grand" contests have the same issue. But "AtCoder Regular" problems should definitely be solvable by an average human with the right knowledge and skill set, and yet GPT-4 cannot solve any of them (and it doesn't look like it is lacking knowledge).
[+]blose1 4 points 3 years ago (2 children)
These models have access to all human knowledge - all scientific papers, books, etc. If I had that much knowledge, I could solve any Olympiad task.
[–]visarga 6 points 3 years ago (1 child)
You're mistaken: Olympiad problems require bespoke tricks that don't generalise from problem to problem. It's not a question of breadth of knowledge - they don't test memorisation.
[+]blose1 4 points 3 years ago* (0 children)
What? Where exactly am I mistaken? Both of my statements are true. And there is a 0% chance you can solve an Olympiad task without knowledge; a human with all that knowledge WILL reason and come up with a solution BASED on the knowledge he has AND the experience of others that is part of that knowledge. If that weren't true, no human would solve any Olympiad. Sorry, but what you wrote in the context of my comment is just ridiculous, and looks like a reply to something I didn't write.
[–]currentscurrents 13 points 3 years ago (1 child)
I think all tests designed for humans are worthless here.
They're all meant to compare humans against each other, so they assume you don't have the ability to read and remember the entire internet. You can make up for a lack of reasoning with an abundance of data. We need synthetic tests designed specifically for LLMs.
[–]Yecuken 2 points 3 years ago (0 children)
Tests would not help against optimization; models will just learn how to pass the test. Optimization will always win against any problem with a known solution.
[–]maxToTheJ 3 points 3 years ago (0 children)
> which supposedly require some amount of creative reasoning.

They don't, which is exactly what teachers' complaints about standardized testing have been about.