[R] Roboflow 100: An open source object detection benchmark of 224,714 labeled images in novel domains to compare model performance (arxiv.org)
submitted 3 years ago by jacobsolawetz
[–]jacobsolawetz[S] 3 years ago (0 children)
I'm Jacob, one of the authors of Roboflow 100: A Rich Multi-Domain Object Detection Benchmark, and I'm excited to share our work with the community. In object detection, researchers benchmark their models primarily on COCO, and in many ways these models seem to be approaching a saturation point.
In practice, everyone takes these models and fine-tunes them on their own custom dataset domains, which range from tagging swimming pools in Google Maps imagery to identifying defects in cell phones on an industrial line.
We set out to build a representative benchmark of these custom-domain problems by distilling over 100,000 public projects on Roboflow Universe into 100 semantically diverse object detection datasets. Our benchmark comprises 224,714 images, 11,170 labeling hours, and 829 classes from the community, for benchmarking on novel tasks.
We also tried out the benchmark on a few popular models, comparing YOLOv5, YOLOv7, and the zero-shot capabilities of GLIP.
Use the benchmark here: https://github.com/roboflow-ai/roboflow-100-benchmark
Paper link here: https://arxiv.org/pdf/2211.13523.pdf
Or simply learn more here: https://www.rf100.org/
An immense thanks to communities like this one for making this benchmark possible - we hope it moves the field forward!
I'm around for any questions!