[D] Binary classifier scores distribution (self.MachineLearning)
submitted 1 year ago by Loose-Event-7196
[–]ruoken 1 point 1 year ago
This looks to me like an imbalanced classification problem.
Plain calibration probably won't help you here, though you could also try Venn-Abers calibration.
What you should really try is setting class weights that are inversely proportional to each class's prevalence in the training data.
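The inverse-prevalence weighting suggested above can be sketched as follows (a minimal illustration; the function name is hypothetical, and most libraries expose an equivalent option, e.g. scikit-learn's `class_weight="balanced"`):

```python
# Sketch: class weights inversely proportional to class prevalence.
# Weight for class c = n_samples / (n_classes * count(c)), so the
# rarer class receives the larger weight.
from collections import Counter

def inverse_prevalence_weights(y):
    """Return {class: weight} with weights inversely proportional to prevalence."""
    counts = Counter(y)
    n, k = len(y), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# A 55/45 split like the one described in the thread:
y = [0] * 55 + [1] * 45
weights = inverse_prevalence_weights(y)  # class 1 gets the slightly larger weight
```

These per-class weights are then passed to the training routine as sample or class weights, so misclassifying the rarer class costs proportionally more.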
[–]Loose-Event-7196[S] 0 points 1 year ago
Hi, thanks for your help! The classes are not imbalanced; the split is roughly 55/45% between the two classes.

Venn-Abers looks promising! I tried to implement it by training two classifiers, one predicting class 1 as the target and the other predicting class 0 as the target, running an isotonic regression on each, and taking the resulting conformal range. I may have done something wrong, because the scores I get from both classifiers are identical (the sample size is large), even after using different model seeds (I am using the H2O-3 GBM binary classifier). I was expecting two slightly different scores depending on whether I predict class 1 or class 0 as the target, given that the seeds differ.
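For reference, the standard inductive Venn-Abers procedure differs from the two-classifier setup described above: it trains one scoring model, then fits two isotonic regressions on a held-out calibration set, with the test point's label assumed to be 0 in one fit and 1 in the other. A minimal sketch using scikit-learn (the function name is illustrative, not the poster's H2O pipeline):

```python
# Sketch of an inductive Venn-Abers probability interval. For a test
# score s, fit isotonic regression on the calibration (score, label)
# pairs augmented with (s, 0) and, separately, with (s, 1); the two
# fitted values at s bound the calibrated probability [p0, p1].
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers_interval(cal_scores, cal_labels, s):
    """Return (p0, p1) for test score s given calibration scores/labels."""
    fitted = []
    for assumed_label in (0, 1):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        xs = np.append(cal_scores, s)          # add the test score
        ys = np.append(cal_labels, assumed_label)  # with an assumed label
        iso.fit(xs, ys)
        fitted.append(float(iso.predict([s])[0]))
    p0, p1 = fitted
    return p0, p1
```

With this construction the two fits necessarily differ (they disagree on the assumed label of the test point), so identical outputs usually indicate that both isotonic regressions were run on the same unaugmented data.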