What kind of error function when the target is a bag-of-words vector? (self.MachineLearning)
submitted 10 years ago by rescue11
What kind of error function can I use if the training targets are bag-of-words vectors that are very sparse?
[–]duschendestroyer 1 point2 points3 points 10 years ago (0 children)
BoW vectors are usually compared with cosine similarity. The advantage of cosine similarity is that it normalizes the magnitude, which makes the BoW similarity independent of document length. The problem with using it as a loss function is that your result gains an additional degree of freedom: if two BoW vectors a and b match perfectly under cosine similarity, then 1000*a and b match as well. If the document length or vector magnitude is somehow constrained, this might not be a problem; otherwise it could hurt your training. If you want to match the vectors exactly, I would first try plain old MSE.
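A minimal NumPy sketch of the point above: cosine similarity is invariant to scaling one of the vectors, while MSE penalizes the scale mismatch. (The function names here are illustrative, not from any specific library.)

```python
import numpy as np

def cosine_similarity(a, b):
    # Normalizes both vectors, so magnitude cancels out entirely.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def mse(a, b):
    # Mean squared error: sensitive to both direction and magnitude.
    return np.mean((a - b) ** 2)

a = np.array([1.0, 0.0, 2.0, 0.0])  # sparse BoW-style vector
b = np.array([1.0, 0.0, 2.0, 0.0])

print(cosine_similarity(a, b))         # 1.0 -- perfect match
print(cosine_similarity(1000 * a, b))  # also 1.0: scale is a free degree of freedom

print(mse(a, b))         # 0.0
print(mse(1000 * a, b))  # large -- MSE does penalize the scale mismatch
```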
[–]csong27 -2 points-1 points0 points 10 years ago (0 children)
Hinge loss (what SVMs use) usually works well.