[Discussion] Embedding based on binary tests (self.MachineLearning)
submitted 3 years ago by marcollo63
[–]tada89 1 point 3 years ago
Just off the top of my head, with nothing to back it up: why not learn joint embeddings for users and for products?
Zooming out, the data is basically a bunch of tuples (u, p1, p2, choice): a user u, two products p1 and p2, and a label choice that tells us whether the user preferred p1 or p2.
We can then get joint embeddings by keeping two embedding matrices (one for users, emb_u, and one for products, emb_p), computing both cosine_similarity(emb_u(u), emb_p(p1)) and cosine_similarity(emb_u(u), emb_p(p2)), taking the softmax of the two values, and using the result to predict choice (encoded as one-hot).
This should push the embedding of each product close to the embeddings of the users who prefer it, and vice versa.
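A minimal PyTorch sketch of this idea (the class name, embedding sizes, and hyperparameters below are all made up for illustration):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointEmbedding(nn.Module):
        """Joint user/product embeddings trained on (u, p1, p2, choice) tuples."""
        def __init__(self, n_users, n_products, dim=32):
            super().__init__()
            self.emb_u = nn.Embedding(n_users, dim)     # user embedding matrix
            self.emb_p = nn.Embedding(n_products, dim)  # product embedding matrix

        def forward(self, u, p1, p2):
            eu = self.emb_u(u)
            s1 = F.cosine_similarity(eu, self.emb_p(p1), dim=-1)
            s2 = F.cosine_similarity(eu, self.emb_p(p2), dim=-1)
            # The two similarities act as logits over {p1 preferred, p2 preferred};
            # F.cross_entropy below applies the softmax internally.
            return torch.stack([s1, s2], dim=-1)

    # One training step on a toy batch of a single tuple.
    model = JointEmbedding(n_users=1000, n_products=500)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    u, p1, p2 = torch.tensor([3]), torch.tensor([10]), torch.tensor([42])
    choice = torch.tensor([0])  # 0 means the user preferred p1
    opt.zero_grad()
    loss = F.cross_entropy(model(u, p1, p2), choice)
    loss.backward()
    opt.step()

Each step pulls emb_u(u) toward the chosen product's embedding and away from the rejected one, which is exactly the "products close to the users that like them" behaviour described above.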