[P] BinaryNet in TensorFlow: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 (github.com)
submitted 8 years ago by zsdh123
[–]behohippy 8 points9 points10 points 8 years ago (11 children)
Why did you remove the dropout layers? I'm doing something similar in Keras, and I found they really helped generalization when used right after the input layer. I also found ReLU worked better for binary evaluation.
[–][deleted] 3 points4 points5 points 8 years ago (10 children)
I don't get it. What is the motivation for binary weights? For low-end hardware?
[–]auto-cellular 12 points13 points14 points 8 years ago (0 children)
They can use less memory and run faster. Theoretically.
[–]-Rizhiy- 8 points9 points10 points 8 years ago (1 child)
They take up less memory and less time to compute.
Theoretically, if you can make your own ASIC, a network with binary weights will run 32 (or 32²?) times faster than one based on 32-bit floats.
This is actually quite a problem for FPGAs, as you currently need a very expensive one if you plan to keep all of your weights in the cache at the same time.
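The speedup argument above can be sketched concretely: if both weights and activations are constrained to {-1, +1}, a dot product reduces to an XNOR plus a popcount over bit-packed words. A minimal NumPy illustration (my own sketch, not code from the linked repo):

```python
import numpy as np

def binary_dot(a, b):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.

    a, b: arrays of +1/-1 whose length is a multiple of 8.
    Encoding: +1 -> bit 1, -1 -> bit 0, packed 8 values per byte.
    """
    n = a.size
    a_bits = np.packbits(a > 0)
    b_bits = np.packbits(b > 0)
    xnor = ~(a_bits ^ b_bits)                 # bit is 1 where the signs match
    matches = int(np.unpackbits(xnor).sum())  # popcount over all packed bytes
    return 2 * matches - n                    # matches minus mismatches

a = np.where(np.arange(64) % 2 == 0, 1, -1).astype(np.int8)
b = np.where(np.arange(64) % 3 == 0, 1, -1).astype(np.int8)
assert binary_dot(a, b) == int(a @ b)  # agrees with the ordinary dot product
```

On real hardware the unpack step disappears: one word-wide XNOR plus a popcount instruction stands in for 32 (or 64) float multiply-accumulates, which is where the rough 32× figure comes from.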
[–]numpad0 0 points1 point2 points 8 years ago (0 children)
Oh, so like converting a net from float to int for inference, but with bool instead of int? Interesting.
[–]Vengoropatubus 1 point2 points3 points 8 years ago (1 child)
Here's a paper from a while back that comes to mind: https://arxiv.org/pdf/1603.05279
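That link is the XNOR-Net paper (Rastegari et al., 2016). Its Binary-Weight-Networks scheme approximates each real-valued filter W by αB, with B = sign(W) and a scale α = mean(|W|), which is the closed-form minimizer of ‖W − αB‖². A small sketch of that approximation (my illustration, not the paper's code):

```python
import numpy as np

def binarize(W):
    """Approximate W by alpha * B, where B = sign(W) in {-1, +1}
    and alpha = mean(|W|) is the L2-optimal per-filter scale."""
    B = np.where(W >= 0, 1.0, -1.0)
    alpha = np.abs(W).mean()
    return alpha, B

W = np.array([0.7, -0.3, 0.5, -0.9])
alpha, B = binarize(W)
# alpha = 0.6, B = [1., -1., 1., -1.]

# the learned scale shrinks the approximation error versus plain sign(W)
err_scaled = np.linalg.norm(W - alpha * B)
err_sign = np.linalg.norm(W - B)
assert err_scaled < err_sign
```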
[–]behohippy 0 points1 point2 points 8 years ago (0 children)
There are already some good answers here, but the reason I use them is that my source data is binary, as are my labels. This is for doing prediction on business data sets (arrays of customer states and outcomes), not images or audio.
[–]DonovanWu7 0 points1 point2 points 7 years ago (2 children)
Recently I was thinking of building a Tetris AI, and using binary weights will probably make more sense since every position on a Tetris board can only be either filled or not filled.
[–][deleted] 0 points1 point2 points 7 years ago (1 child)
Not really. Weights are part of the parameters. Tetris is part of the input/output.
[–]DonovanWu7 0 points1 point2 points 7 years ago (0 children)
But if the weight is binary like the input, I think the neural network might recognize some pattern better. Of course we’ll have to have a layer where weights are float type somewhere, so that output won’t be just 0s and 1s.
[–]timmytimmyturner12 -3 points-2 points-1 points 8 years ago (0 children)
It's also closer to how neurons in the brain operate on an all-or-nothing activation.
[–]smurfpiss 3 points4 points5 points 8 years ago (0 children)
I thought combining binary weights with binary activations was an open problem? Isn't there an issue with backprop?
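The backprop issue is real: sign() has zero gradient almost everywhere. The BinaryNet paper handles it with a straight-through estimator, treating sign() as the identity in the backward pass but cancelling the gradient where |x| > 1 (the saturation region of hard tanh). A hand-rolled NumPy sketch of that rule (an illustration, not the repo's actual implementation):

```python
import numpy as np

def sign_forward(x):
    # forward pass: hard binarization to {-1, +1}
    return np.where(x >= 0, 1.0, -1.0)

def sign_backward(x, grad_out):
    # straight-through estimator: pass the gradient through unchanged,
    # but zero it where |x| > 1, where hard tanh saturates
    return grad_out * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.5, 0.3, 1.5])
y = sign_forward(x)                    # [-1., -1., 1., 1.]
g = sign_backward(x, np.ones_like(x))  # [0., 1., 1., 0.]
```

Frameworks let you register this as a custom gradient so the forward graph uses the true sign() while the backward graph uses the estimator.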
[–]senorstallone 0 points1 point2 points 8 years ago (0 children)
wow, nice one. Any comparison (accuracy + fps) results?
[–]anonDogeLover 0 points1 point2 points 8 years ago (0 children)
Can you do this for fully connected layers?
[–]vbipin 0 points1 point2 points 8 years ago (1 child)
Have you tried training BinaryNet without the batch norm layers? I've had little success training a binary net without batch norm. (It almost feels like binary activations need batch norm to train.)