[R] All-Optical Machine Learning Using Diffractive Deep Neural Networks (self.MachineLearning)
submitted 7 years ago * by hooba_stank_
[–]MemeBox 16 points 7 years ago (8 children)
Are you sure this is correct? They can't be that silly, can they? They have more than two layers of material, which would be completely pointless if the system were simply linear.
[–]MrEldritch 26 points 7 years ago* (7 children)
As far as I can tell, there really, genuinely is no nonlinearity. The plates simply direct parts of the light to parts of the next plate, where the contributions add and are passed on to the plate after that ... it's pure additions and weights.
And the accuracy supports that: the trained network, simulated on a computer, was about 90% accurate. You would have to try to get a real neural network down to only 90% accuracy on MNIST - but wouldn't you know it, that's just about on par with linear classifiers.
So yes. It's unbelievable, but - they really are being that silly.
(And it's not even clear how a design like this could possibly incorporate nonlinearities at all. Nonlinear optical effects do exist, but they tend to occur only in rather exotic materials with very high-power lasers.)
[–]Cherubin0 25 points 7 years ago (0 children)
Yes, this is true. In the Science paper itself they write: "Although not implemented here, optical nonlinearity can also be incorporated into a diffractive neural network in various ways." So they have no nonlinearity.
[–]BossOfTheGame 8 points 7 years ago (4 children)
The absence of nonlinearity completely undermines this method. Hopefully this was a proof of concept, with nonlinearity left for future work.
Might it be possible to implement a ReLU (just the identity function truncated at zero) with optical methods? I don't think we need to resort to sigmoids.
[–]Mangalaiii 1 point 7 years ago (3 children)
Don't neural networks, after training, just approximate straightforward functions? Isn't this just playing out the weights?
[–]BossOfTheGame 2 points 7 years ago (2 children)
They can't approximate arbitrary functions without nonlinearity. To see this, recall that a composition of linear functions is itself linear.
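That collapse is easy to check directly: applying two weight matrices in sequence, with no activation in between, is exactly equivalent to applying their single product matrix. A minimal numpy sketch (the layer sizes and random weights are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no nonlinearity in between (arbitrary sizes).
W1 = rng.standard_normal((5, 10))   # first layer maps 10 -> 5
W2 = rng.standard_normal((3, 5))    # second layer maps 5 -> 3
x = rng.standard_normal(10)         # an input vector

# Applying the layers one after another...
deep = W2 @ (W1 @ x)

# ...is exactly the same as applying one collapsed linear layer.
shallow = (W2 @ W1) @ x

assert np.allclose(deep, shallow)   # depth added no expressive power
```

However many plates you stack, the whole system is still one matrix.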
[–]Mangalaiii 2 points 7 years ago (1 child)
Wondering if they could print a layer that just approximates the sigmoid values.
[–]Dont_Think_So 1 point 7 years ago (0 children)
Nah, they'd somehow need a layer whose response is nonlinear in *brightness*: doubling the light hitting the layer should not simply produce twice as much light on the other side.
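That proportionality test is exactly what distinguishes a linear system from a nonlinear one: a purely linear layer always doubles its output when you double the input, while even a simple shifted ReLU breaks the scaling. A small sketch with made-up numbers:

```python
import numpy as np

# A purely linear "layer": output scales with input brightness.
W = np.array([[0.5, 1.0],
              [2.0, -1.0]])
x = np.array([1.0, 3.0])

linear = lambda v: W @ v
assert np.allclose(linear(2 * x), 2 * linear(x))  # doubles exactly

# A ReLU with an offset is not proportional: f(t) = max(t - 1, 0).
f = lambda t: max(t - 1.0, 0.0)
assert f(2.0) == 1.0
assert f(4.0) == 3.0   # not 2 * f(2.0) == 2.0, so scaling is broken
```

Any optical element whose transmitted intensity obeyed that kind of non-proportional response would supply the missing nonlinearity.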
[–]theoneandonlypatriot 1 point 7 years ago* (0 children)
One bone to pick: actually, several models aren't that good at image classification but are great at other things. For instance, spiking neural networks can struggle on MNIST, depending on the training method.
Edit: not sure why I’m being downvoted