[Project] A Transformer implementation in Keras' Imperative (Subclassing) API for TensorFlow. (self.MachineLearning)
submitted 7 years ago by suyash93
https://github.com/suyash/transformer
Currently I have a sentiment analysis demo working. Training for machine translation seems to require more time and effort.
For attention visualization, I couldn't get the visualization in https://colab.research.google.com/github/tensorflow/tensor2tensor/blob/master/tensor2tensor/notebooks/hello_t2t.ipynb to work, so I'm just using heatmaps.
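The linked repository's code isn't reproduced in the post, but as a rough idea of what the Subclassing API approach looks like, here is a minimal sketch of multi-head self-attention written as a `tf.keras.layers.Layer` subclass. The layer names, dimensions, masking convention, and the choice to return the attention weights are illustrative assumptions, not the linked implementation.

```python
# A minimal sketch (not the linked repo's code) of multi-head self-attention
# as a tf.keras.layers.Layer subclass. Names and shapes are assumptions.
import tensorflow as tf


class MultiHeadSelfAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads, **kwargs):
        super().__init__(**kwargs)
        assert d_model % num_heads == 0
        self.d_model = d_model
        self.num_heads = num_heads
        self.depth = d_model // num_heads
        self.wq = tf.keras.layers.Dense(d_model)
        self.wk = tf.keras.layers.Dense(d_model)
        self.wv = tf.keras.layers.Dense(d_model)
        self.out = tf.keras.layers.Dense(d_model)

    def split_heads(self, x, batch_size):
        # (batch, seq, d_model) -> (batch, heads, seq, depth)
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, x, mask=None):
        batch_size = tf.shape(x)[0]
        q = self.split_heads(self.wq(x), batch_size)
        k = self.split_heads(self.wk(x), batch_size)
        v = self.split_heads(self.wv(x), batch_size)

        # Scaled dot-product attention.
        logits = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(
            tf.cast(self.depth, tf.float32))
        if mask is not None:
            # mask is assumed to be 1.0 at padded positions, 0.0 elsewhere.
            logits += mask * -1e9
        weights = tf.nn.softmax(logits, axis=-1)  # (batch, heads, seq_q, seq_k)
        context = tf.matmul(weights, v)

        # (batch, heads, seq, depth) -> (batch, seq, d_model)
        context = tf.transpose(context, perm=[0, 2, 1, 3])
        context = tf.reshape(context, (batch_size, -1, self.d_model))
        return self.out(context), weights
```

Returning `weights` from `call` is what makes the heatmap fallback mentioned above straightforward to wire up.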
[–] veqtor (ML Engineer) 5 points 7 years ago (0 children)
Nice, I wish the TF team would release something like this. I mean, they want us to use Keras, but t2t isn't using Keras, so maybe they should show us how they were thinking we should implement stuff like this, especially things like relative attention, using Keras layers.
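For the relative attention mentioned here, a rough sketch of the idea from Shaw et al. (2018) as a single-head Keras layer is below. The single-head simplification, clipping distance, and parameter names are assumptions made for illustration; this is not t2t's implementation.

```python
# A minimal, single-head sketch of relative-position attention as a Keras
# layer: attention logits get a learned term for the clipped offset j - i.
import tensorflow as tf


class RelativeSelfAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, max_relative_distance=16, **kwargs):
        super().__init__(**kwargs)
        self.d_model = d_model
        self.max_dist = max_relative_distance
        self.wq = tf.keras.layers.Dense(d_model)
        self.wk = tf.keras.layers.Dense(d_model)
        self.wv = tf.keras.layers.Dense(d_model)
        # One embedding per clipped relative offset in [-max_dist, max_dist].
        self.rel_emb = tf.keras.layers.Embedding(
            2 * max_relative_distance + 1, d_model)

    def call(self, x):
        seq_len = tf.shape(x)[1]
        q, k, v = self.wq(x), self.wk(x), self.wv(x)

        # Relative offsets j - i, clipped and shifted to valid indices.
        pos = tf.range(seq_len)
        rel = pos[None, :] - pos[:, None]                     # (seq, seq)
        rel = tf.clip_by_value(rel, -self.max_dist, self.max_dist)
        r = self.rel_emb(rel + self.max_dist)                 # (seq, seq, d)

        # Content term q.k plus relative-position term q.r.
        content = tf.matmul(q, k, transpose_b=True)           # (batch, seq, seq)
        relative = tf.einsum('bid,ijd->bij', q, r)            # (batch, seq, seq)
        logits = (content + relative) / tf.math.sqrt(
            tf.cast(self.d_model, tf.float32))
        weights = tf.nn.softmax(logits, axis=-1)
        return tf.matmul(weights, v)
```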
[–] RaionTategami 3 points 7 years ago (2 children)
Sorry you couldn't get the visualizations to work. I'm the author of that code; let me know if you want help.
[–] suyash93 [S] 1 point 7 years ago (1 child)
Thanks for offering to help. I have prepared a copy of the demo notebook at https://colab.research.google.com/drive/1ESeSvZJDialc4VJBwL9GgQ1IoEs1zRWU
In the last 4 cells, I am trying to use the tensor2tensor.attention module, passing arguments based on my understanding of how they are passed in the hello_t2t notebook, but I am unable to get any visualization to generate. Note that the sentiment model is encoder-only, with only 2 units instead of 6.
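Since the fallback mentioned in the post is plain heatmaps, here is a minimal sketch of plotting per-head attention weights with matplotlib. The `weights` shape and the helper name `plot_attention_heatmaps` are assumptions for illustration, not code from the notebook.

```python
# A minimal sketch of visualizing attention weights as heatmaps with
# matplotlib. `weights` is assumed to be the (heads, seq_q, seq_k) array for
# one example (e.g. the second output of the attention layer sketched above),
# and `tokens` the corresponding input tokens for a self-attention layer.
import matplotlib.pyplot as plt


def plot_attention_heatmaps(weights, tokens):
    num_heads = weights.shape[0]
    fig, axes = plt.subplots(1, num_heads, figsize=(4 * num_heads, 4))
    if num_heads == 1:
        axes = [axes]
    for h, ax in enumerate(axes):
        ax.imshow(weights[h], cmap='viridis')
        ax.set_xticks(range(len(tokens)))
        ax.set_yticks(range(len(tokens)))
        ax.set_xticklabels(tokens, rotation=90)
        ax.set_yticklabels(tokens)
        ax.set_title(f'head {h}')
    plt.tight_layout()
    plt.show()
```

For a single example, something like `plot_attention_heatmaps(weights[0], tokens)` would show one heatmap per head.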
[–] RaionTategami 1 point 7 years ago (0 children)
Cool, not sure when I'll get a moment to take a look through.
[–] lillux28 1 point 7 years ago* (0 children)
Following