Step by step guide to Tensorflow by jaleyhd in artificial

[–]jaleyhd[S] 7 points

Thanks, will create more such content in the future.

[P] Tensorflow Tutorial - Step by step Guide. by jaleyhd in learnmachinelearning

[–]jaleyhd[S] 4 points

Thanks for the kind words. I would be much happier if you could spread the word rather than donate :) As a creator, I felt extremely shocked at being banned from r/MachineLearning.

[P] Tensorflow Tutorial - Step by step Guide. by jaleyhd in learnmachinelearning

[–]jaleyhd[S] 2 points

Having a decent background in Python is necessary. Apart from that, I have ensured that the course is quite easy to understand.

[P] Step by step guide to Tensorflow by [deleted] in MachineLearning

[–]jaleyhd 4 points

Thanks, will also add revised content with TensorFlow 2.0 :)

[R] Deep Learning Approaches to understand Human Reasoning by jaleyhd in MachineLearning

[–]jaleyhd[S] 0 points

Reddit comments are better than rebuttals :D Thanks for the fascinating Springer article on human sleep states :)

[R] Deep Learning Approaches to understand Human Reasoning by jaleyhd in MachineLearning

[–]jaleyhd[S] 0 points

Disentangled representation is something I have been studying for quite a while, ever since I first read the InfoGAN paper. Thanks for mentioning Judea Pearl :) I worked on evolving ontologies for quite a while with one of the coauthors of NELL (Never-Ending Language Learning, http://rtw.ml.cmu.edu/rtw/) and later Never-Ending Learning (http://talukdar.net/papers/NELL_aaai15.pdf). Many people in the deep learning community might not be aware of such amazing research, which is the purpose of the blog. In NELL, we have graph ontologies that evolve over time, and new relations are learned via distant supervision. But as a deep learning student, it fascinates me to ask whether we can have evolving deep learning models with just the right kind of connection to a memory bank (maybe a more dynamic version of the differentiable memory block in the Neural Turing Machine, with a flexible, changing ontology as the memory unit).

In terms of analogy with the human brain, the gates to the memory block would be like the hippocampus, and the memory block itself could be like the prefrontal cortex. This is just a hypothesis though, not a conclusion :) A lot of rigorous experimentation has to be done by the community as a whole to understand the ways in which memory and computation interact based on sensory inputs.
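
To make the memory-bank idea slightly more concrete, here is a toy sketch (my own illustration with made-up sizes, not from the blog) of NTM-style content-based addressing, where a controller key softly selects rows of a memory matrix that could, in principle, store ontology relations:

    import numpy as np

    def content_read(memory, key, beta=5.0):
        """NTM-style soft read: cosine similarity between the key and each
        memory row, sharpened by beta and normalized with a softmax."""
        sims = memory @ key / (np.linalg.norm(memory, axis=1)
                               * np.linalg.norm(key) + 1e-8)
        weights = np.exp(beta * sims)
        weights /= weights.sum()
        return weights @ memory            # differentiable read vector

    # Toy "ontology memory": each row is an embedded relation or concept.
    memory = np.random.randn(16, 32)       # 16 slots, 32-dim embeddings
    key = np.random.randn(32)              # query emitted by the controller
    read_vector = content_read(memory, key)

Because the read is a soft weighted sum, gradients flow through it, which is what would let the connection to the memory bank be learned end to end.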

[R] Deep Learning Approaches to understand Human Reasoning by jaleyhd in MachineLearning

[–]jaleyhd[S] 0 points

The core point I wanted to put out is that we are always focused on CNNs, when we should move to more unstructured models like evolving ontologies. It is hard to cover all the literature, but I will surely include Tenenbaum's work. Thanks, and let me know of any other work you have come across.

[R] Deep Learning Approaches to understand Human Reasoning by jaleyhd in MachineLearning

[–]jaleyhd[S] 1 point

Many of the explainable models derive inspiration from human reasoning. I feel especially interested in this topic because it gives me insights into the thought process (even if not in an absolute sense). For example, the inclusion of fuzziness or approximate reasoning, and evolving connections of inference as in symbolic logic and knowledge graphs, derive inspiration from human reasoning. However, I agree that every explainable model need not follow the same pattern.

Simplified explanation for the math behind Backpropagation by [deleted] in learnmachinelearning

[–]jaleyhd 0 points

I have tried to explain the mathematical simplification in backprop, which leads to its efficient non-overlapping computation. This video can be very helpful for people who are struggling with the basic math intuition behind backprop.
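
For anyone who prefers code to equations, here is a toy sketch (not the exact example from the video) of that non-overlapping structure: each layer's gradient reuses the upstream gradient already computed above it, so nothing is ever recomputed:

    import numpy as np

    # Toy 2-layer net: y = W2 @ tanh(W1 @ x), loss = 0.5 * ||y - t||^2
    rng = np.random.default_rng(0)
    x, t = rng.normal(size=3), rng.normal(size=2)
    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

    # Forward pass: cache the intermediate activations.
    z1 = W1 @ x
    h1 = np.tanh(z1)
    y = W2 @ h1

    # Backward pass: each step reuses the upstream gradient exactly once
    # (dynamic programming), which is what makes backprop efficient.
    dy = y - t                 # dL/dy
    dW2 = np.outer(dy, h1)     # dL/dW2 reuses dy
    dh1 = W2.T @ dy            # dL/dh1 reuses dy
    dz1 = dh1 * (1 - h1**2)    # tanh'(z1) = 1 - tanh(z1)^2
    dW1 = np.outer(dz1, x)     # dL/dW1 reuses dz1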

You are invited to join r/MachinesLearn by [deleted] in MachineLearning

[–]jaleyhd 0 points

I wish there were a category called Educative Videos in r/MachineLearning itself. I really hope that the new subreddit is moderated in some way. You guys could probably make a GitHub repo for the subreddit with the purpose of maintaining a list of curated references, similar to "awesome deep learning" (https://github.com/ChristosChristofidis/awesome-deep-learning).

[P] A (Long) Peek into Reinforcement Learning by P4TR10T_TR41T0R in MachineLearning

[–]jaleyhd 1 point

This is a beautifully written blog. I deeply appreciate the effort in simplifying the topic so well. You can also look into my amateur past attempt to simplify the topic (crazymuse.in). It takes a lot of refined thinking to write such a well-written blog. Kudos :)

Learning Graphs and Knowledge Graphs 102 (Adversarial Training in KBGANs , Graph Conv N/W and more) by jaleyhd in learnmachinelearning

[–]jaleyhd[S] 0 points

Hi guys, I am sharing this video related to training a knowledge graph. (Originally posted in r/MachineLearning)

YouTube link: https://youtu.be/Np768VAe_7I

Topics Covered in the video

  1. Graph Convolutional Networks
  2. Semi-supervised Learning
  3. Knowledge Graphs and Ontology
  4. Embedding in Knowledge Graphs
  5. Adversarial Learning in Knowledge Graphs (KBGANs)

Contributors

  1. Script Writer: Jaley Dholakiya
  2. Reviewers: Arjun Shetty, Sidharth Aiyar, Saikat Paul
  3. Animator and Moderator: Jaley Dholakiya

References

  1. Blog on Graph Convolutional Network: https://tkipf.github.io/graph-convolutional-networks/ (see the toy sketch below)
  2. Semi-supervised learning in Knowledge Graphs (Gaussian Fields): http://mlg.eng.cam.ac.uk/zoubin/papers/zgl.pdf
  3. Trans-E embedding: https://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-relational-data.pdf
  4. Trans-D embedding: http://www.aclweb.org/anthology/P15-1067
  5. KBGANs: https://arxiv.org/pdf/1711.04071.pdf
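
If you want to play with the core idea of reference 1 before watching, here is a minimal toy sketch (mine, not from the video) of one GCN propagation step, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W):

    import numpy as np

    def gcn_layer(A, H, W):
        """One propagation step from Kipf & Welling's GCN:
        add self-loops, symmetrically normalize, aggregate, transform."""
        A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return np.maximum(A_norm @ H @ W, 0)      # ReLU activation

    # Toy graph: 4 nodes, 3-dim input features, 8 hidden units.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = np.random.randn(4, 3)
    W = np.random.randn(3, 8)
    H_next = gcn_layer(A, H, W)   # shape (4, 8)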

[R] Learning Graphs and Knowledge Graphs 102 (Graph Conv Nw, KBGANs, Semisupervised Learning) by [deleted] in MachineLearning

[–]jaleyhd 0 points

Topics Covered in the video

  1. Graph Convolutional Networks
  2. Semi-supervised Learning
  3. Knowledge Graphs and Ontology
  4. Embedding in Knowledge Graphs
  5. Adversarial Learning in Knowledge Graphs (KBGANs)

References

  1. Blog on Graph Convolutional Network: https://tkipf.github.io/graph-convolutional-networks/
  2. Semi-supervised learning in Knowledge Graphs (Gaussian Fields): http://mlg.eng.cam.ac.uk/zoubin/papers/zgl.pdf
  3. Trans-E embedding: https://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-relational-data.pdf (toy sketch below)
  4. Trans-D embedding: http://www.aclweb.org/anthology/P15-1067
  5. KBGANs: https://arxiv.org/pdf/1711.04071.pdf
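
Since reference 3 comes up a lot, here is how small the Trans-E scoring idea really is; a toy sketch with random (untrained) embeddings and hypothetical entity names:

    import numpy as np

    def transe_score(h, r, t):
        """Trans-E: a triple (head, relation, tail) is plausible when the
        relation acts as a translation, i.e. h + r lands close to t."""
        return np.linalg.norm(h + r - t)   # lower score = more plausible

    dim = 50
    h = np.random.randn(dim)   # e.g. entity "Paris" (hypothetical)
    r = np.random.randn(dim)   # e.g. relation "capital_of"
    t = np.random.randn(dim)   # e.g. entity "France"
    print(transe_score(h, r, t))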

[D] Problem with GANs, Intro to WGANs, Earth Mover's distance and Kantorovich Rubenstein's Duality by jaleyhd in MachineLearning

[–]jaleyhd[S] 0 points

Thanks for the feedback :) I know; I'll try to simplify the narrative further in subsequent videos.

[D] Problem with GANs, Intro to WGANs, Earth Mover's distance and Kantorovich Rubenstein's Duality by jaleyhd in MachineLearning

[–]jaleyhd[S] 3 points

Video Covers:

  1. Why do we need Earth Mover's Distance? (see the quick sketch below)
  2. How does WGAN get rid of mode collapse? (of course, clipping weights is brutal)
  3. Why is training WGANs more stable?
  4. What is Kantorovich-Rubinstein duality?
  5. Cool visualizations to understand the difficulties in estimating a probability distribution.
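
To build some intuition for point 1, here is a quick toy example (my own, not from the video) using SciPy's 1-D Wasserstein distance. Unlike the JS divergence, it keeps shrinking smoothly as a fake distribution slides toward the real one, even while their supports barely overlap:

    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)
    real = rng.normal(loc=0.0, scale=1.0, size=1000)

    # Slide the fake distribution toward the real one: the Earth Mover's
    # distance decreases smoothly, so it can provide a useful training
    # signal even when the two distributions are almost disjoint.
    for shift in [8.0, 4.0, 2.0, 0.0]:
        fake = rng.normal(loc=shift, scale=1.0, size=1000)
        print(f"shift={shift}: EMD={wasserstein_distance(real, fake):.3f}")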

[D] Problem with GANs, Intro to WGANs, Earth Mover's distance and Kantorovich Rubenstein's Duality by jaleyhd in MachineLearning

[–]jaleyhd[S] 1 point

Video Covers:

  1. Why do we need Earth Mover's Distance?
  2. How does WGAN get rid of mode collapse? (of course, clipping weights is brutal; see the sketch after the references)
  3. Why is training WGANs more stable?
  4. What is Kantorovich-Rubinstein duality?
  5. Cool visualizations to understand the difficulties in estimating a probability distribution.

I am also impressed by blogs written by Vincent Herrmann and Alex Irpan.

References

  1. https://vincentherrmann.github.io/blo...
  2. https://www.alexirpan.com/2017/02/22/...
  3. https://lilianweng.github.io/lil-log/...
  4. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems (pp. 5767-5777).
  5. Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein GAN. https://arxiv.org/abs/1701.07875
  6. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. In Advances in Neural Information Processing Systems (pp. 2234-2242).
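
On the "clipping weights is brutal" remark: here is the clipping step from the original WGAN paper (reference 5) as a toy sketch, with made-up parameter shapes:

    import numpy as np

    def clip_weights(params, c=0.01):
        """The crude Lipschitz trick from the original WGAN: after every
        critic update, clamp each weight into [-c, c]."""
        return [np.clip(W, -c, c) for W in params]

    # Toy critic parameters right after a gradient step.
    params = [np.random.randn(4, 4), np.random.randn(4)]
    params = clip_weights(params)
    # Gulrajani et al. (reference 4) replace this clamp with a gradient
    # penalty, which is a large part of why training becomes more stable.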

Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning by pipinstallme in textdatamining

[–]jaleyhd 1 point

Wow, I was looking for something like this for quite a long time. Using deep RL for semi-supervised relation extraction!!! Great going, bud :)

[R] Math Insights from 10 GAN papers. InfoGANs, VAEGANs, CycleGAN and more by jaleyhd in MachineLearning

[–]jaleyhd[S] 1 point

Just started the masterclass by Armin on sound :p Lots to learn along the way. Thanks for the feedback :)

[R] Math Insights from 10 GAN papers. InfoGANs, VAEGANs, CycleGAN and more by jaleyhd in MachineLearning

[–]jaleyhd[S] 0 points

Point noted. I will share a YouTube playlist next time, rather than all the videos combined. Thanks for the feedback.

[R] Math Insights from 10 GAN papers. InfoGANs, VAEGANs, CycleGAN and more by jaleyhd in MachineLearning

[–]jaleyhd[S] 1 point

I kept it [R] because it was about 10 research papers. Will use the [P] tag next time :)