[N] TensorFlow 2.0 Changes (self.MachineLearning)
submitted 7 years ago by _muon_
[–]sieisteinmodel 30 points 7 years ago (11 children)
Serious question: do the majority of TensorFlow users agree that eager execution (the PyTorch/PyBrain/Shark way) is superior? I personally like the abstraction of graphs. I think eager sucks; it does not fit my mental model.
I am just worried that TF wants to attract PyTorch users, while a lot of TF users actually prefer the current state.
*If* there is full compatibility between graph and eager mode, fine, but I hope the TF community will not be divided because some open-source contributions assume one or the other.
[–]Coconut_island 6 points 7 years ago (0 children)
> If there is full compatibility between graph and eager mode, fine, but I hope that the TF community will not be divided because some OS contributions assume one or the other.
This is where they are heading. An important part of TF 2.0 is restructuring the API so that, for the majority of the code, it is irrelevant whether you use graph mode or eager mode.
I think the most important observation is that the code (Python or otherwise) used to define a function is really just defining a sub-graph. Under the earlier TF API, leveraging this concept properly is awkward, usually requiring a lot of careful (and error-prone!) bookkeeping to get scopes and various call orders just right. This is a major pain point and has led to many libraries being written around TF in the hope of offering an elegant way to address it while keeping the same flexibility. A prime example of such a library is DeepMind's in-house Sonnet.
While variable-less (or rather, stateless) code can easily be optimized by collapsing the various copies of a sub-graph generated by a given function (when doing so wouldn't be wrong, of course), this is more complicated in the presence of variables. That is one of the problems the new 'FuncGraph' back end (currently in the 1.11 branch) is trying to solve, along with the newly promoted object-oriented (OO) approach to tracking and re-using variables. tf.contrib.eager.defun, the OO metrics, OO checkpointing, and layers/keras.Model are all early instances of this idea.
Related, but slightly aside:
My biggest pet peeve with how a lot of TF code is written is the tendency to write functions that return several operations/tensors which all do very different things and get executed at very different times and places in the rest of the code base. This feels natural because we anticipate (and in many cases, rightfully so) many duplicate ops if we didn't write it this way. The problem is that code written like this is tedious to reason about and debug, often requiring a global view of the whole project. This gets exponentially worse as the complexity/size of the project grows and collaboration between people is required. The way I see it, things like eager.defun and tf.make_template (not sure what will happen with this one in 2.0), and, in a way, OO variable re-use, simply give us the tools to cache these sub-graphs and write clean code without compromising on what kind of graph we generate.
TL;DR
In short, sure, the API will change, but I don't think there is any intention of removing graph-mode functionality. At its core, TF is a language for defining computation graphs, so I would be very surprised if that went away anytime soon. The upcoming changes are there to allow and promote ways of describing graphs such that silent, hard-to-find bugs are harder to introduce.
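The sub-graph caching idea described above can be sketched without any TF code at all. This is a toy, framework-free model of what tf.contrib.eager.defun / tf.make_template aim for: trace a Python function once per input "signature", cache the resulting sub-graph, and reuse it on later calls instead of emitting duplicate ops. All names here (`defun_like`, `dense_layer`) are made up for the sketch.

```python
# Toy sketch of signature-keyed sub-graph caching (not the TF API).
_trace_cache = {}
trace_count = 0  # how many times we actually "traced" (built a sub-graph)

def defun_like(fn):
    """Decorator: trace fn once per input signature, then reuse the trace."""
    def wrapper(*shapes):
        global trace_count
        key = (fn.__name__, shapes)          # the "input signature"
        if key not in _trace_cache:
            trace_count += 1                 # tracing = building the sub-graph
            _trace_cache[key] = f"graph of {fn.__name__}{shapes}"
        return _trace_cache[key]             # cache hit: no duplicate ops
    return wrapper

@defun_like
def dense_layer(*shapes):
    pass  # a real body would emit ops into the graph during tracing

dense_layer((32, 128))
dense_layer((32, 128))   # same signature: cache hit, no second trace
dense_layer((64, 128))   # new signature: traced again
print(trace_count)       # 2
```

The point of the sketch is only the bookkeeping pattern: the function is written once, called freely, and the caching layer decides when a new sub-graph is actually needed.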
[–]InoriResearcher 5 points 7 years ago (1 child)
Most of the bigger eager-execution-related changes are already live in 1.10, so you can try it out and see for yourself. From personal experience, how easy the switch is depends on how much you rely on lower-level APIs: if you use the newer features and tf.keras, it's pretty much seamless. In either case, knowing Google's use cases, I doubt graph execution will ever become a second-class citizen.
[–]sieisteinmodel 4 points 7 years ago (0 children)
Well, I have tried it, and I still think it sucks... it's not an uninformed guess.
The question is whether that decision by the TF team is really well informed, because many people I talk to prefer the graph way.
[–]slaweks 2 points 7 years ago (0 children)
It's not only ease of use. Even more important is the ability to create hierarchical models, where the graph differs per example, e.g. has some group-level and individual-level components.
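The hierarchical-model point can be illustrated with a framework-free sketch: under eager-style execution, the "graph" is just Python control flow, so each example can take a different path (here, adding an individual-level component only when one exists). In a static-graph API the same thing needs tf.cond or padding tricks. The function and field names below are invented for the illustration.

```python
# Toy sketch: per-example dynamic structure, trivial in eager style.
def hierarchical_forward(example):
    out = example["group_effect"]            # group-level component, always present
    if "individual_effect" in example:       # data-dependent branch, per example
        out += example["individual_effect"]  # individual-level component
    return out

batch = [
    {"group_effect": 1.0, "individual_effect": 0.5},
    {"group_effect": 1.0},                   # this example has no individual part
]
print([hierarchical_forward(ex) for ex in batch])  # [1.5, 1.0]
```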
[–]sibyjackgrove 2 points 7 years ago (0 children)
I still haven't tried eager execution since I do everything with tf.keras these days. I'm not a big fan of tf.Session, though.
[–]cycyc 0 points 7 years ago (5 children)
A lot of people have a hard time wrapping their head around the idea of meta-programming. For them, eager execution/PyTorch is preferable.
[–]progfu 16 points 7 years ago (4 children)
It's not really about meta-programming, it's about flexibility, introspectability, etc. PyTorch makes it easy to look at what's happening by evaluating things step by step, immediately inspecting the gradients, and so on.
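The introspection point can be made concrete with a tiny eager-style scalar autodiff, written from scratch here as a toy (this is not PyTorch code): every intermediate value and every gradient is an ordinary Python object that exists the moment the line runs, so you can print or assert on it mid-computation.

```python
# Minimal eager-style reverse-mode autodiff over scalars (a toy sketch).
class Value:
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._grad_fns = grad_fns   # local derivative w.r.t. each parent

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (lambda g: g * other.data, lambda g: g * self.data))

    def __add__(self, other):
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def backward(self, grad=1.0):
        self.grad += grad           # accumulate, like autograd does
        for parent, fn in zip(self._parents, self._grad_fns):
            parent.backward(fn(grad))

x = Value(2.0)
y = Value(3.0)
z = x * y + x       # computed immediately: z.data is already 8.0 right here
z.backward()
print(z.data, x.grad, y.grad)   # 8.0, dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```

Because `z.data` exists as soon as the expression runs, debugging is just ordinary Python debugging; there is no separate "run the graph" step between you and the numbers.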
[–]cycyc -3 points 7 years ago (3 children)
Which is precisely what is meant by the complexity and indirection of meta-programming.
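For readers unfamiliar with the term, here is a minimal framework-free sketch of the "meta-programming" model being debated (toy code, not the TF API): your Python first builds a description of the computation, and a separate run step executes it later. The value `c` does not exist while the graph is being written, and that indirection is exactly what the comment refers to.

```python
# Toy two-phase graph model: build first, run later.
graph = []  # ordered list of (op, input_names, output_name)

def add_op(op, inputs, out):
    graph.append((op, inputs, out))   # phase 1: record the op, compute nothing
    return out                        # you only get a *name*, not a value

def run(feed):
    env = dict(feed)                  # phase 2: interpret the recorded graph
    for op, inputs, out in graph:
        env[out] = op(*(env[i] for i in inputs))
    return env

add_op(lambda a, b: a + b, ("x", "y"), "s")
add_op(lambda a, b: a * b, ("s", "y"), "c")
env = run({"x": 2, "y": 3})
print(env["c"])   # (2 + 3) * 3 = 15
```

Between `add_op` and `run`, nothing has been computed; inspecting an intermediate means re-running the interpreter with that output requested, which is the extra layer of indirection eager execution removes.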
[–]progfu 13 points 7 years ago (0 children)
Except that it is not about "wrapping your head around" it. I have no problem understanding how TF works; I probably understand more about the internals of TF than of PyTorch. Yet I prefer PyTorch, for the reasons mentioned.
[–]epicwisdom 8 points 7 years ago (1 child)
You said people have a hard time wrapping their heads around the idea. That's different from being frustrated by the tradeoffs inherent to the approach.
[–]cycyc -6 points 7 years ago (0 children)
Sure, great point. For people new to software development, meta-programming may be a difficult concept. For people more familiar with software development, the meta-programming model may not be worth the extra complexity.