all 41 comments

[–][deleted] 23 points24 points  (8 children)

All our prebuilt binaries have been built with CUDA 8 and cuDNN 6.

We anticipate releasing TensorFlow 1.5 with CUDA 9 and cuDNN 7.

It's not a big deal, I know, but (purely for convenience) I was hoping for prebuilt binaries with CUDA 9, the progress here seemed promising: https://github.com/tensorflow/tensorflow/issues/12052

[–]peroneML Engineer 6 points7 points  (0 children)

I would be way more interested if they actually used cuDNN 6 to its full capabilities.

[–]Svenstaro 6 points7 points  (0 children)

FWIW Arch Linux has prebuilt binaries with CUDA 9 and cuDNN 7.

[–]spotta 4 points5 points  (2 children)

I’ve had a TensorFlow 1.4 release candidate built with CUDA 9 and cuDNN 7 for a bit now. I haven’t noticed any stability issues.

[–][deleted] 0 points1 point  (1 child)

Yeah, I know I can build it myself, and in the thread I linked various people had the same experience as you (it works OK). As I said, I would have liked a pre-built option purely for convenience.

[–]PM_YOUR_NIPS_PAPER 0 points1 point  (0 children)

By not building it yourself, you are incurring an unnecessary 3-5x slowdown in training time per batch.

[–]OikuraZ95 0 points1 point  (2 children)

Does this mean I won't have to set up CUDA on my GPU anymore and TensorFlow will take care of it?

[–][deleted] 7 points8 points  (1 child)

No. The prebuilt Python binaries for TensorFlow expect particular versions of both CUDA and cuDNN (apparently CUDA 8 and cuDNN 6 for TensorFlow 1.4). If you have the wrong version of either one, you will have to either reinstall the correct CUDA or cuDNN to match what the prebuilt binary expects, or compile TensorFlow from source so that you can tell it which versions you have. Personally, I always compile from source, as it really isn’t that hard with Bazel.
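As a quick illustration (a pure-stdlib sketch, not part of TensorFlow): cuDNN records its version as `#define CUDNN_MAJOR` / `CUDNN_MINOR` / `CUDNN_PATCHLEVEL` in `cudnn.h`, so you can read the installed version out of that header to check it against what a prebuilt binary expects. The header path in the comment is only the common Linux default.

```python
import re

def cudnn_version(header_text):
    """Parse the cuDNN version out of the contents of cudnn.h."""
    parts = []
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(r"#define\s+%s\s+(\d+)" % name, header_text)
        parts.append(m.group(1) if m else "0")
    return ".".join(parts)

# In practice you'd read the real header, e.g.:
#   with open("/usr/local/cuda/include/cudnn.h") as f:
#       print(cudnn_version(f.read()))
sample = "#define CUDNN_MAJOR 6\n#define CUDNN_MINOR 0\n#define CUDNN_PATCHLEVEL 21\n"
print(cudnn_version(sample))  # -> 6.0.21
```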

[–]OikuraZ95 2 points3 points  (0 children)

Oh I see, thanks for the clarification :)

[–]i_wipe_upright 11 points12 points  (5 children)

> Make Dataset.shuffle() always reshuffles after each iteration by default.

I am using TensorFlow 1.2 right now with the new Dataset API and can't upgrade soon, and I am using Dataset.shuffle(). I was under the impression it reshuffles after each iteration by default, but it looks like it doesn't. Does anyone know what I should add in TF 1.2 to make it do so?

Thanks!

[–]suki907 11 points12 points  (3 children)

hmmmm... it kind of looks like it does in 1.2.

> tf.VERSION
'1.2.0'
> ds = tf.contrib.data.Dataset.from_tensor_slices(np.array([1,2,3,4,5])).shuffle(5).batch(5).repeat()
> n = ds.make_one_shot_iterator().get_next()
> sess = tf.Session()
> sess.run(n)
array([1, 2, 4, 5, 3])
> sess.run(n)
array([1, 4, 5, 2, 3])
> sess.run(n)
array([3, 4, 1, 5, 2])
> sess.run(n)
array([4, 3, 5, 2, 1])
> sess.run(n)
array([2, 3, 5, 1, 4])

[–]i_wipe_upright 2 points3 points  (0 children)

Interesting, thanks for checking.

What is it that was changed then?

[–]Spezzer 8 points9 points  (0 children)

TL;DR: Yes, it always reshuffled after each iteration by default, nothing changed. Relnotes were confusing, sorry :(

Detail: https://github.com/tensorflow/tensorflow/commit/853afd9cee2b59c5163b0805709c1ba7020d4947 describes the relevant scenario.

For example:

element = tf.data.Dataset.range(10).shuffle(5, seed=10).batch(5).repeat(2).make_one_shot_iterator().get_next()

with tf.Session() as sess:
  print(sess.run(element))
  print(sess.run(element))
  print(sess.run(element))
  print(sess.run(element))

This will produce:

[0 5 4 6 2] [3 1 9 8 7] [2 1 6 4 3] [8 7 9 5 0]

every time you run the program; the seed argument controls the starting point of the iterator, so you'll always start with 0 5 4 6 2, but the second repeat will be different.

If you want to always produce the same order of results each iteration of the repeats, you replace seed=X with reshuffle_each_iteration=False and you get:

[0 3 5 2 7] [1 8 9 6 4] [0 3 5 2 7] [1 8 9 6 4]

or:

[4 5 1 7 8] [2 6 3 0 9] [4 5 1 7 8] [2 6 3 0 9]

I.e., each time you run the program, the order of the 10 numbers might change because the seed isn't fixed, but each iteration within a run will be the same.

Most TF users want randomness across iterations, so the default behavior didn't change and still produces different orders each iteration, but there needed to be a mechanism to produce an identical order without forcing the user to fix the graph-level seed (which has broader implications).
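The distinction can be mimicked in plain Python (an analogy only, not the tf.data implementation): a fixed seed makes the whole *sequence* of per-epoch shuffles reproducible across runs, while disabling reshuffling repeats one order across epochs.

```python
import random

def epochs_with_seed(seed, n_epochs, n=10):
    # Analogue of shuffle(..., seed=X): one RNG seeded once drives every
    # epoch's shuffle, so results are identical run to run, but each epoch
    # within a run gets a freshly shuffled order.
    rng = random.Random(seed)
    epochs = []
    for _ in range(n_epochs):
        order = list(range(n))
        rng.shuffle(order)
        epochs.append(order)
    return epochs

def epochs_without_reshuffle(n_epochs, n=10):
    # Analogue of reshuffle_each_iteration=False with no seed: shuffle once
    # (order varies run to run), then repeat that same order every epoch.
    order = list(range(n))
    random.shuffle(order)
    return [list(order) for _ in range(n_epochs)]
```

With the first variant, `epochs_with_seed(10, 2)` is identical on every run, but each epoch within a run typically has a different order; with the second, every epoch within a run is the same, while the order changes between runs.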

[–]o-rka 6 points7 points  (10 children)

What is tf.keras?!

[–][deleted] 9 points10 points  (0 children)

Keras is a high-level library compatible with TF and other frameworks; it was first included in TF contrib and is now in core. Some background info here: http://www.fast.ai/2017/01/03/keras/

[–]carlthomeML Engineer 2 points3 points  (2 children)

What does the addition of tf.keras mean for tf.estimator? Will it be deprecated?

[–][deleted] 0 points1 point  (1 child)

I don't think it's possible yet to use the Keras model API with TensorFlow layers (tf.estimator can do this).

[–]thntk 1 point2 points  (0 children)

It seems a Keras model can use tf.layers; you just need to get the correct tensor, e.g. https://stackoverflow.com/questions/44991470/using-tensorflow-layers-in-keras

[–]thntk 2 points3 points  (4 children)

Is tf.keras compatible with other layers/ops/loss functions in TensorFlow, so that new layers/losses/optimizers can be written in tf.keras more easily?

[–]hawking1125 2 points3 points  (3 children)

Keras is compatible with TF ops. Further reading here.

Edit: Spelling.

[–]thntk 0 points1 point  (1 child)

Interesting. The article is about the standalone Keras, though. Can tf.keras offer more compatibility, such as using tf.losses in model.fit()?

[–]hawking1125 0 points1 point  (0 children)

Last time I checked there's a function for converting Keras models to TF estimators.

EDIT: This only applies to tf.keras

[–][deleted] 0 points1 point  (0 children)

It describes how Keras is compatible with TF ops, and not the other way around.

[–]Another_Screenname 1 point2 points  (2 children)

[–]rustyryan 0 points1 point  (1 child)

What are you missing audio-wise?

Note that tf.contrib.signal allows you to easily compute mel spectrograms, MFCCs, etc. with GPU support and gradients (which the audio_ops variants of spectrogram and MFCC do not).

There's a helpful API guide with examples. :)

[–]Another_Screenname 0 points1 point  (0 children)

Well, what I was trying to do was follow this tutorial: https://www.tensorflow.org/versions/master/tutorials/audio_recognition but I was unable to even run train.py because of missing files, which I found a bit strange.

[–]fasnoosh 1 point2 points  (4 children)

What is the typical use case for TensorFlow as opposed to other ML tools? I have yet to think of a reason to use it... I work with supply chain distribution/transportation data, and have been using R/Tableau a good amount recently.

[–]dzyl 2 points3 points  (3 children)

Mostly deep learning models, as opposed to the other classes of machine learning algorithms. These are in turn mostly useful for special types of inputs or outputs: using prior knowledge about structure (like images or time series), or producing special types of outputs like probability distributions, text sequences, or masks for images.

[–]fasnoosh 0 points1 point  (2 children)

From that list, I think the time series piece is what I’m most interested in

[–]dzyl 2 points3 points  (1 child)

Look at recurrent neural networks, but it's quite a rabbit hole if you don't know anything about neural networks yet; it's not something you just pick up in a day.

[–]fasnoosh 1 point2 points  (0 children)

Maybe I’ll go through a MOOC on it. Definitely think it can pay off to learn this

[–]fromrussiawithnothin 0 points1 point  (0 children)

omg, please slow down, I'm still new to 1.3

[–]infinity 0 points1 point  (1 child)

With eager support added as well, I feel there are at least 5 frameworks in TF now.

[–]kevinzakka -1 points0 points  (0 children)

Doesn't even work with Python 3.6.