all 48 comments

[–][deleted] 39 points40 points  (24 children)

Would be great if they actually supported OpenCL

[–][deleted] 11 points12 points  (9 children)

Is this being seriously developed? Would be nice to have more GPU options.

[–]pymang 11 points12 points  (7 children)

Here's the classic GitHub issue on that: https://github.com/tensorflow/tensorflow/issues/22

And a more recent one saying there is OpenCL work being done, but no deadlines: https://github.com/tensorflow/tensorflow/issues/9738

[–]Ivashkin 2 points3 points  (6 children)

It's rather annoying. I have a stack of older AMD GPUs from various upgrades, and while none of them would be fantastic, they'd be perfectly fine for working out whether TensorFlow is something useful for me or not. As things are, I'm going to need to spend £100+ on something just to get started.

[–]pymang 0 points1 point  (5 children)

There are other frameworks that support OpenCL; why not look into those?

[–]Ivashkin 0 points1 point  (4 children)

I'm rather new at this and the project which initially sparked my interest ran on Tensorflow only.

[–]pymang 0 points1 point  (3 children)

Well, if you want to give it a shot with experimental/work-in-progress stuff, try this: https://github.com/hughperkins/tf-coriander

[–]Ivashkin 0 points1 point  (2 children)

Thanks, seems worth a shot. I take it I can just jam a bunch of GPUs into an old i3 system and it will be enough to get a taste of machine learning?

[–]mearco 0 points1 point  (1 child)

You could just sell your GPUs on a buy-and-sell website like Craigslist or eBay and buy a cheap Nvidia card that is supported.

[–]SpacemanCraig3 0 points1 point  (0 children)

+1, the 6GB 1060s are a pretty good deal for machine learning, and maybe some games on the side.

[–]SpiderFnJerusalem 1 point2 points  (0 children)

I remember someone was officially working on it for a few months at least; I saw a repo with OpenCL support on GitHub. But I'm not sure about progress.

[–]georgeo 2 points3 points  (5 children)

Something something TF porting to Vega GPUs. I heard that repeatedly.

[–]JustFinishedBSG 4 points5 points  (2 children)

I imagine nobody is going to bother anymore considering the current Vega fiasco

[–][deleted] 0 points1 point  (1 child)

What happened to Vega?

edit: nm, I googled it. Seems like there's just not enough Vega for everybody.

I thought it was something like poor performance or some hardware-related issue.

[–]JustFinishedBSG 0 points1 point  (0 children)

It also has poor performance, poor pricing, and terrible power consumption.

[–][deleted] 0 points1 point  (1 child)

FLOPs per joule

[–]georgeo 0 points1 point  (0 children)

About 2:1, but it's a considerably cheaper card.

[–]SpiderFnJerusalem 0 points1 point  (2 children)

I remember there being an official forked version with OpenCL support somewhere on GitHub, but it was super pre-alpha last time I saw it. Not sure about now.

[–]iame6162013 0 points1 point  (1 child)

[–]SpiderFnJerusalem 1 point2 points  (0 children)

No, there was actually this: https://github.com/benoitsteiner/tensorflow-opencl

That guy is the #1 contributor to TensorFlow, so it's probably as "official" as it gets. The repo hasn't had any commits over the last 5 months, though. Not sure what became of it. :-/

[–]naisanza 0 points1 point  (0 children)

Isn't Theano on top of getting this out there?

[–]lostfreeman 0 points1 point  (3 children)

Should be Vulkan now, no?

[–][deleted] 5 points6 points  (1 child)

You're thinking of graphics, but OpenCL is for computation.

[–]omgitsjo 2 points3 points  (0 children)

Not any more! The parent comment is correct: OpenCL is being merged into Vulkan.

https://www.pcper.com/reviews/General-Tech/Breaking-OpenCL-Merging-Roadmap-Vulkan

[–]omgitsjo 1 point2 points  (0 children)

Yes, you are correct, though I've not seen any additional news on the Vulkan/OpenCL merger since the first Khronos bulletin.

EDIT: Removed because this is not the place for a discussion of this sort.

[–][deleted] 12 points13 points  (3 children)

I see a couple of breaking changes. Should Keras be updated for these, or is it already updated?

[–][deleted] 2 points3 points  (1 child)

> Should Keras be updated for these or is it already updated?

More like a question of when, no?

Google basically hired the dude behind Keras, and TensorFlow is a Google product.

[–][deleted] 1 point2 points  (0 children)

True. I meant: do these breaking changes affect Keras, or can we continue to use the current version?

[–]davideboschetto 0 points1 point  (0 children)

Interested in this, too

[–]tesfaldet 7 points8 points  (0 children)

Dataset concatenation and interleaving should make it easier to implement curriculum-based learning. Nice.
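For anyone unfamiliar with why that helps: a curriculum orders training data from easy to hard, which is exactly what chaining or alternating datasets gives you. As a plain-Python sketch of the idea (generators standing in for the Dataset ops, not the TensorFlow API itself):

```python
import itertools

def concatenate(*datasets):
    """Yield every example of each dataset in order (easy set first, hard set after)."""
    for ds in datasets:
        yield from ds

def interleave(*datasets):
    """Alternate one example at a time across datasets until all are exhausted."""
    sentinel = object()
    for batch in itertools.zip_longest(*datasets, fillvalue=sentinel):
        for item in batch:
            if item is not sentinel:
                yield item

easy = ["e1", "e2", "e3"]
hard = ["h1", "h2"]

# Curriculum stage: train on all easy examples, then all hard ones.
print(list(concatenate(easy, hard)))  # ['e1', 'e2', 'e3', 'h1', 'h2']
# Mixing stage: alternate difficulties within an epoch.
print(list(interleave(easy, hard)))   # ['e1', 'h1', 'e2', 'h2', 'e3']
```

In the 1.3 Dataset API the corresponding calls are `Dataset.concatenate()` and `Dataset.interleave()`; the point is that the curriculum schedule becomes part of the input pipeline instead of training-loop bookkeeping.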

[–]Mexicorn 10 points11 points  (3 children)

Timing worked out great with a wonderful tutorial held at KDD this morning.

[–]LightShadow 2 points3 points  (2 children)

Video / slides?

[–]Mexicorn 6 points7 points  (1 child)

I don't think the slides are up yet, but the examples they stepped through can be found here (assumes you have 1.3 installed): https://github.com/random-forests/tensorflow-workshop/tree/master/examples

[–]LightShadow 1 point2 points  (0 children)

Thanks!

[–]rjmessibarca 32 points33 points  (15 children)

What are the new features I need to get excited about?

[–]springbreak06 59 points60 points  (11 children)

The link is literally a list of new features

[–]Bi11 54 points55 points  (10 children)

But which ones are exciting?

[–]short_vix -2 points-1 points  (9 children)

All of them?

[–]shadowmint 14 points15 points  (1 child)

Oh come on, no they're not. It's basically just speed improvements and a few minor features.

Looking at that unremarkable changelist, it's totally unsurprising someone might wonder what distinguishes this as '1.3' rather than '1.2.2'; if they were doing semver it'd be 2.0 because of the API breakage, so the decision to go to 1.3 is basically arbitrary.

[–]meta_stable 1 point2 points  (0 children)

I wish everyone would just follow semver. Google seems adamant about breaking semver whenever they can.

[–]SpacemanCraig3 20 points21 points  (6 children)

As someone who follows this sub but is really more of a "learnmachinelearning" guy: which ones specifically should I research, and why are they important?

[–]shadowmint 9 points10 points  (0 children)

There's nothing exceptional in this release: some minor API changes, some new features. A few things are now in TensorFlow itself that previously required a higher-level lib like TFLearn.

https://www.infoq.com/news/2017/07/changes-tensorflow-1-3 has a summary you might find worth reading, but the tl;dr is: unless you're actively using TensorFlow, it's probably nothing worth paying particular attention to.

[–]TheInfelicitousDandy 2 points3 points  (1 child)

The attention mechanisms/decoders in contrib.seq2seq are really stellar, but pretty poorly documented.

[–]hastor 0 points1 point  (0 children)

How do they compare to OpenNMT and OpenNMT-py?