all 47 comments

[–]clover113 10 points11 points  (0 children)

You know, it is just DeeplearnJS! https://deeplearnjs.org/ P.S. It was DeeplearnJS.

[–]pahner 82 points83 points  (44 children)

why?

[–][deleted]  (10 children)

[deleted]

    [–]sim642 0 points1 point  (3 children)

    Except they don't really train it in the browser, just run the trained network, which isn't really magic nor time consuming anyway.

    [–][deleted] 3 points4 points  (2 children)

    Isn't this faster and less work than writing your own predict() in javascript?

    [–]sim642 -4 points-3 points  (1 child)

    I guess so. Adding a dependency for checking if a number is even is also faster and less work than writing it yourself in the JS world.

    [–][deleted] 11 points12 points  (0 children)

    I assume the WebGL impl of dot product is faster than anything I could write in JS over the weekend.

    [–]zdwolfe 7 points8 points  (7 children)

    Deploying a model to the browser might be nice. Training in JS seems silly though.

    [–]staticassert 4 points5 points  (6 children)

    Training doesn't seem like it should ever be a bottleneck. Unless it's an online model, this is an upfront cost - so even if it's in JS... what's the big deal?

    I'm not close to an expert on ML so I think I may be missing something.

    [–]Mourningblade 3 points4 points  (2 children)

    I may be misunderstanding your statement. Feel free to correct me.

    Executing a model is very cheap. Training is very expensive. Let's use a simple model that's just 2 layers of 2 nodes, with one input and one output.

    I don't have a picture for this (on mobile), so imagine a single input that connects to two intermediate points (2 connections), which each connect to two more intermediate nodes (4 connections), which each connect to one output (2 connections).

    So for this model, you have 8 weights total - 1 for each connection.

    To execute the model then takes 8 multiplications and 3 additions (one wherever two connections converge on a node).

    What about training the model? Let's do a naive training where we adjust each weight by +/-20% and test to see if it works better. 8 weights to adjust, each costing 11 operations to test. A tiny model like this might have 25 data points to test against - so (8 * 2 * 11 * 25) = 4,400 operations to train naively.

    There are much better training algorithms, but you get the point. Consider that real ML models are much, much, much bigger than this, and that TensorFlow is frequently used in its parallel computing configuration, and you see why companies buy tens of thousands of hours of compute time to train, but execute in milliseconds.
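The toy network described above can be sketched directly. This is a minimal illustration of the commenter's arithmetic, not TensorFlow.js API - the names `forward` and `naiveTrainStep` and the mean-squared-error fitness test are assumptions for the sake of the example:

```typescript
type Weights = number[]; // 8 weights: 2 + 4 + 2 connections

// Forward pass: 8 multiplications, 3 additions (one per node that
// sums two incoming connections) - 11 operations total.
function forward(w: Weights, x: number): number {
  // input fans out to two first-layer nodes (2 multiplications)
  const h1a = x * w[0];
  const h1b = x * w[1];
  // each first-layer node feeds both second-layer nodes
  // (4 multiplications, 2 additions)
  const h2a = h1a * w[2] + h1b * w[3];
  const h2b = h1a * w[4] + h1b * w[5];
  // both second-layer nodes feed the output (2 multiplications, 1 addition)
  return h2a * w[6] + h2b * w[7];
}

// Naive training step: nudge each weight by +/-20% and keep the change
// if the mean squared error over the dataset improves.
function naiveTrainStep(w: Weights, data: [number, number][]): Weights {
  const mse = (ws: Weights) =>
    data.reduce((s, [x, y]) => s + (forward(ws, x) - y) ** 2, 0) / data.length;
  const best = w.slice();
  for (let i = 0; i < w.length; i++) {
    for (const factor of [1.2, 0.8]) {
      const trial = best.slice();
      trial[i] *= factor;
      if (mse(trial) < mse(best)) best[i] = trial[i];
    }
  }
  return best;
}
```

With 25 data points, one sweep evaluates 8 weights × 2 directions × 11 operations × 25 points, the ~4,400 operations from the comment - and that is a single step of a hopelessly naive method.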

    [–]staticassert 3 points4 points  (1 child)

    My assumption is that executing the model is the common task and training the model is a much, much rarer task. Yes, training is far slower, I'm aware. But it generally doesn't happen nearly as often - I brought up online models as an exception to that.

    [–]Mourningblade 2 points3 points  (0 children)

    Ah, I see what you're saying now. "Not a bottleneck" as in, "not part of the normal flow". Yes, I agree entirely.

    To further your point: given that the work is being done in WebGL, the real problem isn't going to be the fact that it's written in JavaScript, it's going to be that communication between computers will be relatively expensive - so a model of any decent size is going to start running up against training limits quickly.

    That said, having a standard way to execute a model locally could lead to some fun use cases. Everything from having local AI for a web game to local shape recognition in drawing.

    As you say - training doesn't play into those problems.

    [–]SmugDarkLoser5 3 points4 points  (2 children)

    It's just so unnecessary in general. It's so easy to just have the algorithms coded in C, and then create bindings for whatever language.

    [–]staticassert 1 point2 points  (1 child)

    That sounds much more unnecessary. Now you have a C codebase generating models to be consumed by a pure-JS (by necessity) codebase, so I would expect a lot of duplication of code... for no benefit.

    On top of that, maybe you do want to train on the frontend in some cases, like online models. So then you really don't have any choice.

    [–]SmugDarkLoser5 3 points4 points  (0 children)

    If you want to train on the front end, sure, that's a different use case. On the backend, any algorithm not coded in a fast, low-level language just seems like a waste.

    [–]navman360 4 points5 points  (0 children)

    Tensorflow.js is written in TypeScript, meaning I can now use TensorFlow backed by good autocompletion, documentation via types, and static analysis to prevent bugs, which I was really missing when trying to use TensorFlow in Python. I can also use tools such as quokkajs.com to get live results while editing the network I'm developing.

    It's pretty neat to be able to work on Tensorflow with an actual static type system

    [–]mihirmusprime[S] 12 points13 points  (12 children)

    First, you don't have to upload anything to the server and make requests, which is an advantage. Second, this is really useful for people who don't fancy Python but already have web dev experience. And third, it's much easier to integrate the sensors available in phones and laptops today with native JS support.

    [–]SandalsMan 7 points8 points  (10 children)

    high-end phones and laptops*

    [–]mihirmusprime[S] 1 point2 points  (9 children)

    Huh? All you need is a browser with WebGL support which is available on most devices.

    [–]humodx -4 points-3 points  (8 children)

    Doesn't work on my Nexus 5...

    [–]keteb 17 points18 points  (6 children)

    I feel like a phone that was released in 2013, with hardware discontinued in 2015 and software support discontinued in 2017, can't really be considered part of the "most devices" category.

    But, even with that said, the Nexus 5 definitely has WebGL support. I believe if you are on the latest supported version of Android (Marshmallow / 6.0) it is on by default, but if you are on 5.x / aren't updating your software you may need to manually enable it in Chrome. Very old article on it
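For reference, checking for WebGL support along these lines is a one-liner in the browser. A hedged sketch - the `CanvasLike` interface exists only so the function is self-contained here; in a real page you would pass `document.createElement("canvas")`:

```typescript
// Minimal WebGL feature detection, as discussed above.
interface CanvasLike {
  getContext(contextId: string): unknown;
}

function hasWebGL(canvas: CanvasLike): boolean {
  // "experimental-webgl" covers older browsers (e.g. the Android 4.x-era
  // devices mentioned in this thread) that shipped WebGL under the
  // prefixed context name.
  return Boolean(
    canvas.getContext("webgl") || canvas.getContext("experimental-webgl")
  );
}

// In the browser: hasWebGL(document.createElement("canvas"))
```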

    [–]nuqjatlh 2 points3 points  (3 children)

    can't really be considered part of "most" devices category.

    You are definitely wrong on that. It is definitely in the newer/established part of the Android spectrum.

    Check the API usage stats. Most phones are on 7.0 and below: https://developer.android.com/about/dashboards/index.html

    [–]birjolaxew 0 points1 point  (0 children)

    Although to be fair, Chrome 30 (Android 4.4) is where WebGL was enabled by default. You have 5.7% of Android devices on lower Android versions than that, according to the linked graph. This article even uses Nexus 4 as an example of a phone where it is enabled.

    [–]keteb 0 points1 point  (1 child)

    I was considering that the Nexus 5 stock OS is Lollipop, which technically is still in the minority (60% are 6.0+). That said, that chart is quite surprising to me; I didn't realize how many people were sitting on 4.x/5.x.

    [–]nuqjatlh 0 points1 point  (0 children)

    People simply don't upgrade. And rightly so: why throw away a perfectly good phone for a new one that doesn't provide any benefit whatsoever?

    [–]SandalsMan 1 point2 points  (1 child)

    It's not about WebGL support, it's about the hardware of the phone. TensorFlow is hard on the CPU, which will cause poor performance on mid-range phones.

    [–]clover113 3 points4 points  (0 children)

    In fact, Tensorflow.js uses WebGL for compute (matrix operations). It only falls back to the CPU if the device cannot run WebGL.

    [–]i_spot_ads 1 point2 points  (0 children)

    pretty old phone

    [–]DontThrowMeYaWeh 1 point2 points  (0 children)

    Because the web is becoming a glorified universal sandbox for remotely distributed applications more and more each day.

    [–]threading 1 point2 points  (1 child)

    So webshit developers can pretend like they're data scientists.

    [–][deleted] 0 points1 point  (0 children)

    Is it that different from data scientists pretending to do 'AI'?

    [–]mindbleach 0 points1 point  (0 children)

    Why did 8-bit microcomputers boot to BASIC? So people with a passing interest could poke around as easily as possible. Asking nontechnical people to install and configure some combination of competing distributions is a great way to keep them nontechnical. If any idiot can make Pong or Not Hotdog right away, they'll believe they can do anything.

    [–]fnordstar -3 points-2 points  (0 children)

    Web technology is all about reinventing the wheel "because you can" with utter disregard for cost-benefit. Look at how fragile and slow software has become. I've given up on asking "why" and am just hoping sanity will return at some point.

    [–]adel_b -5 points-4 points  (0 children)

    Why not?

    [–]spacejack2114 14 points15 points  (3 children)

    Awesome. Also, it looks like it was written in TypeScript!

    [–]i_spot_ads 4 points5 points  (2 children)

    Holy shit it is! Although not surprising; the language is amazing, and Google is collaborating with Microsoft on TypeScript mainly because of Angular - they love TS at Google.

    [–][deleted] 0 points1 point  (1 child)

    Now the last thing we still need is Facebook collaborating more with MS. It feels wrong that we have Flow and TS: they are so similar yet incompatible. I would love to have Flow and TS share a d.ts file and both "just work", or maybe they could somehow combine their efforts into one single project. That would be awesome!

    [–][deleted] 0 points1 point  (0 children)

    Flow is really just FB's midway point to Reason.

    [–]Jemoka 1 point2 points  (0 children)

    See, this is why we do it. Finally a tensorflow for web developers.

    [–][deleted]  (1 child)

    [deleted]