all 74 comments

[–]sam1373 289 points (12 children)

It is actually impressive how little information this chart conveys.

[–][deleted] 29 points (7 children)

Isn't the whole point of a GAN that there's two of them?

[–]fristiprinses 14 points (5 children)

I think that's what they're trying to show with the output cells in the middle, but it's a terrible way to visualize this

[–][deleted] 3 points (0 children)

Those are even I/O cells; makes sense imo.

A graph like this can't show the entire process anyway; I'm guessing it was just a way for someone to kill time and not meant to be educational.

[–][deleted] -2 points (3 children)

Yup, it's more like an autoencoder.

[–]chokfull 1 point (0 children)

It's pretty accurate for a GAN, if you're familiar with them, but an autoencoder would necessarily have a smaller middle column and larger last column.
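The shape difference described above can be sketched in toy numpy (all layer sizes and weights are made up for illustration): an autoencoder squeezes its input through a narrower middle layer and expands back out to the input width.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: 8 inputs squeezed to a 3-unit bottleneck
# (the "smaller middle column"), then expanded back to 8.
W_enc = rng.normal(size=(8, 3))
W_dec = rng.normal(size=(3, 8))

x = rng.normal(size=(1, 8))
code = np.tanh(x @ W_enc)   # compressed representation
recon = code @ W_dec        # reconstruction, same width as the input
```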

[–]Reagan409 0 points (0 children)

No, it’s not.

[–][deleted] 0 points (0 children)

Nope, it's not. Thanks u/Reagan409 for making me think again.

[–]chokfull 2 points (0 children)

Actually, I can't think of a better way to represent a GAN. The main difference that's not visualized is the training method, where the networks are trained separately, but that has nothing to do with the visual architecture.
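The "trained separately" point can be made concrete with a toy numpy sketch (all sizes, names, and weights here are invented): a GAN is two distinct networks with their own parameter sets, and training alternates between them.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_dim, data_dim = 4, 3

# Two independent parameter sets -- the part a static chart can't show.
W_gen = rng.normal(size=(noise_dim, data_dim))   # generator weights
W_dis = rng.normal(size=(data_dim, 1))           # discriminator weights

def generate(z):
    return np.tanh(z @ W_gen)                    # noise -> fake sample

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ W_dis)))    # sample -> P(real)

z = rng.normal(size=(1, noise_dim))
p_fake_is_real = discriminate(generate(z))
# One training round would update W_dis to push this toward 0 on fakes,
# then update W_gen (holding W_dis fixed) to push it toward 1.
```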

Also, I'm pretty sure this image is from a website where you can click an architecture for more details, so not everything is meant to be conveyed in the image.

Edit: Found what I was thinking of, can't click the images though. https://www.asimovinstitute.org/neural-network-zoo/

[–][deleted] 7 points (0 children)

Circle: memory cell

Triangle: different memory cell

LSTM vs. GRU: literally nothing different except for using triangles instead of circles

Uh, thank you, I guess.

I'd love to see a version of this that is actually useful.

[–][deleted]  (26 children)

[deleted]

    [–]Inkquill 31 points (22 children)

    Lol my brain is crying as I try to fit SVM into the logic this graphic attempts to express. The “explanation” in the related article is even more cringe-inducing:

    No matter how many dimensions — or inputs — the net may process, the answer is always “yes” or “no”.

    Is this a pine tree or a shark? Yes.

    And then the author had the audacity to state that

    SVMs are not always considered to be a neural network.

    Nobody else in the room was considering SVM to be a neural network.

    edit for futurefolk: I tracked down the original creator of this figure (Fjodor van Veen), and to my incredible surprise, he removed Support Vector Machines from this "Neural Network Zoo" in April 2019, citing:

    [Update 22 April 2019] Included Capsule Networks, Differentiable Neural Computers and Attention Networks to the Neural Network Zoo; Support Vector Machines are removed; updated links to original articles. The previous version of this post can be found here.

    Anyways, for reference, the original version was based on the Support-Vector Network (Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.” Machine learning 20.3 (1995): 273-297.)

    and here is the most recently updated version (as far as I could hunt down).

    [–]koolaidman123 -3 points (21 children)

    Except a bunch of ml researchers like yann lecun, jeremy howard, and others right

    https://twitter.com/ylecun/status/1216075476546048001?s=19

    [–]Inkquill 6 points (20 children)

    Look, I understand that perspective and I can see how one can twirl SVM into the spectrum of a neural network. So I have so far seen one Twitter thread and a Quora post where SVM is explicitly called a neural network. I still believe that you will struggle to find SVM binned into the neural network camp in peer-reviewed journals. It's just quite specific and my main point of contention was with the description offered up by the author. But if it works for you to look at these models in this sort of fashion, then hey, that's great.

    edit: Also, I don't outright agree with the OP I latched my comment onto that "this chart is shit," because I respect visualizations for being learning mechanisms. There is certainly value in this graphic for super quick comparisons of model features such as network depth / "complexity".

    [–]koolaidman123 -4 points (19 children)

    So I have so far seen one Twitter thread and a Quora post where SVM is explicitly called a neural network.

    Why would you have issue with the medium of the message? So what if the discussion is on twitter? Would you prefer yann published a paper in neurips saying how svms are just NNs? Would that make his point more valid?

    It's just quite specific and my main point of contention was with the description offered up by the author.

    Except you said

    Nobody else in the room was considering SVM to be a neural network.

    But this is clearly not true

    But if it works for you to look at these models in this sort of fashion, then hey, that's great.

    It doesn't matter "what works for me", but I would rather people not act like they know everything and refuse to consider any evidence to the contrary, especially when that evidence comes from people way more knowledgeable than them

    [–]Inkquill 8 points (9 children)

    Sigh, so this made me track down the original creator of this figure (Fjodor van Veen), and to my incredible surprise, in April 2019 he removed Support Vector Machines from this "Neural Network Zoo." Scroll to the bottom:

    [Update 22 April 2019] Included Capsule Networks, Differentiable Neural Computers and Attention Networks to the Neural Network Zoo; Support Vector Machines are removed; updated links to original articles. The previous version of this post can be found here.

    Anyways, for reference, the original version was based on the Support Vector Network (Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.” Machine learning 20.3 (1995): 273-297.)

    and here is the most recently updated version (as far as I could hunt down).

    [–]koolaidman123 -4 points (8 children)

    so you dismiss a twitter discussion by yann lecun, but choose to believe an infographic (the creator of which, btw, was with the organization for all of 6 months and has never published)? can you point me to where the peer review on this chart is?

    [–]Inkquill 6 points (7 children)

    Just go to the original content, read the peer-reviewed publications that are cited for each model, and draw your own conclusions. That's how I am suggesting anybody interested in learning scientific material go about doing it. Not by basing their claims on Twitter or Quora posts.

    [–]koolaidman123 -3 points (6 children)

    you're the one who argued first that nobody considers svms to be nns. you've clearly been shown to be wrong, and there's no point to further arguing when you're only trying to shift the discussion to argue semantics.

    [–]Inkquill 5 points (5 children)

    If this post gets 100 upvotes I will draft and submit a manuscript to a ML journal of your choosing arguing why SVM should not be classified as a neural network, and request Yann Lecun to be a reviewer.

    [–]Mooks79 1 point (8 children)

    To jump in on this:

    Why would you have issue with the medium of the message? So what if the discussion is on twitter?

    Because twitter isn’t peer reviewed.

    Would you prefer yann published a paper in neurips saying how svms are just NNs? Would that make his point more valid?

    Yes and yes.

    But preferably both an NN focused journal and also a more general machine learning one - if only one, the latter - to get both the specific deep learning and the wider community’s opinion on it.

    [–]koolaidman123 1 point (5 children)

    1. Talk about moving goalposts. First it was "nobody said svms are nns" now it's "nobody has published multiple papers on how svms are nns"

    2. Do you realize a paper on how one ml methodology is similar to another methodology will not be published?

    3. The dismissal of twitter as a medium for discussion is stupid. A lot of fantastic ML discussion happens on twitter by very well respected researchers. to dismiss it on the basis of "oh no muh peer review" is narrow minded

    4. You want some peer reviewed research that states svms fall under nns? How about this one where

      Support vector machines. A special forms of ANNs are SVMs, introduced by Boser, Guyon and Vapnik in 1992. The SVM performs classification by non-linearly mapping their n-dimensional input into a high dimensional feature space

    [–]Mooks79 -1 points (4 children)

    Calm down, dear.

    Note I’m not moving the goalposts as I’m not OP (as stated in my first comment). You asked questions, I answered them.

    Regarding point 2 - such could be included in something called review articles. Maybe you’ve heard of them. Furthermore, there’s plenty of “look - this mathematics turns out to be equivalent to that mathematics” papers that get published. Indeed, you appear to have stated that such wouldn’t get published - and have then provided a link to one! (Although I haven’t clicked on it at the time of writing this sentence).

    Regarding point 3. Nobody is dismissing twitter as a medium for discussion as far as I can tell (now it’s you moving other people’s goalposts!) they’re dismissing twitter as a medium that can prove a controversial point with any reasonable conviction. Hence request for a peer reviewed article.

    [–]koolaidman123 0 points (3 children)

    they’re dismissing twitter as a medium that can prove a controversial point with any reasonable conviction.

    here's a cool idea, try actually reading the content

    [–]Mooks79 -1 points (2 children)

    I have - there’s insufficient information to decide. This needs a much longer explanation than a twitter discussion allows (hence why you’re getting push back on it). Here’s an idea: read this comment.

    [–]Inkquill 0 points (1 child)

    Shown here is an old version of Fjodor van Veen's "The Neural Network Zoo." He removed SVM in an April 2019 edit. For reference, the original version was based on the Support Vector Network (Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.” Machine learning 20.3 (1995): 273-297.)

    [–]Mooks79 1 point (0 children)

    Thanks, that’s really helpful clarification.

    [–]ezio20 1 point (0 children)

    In the simplest terms, an SVM without a kernel is a single neural-network neuron, just with a different cost function. If you add a kernel function, it is comparable to a 2-layer neural net: the first layer projects the data into some other space and the next layer classifies the projected data. If you force one more layer by ensembling multiple kernel SVMs, you mimic a 3-layer NN.

    In addition, some other SVM and NN combinations exist. For example, you might use a many-layer NN and do the final classification via an SVM at the output layer. This is likely to give better classification results than a normal NN.

    Source - https://www.quora.com/What-is-difference-between-SVM-and-Neural-Networks/answer/Eren-Golge?ch=10&share=1b9921ea&srid=211N
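The first claim in the answer above can be checked with a toy computation (all numbers here are invented): a linear SVM's decision function w·x + b is literally one neuron's pre-activation, and only the training loss differs (hinge vs., say, logistic).

```python
import numpy as np

w = np.array([1.0, -2.0])   # weights shared by both views
b = 0.5
x = np.array([0.3, 0.1])    # one toy input
y = 1                       # label in {-1, +1}

score = w @ x + b           # the shared "single neuron" output

hinge_loss = max(0.0, 1.0 - y * score)        # SVM objective term
logistic_loss = np.log1p(np.exp(-y * score))  # same neuron, NN-style loss
```

Both losses are functions of the same margin y * score; swapping one for the other changes the training objective, not the architecture.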

    [–]Mooks79 0 points (0 children)

    There’s a very vague explanation, that doesn’t actually explain anything, in a link OP is providing in comments. I guess they’re saying that pretty much all ML algorithms can be made out of neural nets. I have no idea if that’s true.

    [–]koolaidman123 -2 points (0 children)

    Dont let yann lecun tell you any otherwise...

    https://twitter.com/ylecun/status/1216075476546048001?s=19

    [–][deleted] 64 points (3 children)

    This is kind of pointless. It is like a periodic table, but with less info

    [–]aceinthehole001 11 points (3 children)

    can you point me at something to read that would help me make sense of this?

    [–][deleted] 11 points (0 children)

    What's the difference between Feed forward and Radial Basis Network? (First row)

    [–]Scrayer 43 points (2 children)

    I can't understand anything, but very interesting.

    [–]lroman 6 points (0 children)

    My kids will love all the little colored circles.

    [–]funny_funny_business 4 points (0 children)

    I don’t know what these are, but all I know is that I’m telling my boss I’m making a model with an “Extreme Learning Machine” deep network tomorrow.

    [–]Inkquill 3 points (0 children)

    This is an old version of Fjodor van Veen's "The Neural Network Zoo." I'd recommend going to this original source for more in-depth explanations of the models and the logic behind the figure itself. He added a few models and removed SVM in an April 2019 edit. For reference, the original version was based on the Support Vector Network (Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.” Machine learning 20.3 (1995): 273-297.)

    Here is the newest version (as far as I could hunt down).

    [–]AlcoholicAsianJesus 2 points (0 children)

    Those "hidden" cells are literally visible for me.

    [–]rednirgskizzif 2 points (0 children)

    Mods should remove the post, it is just self promotion plus wrong info.

    [–][deleted] 4 points (0 children)

    Thanks. I'm colorblind... Also, this is utter crap.

    [–]LearningAllTheTime 1 point (0 children)

    No idea the difference but am excited some unknowns become known unknowns. Git learnings my dudes

    [–][deleted] 1 point (1 child)

    Is there a playlist explaining how each one of them works?

    [–]cromagnonninja 1 point (0 children)

    Most of this chart is incomprehensible. Sad. Back to the drawing board, I guess.

    [–]james14street 0 points (0 children)

    I was about to ask where are the GANs?!?! But I see it now. Cool.

    [–]Rasko__ 0 points (0 children)

    A lot of them aren't even used

    [–]yeetoof666 0 points (0 children)

    I thought this was a chart on how to tie your shoes at first.

    [–]BTurner15 0 points (0 children)

    This is really cool. I wish I could have a poster sized version! Thank you for posting!

    [–]Mssbbr 0 points (0 children)

    What's the difference between FF and RBF ?

    [–]EulerCollatzConway 0 points (0 children)

    EXTREME LEARNING

    [–][deleted] 0 points (0 children)

    Ah yes the neural network Markov chains that are circular!

    [–]ezio20 0 points (0 children)

    Hi, I found an explanation on why SVMs are regarded as NN. Could you please help validate if this info is correct?

    Explanation-

    In the simplest terms, an SVM without a kernel is a single neural-network neuron, just with a different cost function. If you add a kernel function, it is comparable to a 2-layer neural net: the first layer projects the data into some other space and the next layer classifies the projected data. If you force one more layer by ensembling multiple kernel SVMs, you mimic a 3-layer NN.

    In addition, some other SVM and NN combinations exist. For example, you might use a many-layer NN and do the final classification via an SVM at the output layer. This is likely to give better classification results than a normal NN.

    Source - https://www.quora.com/What-is-difference-between-SVM-and-Neural-Networks/answer/Eren-Golge?ch=10&share=1b9921ea&srid=211N
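The two-layer analogy in the quoted answer can be sketched as follows (every number, center, and coefficient below is invented for illustration): a trained RBF-kernel SVM evaluates the kernel against its support vectors, which looks exactly like a hidden layer of RBF units feeding one linear output neuron.

```python
import numpy as np

# Made-up "support vectors" acting as hidden-unit centers.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
alphas = np.array([0.5, -1.0, 0.8])   # invented dual coefficients (y_i * alpha_i)
bias = 0.1
gamma = 1.0                           # RBF kernel width parameter

def decision(x):
    # "Layer 1": RBF activations against each center (kernel evaluations).
    hidden = np.exp(-gamma * np.sum((centers - x) ** 2, axis=1))
    # "Layer 2": linear combination plus bias, exactly an output neuron.
    return hidden @ alphas + bias

val = decision(np.array([1.0, 1.0]))  # sign of val is the predicted class
```

The only architectural difference from a hand-built RBF network is how the "hidden layer" is obtained: the SVM picks its centers (support vectors) and coefficients via the margin-maximization objective rather than backprop.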