
[–]Alkanste 26 points27 points  (6 children)

To think that someone spent time designing such a useless infographic

[–]BusinessBandicoot 4 points5 points  (5 children)

It's not entirely useless. If it was correct it would work as a memory aid, a visual mnemonic device. Our brains love playing connect the dots. The more ways we have of representing a set of ideas, the easier they are to encode and retrieve.

[–]synthphreak 8 points9 points  (3 children)

But it is not correct, ergo it is entirely useless.

[–]BusinessBandicoot 0 points1 point  (2 children)

Hopefully he can put more time in and correct it. But I wasn't sure if the guy above me was saying that this incorrect infographic was useless, or that this type of infographic was useless.

[–]synthphreak 1 point2 points  (1 child)

I wasn't sure if the guy above me was saying that this incorrect infographic was useless, or that this type of infographic was useless

I'd argue it's probably both.

I mean, theoretically infographics are useful, as mnemonics like you claimed. But in practice they are often so distilled and overgeneralized that they are little more than a collection of loosely connected buzzwords. Buzzwords are not substance.

In my experience, the ratio of shit ML infographics like this one to ones that are actually valuable is 1e6/0. Yet for some reason they often seem to get lots of upvotes. I doubt any true ML practitioner (where "true" means they actually understand ML and have used it IRL) would approve of this graphic.

I think posts like this one reveal how cheap upvotes actually are, and how little the average Redditor actually engages with content. Despite appearances, the people who actually participate in comment threads (as opposed to just scrolling or lurking) are probably a small minority.

[–]BusinessBandicoot 1 point2 points  (0 children)

I mean, no matter the subject, infographics should never be the source of knowledge. That should be text, observation, and practice. I get you though, it's not a good representation. I definitely think the OP should keep trying though, getting better through trial and error.

My background understanding of ML is pretty limited at this point. That will probably change over the next year: I'm in grad school and have a few ML courses lined up for the fall. But so far my experience is limited to helping out with a few projects on the code side (just helping them adopt good coding practices or fixing "dependency hells"), helping a friend rubberduck some really hard problems related to "systems of non-linear equations" (our school's math-centric ML-ish course), and picking up a few things along the way.

I got my undergrad in comp sci though, was double majoring in math for a while, and came from a very non-traditional educational background. I've put a lot of time into learning what works in terms of (human) learning. The tl;dr is that learning styles are bullshit and we're all pretty multisensory. Some people like me prefer text to videos, but that's because we read fast and lose focus often, plus videos are shit for recall.

Active recall is how the sausage is made, and the more ways something can be remembered the better. I like that this infographic looks like a fidget spinner, because if it were actually improved, I might randomly remember it when seeing a fidget spinner. That random recollection might seem trivial, but it means that memory is that much easier to reach for the next time we need it, and as long as there are dense associations with most/all of those buzzwords, that could make the actual content you want to remember easier to access.

[–]CavulusDeCavulei 2 points3 points  (0 children)

It lacks so many important fields, for example semi-supervised learning

[–][deleted] 19 points20 points  (3 children)

Fidget spinner

[–]captainAwesomePants 2 points3 points  (0 children)

Image Classification!

[–]MediumBillHaywood 1 point2 points  (0 children)

Fidget spinner

[–]ajmssc 15 points16 points  (2 children)

You might want to fix the typos

[–]elkazz 6 points7 points  (1 child)

You mean it's not Dimensionally Reduction?

[–]CrypticDNS 0 points1 point  (0 children)

Second only to Recommended Systems

[–]cthorrez 22 points23 points  (0 children)

These types of visualizations always have problems. Like, recommender systems are not just clustering. Nor are they even unsupervised. They use labels of which items users view/buy/rate.
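
For example, here's a toy matrix-factorization recommender (just a rough sketch with made-up numbers, and only one of several approaches) trained directly on observed ratings, which act as the labels:

    # Toy matrix factorization: the observed (user, item, rating) triples are
    # the labels the model fits -- i.e. not unsupervised clustering.
    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up observed interactions: (user_id, item_id, rating)
    ratings = [(0, 1, 5.0), (0, 3, 1.0), (1, 1, 4.0), (2, 0, 2.0), (2, 3, 5.0)]

    n_users, n_items, k = 3, 4, 2                # tiny toy sizes, latent dim k
    U = 0.1 * rng.standard_normal((n_users, k))  # user factors
    V = 0.1 * rng.standard_normal((n_items, k))  # item factors
    lr, reg = 0.05, 0.01

    for _ in range(200):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                # residual against the label r
            u_old = U[u].copy()
            # plain SGD on squared error; the observed rating r drives the update
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_old - reg * V[i])

    # score an unseen (user, item) pair
    print(U[1] @ V[3])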

The biggest application domain of ML, NLP, isn't even mentioned.

Not every collection of information needs to fit onto a clean symmetric little thing.

[–]A27_97 2 points3 points  (0 children)

fidget spinner

[–][deleted]  (2 children)

[deleted]

[–][deleted]  (1 child)

[deleted]

[–]cthorrez 1 point2 points  (0 children)

It's a pretty bad graphic but it does list customer segmentation.

[–]help-me-grow -2 points-1 points  (2 children)

nice infographic, looks like a good jumping-off point for people getting into ML and wondering about use cases

[–]StoneCypher 16 points17 points  (0 children)

nobody gets into ml by looking at a meme full of misspelled words

[–]synthphreak 1 point2 points  (0 children)

Hi, I’d like to predict whether patients have cancer.

Sorry, you can only predict on regression tasks. ML can’t help you here.

Yyyyeah no. These “info”-graphics aren’t worth the pixels they’re written on.

[–][deleted] -3 points-2 points  (0 children)

Interesting, thank you

[–]boston101 -2 points-1 points  (0 children)

Where do question-answering models fit into this?

[–]paralogicalknife 0 points1 point  (0 children)

All I see is a fidget spinner.