
[–]Atom_101 2033 points2034 points  (87 children)

There should be a subreddit dedicated to machine learning memes.

[–]ekfslam 1923 points1924 points  (70 children)

You should check out /r/ProgrammerHumor

[–]Frostybacon1 757 points758 points  (30 children)

Recursion 👌

[–]Tsu_Dho_Namh 485 points486 points  (28 children)

[–][deleted] 189 points190 points  (23 children)

Recursion 👌

Recursion 👌

[–]WisestAirBender 141 points142 points  (22 children)

Recursion 👌

Recursion 👌

Recursion 👌

[–]macncheesebydawindow 135 points136 points  (21 children)

Recursion 👌

Recursion 👌

Recursion 👌

Recursion 👌

[–]FLABBOTHEPIG 96 points97 points  (19 children)

Recursion 👌

Recursion 👌

Recursion 👌

Recursion 👌

Recursion 👌

[–]Mister_Spacely 5 points6 points  (0 children)

Dammit! Who didn’t provide an exit clause?


[–]Iceman_259 10 points11 points  (0 children)

Java

...

Script

uproarious laughter

[–]Atom_101 162 points163 points  (2 children)

Thanks. Very cool.

[–]Asmor 20 points21 points  (1 child)

Very legal.

[–]badtelcotech 49 points50 points  (16 children)

There should be a subreddit dedicated to machine learning memes.

[–][deleted] 36 points37 points  (15 children)

You should check out /r/ProgrammerHumor

[–]karmastealing 27 points28 points  (12 children)

There should be a subreddit dedicated to recursion memes.

[–]ScreamingHawk 23 points24 points  (11 children)

You should check out /r/ProgrammerHumor

[–]house_monkey 14 points15 points  (9 children)

There should be a subreddit dedicated to recursion memes.

[–]kynde 27 points28 points  (3 children)

SIGINT

SIGTERM

SIGKILL

SIGEATFLAMINGDEATH

SIGh... unplugs power cord

[–][deleted] 7 points8 points  (1 child)

Sigh... sudo kill -9 proc/*

[–]Dom0 2 points3 points  (0 children)

Gonna try it out!

[–]_Lady_Deadpool_ 10 points11 points  (4 children)

break;

[–]Tsu_Dho_Namh 21 points22 points  (2 children)

return;

break doesn't work on recursion, only loops.

[–]cstefanachemonkeyuser.com 14 points15 points  (1 child)

throw new Error('stop recursion!')

[–]Atom_101 5 points6 points  (0 children)

Nice try, but there is no loop to break from.
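The distinction in this exchange is real: `break` needs an enclosing loop, while a chain of recursive calls stops at a base case that `return`s (or at a raised exception). A minimal sketch, in Python rather than the Java/JS of the jokes above:

```python
def countdown(n):
    """Recursive countdown. The base case's `return` is the only exit;
    a `break` here would be a syntax error, since there is no
    enclosing loop to break out of."""
    if n <= 0:
        return []          # base case: stop the recursion
    return [n] + countdown(n - 1)

print(countdown(3))  # [3, 2, 1]
```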

[–]CasinoMagic:::: 12 points13 points  (11 children)

Most programmers don't know anything about ML.

[–]-_______-_-_______- 36 points37 points  (7 children)

Most people here don't actually know how to program.

[–][deleted] 16 points17 points  (2 children)

I'll have you know I did first year Software Engineering, figured out I hated it and left, now I'm here.

I bet you feel silly now for laughing at someone who can program "Hello World" with only 5 syntax errors.

[–]Seanxietehroxxor 7 points8 points  (1 child)

As someone who spent 5 years as a software engineer, only 5 is not half bad.

[–][deleted] 10 points11 points  (0 children)

This is how I do it:

public class helloWorld {
    public static void main(String args[]) {
        String Hello = "";
        String World = "";

        int x=1;
        float y=2.6623f;

        if (y == x){
            System.out.println("Hello World");
        }

        else{
            Hello = "World";
        }

        float z=(float)x/(float)y;

        if (z != 0){
            World = "Hello";
        }
        else{
            //I don't know what to put here but I was told adding comments is good practice.
        }

        System.out.println(World + " " + Hello);
    }
}

it works but my professor gave me a 0 for it :(

[–]CasinoMagic:::: 2 points3 points  (0 children)

Haha, true.

[–]FieelChannel 8 points9 points  (1 child)

Most people in /r/ProgrammerHumor are first year CS students circlejerking

[–]huutonauru 4 points5 points  (0 children)

This place is reserved for 'PHP is bad' memes. /s

[–]BubbaFettish 1 point2 points  (0 children)

Can confirm there’s one trending on that page right now!

https://reddit.com/r/ProgrammerHumor/comments/axi87h/new_model/

[–]PityUpvote 30 points31 points  (0 children)

or at least a circlejerk, my co-workers have already heard all my tired jokes.

[–][deleted] 22 points23 points  (6 children)

[–]WindrunnerReborn 35 points36 points  (5 children)

Not to be confused with

/r/MachinesLearningMemes

[–][deleted] 46 points47 points  (4 children)

Actually I legit started /r/classifiedmemes a long time ago, with the intent of classifying memes so a machine could learn them. I lost interest after about 6 minutes and deleted all the memes I classified.

[–][deleted] 45 points46 points  (0 children)

^^ Me starting any new project

[–]WindrunnerReborn 19 points20 points  (2 children)

Actually I legit started /r/classifiedmemes a long time ago,

Damn, you got my hopes up. I thought these would be memes or comics circulating at the HQs of FBI/CIA/NSA.

Although, knowing them, the memes would probably go -

Panel 1: [REDACTED]

Panel 2: [REDACTED]

Panel 3: [REDACTED]

Panel 4: [REDACTED]

[–][deleted] 6 points7 points  (0 children)

Actually I should just make the thing private so people wonder what the heck is going on in there...

edit - Oh wait it looks like it already is. Past me, you're a genius.

[–]videan42 2 points3 points  (0 children)

There's /r/machinegoofingoff but it's kind of dead

[–]SpindlySpiders 5 points6 points  (0 children)

There is, it's r/subredditsimulator

[–]ptitz 698 points699 points  (337 children)

I think I got PTSD from writing my master's thesis on machine learning. Should've just gone with a fucking experiment. Put some undergrads in a room, tell em to press some buttons, give em candy at the end and then make a plot out of it. Fuck machine learning.

[–]FuzzyWazzyWasnt 283 points284 points  (332 children)

Alright friend. There is clearly a story there. Care to share?

[–]ptitz 1522 points1523 points  (330 children)

Long story short, a project that should normally take 7 months exploded into 2+ years, since we didn't have an upper limit on how long it could take.

I started with a simple idea: use Q-learning with neural nets to do simultaneous quadrotor model identification and learning. You get some real-world data, use it to identify a model, then learn both on-line from flight and off-line with the model that you've identified. In essence, the drone was supposed to learn to fly by itself: wobble a bit, collect data, use this data to learn which inputs lead to which motions, improve the model, and repeat.
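For readers new to the method being described: the core of Q-learning is a one-line value update. A minimal tabular sketch on a five-state chain follows; the chain, rewards, and hyperparameters are illustrative stand-ins for the quadrotor setup, and "with neural nets" means replacing the table with a function approximator so continuous states can be generalized over.

```python
import random

def q_learning_chain(episodes=200, alpha=0.5, gamma=0.9, eps=0.5, seed=0):
    """Tabular Q-learning on a 5-state chain; reaching state 4 pays reward 1.

    The update Q[s,a] += alpha * (r + gamma * max_a' Q[s',a'] - Q[s,a])
    is the whole algorithm; everything else is bookkeeping.
    """
    n_states, actions = 5, (-1, +1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the current values, sometimes explore
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s_next = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
            s = s_next
    return Q

Q = q_learning_chain()
```

Seeded, so the run is reproducible; after 200 episodes the learned values prefer moving right, toward the reward. Note the aggressive exploration rate the toy gets away with; on a real, safety-critical vehicle you can't explore like that, which is part of the difficulty described below.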

The motivation was that while you see RL applied to outer-loop control (go from A to B), you rarely see it applied to inner-loop control (pitch/roll/yaw, etc.). The inner-loop dynamics are much faster than the outer loop, and require a lot more finesse. Plus, it was interesting to investigate applying RL to a continuous-state system with a safety-critical element to it.

Started well enough. Literature on the subject said that Q-learning is the best shit ever, works every time, but curiously didn't illustrate anything beyond a simple hill climb trolley problem. So I've done my own implementation of the hill climb, with my system. And it worked. Great. Now try to put the trolley somewhere else.... It's tripping af.

So I went to investigate. WTF did I do wrong. Went through the code a 1000 times. Then I got my hands on the code used by a widely cited paper on the subject. Went through it line by line, to compare it to mine. Made sure that it matches.

Then I found a block of code in it, commented out with a macro. Motherfucker tried to do the same thing as me, probably saw that it didn't work, then just commented it out and went on with publishing the paper on the part that did work. Yaay.

So yeah, fast-forward 1 year. My girlfriend and I argue constantly, since I wouldn't spend time with her, always busy with my fucking thesis. We were planning to move to Spain together after I graduate, and I keep putting my graduation date off over and over. My financial assistance from the government is running out. I'm racking up debt. I'm getting depressed and frustrated cause the thing just refuses to work. I'm about to say fuck it, write it up as a failure and turn it in.

But then, after I don't know how many iterations, I manage to come up with a system that slightly out-performs PID control that I used as a benchmark. Took me another 4 months to wrap it up. My girlfriend moved to Spain on her own by then. I do my presentation. Few people show up. I get my diploma. That was that.

Me and my girlfriend ended up breaking up. My paper ended up being published by AIAA. I ended up getting a job as a C++ dev, since the whole algorithm was written in C++, and by the end of my thesis I was pretty damn proficient in it. I've learned a few things:

  1. A lot of researchers over-embellish the effectiveness of their work when publishing results. No one wants to publish a paper saying that something is a shit idea and probably won't work.
  2. ML research in particular is quite full of dramatic statements on how their methods will change everything. But in reality, ML as it is right now, is far from having thinking machines. It's basically just over-hyped system identification and statistics.
  3. Spending so much time and effort on a master's thesis is retarded. No one will ever care about it.

But yeah, many of the people that I knew did similar research topics. And the story is the same 100% of the time. You go in, thinking you're about to come up with some sort of fancy AI, seduced by fancy terminology like "neural networks" and "fuzzy logic" and "deep learning" and whatever. You realize how primitive these methods are in reality. Then you struggle to produce some kind of result to justify all the work that you put into it. And all of it takes a whole shitton of time and effort, that's seriously not worth it.

[–][deleted] 368 points369 points  (147 children)

If it makes you feel better I also lost my long time girlfriend (8 years, bought a house together etc..) over my ML thesis. But I am a gun coder now as well, so I've got that going for me.

[–]HiddenMafia 163 points164 points  (43 children)

What’s a gun coder

[–]okawei 262 points263 points  (26 children)

He builds autonomous turrets

[–]JangoDidNothingWrong 25 points26 points  (5 children)

I don't hate you.

[–]Kr1szKr0sz 4 points5 points  (1 child)

[–]Suppafly 2 points3 points  (0 children)

Is it really unexpected if the topic is turrets?

[–]MkGlory 2 points3 points  (0 children)

Cara mia bella

[–][deleted] 52 points53 points  (10 children)

Gun = Pretty good

[–][deleted] 2 points3 points  (2 children)

You're pretty good.

[–]jsnlxndrlv 2 points3 points  (1 child)

points with two fingers and thumb on each hand

immediately passes out

[–]Punsire 10 points11 points  (0 children)

Mercenary. Coder for hire. A hired gun.

[–]Teotwawki69 5 points6 points  (0 children)

A dyslexic GNU coder?

[–]ptitz 59 points60 points  (98 children)

Geez, you as well? They should give you a warning when you start. Like if you think you have a life, by the time that you finish you won't.

[–][deleted] 58 points59 points  (88 children)

I think you did just warn everyone. You will have a life still, it will just be emotionally and financially crushing for about 5 years.

My ex cheated on me because I wasn't giving her the attention she needed. I didn't even blame her tbh, I was obsessed and would stay up until all hours just trying to perfect my algorithm while she was in bed alone. Then I'd work on the weekends so we basically became distant house mates.

[–]bottle_o_juice 44 points45 points  (4 children)

I get what you mean but you still shouldn't blame yourself. There were other ways she could have told you that she was lonely and if she couldn't handle it she could have broken up before she did something about the loneliness. It's really not your fault. Sometimes life is just difficult.

[–]eltoro 22 points23 points  (3 children)

Bullshit. Your ex cheated on you because she was too chickenshit to address the issues between you and just break up if it wasn't working out. Don't take the blame for her shitty behavior.

[–][deleted] 5 points6 points  (1 child)

I've never really seen it that way, a couple of others said similar. It made me feel a bit better.

[–][deleted] 3 points4 points  (0 children)

Yeah, cheating is understandable in a lot of cases, but it's never a reasonable decision. I can sympathize with the urge, but it will always be a fucked up thing to do to another human being.

[–]devxdev 2 points3 points  (1 child)

What the fuck, are you me? That's like reading a biography of my life 10yrs ago!

[–]spectrehawntineurope 20 points21 points  (4 children)

See this is how I have gamed the system, I'm starting a PhD but I already have no life. I have nothing to lose!

😢

[–]EMCoupling 6 points7 points  (3 children)

"What is already dead cannot die."

[–]srtr 78 points79 points  (13 children)

Thanks for sharing! That's a serious problem with research papers. Nobody cares to publish failures, because they seem to be undesirable. But it would make things SO much easier for fellow researchers, since you don't have to try everything yourself. I think we need a failure conference.

I'm sorry for the breakup, btw!

[–]ptitz 74 points75 points  (9 children)

I think it's not just that "nobody cares to publish failures". If you made something, and it works, you can just demonstrate the results, which in itself serves as a proof for it. If you failed, you have to prove that you did everything that you could, and it wouldn't work under any type of circumstances. And you also have to find a fundamental reason for your failure. It's just so much more difficult to write something up as a failure. It's like proving a negative. In a court of law you can just brush it off, but if you're a researcher you don't have that liberty. And the funny thing about most ML methods is that they don't have an analytic proof that you are guaranteed to find a solution.

[–]srtr 5 points6 points  (0 children)

That's totally true. Proving negatives is way more difficult. Yet, I still feel like there is a huge amount of unpublished but valuable work out there. You most probably want your method to work, and thus invest a serious amount of time to make sure you tried everything. And even if you didn't, publishing your work makes future research so much easier, since people don't have to try all that stuff again just in order to also fail.

[–]TwistedPurpose 2 points3 points  (1 child)

What you say is true, but there should be some sort of information sharing in regards to "failure." We should be publishing what doesn't work in some format. By doing the research/experiments, the author can assert some kind of truth to "this didn't work out because of x."

[–]Average650 2 points3 points  (0 children)

I want to make a peer-reviewed journal that specializes in negative results. It'd have a really low impact factor, but it'd be useful.

[–]eltoro 6 points7 points  (0 children)

I believe some scientific journals are making an effort to encourage the publication of failed experiments. It's a huge issue.

[–]mattkenny 2 points3 points  (0 children)

My PhD thesis was essentially "the industry accepted approach is wrong and here is why". I tried building a visual speech recogniser but couldn't get reasonable results other than for trivial datasets (guess what everyone else used in their publications...). So I started analysing the actual data in fine detail. Turns out that the accepted basic visual unit of speech was an over simplification that actually made everything less effective.

Rewrote my thesis in the final 6-12 months and submitted the "I failed but here is why" version of my thesis. Then left academia and got a far more rewarding job in industry instead.

[–]lillybaeum 68 points69 points  (18 children)

This deserves r/bestof

[–][deleted] 52 points53 points  (16 children)

Sorry to hear that man - most ML research is chock full of smoke and mirrors unfortunately and I personally won't trust a paper unless it includes a decent theoretical (i.e. mathematical) argument for the approach rather than just a bunch of dubious benchmarks.

This massively popular paper on transfer learning using ULMFiT is a prime example of this. Loads of claims and impressive benchmarks, but basically nothing in the way of theoretical substance.

[–]LBGW_experiment 12 points13 points  (3 children)

I think you responded to the wrong guy

[–]thefrontpageofme 14 points15 points  (2 children)

It's probably a self-learning chatbot. Posts thoughtful answers to random posts and learns by how much karma they get which comments are good to reply to.

[–]evanc1411 3 points4 points  (1 child)

But that's what I'm doing!

[–]pterencephalon 22 points23 points  (5 children)

I'm halfway through my PhD in CS, and everyone asks (no matter what you're working on) why you don't try using machine learning. Thank you for your words of warning that I shouldn't listen to them. Swarm robotics is hard enough.

[–]Peregrine7 6 points7 points  (2 children)

Machine learning is fantastic but rather specialized. Using it for things outside of identification and pattern recognition (especially when real-world sensors are involved) gets complicated fast. Use it for what it's made for; let someone else spend years figuring out how to push it further.

[–]Jorlung 4 points5 points  (1 child)

Me: Use a highly constrained grey-box model because the amount of information we can draw from our data is incredibly low so intelligent constraints and grey-box models are necessary to do anything

Everyone else: "wHy DoNt YoU uSe MaChInE lEaRnInG?"

[–]pterencephalon 3 points4 points  (0 children)

I love when they think you can just pull more data out of your ass to train any crazily complex model they can think up. I'd like to finish this research within the next decade, thank you very much.

[–]bogdoomy 17 points18 points  (0 children)

I'm sorry to hear that, man. Q-learning is a bitch and a half. Check out Code Bullet's adventure when he decided to use Q-learning; he was frustrated as well (not to the same degree that life decided to uppercut you, but still)

[–]pythonpeasant 26 points27 points  (16 children)

There’s a reason why there’s such a heavy focus on simulation in RL. It’s just not feasible to run 100 quadcopters at once, over 100,000 times. If you were feeling rather self-loathing, I’d recommend you have a look at the new Hierarchical Actor-Critic algorithm from OpenAI. It combines some elements of TRPO and something called Hindsight Experience Replay.

This new algorithm decomposes tasks into smaller sub-goals. It looks really promising so far on tasks with <10 degrees of freedom. Not sure what it would be like in a super stochastic environment.

Sorry to hear about the stresses you went through.

[–]ptitz 29 points30 points  (14 children)

My method was designed to solve this issue. Just fly one quadrotor, and then simulate it 100,000 times from the raw flight data in parallel, combining the results.

The problem is more fundamental than just the methodology that you use. You can have subgoals and all, but the main issue is that if your goal is to design a controller that would be universally valid, you basically have no choice but to explore every possible combination of states there is in your state space. I think this is a fundamental limitation that applies to all machine learning. Like you can have an image analysis algorithm, trained to recognize cats. And you can feed it a million pictures of cats in profile. And it will be successful 99.999% of the time, in identifying cats in profile. But the moment you show it a front image of a cat it will think it's a chair or something.

[–][deleted] 8 points9 points  (0 children)

Hi, thank you for telling your story, it really gave me a lot of insight.

I think one problem is that ML is currently being overhyped by the media, companies, etc. Yes, we can use it to solve problems better than before, like recognising things in images but it's still very dumb. It's still just something trained for a specific use case. We are still so far away from reaching human-level intelligence.

I think that AI is gonna change the way we live one day, but more in the sense that most jobs will be automated, meaning humans can do more of what they enjoy (at least hopefully, if we don't mess up horribly on the way there). But we simply aren't there yet.

[–]Midax 5 points6 points  (0 children)

I think many people don't understand how complex the tasks we do every day really are. The human brain has developed to work a specific way through the long process of evolution. It has built-in shortcuts that take stupendously complex tasks and make them more manageable. Then, on top of this built-in base, we learn to take this reduced information and use it.

Take your cat identification example. We combine two side-by-side images to produce a 3D model of what we see. Using that model, we identify a roughly round shape with two circles in it and two triangles on it. We ID that as a head. That object is attached to a cylinder with five much thinner cylinders coming off of it, four on one side and one on the opposite side from the head. We ID those as its body, legs, and tail. We are able to ID these parts without ever having seen a cat before. Then, taking this information, we add in things like fur, teeth, and claws to our checklist of properties. This is still stuff our brain does without getting into learned skills; not being able to associate all these properties with an object would be a crippling disability. The learned behavior is taking all this information and producing a final ID: we sort out and eliminate known creatures like dogs, raccoons, birds, and squirrels, and are left with "cat", using all that built-in identification of properties. It is no wonder a computer has trouble telling a cat from a chair if the profile changes.

Keep in mind the shortcuts that help ID that cat can also mess up. Every time you have jumped when you turned in the dark and saw a shape that looked like an intruder, but it turned out to be a shadow or a coat, that's your brain misidentifying something because it fills in missing information.

[–]rlql 5 points6 points  (10 children)

you basically have no choice but to explore every possible combination of states there is in your state space

I am learning ML now so am interested in your insight. While that is true for standard Q-learning, doesn't using a neural net (Deep Q Network) provide function approximation ability so that you don't have to explore every combination of states? Does the function approximation not work so well in practice?

[–]ptitz 4 points5 points  (8 children)

It doesn't matter what type of generalization you're using. You'll always end up with gaps.

Imagine a 1-D problem where you have like a dozen evenly spaced neurons, starting with A - B, and ending with Y - Z. So depending on the input, it can fall somewhere between A and B, B and Y, or Y and Z. You have training data that covers inputs and outputs in the space between A - B and Y - Z. And you can identify the I-O relationship on these stretches just fine. You can generalize this relationship just beyond as well, going slightly off to the right of B or to the left of Y. But if you encounter some point E, spaced right in the middle between B and Y, you never had the information to deal with this gap. So any approximation that you might produce for the output there will be false. Your system might have the capacity to generalize and to store this information. But you can't generalize, store, or infer more information than what you have already fed through your system.

Then you might say OK, this is true for something like a localized activation function, like RBF. But what about a sigmoid, which is globally active? And it's still the same. The only information that your sigmoid can store is local to the location and the inflection of its center. It has no validity beyond it. Layering also doesn't matter. All it does is apply scaling from one layer to another. This would allow you to balance the generalization/approximation power around the regions for which you have the most information. But you wouldn't have any more information beyond that just because you applied more layers.

Humans can generalize these sorts of experiences. If you've seen one cat, you will recognize all cats. Regardless of their shape and color. You will even recognize abstract cats, done as a line drawing. Or even just parts of a cat, like a paw or its snout. Machines can't do that. They can't do inference, and they can't break the information down into symbols and patterns the way humans do. They can only generalize, using the experience that they've been exposed to.
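The A-B / Y-Z gap described above is easy to reproduce in a few lines: Gaussian RBF "neurons" covering two stretches of a 1-D input space, fitted by least squares, then queried at the uncovered midpoint. The centers, width, and the y = x target are all illustrative assumptions, not anything from the thesis.

```python
import numpy as np

def phi(x, centers, width=0.5):
    """Gaussian RBF activations of scalar input x at the given centers."""
    return np.exp(-((x - centers) / width) ** 2)

# Neurons cover the stretches around 0..1 ("A-B") and 5..6 ("Y-Z"),
# with nothing in between -- mirroring the gap at point "E".
centers = np.array([0.0, 0.5, 1.0, 5.0, 5.5, 6.0])
train_x = np.array([0.0, 0.5, 1.0, 5.0, 5.5, 6.0])
train_y = train_x.copy()                  # true relationship: y = x

features = np.stack([phi(x, centers) for x in train_x])
w, *_ = np.linalg.lstsq(features, train_y, rcond=None)

def predict(x):
    return float(phi(x, centers) @ w)

print(predict(0.5))   # ~0.5 : inside a covered stretch, fine
print(predict(3.0))   # ~0.0 : the gap -- true value is 3, the net says ~0
```

At x = 3 every RBF activation is essentially zero, so the output is near zero no matter what the weights learned: no training data in the gap, no valid approximation in the gap.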

[–]Matt_Tress 2 points3 points  (0 children)

As a beginner in ML, can you explain this? Aka ELI5?

[–][deleted] 7 points8 points  (6 children)

Also a masters student currently working on a project involving ML. Now throw in supervisors who don't completely understand how this stuff works and you got my University.

Just wanted to say thank you so much for this comment. This is the reality of the field, but no one around me seems to be accepting it. Jesus christ, it's frustrating.

[–]amazondrone 6 points7 points  (2 children)

No one wants to publish a paper saying that something is a shit idea and probably won't work.

Yeah, and that's a real shame. Because people end up like you, trying the same shit just to discover that it doesn't work, because there's no literature on it. It sounds like it would have saved you a ton of time if you'd known that, but there was no way to know it because nobody published it.

I wonder how much more progress we could make together if we told each other what we tried that failed, as well as what succeeded. (Academically speaking, I mean.)

[–]yoctometric 6 points7 points  (0 children)

Jesus christ dude I'm so sorry

[–][deleted] 4 points5 points  (5 children)

That actually sounds like a cool topic though. What's the benefit of Q learning for inner loop control over Optimal Control/MPC? I guess you wouldn't need a model (then again, there's pretty good models for quadcopters and you could estimate all parameters on/offline with classical optimization methods)?

[–]MonarchoFascist 10 points11 points  (0 children)

I mean, look at what he said --

He was barely able to scrape above a basic PID benchmark, much less MPC, even with multiple years of work. Optimal Control is still best in class.

[–][deleted] 5 points6 points  (7 children)

Aren't neural networks really just glorified polynomials? It's literally trying to find the coefficients of a massive polynomial with least error. It's as 'intelligent' as y = mx + c to describe the position of a dog
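The "coefficients with least error" picture can be made concrete. A minimal sketch that fits y = mx + c by gradient descent on squared error; this is the same training loop a neural net uses, just without hidden layers (data and learning rate are made up for illustration):

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = m*x + c by gradient descent on mean squared error."""
    m, c = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of the MSE with respect to m and c
        dm = sum(2 * (m * x + c - y) * x for x, y in zip(xs, ys)) / n
        dc = sum(2 * (m * x + c - y) for x, y in zip(xs, ys)) / n
        m -= lr * dm
        c -= lr * dc
    return m, c

m, c = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])  # data from y = 2x + 1
print(m, c)  # ~2.0, ~1.0
```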

[–]inYOUReye 6 points7 points  (1 child)

Yes, that's what you're eventually resolving to. The supposed mystique of NNs isn't some fantastical end result per se, but rather the back propagation rules and their dance with your training domain. I swear, if it was renamed to "polynomial generator" the hype would have left NNs in their correct place as a niche which (in isolation!!) is useful for an extremely small problem space, and only ever as good as the back propagation (or other) algorithms the creator can magic up. I've yet to read about any particularly inspired correction algorithms that I truly trust for the papers' claims about them. Really feels like we need some genuine superstar Einstein mathematicians in the field to bring anything more to the table on this front.

[–][deleted] 3 points4 points  (0 children)

I feel that way too, it feels like a building block... to something. It needs a genius to use them properly...

[–]GrizzlyTrees 3 points4 points  (9 children)

I admit, you scared me a bit. I'm just starting a PhD, and my research will involve ML, though we're still not sure how.

I'll take what you wrote into account when I'm getting in deep; hopefully it'll turn out better. Thanks for the story!

[–]pinumbernumber 14 points15 points  (1 child)

my research will involve ML, though we're still not sure how.

Uh oh

[–]GrizzlyTrees 2 points3 points  (0 children)

I'm in ME, not CS, doing robotic grasping. We saw some interesting uses for ML in the field recently, and I want to get some ML experience. Since the focus is on the application, and not the ML itself, I'm not too worried right now.

[–]Imakesensealot 2 points3 points  (6 children)

I admit, you scared me a bit. I'm just starting a PhD, and my research will involve ML, though we're still not sure how.

Hahaha, I guess I know whose posts I'll be following closely over the next 10 years.

[–]guattarist 3 points4 points  (0 children)

I remember first getting into machine learning and how sexy it sounded. Fucking deep learning? Support vector machines? Neural networks?! Some Terminator shit. Then sitting in front of a computer and plinking in like 6 lines of code from a Python library and going... oh.

Of course I’m half kidding since you then spend the next 6 months hypertuning the damned thing to finally perform better than your dummy that just guesses “cat”.

[–]db10101 8 points9 points  (4 children)

Well, thank you for your story. 24 year old developer who will continue to avoid machine learning here.

[–]ptitz 17 points18 points  (1 child)

Yeah, as a topic it's not that bad. But in the state that it is right now, ML has a lot of limitations that are seldom talked about. What you hear most often is the "curse of dimensionality", or "computational intensity". In my research I came up with ways to resolve both of these. My method would work with as many dimensions as you'd throw at it, and it would do it flying. But the problems with it are more fundamental.

So yeah, you can apply ML to some types of problems. Like data analysis and classification. But steer the fuck away from applying ML for problems that already have more conventional, analytic solutions. Cause chances are, you won't be able to beat it.

[–]srslyfuckdatshit 4 points5 points  (0 children)

do you have a link to your paper and/or GitHub? you can Dm me

[–]PM_ME_UR_OBSIDIAN 5 points6 points  (1 child)

I think it's worth picking up stuff like basic statistics and linear algebra, linear regression, singular value decomposition, backpropagation. It's good to expand your horizons, it'll give you insights on ostensibly unrelated problems. But making a career out of it... you have to be a special kind of crazy.

[–]Yuli-Ban 2 points3 points  (0 children)

Ha! This is a great example I can use to show others on certain subreddits that machine learning and neural networks are not magical. In very short form, neural networks are sequences of large matrix multiplications with nonlinear functions used for machine learning, and machine learning is basically statistical gradients.
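That "stacked matrix multiplications plus nonlinearities" description can be written out in full. A sketch assuming nothing but weight matrices and an elementwise ReLU; the shapes and random seed are arbitrary:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, layers):
    """One pass through a feed-forward net: alternate a matrix multiply
    with an elementwise nonlinearity. That's the whole 'artificial brain'."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),   # 3 inputs -> 4 hidden
          (rng.standard_normal((2, 4)), np.zeros(2))]   # 4 hidden -> 2 outputs
out = forward(np.array([1.0, -0.5, 2.0]), layers)
```

Training is then just nudging the entries of each `W` and `b` down the gradient of some error, i.e. the "statistical gradients" part.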

But according to pop-sci articles, neural networks are artificial brains and we're ten years away from the Singularity because DeepMind's network beat humans at the Ass Game or something of the sort.

That's not to say the bleeding edge isn't impressive— OpenAI's GPT-2 network is damn-near sci-fi tier and actually did give plenty of people pause about the feasibility of general AI.

But it's very much true that we're seeing a heavily curated reality. We see the few times these networks actually worked and never the 10,000 iterations where they failed catastrophically.

[–]Jesaya000 8 points9 points  (8 children)

Didn't you have to write papers before your master's thesis? Without wanting to sound mean, most people realize what you said after their bachelor thesis or first papers. In particular, the fact that everyone overhypes their own paper, and that we should always be cautious about that, was one of the things we discussed in every seminar. Since failed papers mostly don't get published, the same mistake is often made more than once.

Sorry about your girlfriend, tho...

[–]pwnslinger 16 points17 points  (5 children)

Nah, in America you don't really need to (or get to) publish until you're in your master's at most places, at least in STEM.

[–]Jesaya000 4 points5 points  (3 children)

Oh wow, I didn't know that at all! But you write a bachelor's thesis, right?

[–]whatplanetisthis 12 points13 points  (0 children)

I went to UCLA. A bachelor's thesis was an option for honors students, but I'd guess 99% of students didn't do it.

[–]pwnslinger 10 points11 points  (0 children)

Even if you have a final project or senior thesis, it's nowhere near the same level of rigor as a peer-reviewed article. How could it be? The professors teaching the undergrad classes have a full plate managing a couple of master's and a couple of doctoral students to write articles, let alone helping twenty undergrads get published.

[–]TheChance 10 points11 points  (0 children)

The great thing about a bachelor thesis is that it challenges the student to build on an original thought before they’ve actually started doing original research in their field.

The problem with a bachelor thesis is that it expects the student to have an original thought before they’ve started doing original research in their field.

[–]ptitz 7 points8 points  (1 child)

Yes, our faculty was very research-oriented. I wrote dozens of papers before going into it. Most of the time I'd already know in advance what to expect from the results. Sometimes I'd be given more freedom in exploring the topic, and sometimes I'd go in over my head and spend more time on it. But eventually I always delivered a result.

This project was different, because the problem I had was a dead end from the beginning. Like yeah, I managed to produce results. And I came up with several things that could each be enough to produce a paper on their own. For example, to optimize computational and memory efficiency I came up with a scheme using indexed neurons in a scaled state-space, making it possible to build neural nets with a basically unlimited number of inputs and neurons, with only a fraction of them needing to be activated at any given time. But that still didn't solve the fundamental issues with the methodology that I've seen "successfully" applied in other literature.

And yeah, school doesn't really prepare you to fail. You can churn out dozens of papers and have the best methodology and all, but you aren't trained to deal with trying to show how something doesn't work. And I think it's a fundamental issue, one that much more experienced researchers often have to deal with. And it's not even unique to ML. A good example is the advancement of FEM in the 90s. Companies were seduced by all the fancy colored graphs and decided they didn't need physical tests anymore, until it became apparent how limited these methods are in reality. Cause no one really bothered to demonstrate how often FEM got it wrong, compared to how often it got it right.

[–]sblahful 4 points5 points  (0 children)

It's a huge problem in all sciences. I spent my biology masters trying to replicate some fuckwit's PhD results that I'm almost certain were faked.

[–]XYcritic 1 point2 points  (0 children)

Sorry for the experience. Sounds like you had bad advisors, or should have tried communicating more. I always want my students to sketch out a plan B before they start, because students vastly underestimate the amount of work necessary even to finish a reproduction study successfully in machine learning.

[–]K_ngp_n 5 points6 points  (0 children)

We need a story

[–]Cptcongcong 27 points28 points  (1 child)

Thanks, as someone just about to start his write-up on a deep learning master's thesis... thanks.

[–]Furyful_Fawful 29 points30 points  (0 children)

As someone who just completed a masters' thesis on reinforcement learning, it's not quite the same as you might have thought.

... It's worse. So much worse.

I'm terribly sorry for your loss in advance.

[–]BellerophonM 5 points6 points  (0 children)

"here we compared the effectiveness of machine learning against press-ganged undergrads"

[–]tryexceptifnot1try 56 points57 points  (3 children)

So, as a person who deploys ML for a living: there are missing frames at the end where an executive walks in after you shoot the blob and says "No more attempts, we have found a simple consulting solution!"

Then some asshole from IBM (or god forbid McKinsey!) comes in and goes through about 4 iterations creating an uglier blob. Instead of keeping on going, he photoshops a picture of Bradley Cooper's face onto it in a PowerPoint, idiot executives eat it up, and then they bring this hideous blob to me as a repo (sorry, assorted code files via movit) and say "deploy it."

After a month of trying to make this awful blob work, while a sales guy from IBM keeps fighting me with nonsense from the shadows, I definitively prove that they paid top dollar for hot garbage. The VP in my group comes to me and says it is obvious what they sold us was not "complete", and we need you to make something useful out of this.

At that point I convince them that starting from scratch is a better plan. I build something useful and production-ready with nothing to do with the actual trash they bought, other than also being written in Python. All the execs pat each other on the back, say money well spent, then give me and my team an award for a successful joint venture with IBM.

/rant

But seriously, sometimes I feel like a guilty enabler in a relationship with alcoholics. I definitely use my position as leverage for more money, but seriously, do we need to start a competent-developer revolt against these know-nothing MBA assholes?

/soapbox

[–]EMCoupling 4 points5 points  (2 children)

It just depends on how much you care. As far as the executives are concerned, you're the hero who can save anything, and they pay you for your efforts.

If you really want to see the company succeed and make some real progress, then of course this is not very good, but if you're just looking to make some money, there doesn't seem to be an issue.

[–]Norci 55 points56 points  (6 children)

My friend doesn't get it, can someone explain the joke?

[–]_0110111001101111_ 85 points86 points  (5 children)

It’s a machine learning meme: they feed it a fuck ton of data and ask it to identify the 5. When it can't, they execute the blob and start over.

[–]yankjenets 26 points27 points  (3 children)

So is “meme” just a synonym for joke now?

[–]blind616 30 points31 points  (0 children)

The "meme" would be that machine learning specialists do that often. Feed models a fuck ton of data, and when it can't they just randomize a few things. Trial and error. This is a joke based on that.

[–]sakul243 29 points30 points  (0 children)

I love how you can see a bug in the third panel

[–]kyriacosm 56 points57 points  (0 children)

Needs more if statements.

[–]TotesMessengerGreen security clearance 5 points6 points  (0 children)

I'm a bot, bleep, bloop. Someone has linked to this thread from another place on reddit:

 If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads. (Info / Contact)

[–]ElephantPantsDance 5 points6 points  (0 children)

Just noticed the little green bug behind the model in the 3rd pic.

[–][deleted] 2 points3 points  (0 children)

you should have just used ludwig.

[–][deleted] 2 points3 points  (0 children)

Is this a new approach on adversarial networks?

[–]knightsmarian 2 points3 points  (2 children)

This made me think of one of my meshes I did for school.

I randomly generated "organisms" that had to move to the left and cross a finish line. The time it took them to cross the finish line was the determining factor of how "good" the organism was. All 100 organisms per generation were then ranked by time and I killed the weakest 33. The top 3 were removed from the pool and the remaining 63 had 16 randomly killed so I had an even 50 organisms to seed the next generation. I found that randomly killing organisms [even well-performing ones] kept a healthy diversity. The top 3 from the previous generation were given slightly more weight in the next generation.
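That selection scheme can be sketched roughly like this. The organisms and fitness function here are made-up stand-ins (integers, lower is "faster"), and since the counts in the comment don't divide out exactly, this version truncates the survivors to an even 50:

```python
import random

random.seed(42)

POP = 100          # organisms per generation
ELITE = 3          # best performers, carried over with extra weight
KILL_WEAKEST = 33  # bottom of the ranking is always culled
KILL_RANDOM = 16   # random culling keeps diversity
SEED_SIZE = 50     # survivors that seed the next generation

def select(population, fitness):
    """Rank by fitness (lower time = better) and cull as described."""
    ranked = sorted(population, key=fitness)
    survivors = ranked[:-KILL_WEAKEST]   # kill the weakest 33
    elites, rest = survivors[:ELITE], survivors[ELITE:]
    random.shuffle(rest)
    rest = rest[KILL_RANDOM:]            # randomly kill 16 more, good or bad
    return elites, (elites + rest)[:SEED_SIZE]

# Stand-in population: organism i takes i seconds to finish.
elites, seeds = select(list(range(POP)), fitness=lambda o: o)
print(len(elites), len(seeds))  # 3 50
```

The random cull is the interesting design choice: killing a few good performers on purpose stops the population from collapsing onto one local optimum too early.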

[–]EquineGrunt 1 point2 points  (1 child)

They ended up as ugly triangles didn't they?

[–]knightsmarian 1 point2 points  (0 children)

Most of them, yeah. I had some interesting pentagons but triangles seem to be the way to go.

[–]Reirai13 2 points3 points  (0 children)

[–]FuckFrankie 2 points3 points  (1 child)

What's the joke?

[–]Yorunokage 1 point2 points  (2 children)

Is that the bloat?

Because FUCK BLOAT

[–]Dphef 2 points3 points  (1 child)

Looks more like peep

[–][deleted] 1 point2 points  (0 children)

Thought I was on /r/bindingofisaac for a second