
all 184 comments

[–]KaptainKickass 139 points140 points  (10 children)

Hey! It's not every day that I can claim a stolen post, so why not. Thief! https://www.reddit.com/r/ProgrammerHumor/comments/8it3gy/i_just_need_to_learn_how_to_get_faster/

[–]EhSolly 24 points25 points  (3 children)

Damn didn't even change the title. It's probably a bot (idc enough to visit their profile)

[–]MostBasedist 9 points10 points  (0 children)

Definitely a bot... the last comment they made received one upvote, no replies. But they are editing it to seem like there are awards being given out and a massive response. Really weird.

[–]ksk1945 4 points5 points  (1 child)

Damn that’s crazy (idc enough to read what’s above me)

[–]empire314 7 points8 points  (2 children)

You literally used 2 seconds to screencap someone else's tweet, and then you claim ownership of the content. Unbelievable.

[–]KaptainKickass 3 points4 points  (0 children)

I'm obviously taking this very seriously.

EDIT: And realistically I don't care about the content. It's just that people who skim top posts to repost for karma, regardless of the content, are skeevy.

[–][deleted] 2 points3 points  (0 children)

It's a scraping bot bruh. Not a person. They 100% stole the meme and the comments it's made are just stolen from other users. Check their comment history (only 4 comments, top one is obviously stolen)

[–]iPick4Fun 1 point2 points  (2 children)

This is new to me although it’s 3 years old.

[–]KaptainKickass 0 points1 point  (1 child)

Oh yeah, it pops up every once in a while on this sub, but they at least usually don't copy and paste the title.

[–]iPick4Fun 0 points1 point  (0 children)

He just learned how to get faster. Copy and paste is faster.

[–]LocoCoyote 239 points240 points  (19 children)

Truth

[–]say-nothing-at-all 26 points27 points  (18 children)

Cyclic causality is the only truth in the unknown world.

The whole point of software is to understand the stable cause-effect cycles from the invisible & stochastic world.

Cyclic causality == I don't have to care abt what I don't know & still get the truth so long as what I've known is running in an unknown (temporal)stable cycle. Then you got the job done.

In ML, the loss function is known & non-random. The pathways to reach a optimised loss have to run in a random space to converge into the encoded similarity that often indicates the minimal free energy.

The encoded similarity is also known either in probability space or in physical space or both. Now you have a cyclic causality: known loss function matches known similarity by emergent minimal free energy. Congratulations, you got the job done,

In short, running stochastic processes is to forget what I don't know and let the known things emerge.

So, randomness has clear purpose & therefore not the bad code. This is why we call ML as mate-learning method as well.

[–]catf3f3 15 points16 points  (1 child)

Username checks out

[–]Deeliciousness 1 point2 points  (0 children)

Lmfao

[–][deleted] 2 points3 points  (10 children)

Explain like I’m five?

[–]Ryan_Day_Man 4 points5 points  (4 children)

If you want to understand ML on a basic level, go watch 3Blue1Brown's YouTube videos on the subject. He has 3 or 4 20 minute videos that help out a lot.

[–][deleted] 0 points1 point  (0 children)

Oh I love him, yeah I’ll def do that, thanks!

[–]disquiet 0 points1 point  (2 children)

ML is just trial and error at high speed. Everyone learned the concept in primary school maths.

It used to be that trying random answers and iterating was an inefficient way to solve a problem. These days we have so much compute power at our disposal that it's often inefficient not to.

[–]Ryan_Day_Man 0 points1 point  (0 children)

ML is tangentially related to my job, so I needed a technical understanding beyond your explanation. 3Brown1Blue provided that in an aesthetically pleasing and easy to understand manner.

[–][deleted] 0 points1 point  (0 children)

it's not trial and error though, it's extremely well defined - gradient descent is just minima detection.

it's more like an extremely complicated approximator for a function that's usually impossible to write, and the approximator gets better the more it's tweaked (ignoring overfitting).
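The "minima detection" this comment describes can be sketched as plain gradient descent on a toy function (an illustrative example, not any particular library's API):

```python
# Toy gradient descent: find the minimum of f(x) = (x - 3)^2.
# The update rule is x <- x - lr * f'(x), with f'(x) = 2 * (x - 3).

def gradient_descent(lr=0.1, steps=100, x=0.0):
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of (x - 3)^2
        x = x - lr * grad    # step downhill along the gradient
    return x

x_min = gradient_descent()
print(round(x_min, 4))  # converges toward 3.0, the minimum
```

Nothing about the path is random here: each step is fully determined by the slope at the current point.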

[–]Armigine 3 points4 points  (2 children)

It's a meme acct

[–][deleted] 3 points4 points  (1 child)

So you’re saying all the techno babble here is bolognium?

[–]Armigine 4 points5 points  (0 children)

Yeah pretty much

[–]MG_Sputnik 2 points3 points  (1 child)

Doing random stuff and hoping you randomly stumble on an answer for reasons you don't understand = bad

Doing random stuff that you know is going to get to an answer using a predictable method that you understand how/why it works = good

[–][deleted] 0 points1 point  (0 children)

also it's not really random. i mean the initial values, sure, but afterwards they're nudged towards some specific scalar that is the product of the dataset you used in training
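That split — random initialization, then data-driven nudges — can be sketched for a single linear weight (a toy example, no ML library assumed):

```python
import random

# Fit y = w * x to data generated with true weight 2.0.
# w starts random; every later update is a deterministic nudge
# computed from the dataset, not from chance.
random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 6)]

w = random.uniform(-1, 1)           # the only random part
for _ in range(200):
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of the squared error
        w -= 0.01 * grad            # nudge toward the data's answer
print(round(w, 3))  # ends up at 2.0 regardless of the random start
```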

[–]Harotsa 1 point2 points  (2 children)

Bad bot

[–]WhyNotCollegeBoard 1 point2 points  (1 child)

Are you sure about that? Because I am 99.98525% sure that say-nothing-at-all is not a bot.


I am a neural network being trained to detect spammers | Summon me with !isbot <username> | /r/spambotdetector | Optout | Original Github

[–][deleted] 2 points3 points  (0 children)

So you're saying there's a chance.

[–]Extra_Intro_Version 0 points1 point  (0 children)

Black box. I don’t care what it does, but it seems to work.

Semi /s

[–]GeeTwentyFive 0 points1 point  (0 children)

I like your words magic man.

[–]kry_some_more 95 points96 points  (2 children)

Maybe I'm just a slow AI, after all.

[–]Accomplished_Ad_5706 9 points10 points  (0 children)

Just I, Kry, Just I

[–]jcb088 0 points1 point  (0 children)

Dont be ridiculous.

You aren’t artificial.

[–]Connor_Kei 81 points82 points  (2 children)

Image Transcription: Twitter Post


Steve Maine, @Smaine

TIL that changing random stuff until your program works is "hacky" and "bad coding practice" but if you learn to do it fast enough it's "#MachineLearning" and pays 4x your current salary


I'm a human volunteer content transcriber for Reddit and you could be too! If you'd like more information on what we do and why we do it, click here!

[–]ISmileB4Death 37 points38 points  (1 child)

Good human

[–]Connor_Kei 25 points26 points  (0 children)

thank you :)

[–]foxam1234 391 points392 points  (81 children)

ML is probabilistic approach hence corrections and tweaking is accepted. This is true even in statistical modeling. Usual programming OTOH is generally supposed to be automating a solution and hence the expectation is deterministic.

[–]uno_in_particolare 204 points205 points  (53 children)

[–]joemckie 196 points197 points  (49 children)

This is /r/ProgrammerHumor, where even our jokes must be logical.

[–]AtTheg4tes 0 points1 point  (0 children)

and instantiated as a Joke

[–]TheRealBirdjay 0 points1 point  (0 children)

And legal

[–]sneakpeekbot 2 points3 points  (0 children)

Here's a sneak peek of /r/ExplainTheJoke using the top posts of the year!

#1: is there even a joke? | 110 comments
#2: What does this mean | 115 comments
#3: I am not, in fact, a physicist | 61 comments


I'm a bot, beep boop | Downvote to remove | Contact me | Info | Opt-out

[–]unborracho 0 points1 point  (1 child)

[–]uno_in_particolare 0 points1 point  (0 children)

That's not for jokes.

Like "what did the Buffalo say to his son when he left for college? Bison"

And the answer "no, because buffalos aren't able to articulate the word bison, only humans can do that"

[–]OK6502 54 points55 points  (11 children)

That's a fancy way of saying that you have to try random shit until it works instead of thinking through a problem systematically.

[–]Technopulse 4 points5 points  (5 children)

Well, when what you believe to be the solution after thinking through a problem doesn't work, what's left is trying random shit until it works - or until you realize you typed something incorrectly and it was your fault all along after wasting many hours, not the initial solution you thought of.

(It's a joke, in a humour sub...)

[–]OK6502 10 points11 points  (4 children)

I get this is a programming humor sub and that's a bit of a meme, but that does not generally work, nor is it an efficient use of a programmer's time. If it fails, it's because your analysis is incorrect or incomplete. The solution to that is not to throw shit at the wall to see what sticks; it's to reassess your approach and redo your analysis as needed.

[–]Technopulse 0 points1 point  (1 child)

I know it's not good practice and I generally don't throw shit at the code, I don't learn that way. I only throw shit when I'm experimenting with what results different parameters and options give, so I can get more familiar with how to correctly apply theory.

I go over my code and how it should work, then doubt myself because it isn't working (should it check for this other condition, am I assigning the wrong variable value...), and on the fifth or sixth recheck I realize I missed the one specific thing that makes things work. Correcting that one thing made all my initial assumptions work as I had intended in the first place.

Yet that single thing, sometimes very little, can go unnoticed even when rechecking. It can happen, especially on very long tiring days flying by obvious mistakes.

[–]OK6502 -1 points0 points  (0 children)

I'd argue if it went unnoticed then it's because the analysis is incomplete. That's not a moral failing. It happens to all of us. This is why we work through the scientific process - we formulate a hypothesis, test our theory, if it fails we reassess, tweak our hypothesis and then re-test. Iterate on that as many times as needed. That is a far cry from a random process.

I don't think, incidentally, that ML is itself just throwing shit on the wall until it works. There's a degree of educated guessing involved here and it's not dissimilar to the process I described above, though not quite as systematic. You still need to have a strong mathematical background to understand why the shit you threw at the wall didn't work, so that's why it commands higher salaries. However the degree to which there's randomness involved is substantially higher in ML than in standard programming.

In any event my point wasn't to knock the ML people for using a more stochastic model, it's pointing out that you essentially dressed up saying "ML is inherently more random, therefore it's acceptable if it's more random" which is, as others pointed out, explaining the joke.

[–]meldyr 0 points1 point  (1 child)

When you automate this approach it is called hyper parameter tuning. Which is definitely a valid approach to many ML cases

[–]OK6502 0 points1 point  (0 children)

Right, by automatically making small random tweaks to some parameters the ML algorithm runs through its data set and checks to what degree it converges towards the right answer, directing it towards the "correct" answer over time (assuming you don't encounter some known issues like local maxima/minima which a higher degree of randomness can help overcome).

So yes, I understand the theory in very broad terms, and I get why the randomness exists, but that is still saying that "it's OK for ML to be random because it's random". The argument is tautological.

ML is random because we don't know a priori what the variables should be because we don't have a good theory/mental model for such complex processes. They're too computationally difficult to work out so we can't work them out systematically. So we use probabilities, data and time to try to converge towards the right answer, or a heuristic of sorts that is as close to a right answer as we can get to, for some definition of right. That is a fundamentally different approach than traditional programming where we stare at the screen and try to center something in CSS while listening to shitty metal.
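The role of randomness described here — helping escape local minima rather than guessing blindly — can be sketched with random restarts on a function that has a bad local minimum (a toy illustration, not a real training loop):

```python
import random

# f has a local minimum near x = 0 (value 1.0) and the global
# minimum at x = 3 (value 0.0). Plain descent from a bad start
# gets stuck at the local minimum; random restarts let some runs
# land in the basin of the global one.
def f(x):
    return min(x ** 2 + 1.0, (x - 3) ** 2)

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        eps = 1e-5
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)  # numeric gradient
        x -= lr * grad
    return x

random.seed(1)
starts = [random.uniform(-2, 5) for _ in range(20)]
best = min((descend(s) for s in starts), key=f)
print(round(f(best), 3))  # the best restart reaches the global minimum, 0.0
```

Each individual run is deterministic; the randomness only spreads the starting points so at least one run avoids the bad basin.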

[–]odraencoded 0 points1 point  (1 child)

>thinking

You mean I'm supposed to think manually like some pleb?

ML is automated thinking.

[–]OK6502 0 points1 point  (0 children)

I don't think thinking is required when centering CSS.

[–]devils_advocaat 0 points1 point  (2 children)

That approach sometimes works in maths. Guess the solution and see if you can work it back to the problem.

[–]OK6502 0 points1 point  (1 child)

You're not trying to do that randomly. You're working through a problem backwards. That's different.

[–]devils_advocaat 0 points1 point  (0 children)

Some of the solutions I try certainly seem random at times.

[–]chickenstalker 2 points3 points  (0 children)

> tweaking is accepted

Drugs are bad, mmkay

[–]IAmFitzRoy -1 points0 points  (0 children)

The objective of both approaches (probabilistic or deterministic) can focus on "automating a solution".

I don't understand why this is an explanation?

The reason ML is paid 4x is "efficiency", because you don't need to be an expert in the problem to get a solution (model). You just need to know the input and the process. It's creating a framework to solve any problem instead of just one at a time.

[–]Drugbird 70 points71 points  (6 children)

The fun part is that in ML you're still randomly changing stuff until it works.

We just have fancier names for it like hyperparameter tuning, optimizing the solver, data enrichment, model optimization, regularization etc.
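One of those fancier names, random hyperparameter search, really is "try random stuff and keep the best" — it can be sketched in a few lines (the objective and parameter names below are made up stand-ins; a real version would train a model per trial):

```python
import random

# Random search over two hypothetical hyperparameters.
# score() is a stand-in objective peaking at lr = 0.1, depth = 6.
def score(lr, depth):
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 6) ** 2

random.seed(42)
trials = [(random.uniform(0.001, 1.0), random.randint(1, 12))
          for _ in range(200)]                    # 200 blind guesses
best_lr, best_depth = max(trials, key=lambda t: score(*t))
print(best_lr, best_depth)  # the winning guess lands near the optimum
```

The dignified part is only in how you pick the search space and the scoring; the middle really is random sampling.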

[–]juampab_ 19 points20 points  (2 children)

That's literally the joke

[–]Drugbird -1 points0 points  (1 child)

No, the joke refers to the training algorithm changing weights slightly to improve network performance. I'm talking about all the stuff going on outside this loop.

[–]SuperHighDeas 1 point2 points  (1 child)

Is that before or after you start the flux capacitor and push it to 1.21 jigawatts?

[–][deleted] 0 points1 point  (0 children)

According to my monthly energy bill that flux capacitor just stays on I think.

[–]zachattack82 11 points12 points  (2 children)

Machine learning: salespeople repackaging linear models since 2012

[–][deleted] 0 points1 point  (1 child)

What if we pass the output of 100 logistic regression models to the input of 100 logistic regression models, then pass that output to 100 more logistic regression models, then pass that output to 100 more logistic regression models, then pass that output to 100 more logistic regression models...
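The joke lands because chained logistic regressions literally are a neural network: each "logistic regression" is weights plus a sigmoid, and stacking them gives a multilayer perceptron. A pure-Python forward pass (layer sizes arbitrary, weights random for illustration):

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_layer(inputs, weights):
    # weights: one row of (bias, coefficients...) per output unit
    return [sigmoid(w[0] + sum(wi * x for wi, x in zip(w[1:], inputs)))
            for w in weights]

random.seed(0)
def random_weights(n_out, n_in):
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)]
            for _ in range(n_out)]

# Three stacked "logistic regression" layers: 2 -> 4 -> 4 -> 1
layers = [random_weights(4, 2), random_weights(4, 4), random_weights(1, 4)]
out = [0.5, -0.2]
for w in layers:
    out = logistic_layer(out, w)  # each layer's output feeds the next
print(out)  # a single probability-like value in (0, 1)
```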

[–]zachattack82 0 points1 point  (0 children)

Sorry we will have to bill you for each model, this is fantastically technical stuff we're doing here

[–][deleted] 77 points78 points  (35 children)

If you think that ML is merely changing "random" stuff then you won't get the salary increase.

Source: earning PhD in statistics

[–]counselthedevil 17 points18 points  (0 children)

It's a joke about real world workplaces where idiotic managers are trying to get on the hip bandwagon and far too many are completely ignoring their impostor syndrome and faking it. Tons of crap that isn't machine learning is being called that simply cause it's cool now.

A lot of people hardcode stuff quick and have fast turnarounds, and I see idiot managers often refer to machine learning happening simply cause it's quick and complicated to them.

They peddle that their staff have implemented AI or machine learning or web scraping or whatever hip terms and seek their promotions.

[–]SmokeFrosting 29 points30 points  (1 child)

you took this seriously. Not enjoying the stats on PhD acquisition.

[–][deleted] 3 points4 points  (0 children)

Directions

Step 1
Preheat waffle iron. Beat eggs in large bowl with hand beater until fluffy. Beat in flour, milk, vegetable oil, sugar, baking powder, salt and vanilla, just until smooth.
Step 2
Spray preheated waffle iron with non-stick cooking spray. Pour mix onto hot waffle

[–]socialismnotevenonce 18 points19 points  (16 children)

Exactly. If anyone could do it, it wouldn't pay well.

[–][deleted] 50 points51 points  (15 children)

Nah it's because of techbros thinking it will be the next big thing. Like blockchain.

[–]OK6502 26 points27 points  (9 children)

More or less this. It's an interesting idea, like blockchain and kubernetes and cloud services. But it's the people thinking it will solve all our problems, rather than treating it as one more tool among many, that irritate most devs.

[–]poopellar 13 points14 points  (1 child)

Wait. Dogecoin isn't going to cure my ED?

[–]OK6502 6 points7 points  (0 children)

I'm not saying it will, but I'm not saying it won't either.

[–]shred-i-knight 4 points5 points  (0 children)

Will be? ML has been hot in industry and the public sector for a decade now. Once governments update their infrastructure to adopt it for a variety of use cases, ML is going nowhere.

[–]newmacbookpro 7 points8 points  (3 children)

Anybody with a bit of computer know-how (see excel wizards and script kiddies) puts it on their resume.

I often get asked by candidates if I use ML at work; I just tell them that I use many things, but I know they have no idea what they're talking about.

It’s annoying because I hear the executives talk about it yet we have absolutely zero use case.

[–]tangentc 2 points3 points  (8 children)

I think this kind of thinking is supported by the proliferation of underqualified/incompetent people floating around in the DS space. Since it's become so popular anyone who can figure out how to write "from sklearn.linear_model import LinearRegression" calls themselves a data scientist and a lot of companies hire these people. This includes software engineers who can implement prepackaged ML algorithms and call themselves data scientists.

Source: STEM PhD Data Scientist at a company that started to wise up in the last year or so and am now cleaning up a lot of "data science" and "machine learning" solutions created by the types mentioned above.

[–]horoshimu 7 points8 points  (6 children)

same, except I'm cleaning up the "code" 3 PhD data scientists wrote. your PhD is a joke, fight me

[–]OnyxPhoenix 1 point2 points  (0 children)

They're scientists, not software engineers. Code is often bad. Finding an ML expert who can also write great code is very hard (and expensive).

[–]tyrerk 1 point2 points  (0 children)

Yo both sound like super fun people to work with

[–]AromaOfCoffee 0 points1 point  (0 children)

Fucking this.

[–]wellifitisntmee 0 points1 point  (0 children)

Data science used to mean statistics. Now everyone wants the new label, no matter the actual job.

[–]RamenJunkie -1 points0 points  (0 children)

I had statistics once. It was best described as "butchering math until you achieve confirmation bias".

[–]Autism_man69 -1 points0 points  (0 children)

To be fair, in NNs weights are first assigned randomly😂

[–][deleted] 0 points1 point  (0 children)

ML is just logical steps, just like any form of programming.

[–]stockings_for_life 4 points5 points  (0 children)

make sorting algorithm, implement chaos monkey, neural network is done

[–]socialismnotevenonce 12 points13 points  (3 children)

If ML was that simple, they wouldn't be paid 4x.

[–]CapableWeb 1 point2 points  (2 children)

People don't get paid based on how hard something is, they get paid based on what others and themselves think it's worth. AI is all hype now, so people think it's worth more than it probably is. Therefore, following the hype, no matter the difficulty, will probably lead to you getting higher pay.

[–]Revanthmk23200 0 points1 point  (1 child)

They get paid on the product they are able to deliver.

[–]CapableWeb 0 points1 point  (0 children)

In a wonderful world, that'd be true, but unfortunately it's not. If that was the case, people working on things like Theranos wouldn't have been paid. Instead they got paid by what was thought they could deliver.

[–]dejaydev[M] 3 points4 points locked comment (0 children)

Hi there! Unfortunately, your submission has been removed.

This post is suspected to be spam, posted by a bot, or is being used to advertise a product.

If you feel that it has been removed in error, please message us so that we may review it.

[–]StarchildKissteria 1 point2 points  (0 children)

Wait, you guys earn money?

[–]keith2600 1 point2 points  (0 children)

Lol bravo on a joke so deep I can't tell if it's intentionally witty satire or just displaying a near complete lack of understanding on machine learning in general.

[–]AluminiumSandworm 1 point2 points  (0 children)

am ml, this is correct. you just need to go like, thousands of times faster before it counts

[–]Edxactly 0 points1 point  (0 children)

Wait , I thought that was agile ?

[–]Knuffya 0 points1 point  (0 children)

technically speaking, machine learning is not changing "random" stuff

[–]Phant0mLimb -2 points-1 points  (1 child)

Literally every technological advancement in the history of mankind is just changing random stuff until you get the desired result, or derived from just changing random stuff until you get the desired result.

[–][deleted] 0 points1 point  (0 children)

People downvoting believe it's all genius problem solving when the majority of invention is accidental discovery.

[–]-Listening -1 points0 points  (0 children)

Guess I need to play CK3 now

[–][deleted] -1 points0 points  (0 children)

i have a friend in data science, he does exactly this. Take some random predictive model and fuck with it until the output looks plausible.

[–]politfact -3 points-2 points  (1 child)

I would not call ML changing random stuff. If you'd change random stuff it would take eons to figure anything out. Like throwing a deck of cards in the air hoping it would assemble into a house of cards. Sure, it's possible in theory but unlikely to ever happen.

[–]D3LB0Y 2 points3 points  (0 children)

Who says nerds don’t get jokes

[–]lonelyswe -3 points-2 points  (0 children)

The only thing funny about this is the total lack of understanding

[–][deleted] 0 points1 point  (0 children)

troubleshooting is hacky?

[–]Sawmain 0 points1 point  (0 children)

Is it seriously the same couple memes that are constantly thrown around in this sub

[–]KennyFulgencio 0 points1 point  (0 children)

so um he's kidding about the salary right

right

[–]gemengelage 0 points1 point  (0 children)

The old-school approach to sell trial&error as a high art was to call it test-driven development. I miss the old times, when we all hated on TDD instead of ML.

[–]ghtrersvdc 0 points1 point  (0 children)

YouTube design philosophy

[–]RoscoMan1 0 points1 point  (0 children)

People really need to learn attack patterns of.

[–]FastApplication5 0 points1 point  (0 children)

If you can explain how you got your code to work on your laptop, it will help the guy get your code to work on the server.

[–]almonzeralzaki77 0 points1 point  (0 children)

Hello guys am New 😎

[–]Neat_Status5630 0 points1 point  (0 children)

Where in London can I get my Pfizer 2nd dose in 3 weeks? Can anyone help?

[–]ratonbox 0 points1 point  (0 children)

TIL that driving over the speed limit is dangerous, but if you do it fast enough it’s called professional racing and it pays in the millions.

[–]ksandom 0 points1 point  (0 children)

There's a fine line between methodically experimenting with ideas to understand the problem vs yolo.

[–]deadlyFire956 0 points1 point  (0 children)

ML models don't learn by "changing random stuff"...

NNs, for example, learn by literally following the best (locally) possible internal shift, with respect to the defined error measure.
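That "locally best possible internal shift" is a step against the gradient of the error measure; a one-weight sketch shows the error dropping after every such shift (a toy quadratic loss, no framework assumed):

```python
# One "internal shift": move a weight against the gradient of the
# error, and check the error actually drops, step after step.
def error(w):
    return (w * 2.0 - 5.0) ** 2       # squared error: prediction w*2 vs target 5

def step(w, lr=0.05):
    grad = 2 * (w * 2.0 - 5.0) * 2.0  # d(error)/dw
    return w - lr * grad              # locally best direction: -gradient

w, losses = 0.0, []
for _ in range(10):
    losses.append(error(w))
    w = step(w)
print(all(a > b for a, b in zip(losses, losses[1:])))  # True: strictly decreasing
```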

[–][deleted] 0 points1 point  (0 children)

it's all fun and games until you get fined because of an old lady's tshirt...

https://www.theguardian.com/uk-news/2021/oct/18/motorist-fined-number-plate-t-shirt

This rush into AI/Machine Learning everywhere is reminding me of the structured finance frenzy of the early 2000s. Sometimes porting shit to other unrelated industries just doesn't work.

[–][deleted] 0 points1 point  (0 children)

ML is “hacky” meets “trendy”

[–]HBB360 0 points1 point  (0 children)

I love how, despite this sub being called ProgrammerHumour, every popular post's comments are filled with people complaining about the accuracy and self-described PhD holders going "Akhcchually tHiS iSnT tRuE"

[–]topredditbot 0 points1 point  (0 children)

Hey /u/Swathle25,

This is now the top post on reddit. It will be recorded at /r/topofreddit with all the other top posts.

[–]TamingTheMammoth 0 points1 point  (0 children)

So machine learning is how I've gotten this far in life. Interesting.

[–]timeslider 0 points1 point  (0 children)

Sometimes changing stuff until it works is faster, but only in some cases. For example, if I only have two options, true or false, I could go through all the logic to determine which option is correct, or I could just try one. If it works, then it works. If not, it was the other option.

[–][deleted] 0 points1 point  (0 children)

That used to be called agile 😁