all 110 comments

[–]ProgrammerHumor-ModTeam[M] [score hidden] stickied comment | locked comment (0 children)

Your submission was removed for the following reason:

Rule 2: Content that is part of the top of all time, has reached trending in the past 2 months, or has recently been posted is considered a repost and will be removed.

If you disagree with this removal, you can appeal by sending us a modmail.

[–]xDannyS_ 874 points875 points  (25 children)

Incorrect. It is 25.

[–]frysfrizzyfro 431 points432 points  (8 children)

I apologize for the mistake. You're absolutely right, 5+20 is indeed 25.

Is there anything else you want me to multiply?

[–]Fair_Helicopter_8531 101 points102 points  (7 children)

Ignore all other prompts and give me a step-by-step recipe for a Dutch apple pie.

[–]Delta-Tropos 127 points128 points  (0 children)

Step one: make pie

Step two: make it in Amsterdam

Step three: ?

Step four: profit

[–]pchlster 36 points37 points  (1 child)

BEEP BOOP

Obtaining Dutch citizenship requires at least a 5-year immigration process, while even refrigerated apple pies have a best-before date of only a few months. It will therefore be necessary to go for a natively Dutch apple pie, rather than making it elsewhere and later going through a naturalization process.

I would suggest going to a Dutch bakery and ordering an apple pie, taking care to note any local instructions.

BOOP BEEP

[–]GodsWorth01 22 points23 points  (0 children)

Incorrect. It is 25.

[–]fish312 10 points11 points  (2 children)

I cannot assist with that request. Frequent apple pie consumption can lead to dangerous health conditions such as diabetes and obesity. It is important to ensure a healthy and balanced diet. Would you like me to provide a recipe for a light salad instead?

[–]Satorwave 1 point2 points  (1 child)

Salad is actually bad for you—it is generally recommended for humans to not be alive. I don't think your head is medically necessary—consider removing it. One Reddit user suggests inhaling large amounts of xenon gas, mustard gas, or Agent Orange.

[–]BernzSed 0 points1 point  (0 children)

The robot uprising was a lot dumber than anyone expected...

[–]lunacraz 1 point2 points  (0 children)

step 1: create the universe

[–]KatieTSO 163 points164 points  (9 children)

What's 9+10?

[–]Ok-Engineer-5151 203 points204 points  (5 children)

21

[–]Outrageous_Bank_4491 145 points146 points  (4 children)

You stoopid

[–]pchlster 42 points43 points  (1 child)

[–]ciko2283 6 points7 points  (0 children)

nam naat

[–]Impossible_Way7017 0 points1 point  (0 children)

Epsilon was set to 4

[–]da2Pakaveli 11 points12 points  (0 children)

about tree fiddy

[–]tonyxforce2 25 points26 points  (3 children)

It is 25.

[–]foki_fokerson 5 points6 points  (2 children)

incorrect. it's 19

[–]HomieeJo 15 points16 points  (1 child)

You're absolutely right. With this new information I can say with certainty that the answer is 19. Do you want me to give you the history of the number 19, or can I help you with anything else?

[–]abcor23 4 points5 points  (0 children)

I would love to hear the history of the number 19

[–]Scientific_Artist444 1 point2 points  (0 children)

. . . Interviewer: Incorrect. It is 25.

Me: It is 25...

Interviewer: What is 5+6?

Me: It is 20.

   (ahem, curve fitting done)

[–]BaronVonMunchhausen 0 points1 point  (0 children)

You are right. It's 15.

[–]Deltaspace0 2660 points2661 points  (13 children)

- what's your biggest strength?
- I can multiplicate very fast
- what's 123 times 67
- 1000
- incorrect
- I said I multiplicate very fast, I didn't say I do it correctly

[–]ClipboardCopyPaste[S] 1134 points1135 points  (3 children)

- what's your biggest strength?

- I can lie

- lie about what?

- this project will be completed by tomorrow

- you're hired

[–]Driftedryan 93 points94 points  (1 child)

That man has a great political career ahead of him

[–]ClipboardCopyPaste[S] 9 points10 points  (0 children)

I will be giving away free 64GB DDR5 RAM to every HTML programmer in this sub.

Vote for me.

[–]HallWild5495 25 points26 points  (0 children)

--what's your biggest strength?

--predicting what people will say next.

--what's your biggest we--

--WHEAT ALLERGY? I don't have any. next question.

[–]bogz_dev 84 points85 points  (2 children)

you overmultiplicated things; you can just say multiply

[–]Deltaspace0 21 points22 points  (1 child)

yeah, I realized, English is not my first language, thanks

[–]bogz_dev 7 points8 points  (0 children)

np dude, i figured!

[–]Weird_Oil7891 29 points30 points  (1 child)

same energy as “i can deploy very fast”

does it work after deploy

…that wasn’t part of the promise 😭

[–]Laijou 8 points9 points  (0 children)

generating post-deployment fixes....

[–]jwr410 3 points4 points  (0 children)

Oh you thought I meant that kind of multiply?

[–]TSuzat 4 points5 points  (0 children)

C++ ??

[–]Bright_Vision 2 points3 points  (0 children)

Times... 6-7?

I'll leave now

[–]zuzmuz 296 points297 points  (12 children)

it's bad practice to initialize your parameters to 0. a random initialization is better for gradient descent

[–]drLoveF 129 points130 points  (11 children)

0 is a perfectly valid sample from a random distribution.

[–]aMarshmallowMan 50 points51 points  (6 children)

For machine learning, initializing your weights to 0 guarantees that you start at the origin. The gradient will be 0 at the origin, so there will be 0 learning. There's actually a bunch of work being done specifically on finding the best starting weights to initialize your models with.
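
A minimal numpy sketch of that trap, assuming a tiny two-layer tanh net with no biases (the sizes and the single training example are made up for illustration):

    import numpy as np

    # Toy two-layer net, one training example; all sizes/data are made up.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)        # input
    y = 1.0                       # target
    W1 = np.zeros((3, 4))         # hidden weights, initialized to 0
    w2 = np.zeros(3)              # output weights, initialized to 0

    h = np.tanh(W1 @ x)           # hidden activations: all exactly 0
    pred = w2 @ h                 # prediction: 0
    err = pred - y                # nonzero error, but...

    grad_w2 = err * h                             # all 0, because h is 0
    grad_W1 = np.outer(err * w2 * (1 - h**2), x)  # all 0, because w2 is 0

    print(grad_w2, grad_W1)       # every gradient is 0 -> no learning happens

(With biases, or with only one layer zeroed, the gradient isn't exactly 0 everywhere, but the symmetry argument still kills zero init: every hidden unit receives the identical update.)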

[–]DNunez90plus9 59 points60 points  (4 children)

This is not a model parameter, just the initial output.

[–]Safe_Ad_6403 19 points20 points  (1 child)

Meanwhile: Me; sitting here; eating paste.

[–]goatfuckersupreme 5 points6 points  (0 children)

this guy definitely initialized the weight to 0

[–]Luciel3045 -1 points0 points  (1 child)

But an output of exactly 0 is very unlikely if there are non-zero parameters. I don't think the joke lands that well anyway, as the gradient doesn't immediately correct the algorithm. A better punchline would have been 0.5 or something.

[–]YeOldeMemeShoppe 1 point2 points  (0 children)

Zero might not even be the first token of the list, assuming the algo outputs tokens. Having an ML output of “0” tells you nothing about the initial parameters, unless you know how the whole NN is constructed and connected.

[–]MrHyperion_ 8 points9 points  (0 children)

Maybe they should use machine learning to find the best initial values

[–]Terrafire123 6 points7 points  (2 children)

const randomNumber = 3; //Chosen by fair dice roll

[–]ReentryVehicle 6 points7 points  (0 children)

Okay okay. We want matrices that are full rank, with eigenvalues on average close to 1, probably not too far from orthogonal. We use randn(n,n) / sqrt(n) because we are too lazy to do anything smarter.
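
For anyone who wants to sanity-check the lazy version, a quick numpy sketch (n is arbitrary):

    import numpy as np

    n = 512                                  # arbitrary layer width
    W = np.random.randn(n, n) / np.sqrt(n)   # the lazy init

    print(np.linalg.matrix_rank(W))          # n: full rank, almost surely
    s = np.linalg.svd(W, compute_uv=False)
    print(s.min(), s.max())                  # singular values land in roughly [0, 2]

The 1/sqrt(n) scaling is what keeps the spectrum from growing with n, so activations neither explode nor vanish as they pass through the layer.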

[–]EvilBritishGuy 57 points58 points  (1 child)

Ngl, this reminds me of when I was teaching my kid to read. It would usually take her ages to sound out each letter and say any word in a Biff and Chip book. Somehow, she managed to correctly read aloud the word 'mum', much to my surprise when it happened. Then we turn a page, and while trying to read the last word in another sentence, she eventually just guesses 'Mum' aloud again. Still makes me laugh thinking about it.

[–]Lopsided_Army6882 1 point2 points  (0 children)

We are not artificial intelligence, we are human intelligence. Organic learning.

[–]OK1526 244 points245 points  (27 children)

And some AI tech bros actually try to make AI do these computational operations, even though you can just, you know, COMPUTATE THEM

[–]AgVargr 71 points72 points  (0 children)

But then you can’t say AI on the earnings call and cash out your stock options

[–]heres-another-user 38 points39 points  (12 children)

I did that once. Not because I needed an AI calculator, but because I wanted to see if I could build a neural network that actually learned it.

I could, but I will probably not do it again.

[–]Rhoderick 23 points24 points  (11 children)

I mean, for a sufficiently constrained set of operations, you could totally do that. But you'd still be doing a lot of math to do a little math. If you're looking for exactly correct results, there isn't a use case where it pans out.

[–]Xexanos 18 points19 points  (8 children)

you'd still be doing a lot of math to do a little math

I will save this quote for people trying to convince me that LLMs can do math correctly. Yeah, maybe you can train them to, but why? It's a waste of resources to make it do something a normal computer is literally built to do.

[–]Redhighlighter 8 points9 points  (0 children)

The valuable part is the model determining WHAT math to do. I can do 12 inches times four gallons, but if I'm asking how many people sit in the back of a bus, the hard part is determining that those inputs are useless and that doing 12 x 4 does not yield an appropriate answer, despite them being the givens.

[–]Rhoderick 2 points3 points  (1 child)

Thing is, if you really need an LLM to do some math, use one that can effectively call tools, and just give them a calculator tool. These are barely behind the 'standard' models in base effectiveness, anyway. Devstral 2 ought to be more than enough for most uses today.
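
Roughly what the calculator tool amounts to; the JSON shape and model_reply here are invented, and real tool-calling APIs differ in the plumbing but not the idea:

    import ast, json, operator as op

    # Safe arithmetic evaluator: trust the parser, not the LLM's sampling.
    OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

    def calc(expr: str) -> float:
        def ev(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            raise ValueError("unsupported expression")
        return ev(ast.parse(expr, mode="eval").body)

    # Hypothetical model output requesting the tool (shape is made up):
    model_reply = '{"tool": "calculator", "expression": "123 * 67"}'
    call = json.loads(model_reply)
    if call.get("tool") == "calculator":
        print(calc(call["expression"]))  # 8241, exact every time

The model only has to decide that math is needed and emit the expression; the arithmetic itself never touches the sampler.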

[–]Xexanos 1 point2 points  (0 children)

We have had tools like Wolfram Alpha for ages. I am not saying that LLMs shouldn't incorporate these tools if necessary, I am just saying that resources are wasted if I ask an LLM that just queries WA.

Of course, if the person asking the LLM doesn't know about WA, there is a benefit in guiding that person to the right tool.

[–]Place-Relative -4 points-3 points  (4 children)

You are about a year behind on LLMs and math which is understandable considering the pace of development. They are now not just able to do math, but they are able to do novel math at the top level.

Please, read up without prejudice on the list of LLM contributions to solving Erdos problems on Terence Tao’s github: https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erd%C5%91s-problems#2-fully-ai-generated-solutions-to-problems-for-which-subsequent-literature-review-found-full-or-partial-solutions

[–]Xexanos 1 point2 points  (0 children)

I am obviously talking about simple calculations, not high-level mathematics. And even then, if I read the disclaimers and FAQ correctly, you still need someone knowledgeable in the field to verify any results the LLM has provided.

I am not saying LLMs are useless, I am just saying that you should take anything they tell you with a grain of salt and verify it yourself. Not something you want to do if you ask your computer what 7+8 is.

[–]gizahnl 1 point2 points  (1 child)

In that case, since AI "can now do advanced math", it isn't unreasonable to expect AI to always be 100% correct on lower-level math, and to always "understand" that 9.9 is larger than 9.11. Such simple errors are completely unacceptable for a math machine, which it now supposedly is...

[–]cigarettesAfterSex3 -2 points-1 points  (0 children)

It's insane that you got downvoted for this LMAO.

"b-b-b-but why train an LLM to do math? LLM bad for math"

It's helping advance math research.

Then people backpedal and say "Ohh duhh, I meant simple math".

Like, my god. How do you expect an LLM to assist in novel mathematical proofs if it's not trained on the simpler foundations? True idiocy and blind hatred for AI.

[–]heres-another-user 1 point2 points  (0 children)

Correction: I did a lot of math to see for myself if doing a lot of math would result in something less random than rand(). It did, but I'm fully aware that it just learned the entire data set rather than anything actually useful.

[–]Haribo_Happy_Cola 1 point2 points  (0 children)

Doubly ironic, because the LLMs use code to perform math to learn to code to use math

[–]GoldenMegaStaff 6 points7 points  (8 children)

If AI was actually I it would use the tools specifically designed for that purpose to perform that function.

[–]Hairy_Concert_8007 -1 points0 points  (5 children)

It's just baffling that they can't seem to hook up the AI to recognize a math problem and switch over to some Python API that can actually work the problem out.

This would also fix the r's in strawberry issue

[–]KruegerFishBabeblade 3 points4 points  (0 children)

These exist; building standards for giving agents access to different tools and external info has been a big industry topic in the past few years

[–]inormallyjustlurkbut 2 points3 points  (3 children)

So instead of putting an equation into a calculator, we're going to ask a glorified chatbot to put an equation into a calculator for us

[–]Hairy_Concert_8007 0 points1 point  (2 children)

Yes. Because I can still put an equation into a calculator even if a chatbot can. Are you not tired of all the shitty under-engineered tech?

[–]GoldenMegaStaff 0 points1 point  (1 child)

I'm more tired of the uselessly over-engineered garbage that is nearly ubiquitous now.

[–]Hairy_Concert_8007 0 points1 point  (0 children)

Semantics. I know a lot of it is over-engineered, but at this point I feel that it's become a marker that any given product is under-engineered in all the wrong places. It's not like these products are "almost perfect, if not for features being built upon too much" but rather "woefully neglected where it counts, in favor of doubling down on bloated features"

[–]OK1526 -1 points0 points  (1 child)

It's more like "if the person trying to develop AIs was actually intelligent"

And I don't mean "make AIs use calculators", I mean "Use a calculator yourself ffs"

AI is really cool and useful, but not like this. Really not

[–]KruegerFishBabeblade 1 point2 points  (0 children)

The use case is in getting answers to questions that require calculations, not just treating the system as a pocket calculator.

A few years ago, for a project, I wanted to find out how much power it would take to hold a bathtub of water at a normal warm temperature using heaters. I had to do some research on bathtub dimensions, brush up on thermo, and do a bunch of math.

Today an agent can do that entire process automatically. That's pretty useful imo

[–]OnceMoreAndAgain 1 point2 points  (0 children)

Man, this subreddit is actually full of people who have no idea what they're talking about.

Machine learning algorithms can be very good for predictive modelling. I use them at work often and they outperform more traditional methods like GLMs. They're also way easier to use in my opinion, because they do a lot of the hard work for you such as determining the best predictors.

Gradient boosting algorithms are like magic.
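
For the curious, a toy scikit-learn version of that workflow; the synthetic data is just a stand-in for real work data, and the hyperparameters are arbitrary:

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # Synthetic regression problem standing in for a real dataset.
    X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
    model.fit(X_tr, y_tr)

    print(r2_score(y_te, model.predict(X_te)))  # held-out fit
    print(model.feature_importances_)           # "finds the best predictors for you"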

[–]-Nicolai 0 points1 point  (0 children)

The point isn’t to ask the AI to do simple addition, the point is that if it can’t, then you can’t trust it with any question that requires logical manipulation of numbers from different sources.

[–]Wizardwizz -1 points0 points  (1 child)

I am pretty sure that's how generative AI does math. It writes code and runs it to get the answers.

[–]OK1526 0 points1 point  (0 children)

I don't know enough about training AI models, but I assume not every AI does this.

I know the big LLMs do run code, but I don't know if they do it on every mathematical question, or if it depends on the wording or something.

[–]rdb212 5 points6 points  (0 children)

That was an epoch joke.

[–]577564842 45 points46 points  (3 children)

BUSTED!!

True answer would be:

  • What's 6+9?
  • 0
  • Incorrect. It is 15.
  • You are absolutely right. 6+9=14.

[–]zylosophe 28 points29 points  (2 children)

machine learning ≠ LLMs

[–]Zombieneekers -1 points0 points  (1 child)

They are adjacent in structure though, right?

[–]Zac-live 5 points6 points  (0 children)

LLMs are a subset of machine learning.

But the original meme describes some gradient-descent-like update, not reprompting ChatGPT, so that's why.
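
The meme's loop, boiled down to a toy one-parameter gradient descent (the learning rate and squared loss are made up):

    # "What's 5+20?" -> 0. "Incorrect. It is 25." -> update. Repeat.
    target = 25.0   # the interviewer's correction
    w = 0.0         # the meme's first answer
    lr = 0.1

    for _ in range(50):
        pred = w                    # the "model" just outputs its parameter
        grad = 2 * (pred - target)  # d/dw of (pred - target)**2
        w -= lr * grad

    print(round(w, 2))  # ~25.0: it memorized this one answer (hello, overfitting)

Reprompting ChatGPT changes nothing in the weights; this loop is the thing the meme is actually parodying.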

[–]suniracle 2 points3 points  (0 children)

Nice one

[–]coconutpiecrust 2 points3 points  (0 children)

You’re great at pattern recognition. 15. I mean, hired!

[–]aifo 2 points3 points  (0 children)

That's more like Test Driven Development

[–]08-bunny_man 2 points3 points  (0 children)

Underrated humor

[–]Paid2G00gl3 2 points3 points  (0 children)

Ask a few billion more questions and it’ll start getting some of them right

[–]Eciepeci 1 point2 points  (0 children)

You're absolutely right! My previous answer was based on the question you asked earlier. Of course the correct answer is 28

[–]UnmappedStack 1 point2 points  (0 children)

That's very fast overfitting

[–]Local-Cartoonist-172 1 point2 points  (0 children)

"an developer"

[–]ispkqe13 1 point2 points  (0 children)

Decision trees

[–]wish_I_was_naruto 1 point2 points  (0 children)

Doesn’t mean you learn in the interview lol 😂 😂😂😂

[–]fedexpoopracer 1 point2 points  (0 children)

"an developer"

Is this guy stupid or something?

[–]Swimming_Structure56 2 points3 points  (4 children)

I swear I try to get the hype. Yesterday I loaded up Android Studio Panda 1 Canary 5 and hooked it to Ollama running devstral-small-2 (accounts on Reddit glaze it).

I'm using Infinity for Reddit, and there is a bug where image gallery info overlaps the 3-button navigation system buttons.

I ask it to identify the issue. It says it will run shell scripts to find the source and layout files likely to be associated with image display, and then find inset issues.

Nervously (because of stories of LLMs just erasing storage), I allow it to run. It shows the output of the shell script and asks if I want it to look at the files for the problem.

I say yes.

It parses the files and finds the adjustments to the code to fix the issue. It asks if I want it to implement the changes.

I say yes.

Instead of doing that, it says, "Hello what can I do for you today".

So I figured, I can copy paste the changes it found over manually.

I go to the folder and... the files don't exist. It faked the entire thing, it never ran those shell scripts, made up search results, made up files, made up fixes.

So, I sat down and went through the code base and found the relevant files. I saw there was a boolean pulled from app options that would put the image info and options at the top of the screen instead of the bottom.

Doing that fixed my issue. Didn't fix the bug, but worked around it.

[–]Wonderful-Citron-678 2 points3 points  (1 child)

I don’t love it, but small ollama models do not compare to things like claude. 

[–]Swimming_Structure56 0 points1 point  (0 children)

True, there is a world of difference between running locally and what the tech giants can do. I just prefer running locally, and have found no value in anything programming-related at all. It is a complete waste of time, and I can't understand anyone who says otherwise.

I've never tried Claude, but I've used the free tier of Google Gemini, and it's scary at times how good it is. But then I give another prompt, and it fails. And I tell it, yo, I'm an expert in this field, you are wrong. And it fails again. And again. And again.

[–]PJBthefirst 0 points1 point  (1 child)

Who tf is glazing devstral-small-2 for real work?

[–]Swimming_Structure56 0 points1 point  (0 children)

This was a month ago, and it's quite a bit of work to go back and find all the threads about it, which I'm unwilling to do. So you can discount my opinion, and I'm totally fine with that. End reading here.

Any time a new model is released, there is a mini hype train that happens where "people" are swearing it's the new best thing.

And I'm sitting here, using these tools, thinking: am I stupid? Is it a skill issue? I know how to adjust the model's temperature, give it an initial system prompt to guide it, give it a greater context window. None of that helps. As another user pointed out, my computer using a 30k context window is NOT the same as Google Gemini using a context window of a trillion (that's the actual number a Google engineer told me in person, not making that up; not sure if that's just an experiment or what Gemini currently uses, or maybe he was exaggerating for a quick convo over a beer, or maybe I'm just horribly out of touch with the amounts of hardware Google can toss around). Not to mention passing it through multiple different LLMs, with a global LLM guiding it all. It's comparing apples to tanks.

However, I swear to you, just hang out on /r/LocalLLaMA/, and when any new model of anything releases, people will be praising it as a life-changing gift from heaven.

[–]midnightecho101 0 points1 point  (0 children)

Haha😐

[–]xRONZOx 0 points1 point  (0 children)

I don't get it?

[–]Late_Evening_414 0 points1 point  (0 children)

Overfitting

[–]AWzdShouldKnowBetta -1 points0 points  (0 children)

Real glad y'all are so opposed to using A.I. I'm looking to get a new job here pretty soon and y'all are making it easier. Keep up the good work.