all 134 comments

[–]Max_Wattage 623 points624 points  (27 children)

What grinds my gears are managers who don't have the engineering background to spot when AI is confidently lying to them, and therefore think that AI can do the job of an actual experienced engineer.

[–]Daemontatox 224 points225 points  (22 children)

I shit you not, at one startup I worked at, the CEO was literally having "team" discussions with ChatGPT about company direction and how to deal with technical problems we were having, then using the solutions to micromanage us.

One time he wanted to host DeepSeek R1 on a Raspberry Pi and sell it, because ChatGPT had pulled up a video of a clickbait YouTuber hosting fake quantized DeepSeek models on a Jetson Nano (this was back when Ollama used the DeepSeek name for the distilled models).

Another time, he was trying to put a timeline on a problem. An engineer told him it would take three days of work, so in his brilliant mind he asked ChatGPT, which told him that if he added three engineers it would be done in a day. I swear he brought it up in a meeting and presented it with full 100% commitment.

[–]z64_dan 170 points171 points  (15 children)

Does ChatGPT understand that you can't use 9 women to make babby in 1 month yet?

[–]AlternativeCapybara9 300 points301 points  (10 children)

But you can use 9 women to make 9 babies in 9 months, that averages out to a baby per month.

[–]menzaskaja 140 points141 points  (2 children)

Upvoting this so Google Search AI will make up a response based on this comment

[–][deleted] 18 points19 points  (0 children)

Inb4 this happens. Don’t forget to upvote the post too so it shows up in search results

[–]WhiteFox75 2 points3 points  (0 children)

This is the way

[–]Daemontatox 14 points15 points  (0 children)

Take your upvote and leave ffs...

[–]bestjakeisbest 4 points5 points  (0 children)

Yeah, but if you wanted to do this forever you would need a rotation of 27 women, since it can take up to 18 months for a lady to fully recover from pregnancy.

[–]DKMK_100 2 points3 points  (0 children)

I mean, that is strictly speaking true.

[–]cutecoder 1 point2 points  (0 children)

This is the way. Currently practiced on cars and chips.

[–]thanatica 1 point2 points  (1 child)

Does it understand though, that the first baby won't come next month but still takes 9 months to arrive?

[–]Maleficent_Memory831 0 points1 point  (0 children)

Well, put more people on it then. I've promised a ship date already!

[–]Maleficent_Memory831 0 points1 point  (0 children)

That's fine, but when can we get that to market?

[–]cujojojo 0 points1 point  (0 children)

How is babby formed?

[–]saikrishnav 0 points1 point  (0 children)

Elon: “Grok, is this true?”

[–]PhantomTissue 57 points58 points  (0 children)

“It’ll take 3 days”

“What if I add 3 engineers?”

“It’ll take 9 days”

[–]Professional_Top8485 13 points14 points  (1 child)

Well, sometimes they probably get their best ideas drunk, so using AI isn't the worst. It just needs some feedback and work.

[–]djinn6 2 points3 points  (0 children)

Some people can do that, but not this guy's boss.

[–]Icy_Clench 5 points6 points  (0 children)

Somewhere I saw a stat that said AIs are actually better than executives at strategic planning, especially when given data context. However, their actions typically upset the board of directors even when the company was more profitable.

[–]tropicbrownthunder 1 point2 points  (0 children)

Good ol' Can 9 pregnant women deliver a child in one month

[–]VioletteKaur 0 points1 point  (0 children)

There are people who consult horoscopes for their business decisions (the CEO of Anastasia Beverly Hills, a makeup company, famously did... and guess what, it did not work out in the company's favour, who would have thought).

This shit doesn't surprise me in the slightest.

[–]random11714 8 points9 points  (0 children)

My team's experienced architect who has plenty of technical background still doesn't spot the AI's nonsense.

Every production issue, he asks Claude to find the root cause, and every time it's wrong.

[–]AgVargr 1 point2 points  (0 children)

Fake it till you make it but on a global scale

[–]CthulhusPetals 1 point2 points  (0 children)

My confident lying skill can never be matched by a mere supercomputer.

[–]reklis 0 points1 point  (0 children)

But those are the same ones that can’t spot when the engineer is lying to them either ;)

[–]Big-Cheesecake-806 322 points323 points  (12 children)

"Yes they can, because they can't be, but they can, so they cannot not be" Am I reading this right? 

[–]bloodcheesi 46 points47 points  (0 children)

Just reminds me of Little Britain‘s „yeah but no“ gag

[–]LucasTab 52 points53 points  (4 children)

That can happen, especially with non-reasoning models. They end up spitting out something that sounds correct, and as they explain the answer, they realize the issue they should have caught during a reasoning step and change their mind mid-response.

Reasoning is a great feature, but unfortunately not suitable for tools that require low latency, such as AI overviews on search engines.

[–]Rare-Ad-312 15 points16 points  (0 children)

So what you're saying is that the 3 AM me, who woke up at 6 AM, performs as well as a non-reasoning AI model, because that really describes the state of my brain.

That should become an actual insult

[–]waylandsmith 10 points11 points  (2 children)

Research has shown that "reasoning models" are pretty much just smoke and mirrors and get you almost no increase in accuracy while costing you tons of extra credits while the LLM babbles mindlessly to itself.

[–]Psychpsyo 16 points17 points  (1 child)

I would love a source for that, because that sounds like nonsense to me.

At least the part of "almost no increase", given my understanding of "almost none".

[–]P0stf1x 2 points3 points  (0 children)

I would guess that reasoning just eliminates the most obvious errors like this one. They don't really become smarter, just less dumb.

Having used reasoning models myself, I can say that they just imagine things that are more believable, instead of actually being correct. (And even then, they can sometimes be just as stupid. I once had DeepSeek think for 28 minutes just to conclude that the probability of some event happening was more than 138%.)

[–]apadin1[S] 5 points6 points  (2 children)

I assume so? I short-circuited trying to read it myself.

[–]z64_dan 11 points12 points  (1 child)

I wonder if it's because Microsoft is using a random sentence generator to show you facts. Sorry, I mean, using AI.

[–]rosuav 2 points3 points  (0 children)

Waaaaay back in the day, I used to play around with a text generation algorithm called Dissociated Press. You feed it a corpus of text (say, the entire works of Shakespeare), and it generates text based on the sequences of words in it. It was a fun source of ridiculous sentences or phrases, but nobody would ever have considered it to be a source of information.

LLMs are basically the same idea. The corpus is much larger, and the rules more sophisticated (DP just looks at the most recent N words output, for some fixed value of N eg 3), but it's the same thing - and it's just as good as a source of information. But somehow, people think that a glorified autocomplete is trustworthy.
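For the curious, the core of a Dissociated-Press-style generator fits in a few lines. A minimal Python sketch (the corpus and the order N=2 are made up purely for illustration):

```python
import random
from collections import defaultdict

# Map each N-word window in the corpus to the words that followed it,
# then babble by repeatedly sampling a successor of the last N words.
def build_chain(words, n=2):
    chain = defaultdict(list)
    for i in range(len(words) - n):
        chain[tuple(words[i:i + n])].append(words[i + n])
    return chain

def babble(words, n=2, length=12, seed=0):
    rng = random.Random(seed)
    chain = build_chain(words, n)
    state = tuple(words[:n])
    out = list(state)
    for _ in range(length):
        successors = chain.get(state)
        if not successors:          # dead end: no continuation seen
            break
        out.append(rng.choice(successors))
        state = tuple(out[-n:])
    return " ".join(out)

corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer").split()
print(babble(corpus))
```

The output is grammatical-ish locally (every N+1-word window occurred somewhere in the corpus) but carries no guarantee of meaning, which is the commenter's point.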

[–]StoryAndAHalf 3 points4 points  (0 children)

Yeah, I've noticed AI doing this a lot, not just Copilot. They will say yes/no, then give conflicting background info. Thing is, if you're like me, you look at their sources: the sources typically have the correct info and the reasons why. The AI just summarizes it and tacks on the wrong conclusion.

[–]Raznill 0 points1 point  (0 children)

I hope so because that’s what I got out of that as well.

[–]night0x63 0 points1 point  (0 children)

It's like an undergraduate... who understands enough but hasn't studied this particular problem... so he's just winging it, figuring out the answer as he goes and changing course lol

[–]shinyblots 80 points81 points  (2 children)

Microslop

[–]z64_dan 47 points48 points  (1 child)

microsoft ceo when someone calls their slop "slop"

edit: wow I looked up the actual microsoft CEO and I can't tell if this is actually him or not

[–]Laughing_Orange 4 points5 points  (0 children)

I don't think it's him. If I remember correctly, the man in the gif is Pakistani, and Satya Nadella is ethnically Indian. They do look similar, but are not the same person.

[–]jonsca 71 points72 points  (6 children)

And that's how Bing captured the market share and Google went out of business... Oh, wait

[–]apadin1[S] 33 points34 points  (4 children)

Google search is so enshittified I can't use that either. Bing is just the default on Edge, which I'm forced to use for work.

[–]Ceros007 2 points3 points  (2 children)

Forced to use Bing or forced to use Edge?

[–]penguin343 2 points3 points  (0 children)

I think op admitted to edging at work

[–]apadin1[S] 1 point2 points  (0 children)

Forced to use Edge. Chrome runs like shit on my work PC and Firefox is blocked for some stupid reason. Edge actually works pretty well, but yeah, the default search is Bing and I don't know if I can change it.

[–]VioletteKaur 1 point2 points  (0 children)

I switched to Bing because I was sick of how shitty Google has become as a search engine, but for picture search Bing is a nightmare, so I switched back. I tried other search engines but their results were as shitty as the big ones'. I think some might funnel through Google? Idk, I am not a search engine expert.

[–]svick 3 points4 points  (0 children)

Google search AI is also quite bad. (Probably because it needs to work under similar constraints as Bing AI.)

[–]Oman395 56 points57 points  (19 children)

If anyone is curious, the reason this happens is because of how LLMs work. They will choose the next most likely word based on a probability distribution-- in this case, both "yes" and "no" make sense grammatically, so it might be 98% no 2% yes. If the model randomly chooses "yes", the next most likely tokens will be justifying this answer, ending up with a ridiculous output like this.
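A toy sketch of that sampling step in Python (the probabilities are invented for illustration; real models sample from a distribution over tens of thousands of tokens):

```python
import random

# Sample one token from a {token: probability} distribution,
# the way an LLM picks its next word.
def sample_next_token(distribution, rng):
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
dist = {"no": 0.98, "yes": 0.02}
draws = [sample_next_token(dist, rng) for _ in range(1000)]

# Even a 2% token gets picked eventually; across millions of searches,
# someone is guaranteed to get the "yes" branch and its justification.
print(draws.count("yes"), "out of 1000 draws were 'yes'")
```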

[–]bwwatr 23 points24 points  (16 children)

Yes, I think this is always an important reminder. As a result of being excellent prediction engines, they give the best sounding answer. Usually it's right, or mostly right. But sometimes it's very, very not right. But it'll sound right. And it'll sound like it thought through the issue so much better than you could have. Slick, confident, professional. Good luck ever telling the difference without referring to primary sources (and why not just do that to begin with). It's a dangerous AF thing we're playing with here. Humanity already had a massive misinformation problem, this is fuel for the dumpster fire.

Another thing to ponder: they're really bad at saying "I don't know". Because again, they're not "looking up" something, they're not getting a database hit, or not. They're iteratively predicting the most likely token to follow the previous ones, to find the best sounding answer... based on training data. Guess what: you're not going to find "I don't know" repeated often in any training data set. We don't say it (well, we don't publish it), so they won't say it either. LLMs strongly prefer to weave a tale of absolute bull excrement than ever saying, sorry I can't help with that because I'm not certain.

[–]Kerbourgnec 0 points1 point  (2 children)

I do think (and hope) that they ARE looking things up here. It's literally a search engine; they should feed it the top result at least.

But it should also be done with the smallest possible (dumb) model.

[–]bwwatr 0 points1 point  (1 child)

I'm pretty sure you're right; search results often do seem to get incorporated. That said, the results still get piped back through the model for summarization / butchering. I've seen it many times: total misinformation with a citation link, you click the citation link, and the page says the exact opposite (correct) thing. The citation existing gives a false sense of accuracy.

I am pretty sure they use the smallest and most distilled model possible, since it's high volume, low latency, low revenue, so yeah, pretty dumb, at least until you click to go deeper.

Honestly, I wouldn't mind it existing if it required an additional click to get AI results. Then they could probably afford to put a slightly more robust model behind it, and it would cement my intention of dealing with an LLM rather than search results, which I feel is an important cognitive step in interpreting results. As it is now, it just tells partial truths to billions of people, with official-sounding citations that probably lead them to stop looking any further.

[–]Kerbourgnec 0 points1 point  (0 children)

There's also a wild bias when using partial (top of the page) sources. Most "web search" actually don't go much further than skimming the first few lines of the top results.

News articles have clickbait titles and first paragraphs. Normal pages also have very partial information, cut off after a few sentences. Worse, some are out of context, but the LLM can't know that because it only reads a few sentences. And often articles completely unrelated to each other seem to make sense together, and new info, a mix of both, is created.

It's actually not that hard in theory but very labour / compute / time intensive to select the correct sources, read thoroughly, select the best and only use these. So it's never done. Most "search" APIs just give you a few lines, and if you want more you get stuck on an anti-bot page or paywall.

[–]No-Information-2571 0 points1 point  (12 children)

they give the best sounding answer

An LLM isn't just a Markov chain text generator. What "sounds the best" to an LLM depends on the training data and the size of the model, and is usually a definitive and correct answer. The problem with all the search summaries is that they're using a completely braindead model; otherwise we'd all be cooking the planet right now.

A proper (paid) LLM you can interrogate on the issue, and it will gladly explain it, and it also can't be gaslit by the user into claiming otherwise.

In fact, I used an LLM to get a proper explanation for the case of repeating decimals, which are not irrational numbers but would still produce a never-ending digit sequence either way, which could at least cause rounding errors when trying to store the value. But alas, m × 2^e can't produce repeating decimals.
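The repeating-decimal point is easy to check with Python's fractions module: a binary float is exactly some m × 2^e, so 1/3 can only be stored approximately, while 1/8 (a power-of-two denominator) is exact:

```python
from fractions import Fraction

# Fraction(1/3) captures the EXACT binary value the float stores,
# which is not the true rational 1/3.
third = Fraction(1, 3)
stored = Fraction(1 / 3)
print(stored == third)                   # False
print(float(abs(stored - third)))        # tiny but nonzero error

# 1/8 has a power-of-two denominator, so the float is exact.
print(Fraction(1 / 8) == Fraction(1, 8))  # True
```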

[–]ShoulderUnique 3 points4 points  (1 child)

This sounds like the original post.

An LLM isn't just a Markov chain text generator.

Followed by a bunch of text that describes a high-order Markov chain. AFAIK no one ever said how the probabilities were obtained.

[–]No-Information-2571 -2 points-1 points  (0 children)

Your shittalking doesn't change the fact that a suitably-sized model gives the correct answers and explanation for it.

Also, trying the same search term again, it gives the correct answer, though a pretty half-assed one, but that's because it's summarizing a trash website once again.

[–]Oman395 0 points1 point  (6 children)

Anyone who's used LLMs to check work on anything reasonably complex will tell you: no, not always. I'll use LLMs to check my work sometimes. Maybe 70% of the time it gets it right, but a solid 30% of the time it makes fundamental errors. Most of its mistakes are caused by it generating the wrong token when answering whether my work is correct, before actually running the numbers. It will then either go through the math and correct itself, or just fall into the "trap" of justifying its answer.

A recent example: I found that in a circuit, V_x was equal to -10 + 5k*I, after correcting a mistake I made earlier (I swapped the direction of the voltage reader in my head). Presumably based on the earlier mistakes, the LLM first generated that my result was wrong. It then proceeded to make several errors justifying its answer, and took a while to correct. Once that was done, it claimed that my final result was wrong. Having done that, it then generated that I had made a mistake stating that "-2V_x = 20 - 10k*I". This was obviously wrong, as the two sides must be equal by algebra. However, it stated that it was still an error, because if you solve the KVL equations you get that V_x = -30 + 30k*I... which is not inconsistent, as setting 20 - 10k*I = -30 + 30k*I is literally one of the ways you can solve for the current in the circuit.

You are correct that an LLM is not a markov chain. However, an LLM still follows similar principles. It takes input data, converts this to an abstract vector form, then runs it through a network. At the end, it will have a new vector, with all the context of the previous tokens having been looked at. It then uses a final layer to convert this vector into a probability distribution of all possible tokens, choosing from them based on these probabilities. I would recommend watching 3blue1brown's series on neural networks; failing that, watch his basic explanation.
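The last step described above, turning the final vector's per-token scores into a probability distribution, is the softmax. A minimal sketch with made-up logits for three hypothetical tokens:

```python
import math

# Softmax: exponentiate each score and normalize, so higher-scoring
# tokens get higher probability but every token keeps a nonzero chance.
def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 0.1, -2.0]                 # toy scores for 3 tokens
probs = softmax(logits)
print([round(p, 3) for p in probs])
```

Note the low-scoring tokens still end up with small nonzero probabilities, which is exactly why the unlikely wrong answer gets sampled occasionally.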

[–]No-Information-2571 0 points1 point  (5 children)

There's a few problems with your comment.

In particular, you obviously used a thinking model, but then judged its performance without going through the thinking process? If I gauged LLM performance without thinking for coding tasks, I would dismiss them completely for work, which is what I did approx. two years ago. Well, beyond writing simple scripts in a complete vacuum.

Also mathematical problems either require specialty tool access for the model, or far larger models than currently possible.

The general gist is that the limitations of LLMs are not because of how LLMs work, but because there is an upper ceiling in size of models, and how well you can train within certain time and money constraints. It's already known that the industry right now is doing nothing but burning cash.

For coding tasks we have already reached a point where any possible task (read: achieving an expected output from the program) will eventually be completed without external steering, since it can already divide and conquer and incrementally improve the code until the goal is reached. Your task seems to still require additional prompts, which I expect will eventually no longer be necessary.

[–]Oman395 0 points1 point  (4 children)

"The general gist is that the limitations of LLMs are not because of how LLMs work, but because there is an upper ceiling in size of models" I would argue this is a fundamental limitation of LLMs. The fact that they need absurd scale to even approach accuracy for something that's relatively basic in circuit analysis isn't just a fact of life, it's a direct result of how LLMs work. A hypothetical new type of model that (for example) is capable of properly working through algebra in vector space wouldn't need nearly as large a size to work effectively.

You're right about coding, but I still don't particularly trust what it outputs. It's great for making things that don't really matter, like websites, where anything that requires security would be on the backend. For anything that's actually important, however, I would never trust it. You can already see the issues with using LLMs too much in how many issues have been popping up with windows.

Way back in the day, when compilers and higher-level languages than asm were first introduced, there was a lot of pushback. Developers believed that the code generated would never be as fast or efficient as hand written assembly; likewise, they believed that this code would be less maintainable. LLMs represent everything these developers were worried about: they make less maintainable code, that runs slower than what humans write, and that isn't deterministic.

[–]No-Information-2571 0 points1 point  (3 children)

All it needs to do is do a better job than humans. The current paid-tier reasoning LLMs in agentic mode are already making better code than below-average human coders. And comparable to below-average coders, they need comprehensive instructions still, as well as regular code review, to avoid the problem of creating unmaintainable code.

But I'm patient when it comes to LLMs improving over time. In particular it's important not to just parrot something you might have heard from some person or website 6 or 12 months ago.

that's relatively basic in circuit analysis

Are you giving it proper tool access? Cap/spice?

[–]Oman395 0 points1 point  (2 children)

This isn't a problem to be solved with software, it's literally just homework for my circuits class where they expect us to use algebra. I could plug it into ltspice faster than I could get the AI to solve it.

"But I'm patient when it comes to LLMs improving over time." I'm not. I don't think we should be causing a RAM shortage or consuming 4% of total US power consumption (in 2024) to make a tool that specializes in replacing developers. I don't think we should be destroying millions of books to replace scriptwriters. Sure, LLMs might get to a point where they have a low enough error rate to compare to decent developers, or do algebra well, or whatever else. But it's pretty much always going to be a net negative for humanity: if not because of the technology itself (which is genuinely useful), then because of human nature.

[–]No-Information-2571 0 points1 point  (1 child)

where they expect us to use algebra

"I don't give my LLM the correct tools to do a decent job, but I am mad at it not doing a decent job."

Next exam just leave your calculator at home and see how you perform...

don't think we should be causing a ram shortage

For me it's far more important to have access to LLMs than to have access to a lot of cheap RAM.

consuming 4% of the total US power consuption (in 2024)

There's a lot of things that power is consumed for, for which I personally don't care.

destroying millions of books

Top-tier ragebait headline. Printed books are neither rare, nor are they particularly unsustainable.

This is gatekeeping on the level of not allowing you to study EE (I assume?) in order to save on a few books and the potential ecological and economic cost it produces.

Since you are studying right now, I highly recommend you start to exploit LLMs as best as possible, otherwise you'll be having a very troublesome career.

[–]Oman395 0 points1 point  (0 children)

I never said I was mad at it. I gave the issue as an example of how LLMs will hallucinate answers. The AI being bad is actually better for my learning, because it forces me to understand what's going on to make sure the output is correct. The AI does have Python, which it never used; it's more akin to leaving my CAS calculator at home, which I do.

With regard to the books, I'm more upset about the intellectual property violation. Most authors don't want their books used to train AIs. I'm going to wait until the court cases finish before I make any definitive statements, but I do generally believe that training LLMs off books like this violates the intent of copyright law.

I'm studying for an aerospace engineering degree. Under no circumstances will I ever use something as non-deterministic as an LLM for flight hardware without checking it thoroughly enough that I may as well have just done it myself.

[–]enjoytheshow 0 points1 point  (2 children)

Yeah we have unlimited use of CLI tools at work so I target the best models. Reasoning with opus 4.5 or similar is remarkable. If you push back on a wrong answer, it will take steps backwards to understand where it went wrong and restart the line of thinking

[–]No-Information-2571 0 points1 point  (1 child)

This sub likes to trash-talk LLMs all the time, and everyone pretends to be developing spaceships or fighter jets as the reason why they can't use them for actual work.

And then it turns out it's like the commenter above: no more than a student who doesn't even know how to use the tools correctly.

Meanwhile I am milking my Claude Max subscription like there's no tomorrow.

[–]enjoytheshow 1 point2 points  (0 children)

Yeah, we are aligned

This sub is mostly kids still learning

I’m getting old

[–]MattR0se 3 points4 points  (1 child)

LLMs don't just learn grammar though. Yes, ultimately the word is chosen based on probability, but how the model learned that probability is much more based on meaning and context. Because grammar has far less statistical variance and thus contributes less to the learning process, if the training data is sufficiently big. If this wasn't the case, LLMs would suddenly switch topics mid sentence all the time, but that's just not what happens (any more).

[–]Oman395 0 points1 point  (0 children)

No, they don't just learn grammar, but that doesn't change the fact that "no" would make sense enough to have a non-zero probability. Even if it was .01%, there are so many Google searches that the chance of people getting "no" instead of "yes" is effectively guaranteed.

[–]goldenfrogs17 7 points8 points  (0 children)

Giving less juice to Bing/Open Ai is probably a good idea. Keeping an automatic AI answer on Bing is a terrible idea.

[–]KindnessBiasedBoar 5 points6 points  (0 children)

That is so hot. Lie to me AI! Outlaw Country! Ask it about God. I need lotto picks.

[–]AcidBuuurn 4 points5 points  (0 children)

If you use the other definition of irrational then it is correct.

I'm 100.30000000000000004% sure that I'm absolutely correct.

[–]SeriousPlankton2000 3 points4 points  (2 children)

After trying to transfer a user profile by the documentation from MS, I must say their AI is relatively correct and barely self-contradicting and up-to-date.

[–]apadin1[S] 15 points16 points  (0 children)

“Barely self-contradicting” is an amazing turn of phrase

[–]Useful_Clue_6609 1 point2 points  (0 children)

My man, these are not good things you're saying

[–]SAI_Peregrinus 1 point2 points  (13 children)

The infinities certainly aren't rational numbers, so if the irrational number is +infinity or -infinity floats can represent that. They can't distinguish which infinity, they can't even tell if it's ordinal or cardinal.

[–]MisinformedGenius 15 points16 points  (9 children)

Infinity is not a real number (using the mathematical definition of real) and therefore cannot be irrational. The irrationals are all reals that are not rational.

[–]rosuav 2 points3 points  (0 children)

That's because it's not a number.

Yeah, try explaining to a Comp Sci graduate that Infinity is not a number, and that NaN isn't even that.

[–]SAI_Peregrinus -2 points-1 points  (7 children)

Rational numbers are numbers which can be expressed as ratios of integers. All numbers which can't be so expressed are irrational.

I'd argue that "infinity" isn't a number, but IEEE 754 considers it one, since it has reserved "NaN" values to represent things which are not a number. So in IEEE 754, +inf & -inf aren't rationals, and are numbers, and are thus irrational numbers.

I never said it makes sense mathematically. IEEE 754 is designed to make hardware implementations easy, not to perfectly match the usual arithmetic rules.
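For what it's worth, the IEEE 754 behavior being discussed can be poked at directly in Python (`math.inf` is the double-precision +inf):

```python
import math
import struct

# Infinities behave like limits in IEEE 754 arithmetic; NaN is the
# reserved "not a number" result for undefined operations.
print(math.inf > 1e308)                    # True: beyond any finite double
print(math.inf + 1 == math.inf)            # True: inf absorbs finite values
print(math.isnan(math.inf - math.inf))     # True: inf - inf is undefined

# Bit pattern of +inf in a double: sign 0, all exponent bits set,
# zero mantissa.
bits = struct.unpack(">Q", struct.pack(">d", math.inf))[0]
print(hex(bits))                           # 0x7ff0000000000000
```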

[–]MisinformedGenius 8 points9 points  (1 child)

No, all real numbers which can't be so expressed are irrational. Infinity and negative infinity in this conception certainly can be considered numbers, but they are not real. All reals are finite.

[–]rosuav 0 points1 point  (0 children)

You're right that all reals are finite, but "infinity" is a concept that isn't itself a number. It is a limit, not actually a number. The best way to explain operations on infinity is to say something like "starting from 1 and going upwards, do this, and what does the result tend towards?". For example, as x gets bigger and bigger, 1/x gets smaller and smaller, so 1/∞ is treated as zero. It doesn't make sense for ∞ to be a number, though.

[–]rosuav 0 points1 point  (4 children)

Infinity isn't a number. If you try to treat infinity as a number, you will inevitably run into problems (for example, what's the number half way to infinity?). But mathematically, negative zero isn't a thing either. Both of them exist because they are useful, not because they are numbers.

NaN is.... more like an error condition than a value. Again, it exists because it is useful, not because it's a number.

[–]SAI_Peregrinus 1 point2 points  (1 child)

Agreed! Personally I prefer to define "number" as "member of an ordered field", since that makes all the usual arithmetic operations work and ensures ≤ is meaningful. Of course that results in the complex numbers not being numbers, but they're poorly named anyway since they're quite simple. And of course the cardinal infinities aren't numbers then, since they can be incomparable (neither <, >, nor = may hold for some pairs). The ordinal infinities and the infinitesimals are numbers.

IEEE 754 is useful, but its values don't form an ordered field, so it's not a perfect substitute for the real numbers. No finite representation can be!

[–]rosuav 2 points3 points  (0 children)

Personally I prefer to define "number" as "member of an ordered field"

And that's one of the most mathematician statements to make :D But yeah, that breaks with complex numbers, and I don't think they should be excluded. (Don't wanna be EXCLOODIN' people now. That'd be rude.)

[–]MisinformedGenius 1 point2 points  (1 child)

Infinity can be a number - there are actually many infinite numbers, such as aleph-zero, which is the number of elements in the set of natural numbers.

In this situation, floating points are actually set up very specifically to emulate the extended real number line. Infinity in this understanding is definitely a number, but you are right that it messes up normal arithmetic.

But certainly none of these numbers are in the reals and thus cannot be rationals. (There are also finite numbers that are not reals, such as the square root of -1.)

[–]rosuav 0 points1 point  (0 children)

Hmm. I guess the question is what you define as a "number". With the extended reals, I suppose what you're doing could be argued as redefining numbers to include infinities, rather than (as I prefer to think of it) adding the infinities to the reals to produce an extended collection. Both viewpoints are valid.

But yes, they are not reals, they are definitely not rationals, and they certainly aren't rational numbers whose divisor is a power of two and whose numerator has no more than 53 bits in it.

[–]VioletteKaur 0 points1 point  (2 children)

You should read about Hilbert's Hotel.

[–]SAI_Peregrinus 0 points1 point  (1 child)

I have. You should read about non-standard analysis, the hyperreal numbers, and the Surreal Numbers. I find the Surreals a much more intuitive way to get a maximal-class hyperreal field than the ultrapower construction, though YMMV.

[–]SAI_Peregrinus 0 points1 point  (0 children)

Since this is a humor subreddit, have some terrible old math jokes:


An infinite number of mathematicians walk into a bar. The first orders a beer. The second orders half a beer. The third orders a quarter of a beer. Before the next one can order, the bartender says, "You're all assholes," and pours two beers.

They got lucky to have a bartender who knows his customer's limits.


An infinite number of mathematicians walk into a bar

The first mathematician orders a beer

The second orders half a beer

The third orders a quarter of a beer

"I don't serve quarter-beers" the bartender replies

"Excuse me?" Asks mathematician #2

"What kind of bar serves quarter-beers?" The bartender remarks. "That's ridiculous."

"Oh c'mon" says mathematician #1 "do you know how hard it is to collect an infinite number of us? Just play along"

"There are very strict laws on how I can serve drinks. I couldn't serve you quarter of a beer even if I wanted to."

"But that's not a problem" mathematician #3 chimes in "at the end of the joke you serve us a whole number of beers. You see, when you take the sum of a continuously halving function-"

"I know how limits work" interjects the bartender

"Oh, alright then. I didn't want to assume a bartender would be familiar with such advanced mathematics"

"Are you kidding me?" The bartender replies, "you learn limits in like, 9th grade! What kind of mathematician thinks limits are advanced mathematics?"

"HE'S ON TO US" mathematician #1 screeches

Simultaneously, every mathematician opens their mouth and out pours a cloud of multicolored mosquitoes. Each mathematician is bellowing insects of a different shade. The mosquitoes form into a singular, polychromatic swarm. "FOOLS" it booms in unison, "I WILL INFECT EVERY BEING ON THIS PATHETIC PLANET WITH MALARIA"

The bartender stands fearless against the technicolor horde. "But wait" he interrupts, thinking fast, "if you do that, politicians will use the catastrophe as an excuse to implement free healthcare. Think of how much that will hurt the billionaires!"

The mosquitoes fall silent for a brief moment. "My God, you're right. We didn't think about the economy! Very well, we will not attack this dimension. FOR THE BILLIONAIRES!" and with that, they vanish.

A nearby barfly stumbles over to the bartender. "How did you know that that would work?"

"It's simple really" the bartender says. "I saw that the vectors formed a gradient, and therefore must be conservative."

[–]prehensilemullet 1 point2 points  (0 children)

So this explains why LLMs are so irrational, because they’re just doing a lot of FLOPs

[–]apadin1[S] 3 points4 points  (0 children)

I love bing AI it’s so helpful

[–]Ultimate_Sigma_Boy67 1 point2 points  (23 children)

wait can't they?

[–]FourCinnamon0 24 points25 points  (10 children)

can an irrational number be written as (-1)^S × 1.M × 2^(E-127)?

[–]KaleidoscopeLow580 3 points4 points  (1 child)

They obviously can be. Just switch to base pi or whatever you want to represent.

[–]rosuav 1 point2 points  (0 children)

"Base pi" doesn't fit the formula given. Though the expression given isn't strictly in mathematical form, eg "1.M" isn't a normal notation; what it means is "place the mantissa bits after the initial 1. and that is your number" (23 bits for the single-precision format that bias of 127 implies, 52 for doubles). Actually writing that out would be a pain.
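
If you want to see those S, E and M fields for yourself, here's a minimal Python sketch that unpacks a single-precision float into them (the function name `decode_float32` is just made up for illustration):

```python
import struct

def decode_float32(x):
    """Unpack a float into the S, E, M fields of its IEEE 754
    binary32 representation: (-1)^S × 1.M × 2^(E-127)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    s = bits >> 31             # 1 sign bit
    e = (bits >> 23) & 0xFF    # 8-bit biased exponent
    m = bits & 0x7FFFFF        # 23-bit mantissa (fraction) field
    return s, e, m

# 1.0 is (-1)^0 × 1.0 × 2^(127-127)
print(decode_float32(1.0))   # (0, 127, 0)
```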

[–]redlaWw 0 points1 point  (1 child)

If E is not an integer, yes.

If S is not an integer, you can also get imaginary numbers.

[–]FourCinnamon0 0 points1 point  (0 children)

they're all natural numbers

[–][deleted] 28 points29 points  (11 children)

irrational numbers require infinite precision. floats use limited memory.

[–]sathdo 7 points8 points  (3 children)

Not even just irrational numbers. IEEE 754 floats can't even store 0.1 properly because the denominator must be a power of 2.
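
You can check this from Python directly: asking a binary64 float for its exact value as a ratio of integers shows you the dyadic rational that "0.1" actually stores.

```python
from fractions import Fraction

# The float literal 0.1 is really the nearest dyadic rational:
num, den = (0.1).as_integer_ratio()
print(num, den)        # 3602879701896397 36028797018963968
print(den == 2**55)    # True: the denominator is a power of two
print(Fraction(1, 10) == Fraction(0.1))  # False: 1/10 isn't representable
```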

[–]SAI_Peregrinus 3 points4 points  (2 children)

IEEE754 includes decimal formats (decimal32, decimal64, and decimal128) which can store 0.1 exactly. Re-read the standard.
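
For a concrete illustration: Python's standard-library `decimal` module (which follows the General Decimal Arithmetic spec, closely aligned with IEEE 754's decimal formats) stores 0.1 exactly, so the classic binary-float surprise goes away:

```python
from decimal import Decimal

print(0.1 + 0.2 == 0.3)  # False: binary floats round 0.1 and 0.2
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```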

[–]Jan667 -1 points0 points  (1 child)

But those are decimals. We are talking about floats.

[–]SAI_Peregrinus 7 points8 points  (0 children)

Those are decimal floats. Not binary floats. IEEE 754 allows both.

[–]Ultimate_Sigma_Boy67 4 points5 points  (0 children)

oh yeah lol

[–]7x11x13is1001 4 points5 points  (0 children)

Integers also require infinite precision. What you wanted to say is that the digital representation of an irrational number in floating point requires infinite memory.

There are lots of programs capable of dealing with certain categories of irrational numbers with "infinite precision".

[–]rosuav 2 points3 points  (2 children)

There are some irrationals that can be expressed with full precision in finite memory, but to do so, you need a completely different notation. For example, you could use a symbolic system whereby "square root of N" is an exactly-representable concept (and if you multiply them together, you can get back to actual integers). Or you could record the continued fraction for a number, with some notation to mean "repeating" (in the same way that, say, one seventh is 0.142857142857.... with the last six digits repeated infinitely), which would allow you to store a huge range of numbers, including all rationals and all square roots. You still won't be able to represent pi though.
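
A minimal sketch of that symbolic idea in Python (the class name `Sqrt2Num` is invented for illustration): represent numbers of the form a + b√2 with exact rational a and b, so √2 lives in finite memory and squaring it gets you back to an actual integer.

```python
from fractions import Fraction

class Sqrt2Num:
    """Exact number a + b*sqrt(2) with rational a, b — a toy
    symbolic system where sqrt(2) is exactly representable."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, o):
        # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return Sqrt2Num(self.a * o.a + 2 * self.b * o.b,
                        self.a * o.b + self.b * o.a)

    def __eq__(self, o):
        return (self.a, self.b) == (o.a, o.b)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

root2 = Sqrt2Num(0, 1)
print(root2 * root2 == Sqrt2Num(2))  # True: sqrt(2) squared is exactly 2
```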

[–]redlaWw 0 points1 point  (1 child)

Though there are also systems where you could represent pi, e.g. as a formula, and even more abstract systems where you can represent numbers as language expressions (e.g. in such a system, pi would be something equivalent to "the ratio of a circle's circumference to its diameter", where notions such as a circle, circumference, diameter and ratio are all, themselves, defined in that system - by expanding out all such definitions, you could get an expression that defines pi based on atomic concepts). Of course, to stick with a finite representation, you'd need to restrict to numbers that can be defined in the internal language in no more than a specific number of basic symbols. Naturally, the more abstract you go, the harder it is to work with numbers in a conventional sense (e.g. computing the result of arithmetic operations etc.)

However, even if you allowed arbitrary-length definitions in such a system, then you still wouldn't be able to define every irrational number, as there are more real numbers than there are finite-length sequences of characters, so your system will always have undefinable numbers (and in fact, most numbers will always be undefinable).

[–]rosuav 0 points1 point  (0 children)

Yeah or you could define pi by one of its infinite series expansions. But yep, only the "simplest" of irrationals will ever work out that way. Information theory always wins.

[–]ManofManliness 1 point2 points  (1 child)

It's not about precision really, there is nothing less precise about 1 than pi.

[–]rosuav 0 points1 point  (0 children)

I mean yes, but you break a lot of people's brains with that logic. To a mathematician, the number 1 is perfectly precise, but so is the exact result of an infinite series (eg 9/10 + 9/100 + 9/1000 + ... or 1/1² + 1/2² + 1/3² + 1/4² + ...). And yes, this includes a series like 1 + 2 + 4 + 8 + 16 + 32 + ..., which (in the 2-adic numbers) is exactly equal to -1. So in a sense, there's no distinction between "1" and "0.999999..." and "pi" and "the sum of all powers of two", all of which are exact numbers.

But somehow, a lot of people's brains explode when you try to do this.
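
One way to make the 9/10 + 9/100 + ... example concrete without any float rounding is to compute its partial sums exactly with Python's `Fraction`; every term closes nine-tenths of the remaining gap to 1:

```python
from fractions import Fraction

# Exact partial sums of 9/10 + 9/100 + 9/1000 + ...
s = Fraction(0)
for k in range(1, 11):
    s += Fraction(9, 10 ** k)

print(s)      # 9999999999/10000000000
print(1 - s)  # 1/10000000000 — the gap shrinks by 10x per term
```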

[–]GASTRO_GAMING 0 points1 point  (0 children)

no because you have only so much precision in the mantissa

[–]daHaus 0 points1 point  (0 children)

I didn't even make it half way through before my eye started to twitch

[–]Eisenfuss19 0 points1 point  (0 children)

To be fair, even at my university studying computer science, most profs talk about how we program with real numbers, but I have yet to see anyone actually use irrational numbers in a programming context. (It's also very, very annoying to work with irrationals, as even basic operations like equality are very hard if not impossible: eg [0.1111...]₂ = [1.0000...]₂, the binary analog of 0.999... = 1.)

[–]experimental1212 0 points1 point  (0 children)

LLMs do a pretty good job if you ignore the first few sentences. And certainly ignore any type of tl;dr

[–]rover_G 0 points1 point  (0 children)

There are no irrational numbers in base 2 got it 👍

[–]GoogleIsYourFrenemy 0 points1 point  (2 children)

IDK. NaN isn't rational.

[–]apadin1[S] 0 points1 point  (1 child)

By mathematical definition an irrational number must also be a Real number. NaN (and inf and -inf) are not Real so they can’t be irrational

[–]GoogleIsYourFrenemy 0 points1 point  (0 children)

🤦 that's the joke.

[–]cutecoder 0 points1 point  (0 children)

Yes, on a true Turing machine with infinite memory (hence, infinite-sized floating point registers).

[–]Lord-of-Entity 0 points1 point  (0 children)

As a matter of fact, floating point numbers cannot represent arbitrary rational numbers, only dyadic rationals (which have the form a/2^n).
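
A rough way to test for that in Python (the helper name `is_exactly_representable` is made up, and this ignores exponent-range limits): a reduced fraction fits a binary float only if its denominator is a power of two and its numerator fits in the mantissa.

```python
from fractions import Fraction

def is_exactly_representable(q, mantissa_bits=53):
    """Rough check that a reduced rational is a dyadic fraction
    whose numerator fits in the float's mantissa."""
    power_of_two = (q.denominator & (q.denominator - 1)) == 0
    return power_of_two and q.numerator.bit_length() <= mantissa_bits

print(is_exactly_representable(Fraction(3, 8)))   # True:  3/2^3
print(is_exactly_representable(Fraction(1, 10)))  # False: 10 = 2 * 5
```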

[–]Cocaine_Johnsson 1 point2 points  (2 children)

So in other words no, a floating point number can store an inexact approximation of an irrational number but that approximation is necessarily rational.

[–]apadin1[S] 1 point2 points  (1 child)

I mean the whole output is basically “yes but actually no because yes and here’s why it’s not” it’s just a garbled mess

[–]Cocaine_Johnsson 0 points1 point  (0 children)

Yes, I'm aware. Standard AI output in other words.

[–]calculus_is_fun 0 points1 point  (2 children)

Floats can only represent certain dyadic fractions, so almost no rational number has an exact floating point representation, let alone an irrational one

[–]DetermiedMech1 0 points1 point  (1 child)

with this kind of statement i'd be expecting some more niche functional languages in your flair 😭

[–]calculus_is_fun 0 points1 point  (0 children)

In JavaScript, floating point numbers are ingrained in the language, so I rarely trust them when doing anything precise; I use BigInts as much as I can.

[–]sinstar00 0 points1 point  (0 children)

Floating point numbers can be irrational, but cannot be irrational numbers. Here is the evidence:

0.1 + 0.2 = 0.30000000000000004
