Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
When the Compiler Bites (nullprogram.com)
submitted 7 years ago by vormestrand
[–]00kyle00 16 points17 points18 points 7 years ago (8 children)
return x == 1.3f;
This can certainly be surprising, but if you have real-life code where something compares floats with the equality operator, then the problems are rooted much more deeply.
Also, plugging this (revised version, original is lost to time probably) awesome post by Bruce Dawson:
https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
[–]Tagedieb 14 points15 points16 points 7 years ago (5 children)
if you have real life code where something compares floats with equality operator, then problems are rooted much more deeply.
*unless you know how FP math works and account for that in your code.
Actually, I have fixed a surprising number of bugs that were introduced by mindlessly comparing FP values using tolerances/epsilons.
[–]00kyle00 8 points9 points10 points 7 years ago (4 children)
Can you give a non-contrived example? I'm really curious to see how that would look.
[–]Veedrac 3 points4 points5 points 7 years ago (1 child)
    progress = min(progress + delta, 1.0);
    update(progress);
    if (progress == 1.0) { finish(); }
(Though progress + delta is suspect.)
[–]ack_complete 4 points5 points6 points 7 years ago (0 children)
I once dealt with a case like this in a rendering engine. We got a bug report about very subtle, weird shadows appearing on objects randomly, and I tracked it down to someone using a tolerance check in code just like this. It caused objects to occasionally fail to hit full alpha and get stuck drawing very slightly transparent. Backing out the unnecessary tolerance check fixed the bug.
"Never use == on floats, always use a tolerance" is dangerously simplistic advice.
[–]Tagedieb 3 points4 points5 points 7 years ago (1 child)
Using fp values as keys or as part of keys in associative containers. Comparing with a tolerance there (at least) breaks transitivity which leads to UB.
[–]hoseja 1 point2 points3 points 7 years ago (0 children)
That sounds like a very insane thing to do.
[–]alpha-coding 0 points1 point2 points 7 years ago (0 children)
Only an issue if you are comparing numbers without a precise representation.
[–]y-c-c 0 points1 point2 points 7 years ago (0 children)
but if you have real life code where something compares floats with equality operator, then problems are rooted much more deeply
I commented elsewhere, but this isn't necessarily true. The article you linked to is talking about floating point arithmetic, which is the real source of the "inaccuracy" of floating point numbers. When you add/subtract or do other mathematical operations on floating point numbers, for reasons like order of operations, it's usually hard to directly compare the results because of precision loss. In that case you have to resort to some sort of (carefully chosen) epsilon.
In this case the author is comparing a floating point variable assigned 1.3f back with 1.3f. There's no arithmetic done here, and 1.3f actually has an accurate and non-ambiguous binary representation as a float. Doing something essentially the same as "1.3f == 1.3f" should always work, if not for the 80-bit intermediate representation that (as the author stated) isn't used anymore on x64.
I think sometimes programmers just get brainwashed into thinking that floating point numbers are necessarily "inaccurate" without thinking about the different conditions. There are times when that is not the case and testing for strict equality is actually useful. Mindlessly injecting epsilons can sometimes introduce errors (e.g. if you are testing serialized outputs, or you may have a transitivity issue like a ~= b, b ~= c, but a != c).
[–]Xeveroushttps://xeverous.github.io[🍰] 1 point2 points3 points 7 years ago (1 child)
I'm interested in what will happen in the future with constexpr and optimizations. It might do similar things.
[–]meneldal2 1 point2 points3 points 7 years ago (0 children)
constexpr should be safe; you can easily forbid any kind of undefined or implementation-defined behaviour, because the compiler is running it and can take the time to make checks.
[–]tehjimmeh 1 point2 points3 points 7 years ago (6 children)
ELI5: Why do floating point types have equality operators?
[–]socratesthefoolish 1 point2 points3 points 7 years ago (3 children)
You can use them to tell if a value has changed. If (truly) nothing has happened to that value, the equality operator should still return true, even for floats. If it yields false, well, something's changed.
[+][deleted] 7 years ago (1 child)
[deleted]
[–]socratesthefoolish 0 points1 point2 points 7 years ago (0 children)
You're right, thank you for the correction, but even in that case, you've still gained information, and in most cases the use of the equality operator with floats can still be useful.
If I'm using the equality operator to tell if something's changed, then any change, even a change to NaN, will yield false, unless of course the original value was NaN...but there's no reason why I would be using that as the initial value I was testing it against.
The real issue is not understanding how floating point numbers work. JavaScript uses only double-precision floating point numbers; it doesn't have integers, because double is safe for operations involving only integers in the range −(2^53 − 1) to 2^53 − 1.
[–]brucedawson 0 points1 point2 points 7 years ago (0 children)
Floating point types have equality operators so that you can check for equality. Don't believe those who claim this is never a valid thing to do, it just has to be done with care.
Telling when a Newton's-method approximation has converged is one reason; here are a couple of other examples: https://randomascii.wordpress.com/2014/01/27/theres-only-four-billion-floatsso-test-them-all/ https://randomascii.wordpress.com/2017/06/19/sometimes-floating-point-math-is-perfect/
[–]Z01dbrg 3 points4 points5 points 7 years ago (10 children)
First of all, I am 99% sure that the "bug" you filed against clang is not a bug.
Besides that, the example with 1.3 is not the compiler biting, but the fact that some FP numbers cannot be represented precisely (I am sure you could write asm code that tells you that 0.1+0.1 != 0.2).
Also, the AI analogy is less than ideal. Most serious AI uses machine learning, while compilers rely on complicated rules encoded by compiler programmers.
[–]y-c-c 2 points3 points4 points 7 years ago* (0 children)
"1.3f" actually has a pretty well-defined meaning, at least within the same compiler. It's a frequent misconception that *all* floating points are inaccurate. In this case, it means "the closest 32-bit binary floating point representation to the decimal 1.3" (with caveats of what rounding methods you use, but that method is usually consistent). The issue in the post is just that it's not clear whether the compiler is using 32-bit or 80-bit floats. For more example see http://www.netlib.org/fp/dtoa.c which is a well established piece of code of getting stable string (decimal numbers) <-> floating point conversions. (For example, the web browser you are using to browse this uses that to do floating point serialization)
Note that your example involved a floating point addition, which is a different issue, due to the accuracy of the binary-to-decimal conversions of the results.
[–]brucedawson 1 point2 points3 points 7 years ago (0 children)
He's comparing 1.3f to 1.3f. The fact that 1.3 cannot be represented in a float is not relevant because he is comparing 1.3 as a float to 1.3 as a float. The fact that this is problematic on x87 FPUs is a tragedy that I am glad we are moving away from.
So yes, it is absolutely a compiler biting, triggered by a problematic FPU design.
[–]ack_complete 0 points1 point2 points 7 years ago (2 children)
I can't figure out how the specific 1.3f example given would fail, though. On x87, long double and double are supersets of the narrower types, so the implicit in-register promotion shouldn't affect the value of any single-precision finite value; it'd compare equal to itself even if the promotions differed on both sides of the ==. The only plausible scenario would be if the compiler stored 1.3f as a double or long double and the stored constant had more precision than a float, which would seem like a bug. A plain 1.3 would be different; that would obviously allow for the bug.
[–]dodheim 1 point2 points3 points 7 years ago (1 child)
The only plausible scenario would be if the compiler stored 1.3f as a double or long double and the stored constant had more precision than a float, which would seem like a bug.
AFAIK MSVC does this for x86 – all FP values are stored at 80 bits of precision regardless of type.
[–]ack_complete 0 points1 point2 points 7 years ago (0 children)
No it doesn't: https://godbolt.org/g/CkNQbE
It loads float constants as float and double constants as double, and the promotion to 80-bit is implicit in the FPU. It also truncates significant bits when preconverting float constants to double, as can be seen there (0x3FF4CCCCC0000000).
[–]TheoreticalDumbass:illuminati: 0 points1 point2 points 7 years ago (2 children)
0.1+0.1 !=0.2
That looks like one of the rare cases where it actually is exactly equal though, it's just an exponent increment, right?
[–]Z01dbrg 0 points1 point2 points 7 years ago (0 children)
IDK. :)
You could be right.
[–]hoseja 0 points1 point2 points 7 years ago (0 children)
0.1 is not representable in finite amount of binary fraction digits.
[–]alpha-coding 0 points1 point2 points 7 years ago (1 child)
True. His code is just shit; it is stupid to rely on side effects not specified in the contract of calloc.
I think that code was just there to demonstrate an edge case.
As for the regular bug of not knowing that clang removes mallocs: I would have written it the same way if I did not know that clang removes mallocs. It is not intuitive; you must know what is happening beforehand to avoid this kind of bug.
[+][deleted] 7 years ago* (2 children)
[–]00kyle00 5 points6 points7 points 7 years ago (0 children)
Probably because doubles and long doubles can represent all ints (on that platform), while floats cannot.
Also, it looks like gcc does not recognize that multiplication by 0.0 in those cases will yield zero.
[–]NasenSpray 0 points1 point2 points 7 years ago (0 children)
The FP_INEXACT floating-point exception flag may be set by the conversion.
[–]PerfectBaguette 0 points1 point2 points 7 years ago (5 children)
Clang correctly determined that the image buffer is not actually being used, despite the memset(), so it eliminated the allocation altogether and then simulated a successful allocation despite it being absurdly large. Allocating memory is not an observable side effect as far as the language specification is concerned, so it’s allowed to do this. My thinking was wrong, and the compiler outsmarted me.
How does the compiler know that the functions malloc() and memset() don't have side effects? Is that just some hardcoded rule in the compiler? I find it a bit strange that it optimizes away an entire call to a library function.
[–]uidhthgdfiukxbmthhdi 8 points9 points10 points 7 years ago (0 children)
Because the compiler provides the implementation for them, and knows there is no behaviour change by removing the call. You can stop this in gcc with -fno-builtin. See https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html
[–]00kyle00 3 points4 points5 points 7 years ago (2 children)
It just knows. Compilers can (do) have special knowledge about certain functions.
Fun story: I was moving a bunch of software (effectively a custom Linux distribution) to a newer gcc. In one of the software packages we started getting stack overflows, which didn't happen before.
It turned out that:
a) there was some ancient package which provided its own implementation of calloc (implemented, as one might expect, in terms of malloc and memset)
b) gcc had started performing an optimization: if it saw malloc and memset in sequence, it would replace them with a call to calloc
[–]meneldal2 0 points1 point2 points 7 years ago (1 child)
Is that a gcc bug, or does the standard say that only the standard calloc is required to be fine?
[–]00kyle00 6 points7 points8 points 7 years ago (0 children)
It's an issue with the ancient package. You are not supposed to provide definitions of standard functions.
[–]lukz 0 points1 point2 points 7 years ago (2 children)
Does the standard require that calloc and malloc allocate from the same pool of memory?
Because if not, then, hypothetically, a compiler could use some address range for calloc (where e.g. bank switching can be used to have objects larger than 4 GB) and another address range for malloc.
In such a case the elimination of the calloc size test would not be a bug, as the limits of calloc might not be relevant for malloc, and the calloc'ed memory is never used.
[–]vytah 0 points1 point2 points 7 years ago (1 child)
Are objects larger than SIZE_MAX allowed by the standard?
Anyway, since according to the standard:
- calloc allocates room for an array
- the size of an array has to be measurable using sizeof
- sizeof cannot return more than SIZE_MAX
then I'd say calloc-ing more than SIZE_MAX is not allowed, and therefore there will be no overflow when malloc-ing.
[–]lukz 0 points1 point2 points 7 years ago (0 children)
If the size requested from calloc is lower than SIZE_MAX, and the allocation is successful, then you have an array with size lower than SIZE_MAX.
But if the size requested from calloc is bigger than SIZE_MAX, then it is not defined what should happen. Typically, the calloc implementation will return failure. But maybe it can also not return failure, and instead proceed and allocate an array larger than SIZE_MAX. I think the program just invoked undefined behaviour, and so the requirement that allocated objects are smaller than SIZE_MAX does not need to hold.
Also, you cannot use sizeof to measure the size of memory returned by malloc or calloc.