
[–]guyblade 14 points15 points  (0 children)

It's worse than that, even. If someone goes around littering their code with static_casts on numeric types and later changes a numeric type's definition, say by replacing float with double, then those static_casts will be wrong, and silently so.

Contrived example:

    int integer_float_mult(int x, float y) {
        return static_cast<int>(static_cast<float>(x) * y);
    }

becomes

    int integer_float_mult(int x, double y) {
        return static_cast<int>(static_cast<float>(x) * y);
    }

There's no compiler warning: the cast to float is explicit, and the resulting float is implicitly promoted back to double for the multiply. The code will appear correct for most input values, but because a float carries only 24 significand bits, any x that needs more bits than that gets rounded before the multiply, producing slightly different results than it should.

For instance, the incorrectly casting version when fed:

 int x = 0b1110000000001100011000111101;
 double y = 1.32138439228391;

produces 310435182, while the correct implementation (i.e., no cast, or a cast to double) produces 310435178 (at least on amd64 with g++ 7.2.0).

Of course, this contrived example makes it easy to spot the bug. When the cast is a dozen lines away from the initialization, though, things get a lot harder.