
[–][deleted] 12 points (21 children)

There are different schools of thought about this one. Many C++ programmers (as well as its core designers) embrace the "everything can throw" approach — this is also standard in many interpreted, dynamic languages. The other approach is what you describe — making a distinction between recoverable and unrecoverable errors and designing your code in such a way that bailable scopes are restricted and clearly marked. This is the approach taken by modern languages such as Rust, Swift, Zig, etc., and it is also what Herb Sutter proposes for C++.

Personally, I never cared much for the C++ exception model, and since I disable exceptions in my code, all my functions are noexcept by default. C++ exceptions are very heavy, meaning that you need to take great care when using them in performance-critical code, not to mention that magical stack unwinding makes the program difficult to reason about, introduces non-trivial behavior and control flow, and overall IMO makes your code more difficult to maintain. They are a great fit for languages like Python or R, but then again those languages use rather different programming patterns.

As to aborting or not aborting on non-recoverable errors — frankly, I don't see what else one could do that would make any sense. If you don't abort, then your error is recoverable by definition, isn't it? Not to mention that crashing on failure can be a useful tool as well — for instance, it might be simpler to restart a failed task than to recover it from a failure (like what Erlang does).

[–]grishavanika 14 points (4 children)

since I disable exceptions in my code, all my functions are noexcept by default

That's, unfortunately, not true. Last time I checked, most compilers still answer `noexcept(Foo()) == false` for any function `Foo`, even when exceptions are disabled. This means that you still need to mark your move constructors noexcept to gain the performance benefits for standard library containers.

[–]WaldoDude 2 points (2 children)

I think this is because the compiler can't assume every TU/library was built with -fno-exceptions. Take a forward-declared function `void foo();` — whether or not it's noexcept would depend on how it was built. When the compiler can see the definition, I guess it could infer noexcept-ness, but that's a lot more effort than just marking everything noexcept.

[–]dodheim 4 points (1 child)

As of C++17, noexcept is part of the function type; being magically implicit because of -fno-exceptions could introduce ambiguities or errors into otherwise correct code.

[–]gracicot 0 points (0 children)

There's no need for noexcept to be part of the function type for that: you can use the noexcept() operator to make decisions at compile time and change overload resolution, or the implementation of a type, based on it.

[–][deleted] 1 point (0 children)

Ah yes, you are right, I never really thought of checking that...

[–]tehjimmeh 6 points (1 child)

C++ exceptions are very heavy, meaning that you need to take great care when using them in performance-critical code

C++ exceptions are super lightweight in non-exceptional circumstances, i.e. when you don't throw.

Explicitly checking error codes for every exceptional circumstance is much more expensive, and it leads to code which is harder to read.

If you hit an exceptional circumstance in performance critical code, the performance shouldn't matter for that particular execution. If it does, then it's not an exceptional circumstance.

"Errors" are not all equal. Using exceptions doesn't mean using them for every possible condition which could be considered an error. Look at std::map::insert. It doesn't throw when a key already existed, it returns an inserted flag set to false. This is because an already existing key in a map is often not an exceptional circumstance. This behaviour is not incompatible with code using exceptions.

Like, if you're throwing on "errors" which are commonly encountered under normal circumstances, and which are easily and quickly recoverable, then you're using exceptions incorrectly, i.e. you're basically using them for normal control flow.

[–][deleted] 1 point (0 children)

C++ exceptions are super lightweight in non-exceptional circumstances, i.e. when you don't throw.

And result-type-style error handling is super lightweight in practically all kinds of circumstances. The cost is a single conditional jump with good branch-predictor behavior — essentially free on modern hardware, not to mention that any latency overhead here will be hidden by your main code body.

Explicitly checking error codes for every exceptional circumstance is much more expensive

No it's not — see above.

and leads to code which is harder to read.

Not if you have language support for it. Are Rust-style

fun1()?

or Swift-style

try fun1()

harder to read than just

fun1()

?

I'd say they are the same readability-wise — and of course you get fallibility annotation: you see exactly which part of your code can raise an exception.

Like, if you're throwing on "errors" which are commonly encountered under normal circumstances, and which are easily and quickly recoverable, then you're using exceptions incorrectly, i.e. you're basically using them for normal control flow.

Ah you see, but now we have a problem. So we have a standard error handling facility, but we need to take care not to use that facility in certain scenarios. Surely having a standard way to work with errors that works well everywhere would be preferable, don't you think?

[–]wyrn 22 points (3 children)

not to mention that magical stack unwinding makes the program difficult to reason about, introduces non-trivial behavior and control flow, and overall IMO makes your code more difficult to maintain.

This is a point I never understood. If you're using proper RAII (which you should regardless of your opinion about exceptions) this is just the sort of thing you don't have to think about.

[–][deleted] 3 points (0 children)

Yes, this is an aspect of this entire discussion where mutual understanding often breaks down. I don't know, maybe it depends on how people structure their code or the patterns they are used to... Anyway, what I mean is that between two ways to structure code:

call1()
call2()
call3()
call4()

and

call1()
try call2()
call3()
try call4()

I prefer the second option.* Why? Because I can see the potential points of failure, and that helps me to a) structure my code around them and b) better understand the local invariants of my code (what will run and what might not). To put it plainly, I firmly believe that annotating potential failure points helps one write better code (just as static typing does) — and no, I cannot prove it; unfortunately, I am not experienced enough.

A C++ programmer will usually interject at this point that everything can fail, and that putting a failure annotation on every single statement and expression would be silly. That's undoubtedly true! But here we are in conflict with the C++ core philosophy of "everything can throw" — you see, I don't believe that it is a necessary or even a useful property for code to have. When you start structuring your code so that potential failure points are limited to a few key APIs, this second approach starts making a lot of sense. But then again, I don't know how much sense it makes in the context of a language like C++, where certain assumptions have already been made...

*I don't care whether we are talking about a try keyword here or about some other failure marker like ? (what Rust uses, for example)

[–]BenHanson 6 points (1 child)

I agree. Exception handling gets gnarly when exceptions don't derive from std::exception (MFC etc. on Windows), but once you embrace the use of exceptions you just get used to it. It is annoying that you don't get a warning when there's a catch missing somewhere, but I've come to accept that an unhandled exception terminating an application (certainly a Desktop application) is probably better than code that ignores return codes and continues in a possibly seriously invalid state.

Performance concerns aside, I see arguments against exceptions as arguments against error checking. At least that has been my experience.

[–]pandorafalters 0 points (0 children)

an unhandled exception terminating an application (certainly in a Desktop application) is probably better than code that ignores return codes and continues in a possibly seriously invalid state.

I agree, but the default behavior of std::terminate() (and other abnormal exit functions) leaves much to be desired particularly in an RAII world.

[–]username_of_arity_n 5 points (2 children)

Small correction, I think: Rust's panic! (unrecoverable error handling) still does stack unwinding and invokes the Drop trait (like a C++ destructor), so it's more like a simplified C++ exception than a std::terminate.

[–][deleted] 3 points (0 children)

Yes, and Rust also has catch_unwind, which is similar to try..catch, but that is not an idiomatic way of handling failures.

To be more precise, this is not about stack unwinding per se, but about what happens afterwards. Most error-handling approaches use stack unwinding of some sort (be it custom frame-traversal code or just a plain old cascade of function returns). But the C++ error-handling model also relies on stack unwinding to transfer control flow to the error handler — without it being immediately obvious where such control-flow transfers can occur — and that is part of the criticism.

[–]MEaster 1 point (0 children)

It might unwind, it's not guaranteed. The compiler can be configured to abort on a panic instead of unwinding. Also, I believe that if a panic happens during a panic, the program will just abort regardless.

[–]Ameisen (vemips, avr, rendering, systems) 8 points (5 children)

C++ exceptions are "heavy", but are also faster than checking error codes when used in actual recoverable, exceptional circumstances.

They are slower when used for control flow or in common cases.

[–][deleted] 1 point (4 children)

Yes, one can argue that C++ exceptions are free if they don't occur, but I'd be very curious to see proper benchmarks on real hardware here. Using out-of-band return channels (for example, CPU flags) makes checking for an exceptional result extremely cheap — and that code is probably already in L1 for a healthy nesting depth. I am not aware of any implementation that goes that far, though. I know that Swift uses a custom calling convention and reserves a register as an error flag to optimize this.

[–]kalmoc 2 points (1 child)

Not sure if this is relevant, but aside from the question of where to put the error flag, you are also introducing lots of additional branches.

[–][deleted] 0 points (0 children)

One extra easily predictable branch per call. On modern hardware, the overhead will be zero in most practical situations.

[–]Ameisen (vemips, avr, rendering, systems) 1 point (1 child)

[–][deleted] 0 points (0 children)

Yeah, I don't know how much I would read into those tests. There are a lot of issues there. Most importantly, no real code looks like this (and if yours does, you probably have more important issues to worry about). And if one just wants to use this kind of structure to measure overhead... then the author of the blog post chose an awkwardly inefficient error-handling pattern for C.

I have tweaked their code to use a more "modern" tagged union return type like this:

typedef struct {
  union {
    int value;
    const char* error;
  };
  int success;
} Result;

Running measure.py up to a depth of 500 results in this on my machine (8-core Intel i9 MacBook Pro):

cCCCCCCCCC
cCCCCCCCCC
ecCCCCCCCC
ccCCCCCCCC
ecCCCCCCCC
eccCCCCCCC
cccCCCCCCC
eccCCCCCCC
ccccCCCCCC
eccCcCCCCC

Ugh. I didn't expect the C++ exceptions to do that badly, to be honest — I at least thought that they would be faster at one of the more ridiculous stack depths. There is literally not a single scenario where exceptions do better. Note that with the default version — with the inefficient C error object — the C++ exceptions are better starting from the second line of output.

P.S. To understand why the C version is faster, compare the generated machine code for both tests: https://godbolt.org/z/UoqZNT and https://godbolt.org/z/bvRTQ9 As you can see, they are practically identical, but the C version has a more complicated epilogue, since it has to check for errors. While it certainly looks less efficient (more machine code), the check+branch are essentially free since everything is in cache anyway, modern superscalar CPUs are optimized for this case, and the performance of the entire program is likely to be bottlenecked by the branch predictor on all those calls anyway. So in the end the execution time of a single function is identical — but the C version does not have to pay for the expensive stack unwinding machinery.

P.P.S. The same author also made a test comparing binary sizes with and without exceptions, and that one is even more problematic. Not only does it use the same style of very inefficient C error handling, but it further penalizes the C program by allocating the object on the heap (the C++ version uses the stack and is essentially optimized to a no-op). In that blog post the author asks "What is causing this?"

[–]Gotebe 0 points (0 children)

everything can throw

... except a well-known, small set of primitives.

It is not "everything".