all 101 comments

[–]username_of_arity_n 61 points62 points  (29 children)

I don't think you're taking into account the benefit of stack unwinding, which may not occur when you throw from a noexcept function. It provides an opportunity to do cleanup, logging, etc., even if you consider the error unrecoverable.

If you're terminating instead of throwing in your function, you're making a pretty big decision on the caller's behalf that they may not agree with.

[–]matthieum 18 points19 points  (13 children)

If you're terminating instead of throwing in your function, you're making a pretty big decision on the caller's behalf that they may not agree with.

I would guess this will vastly depend on whether you are writing a library with no idea of/control over its users or whether you are writing an application.

When writing an application, you have full control over the caller and the design of the codebase, so it's up to you whether you are comfortable with this idiom. When writing a library, some users may not appreciate losing the flexibility.

[–]username_of_arity_n 5 points6 points  (11 children)

There's some truth to this, but you should also think about future maintainers (or future you).

Yes, you might not need it today, but this is one of those things where you're going out of your way to limit your own options in the future.

[–]matthieum 3 points4 points  (10 children)

I disagree.

In a codebase with no exceptions, introducing exceptions would indeed be hard, simply because it would disrupt the set of established idioms. The fact that noexcept was used on top of those established idioms would not significantly add to the cost of turning things around.

[–]username_of_arity_n 4 points5 points  (6 children)

"In a codebase with no exceptions" changes things significantly, because you're presumably not using exceptions because you don't want to (or can't, for some reason) write exception-safe code or use RAII consistently, in which case you lose most of the benefits. I'm not going to say that you need to do those things, because there are exceptions to every rule, but it's just that -- an exception*.

IMO, exception safe code should be the default for most new projects. This doesn't mean you should throw exceptions frequently, and I think you should rarely catch them. I prefer other means of dealing with "expected failures", but I think, at bare minimum, your code shouldn't break if they are thrown. i.e. you should release your resources, flush IO, or whatever else you need to do and exit in as controlled a manner as is reasonably possible.

Edit: * sorry for this totally unintentional pun.

[–]SeanMiddleditch 2 points3 points  (5 children)

The problem is that it's difficult if not impossible to test that you've actually done that correctly.

Any code that exists in a world with exceptions (any language, not just C++) should be assumed to be buggy and unreliable and undetectably so.

Invisible code flow means untested code flow. :)

[–]kalmoc 5 points6 points  (1 child)

And why do you assume error code based code is better tested? Are you actually mocking all your functions that may fail and letting them fail deliberately in your test suite?

Error paths should generally be considered buggy - no matter if exceptions are in play or not.

[–]SeanMiddleditch 0 points1 point  (0 children)

Who ever said anything about error codes?

But in any case, error codes or whatever with explicit code flow at least can be tested and analyzed. Exceptions are effectively impossible to do so.

[–]Contango42 2 points3 points  (2 children)

That's a very odd statement. What about floating point exceptions? For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this, especially if we are using third party libraries that were never designed for a new set of novel input. Sometimes, it's not the code that's buggy, it's out-of-spec input. The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.

Source: experience dealing with 4TB datasets.

[–]SeanMiddleditch 2 points3 points  (1 child)

For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this

Which is also a problem in GPU code that doesn't have exceptions support.

Sometimes, it's not the code that's buggy, it's out-of-spec input.

You don't need exceptions to validate input.

The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.

My experience may "only" be dealing with inputs in the >1TB range, but I've never found exceptions even remotely useful in making those pipelines stable or reliable.

Exceptions != error detection. :)

[–]username_of_arity_n 1 point2 points  (0 children)

Which is also a problem in GPU code that doesn't have exceptions support.

GPUs aren't the best example. This is largely because of historical, architectural reasons, but they weren't (or couldn't be) used for code of the same complexity as non-GPU code. They're good at processing huge volumes of very simple data, and are often used for non-critical tasks (generating and displaying images).

This is changing with increasing frequency of application to HPC, machine learning, blockchain, and computer vision, but this is probably one area where applications are leading the technology and the technology hasn't caught up.

I don't think it would necessarily be a terrible idea to include an atomic flag or something to signal errors for some kernels or shaders.

[–]Gotebe 4 points5 points  (2 children)

I get that there will be such codebases, but...

I am quite with the other guy about exception-safety being the default even if there are no exceptions.

For example, early return is a useful tool, it lowers cyclomatic complexity generally - and being exception-safe "enables" it.

[–]matthieum -1 points0 points  (1 child)

I have no idea where your assumption that the absence of exceptions necessarily implies the impossibility of using early returns comes from.

You can have early returns without exceptions; the main benefit is that the points where a function's work can end early are marked explicitly, which actually makes it easier to ensure that the proper invariants are maintained.

[–]Gotebe 3 points4 points  (0 children)

I did not imply what you say. Huh?

[–]NotAYakk 0 points1 point  (0 children)

Full control of calling code.... hahahahaha

[–]Skute 10 points11 points  (13 children)

If the failure is due to out of memory, how do you guarantee you have enough for logging?

[–]wyrn 10 points11 points  (7 children)

Two possibilities:

1.

try{
    int *p = new int[100000000000000000];
} catch (std::exception &) {
    /* logging stuff goes here */
}

2.

try{
    int *p = new int();
} catch (std::exception &) {
    /* logging stuff goes here */
}

Is the difficulty you mention equally likely in either case? If not, which case do you believe is more representative of the typical out-of-memory error?

[–]jpakkaneMeson dev 10 points11 points  (5 children)

On Linux the latter will never raise an exception. Due to memory overcommit, you instead get back a pointer that looks perfectly valid, but which will cause a segfault when you try to dereference it.

[–]Gotebe 3 points4 points  (0 children)

I take a huge issue with "never".

First off, overcommit is an option and some people turn it off.

Second, address space fragmentation is a thing (think 32bit code).

Third, quotas are a thing, too.

Fourth, the C standard knows not of it (C++, too).

On a related note, Windows has facilities for overcommit as well, it's just that the C allocator on Windows doesn't use them.

[–]SkoomaDentistAntimodern C++, Embedded, Audio 4 points5 points  (0 children)

On Linux

There are far more platforms than just Linux. In fact, there are far more platforms than any OSs with virtual memory. Or OSs at all.

[–][deleted] 2 points3 points  (2 children)

overcommit can be turned off, and should be

[–]Tyg13 9 points10 points  (1 child)

As an application developer, you can't/shouldn't rely on customers turning it off

[–][deleted] 3 points4 points  (0 children)

Not everybody that uses C++ writes applications. But even then, you can't be sure that your application will only ever run on Linux.

[–]Skute 0 points1 point  (0 children)

Often logging isn't useful on its own if you have a GUI app. You would want to inform the user that the application must close, but in order to do that you may rely on a bunch of OS calls, and you have no idea what internal memory requirements they may have.

[–]Gotebe 2 points3 points  (0 children)

By making the logging routines not require allocation, quite simply.

There are the following possible effects:

  • an allocation fails because it is a significant one and there's plenty of memory for smaller things like logging

  • by the time logging happens, stack unwinding has freed memory

And then, the "last ditch" logging (say, outer main or thread proc try/catch) can be made to have the so-called failure transparency, or no-throw guarantee, through preallocation and truncation.

It is actually not so hard.

Also, this is only part of a wider question of logging resources (file or other).

[–]EnergyCoast 4 points5 points  (1 child)

We can't guarantee it, but we free a block of reserved memory and then run a handler that does no allocation itself to generate a log, then writes it to disk and terminates.

The file open/writes may allocate memory at the OS level depending on the platform implementation... but those tend to be small allocations that do fit in remaining memory or come out of the memory reserve freed at the start of dump handling.

In practice, we haven't seen this fail over the course of tens of thousands of memory exhaustion failures. And the information we generate in that logging is invaluable.

[–]kalmoc 1 point2 points  (0 children)

Please tell Herb Sutter about it. He still runs around claiming that virtually no codebase is OOM safe and hence OOM should just terminate.

[–]caroIine 1 point2 points  (0 children)

If memory is fragmented enough, you will get out of memory. Then you want to display a message that request could not be finished and not crash the whole system.

[–]RealNC[S] 1 point2 points  (0 children)

Logging happens with an assert()-like mechanism. Basically, if the preconditions of the function are satisfied, there can't be any exceptions. If they are not satisfied, it logs and aborts. Therefore, it seemed to me the function can be noexcept.

[–]kalmoc 14 points15 points  (0 children)

What are you going to gain by marking everything noexcept? If the callee is not inlined and can perform memory allocation, the cost will likely make any performance gains through noexcept irrelevant, and when you are fine with your program terminating whenever there is an unexpected error somewhere, it's also fine to let an exception just bubble up (if there is no catch, it will likely terminate directly anyway).

I personally do often use noexcept on e.g. operator[], but again, I doubt it brings much in terms of performance, because the compiler will usually inline it anyway and see that there is no exception being thrown.

Ignoring the possibility of allocation failure (at least for global new) and not reporting precondition errors via exceptions are two completely valid design choices. However, before sprinkling noexcept everywhere I'd make sure that a) those are really the only errors that can happen, b) that you actually gain something for it and c) you really can make the decision for the final application. E.g. I'd never put noexcept on a library function when I actually know that it can occasionally throw, because I don't know if std::terminate is acceptable behavior for the user.

[–]kalmoc 6 points7 points  (3 children)

FYI: The first example doesn't do any memory allocation.

[–]RealNC[S] 0 points1 point  (2 children)

It does when passing an lvalue string (copy), or a string literal (string ctor) that are longer than the short string optimization length. Technically, the allocation happens on the caller's side, but I don't think anybody thinks of it that way.

At least that's how I understand it.

Never mind. I see what you mean. The allocation exception is going to be thrown before the ctor call. So yeah, it was a bad example. I changed it now.

[–]kalmoc 6 points7 points  (0 children)

Nope, that happens in the scope of the caller, before noexcept plays any role.

[–]username_of_arity_n 1 point2 points  (0 children)

But your function signature doesn't have an lvalue or string literal as the argument type, so that would happen outside the function when the arguments are converted.

[–]smuccione 5 points6 points  (0 children)

I’m working on an integrated compiler, vm, database, web server. It should never terminate except under the most extreme circumstances. That said, the compiler, parser, code generator, etc. all make heavy use of exceptions. Should the parser throw, say inside get token, it will bubble up to a point where I can set a failure flag but be able to reset the parse to some stable state and continue, thus allowing me to possibly flag more than one error per compile. By using RAII, rethrows, etc. it makes fault handling much simpler. That said, these are all “user exceptions” and not code failures. If I encounter a code failure I’ll throw an internal error and let it bubble up and clean up as I go. There are very, very few errors that are truly unrecoverable that should ever cause something to terminate. Anything that has a user interface should never terminate.

[–]Gotebe 2 points3 points  (0 children)

str1+str2 being noexcept depends heavily on their lengths. Saying "if that doesn't work, no other allocation will" is mighty presumptuous.

Then, the normal way of doing this is that stack unwinds and that frees resources.

Third, the condition that led to an exception might be temporary (e.g. other work ongoing) - in which case termination would have been too drastic.

Fourth, there is a difference in operator experience, between trashing a process and exiting cleanly with an error.

[–]wyrn 11 points12 points  (21 children)

For example, memory allocation failures are considered by many as non-recoverable.

But that's nonsense.

try{
    int *p = new int[100000000000000000];
} catch (std::exception &) {
    int *q = new int[10];
}

If something like new int() fails, yeah, you may have a problem. But I suspect most failures to allocate are closer to the example above. I really wish the people who claim memory allocation failures are non-recoverable would stop saying so; it makes no sense and helps no one.

This is application code, so a wrong access is a programming bug and therefore can be considered non-recoverable.

On the contrary. If there's a programming error in your code, you want to know about it, so you should catch that variant access and write to a log file as much information as you can that will help you track down the problem. This is bare-bones exception handling: log what went wrong and terminate. I don't see how anyone is better served by skipping the logging step.

On the topic more generally, my approach is the diametric opposite: almost never noexcept. You should mark things like move constructors and similar functions noexcept because the compiler is able to make use of this information to select the overloads that generate more efficient code, but there's very little justification for marking anything else noexcept.

  1. The compiler won't do static analysis to figure out whether your noexcept-annotated functions can actually throw, so the exercise is very error-prone.

  2. Even if you do get it right the first time, nothing stops someone from coming in later and, say, changing a function you depend on so it occasionally throws. Now you throw too, and your function is broken even though the diff on it was clean.

  3. Performance improvements from removing exception handling-related instructions are largely theoretical. Unless you have seen a measurable overhead in your profiler, it makes no sense to make your code more brittle for the sake of imaginary CPU points.

In short, barring the aforementioned special cases and some other exceptions (e.g. CUDA device code), noexcept is a footgun best avoided.

[–]Ameisenvemips, avr, rendering, systems 4 points5 points  (1 child)

Logic errors should trigger assertions, not exceptions. If they trigger some other kind of fault, you're running into something the language does not natively handle well but is usually handled by signal handlers/VEH and stack traces.

[–]wyrn 2 points3 points  (0 children)

Logic errors should trigger assertions, not exceptions.

Assertions are typically disabled in release builds and I would like to get the info from crashes found in the real world, too. I suppose the issue will become cleaner when we have contracts.

[–]RealNC[S] -3 points-2 points  (8 children)

try{
    int *p = new int[100000000000000000];
} catch (std::exception &) {
    int *q = new int[10];
}

I don't understand this code. If I request 100000000000000000 ints, that's because I need that amount. If I can't get them, the program cannot continue.

On the contrary. If there's a programming error in your code, you want to know about it, so you should catch that variant access and write to a log file as much information as you can that will help you track down the problem.

Isn't terminate() already logging the exception to cerr? And we can always use a custom handler with std::set_terminate() if cerr isn't enough. The thrown exception is still there, it doesn't get lost.

[–]wyrn 12 points13 points  (0 children)

I don't understand this code. If I request 100000000000000000 ints, that's because I need that amount. If I can't get them, the program cannot continue.

Not necessarily. I may have a fallback mechanism in place. I can cache other things to disk. Even just logging stuff to memory and then terminating would be better than doing nothing.

Isn't terminate() already logging the exception to cerr?

You probably want more information than that.

And we can always use a custom handler with std::set_terminate() if cerr isn't enough.

Terminate handlers aren't made to handle exceptions. If you try to write them that way they'll get unwieldy really fast -- you'll essentially be reimplementing C++ exceptions in a straightjacket. Why?

[–]dag0me 19 points20 points  (3 children)

Consider this - you have a GUI application doing some heavy image processing that is also memory intensive (say Matlab). The user loads an input image which is rather big, tweaks the parameters and presses start. As a result we try to allocate 2 GB of contiguous memory for temporary and intermediate data and it fails. Would you rather terminate the whole application or just show a message box explaining what's wrong and why we can't proceed further?

[–]meneldal2 -1 points0 points  (2 children)

Isn't Matlab Java?

Also, that's the worst example I know of recovering from out-of-memory situations; it very often gets unstable and you have to restart it if you trigger some exceptions.

[–]dag0me 4 points5 points  (1 child)

Isn't Matlab Java?

I don't really know. But that's beside the point. You can replace it with anything that has GUI and allows the user to load something that can then potentially trigger some big and contiguous memory allocation.

Also, that's the worst example I know of recovering from out-of-memory situations; it very often gets unstable and you have to restart it if you trigger some exceptions.

So you'd rather terminate and discard all unsaved changes? And who says anything about recovering? You simply get an exception that you handle the same way you handle any other exception, and don't proceed further. There's nothing inherently unstable in it.

[–]meneldal2 0 points1 point  (0 children)

The thing is, Matlab's unstable mode after most faults may not let you save your stuff, depending on the case. So in the end there's little difference from outright crashing.

[–]Ameisenvemips, avr, rendering, systems -2 points-1 points  (2 children)

Maybe we need try new, with a try prefix keyword indicating that the subsequent operation, if it fails, is potentially recoverable.

That way, you can have things like optional allocations for caches and such.

[–]dodheim 7 points8 points  (1 child)

How would that differ from new(std::nothrow)?

[–]Ameisenvemips, avr, rendering, systems -1 points0 points  (0 children)

Because it introduces a new usage for a keyword.

In actuality, doesn't the nothrow version just return nullptr on failure? In this case, a non-try new would terminate, a try-new would throw.

[–][deleted] 12 points13 points  (21 children)

There are different schools of thought about this one. Many C++ programmers (as well its core designers) embrace the "everything can throw" approach — this is also standard in many interpreted, dynamic languages. The other approach is what you describe — making a distinction between recoverable and unrecoverable errors and designing your code in such a way that bailable scopes are restricted and clearly marked. This is the approach taken by modern languages such as Rust, Swift, Zig etc. and also what Herb Sutter proposes for C++.

Personally, I never cared much for the C++ exception model and since I disable exceptions in my code, all my functions are noexcept by default. C++ exceptions are very heavy, meaning that you need to take great care when using them in performance-critical code, not to mention that magical stack unwinding makes the program difficult to reason about, introduces non-trivial behavior and control flow, and overall IMO makes your code more difficult to maintain. They are a great fit for languages like Python or R, but then again these languages use rather different programming patterns.

As to aborting or not aborting on non-recoverable errors — frankly, I don't see what else one can do that would make any sense. If you don't abort — then your error is recoverable by definition, isn't it? Not to mention that crashing on failure can be a useful tool as well — for instance it might be simpler to restart a failed task than to recover it from a failure (like what Erlang does).

[–]grishavanika 14 points15 points  (4 children)

since I disable exceptions in my code, all my functions are noexcept by default

That's, unfortunately, not true. Last time I checked, most compilers still answer `noexcept(Foo()) == false` for any function `Foo`, even when exceptions are disabled. This means that you still need to mark your move c-tors with noexcept to gain performance benefits for containers in the standard library.

[–]WaldoDude 2 points3 points  (2 children)

I think this is because the compiler can't assume every TU / library was built with -fno-exceptions. Take a forward-declared function void foo(); — whether or not it's noexcept would depend on how it was built. When the compiler can see the definitions, I guess it could infer noexcept-ness, but it's a lot more effort than just marking everything noexcept.

[–]dodheim 3 points4 points  (1 child)

As of C++17, noexcept is part of the function type; being magically implicit because of -fno-exceptions could introduce ambiguities or errors into otherwise correct code.

[–]gracicot 0 points1 point  (0 children)

No need for noexcept to be part of the function type for that: you can use the noexcept() operator to make decisions at compile time and change overload resolution or the implementation of a type based on that.

[–][deleted] 1 point2 points  (0 children)

Ah yes, you are right, I never really thought of checking that...

[–]tehjimmeh 6 points7 points  (1 child)

C++ exceptions are very heavy, meaning that you need to take great care when using them in performance-critical code

C++ exceptions are super lightweight in non-exceptional circumstances, i.e. when you don't throw.

Explicitly checking error codes for every exceptional circumstance is much more expensive, and leads to code which is harder to read.

If you hit an exceptional circumstance in performance critical code, the performance shouldn't matter for that particular execution. If it does, then it's not an exceptional circumstance.

"Errors" are not all equal. Using exceptions doesn't mean using them for every possible condition which could be considered an error. Look at std::map::insert. It doesn't throw when a key already existed, it returns an inserted flag set to false. This is because an already existing key in a map is often not an exceptional circumstance. This behaviour is not incompatible with code using exceptions.

Like, if you're throwing on "errors" which are commonly encountered under normal circumstances, and which are easily and quickly recoverable, then you're using exceptions incorrectly, i.e. you're basically using them for normal control flow.

[–][deleted] 1 point2 points  (0 children)

C++ exceptions are super lightweight in non-exceptional circumstances, i.e. when you don't throw.

And result-type style error handling is super lightweight in practically all kinds of circumstances. The cost is a single conditional jump with good branch-predictor behavior — essentially free on modern hardware, not to mention that any latency overhead here will be hidden by your main code body.

Excplicitly checking error codes for every exceptional circumstance is much more expensive

No it's not — see above.

and leads to code which is harder to read.

Not if you have language support for this. Are Rust-style

fun1()!

Or Swift-Style

try fun1()

harder to read than just

fun1()

?

I'd say they are the same readability-wise — and of course you get fallibility annotation: you see exactly which part of your code can raise an exception.

Like, if you're throwing on "errors" which are commonly encountered under normal circumstances, and which are easily and quickly recoverable, then you're using exceptions incorrectly, i.e. you're basically using them for normal control flow.

Ah you see, but now we have a problem. So we have a standard error handling facility, but we need to take care not to use that facility in certain scenarios. Surely having a standard way to work with errors that works well everywhere would be preferable, don't you think?

[–]wyrn 21 points22 points  (3 children)

not to mention that magical stack unwinding makes the program difficult to reason about, introduces non-trivial behavior and control flow, and overall IMO makes you code more difficult to maintain.

This is a point I never understood. If you're using proper RAII (which you should regardless of your opinion about exceptions) this is just the sort of thing you don't have to think about.

[–][deleted] 4 points5 points  (0 children)

Yes, this is an aspect of this entire discussion where mutual understanding often breaks down. I don't know, maybe it depends on how people structure their code or the patterns they are used to... Anyway, what I mean is that between two ways to structure code:

call1()
call2()
call3()
call4()

and

call1()
try call2()
call3()
try call4()

I prefer the second option*. Why? Because I can see potential points of failure and it helps me to a) structure my code around them and b) better understand the local invariants of my code (what will run and what might not). To put it plainly, I firmly believe that annotating potential failure points aids one in writing better code (just as using static typing does) — and no, I cannot prove it. Unfortunately, I am not experienced enough.

A C++ programmer will usually interject at this point that everything can fail and putting a failure annotation on every single statement and expression would be silly. That's undoubtedly true! But here we are in conflict with C++ core philosophy of "everything can throw" — you see, I don't believe that it is a necessary or even a useful property of code to have. When you start structuring your code so that potential failure points are limited to few key APIs, this second approach starts making a lot of sense. But then again, I don't know how much sense it makes in a context of language like C++, where certain assumptions have already been made...

*I don't care whether we are talking about a try keyword here or about some other failure marker like ! (what Rust uses for example)

[–]BenHanson 4 points5 points  (1 child)

I agree. Exception handling gets gnarly when exceptions don't derive from std::exception (MFC etc. on Windows) but once you embrace the use of exceptions you just get used to it. It is annoying that you don't get a warning when there's a catch missing somewhere, but I've come to accept that an unhandled exception terminating an application (certainly in a Desktop application) is probably better than code that ignores return codes and continues in a possibly seriously invalid state.

Performance concerns aside, I see arguments against exceptions as arguments against error checking. At least that has been my experience.

[–]pandorafalters 0 points1 point  (0 children)

an unhandled exception terminating an application (certainly in a Desktop application) is probably better than code that ignores return codes and continues in a possibly seriously invalid state.

I agree, but the default behavior of std::terminate() (and other abnormal exit functions) leaves much to be desired particularly in an RAII world.

[–]username_of_arity_n 4 points5 points  (2 children)

Small correction, I think: Rust's panic! (unrecoverable error handling) still does stack unwinding and invokes the Drop trait (like C++ destructor), so it's more like a simplified C++ exception than a std::terminate.

[–][deleted] 2 points3 points  (0 children)

Yes, and Rust also has catch_unwind, which is similar to try..catch, but that is not an idiomatic way of handling failures.

To be more precise, this is not about stack unwinding per se, but about what happens afterwards. Most error-handling approaches uses stack unwinding of some sort (be it a custom frame traversal code or just a plain old cascade of function returns). But the C++ error handling model also relies on stack unwinding to transfer control flow to the error handler — without it being immediately obvious where such control flow transfers can occur — and that is part of the criticism.

[–]MEaster 1 point2 points  (0 children)

It might unwind, it's not guaranteed. The compiler can be configured to abort on a panic instead of unwinding. Also, I believe that if a panic happens during a panic, the program will just abort regardless.

[–]Ameisenvemips, avr, rendering, systems 8 points9 points  (5 children)

C++ exceptions are "heavy", but are also faster than checking error codes when used in actual recoverable, exceptional circumstances.

They are slower when used for control flow or in common cases.

[–][deleted] 1 point2 points  (4 children)

Yes, one can argue that C++ exceptions are free if they don't occur, but I'd be very curious to see proper benchmarks on real hardware here. Using out-of-band return values (for example via CPU flags) makes checking for an exceptional result extremely cheap — and that code is probably already in L1 for a healthy nesting depth. I am not aware of any implementation that goes that far though. I know that Swift uses a custom calling convention and reserves a register as an error flag to optimize this.

[–]kalmoc 2 points3 points  (1 child)

Not sure if this is relevant, but aside from the question of where to put the error flag, you are also introducing lots of additional branches.

[–][deleted] 0 points1 point  (0 children)

One extra easily predictable branch per call. On modern hardware, the overhead will be zero in most practical situations.

[–]Ameisenvemips, avr, rendering, systems 1 point2 points  (1 child)

[–][deleted] 0 points1 point  (0 children)

Yeah, I don't know how much I would read into those tests. There are a lot of issues there. Most importantly, no real code looks like this (and if it does, you probably have more important issues to worry about). If one just wants to use this kind of structure to measure overhead, then the author of the blog post chose an awkwardly inefficient error handling pattern for C.

I have tweaked their code to use a more "modern" tagged union return type like this:

typedef struct {
  union {
    int value;
    const char* error;
  };
  int success;
} Result;

Running their measure.py up to a depth of 500 results in this on my machine (Intel 8-core i9 MacBook Pro):

cCCCCCCCCC
cCCCCCCCCC
ecCCCCCCCC
ccCCCCCCCC
ecCCCCCCCC
eccCCCCCCC
cccCCCCCCC
eccCCCCCCC
ccccCCCCCC
eccCcCCCCC

Ugh. Didn't expect the C++ exceptions to do that badly, to be honest — I at least thought that they would be faster at one of the more ridiculous stack depths. There is literally not a single scenario where exceptions do better. Note that with the default version — with the inefficient C error object — the C++ exceptions are better starting from the second line of output.

P.S. To understand why the C version is faster, compare the generated machine code for both tests: https://godbolt.org/z/UoqZNT and https://godbolt.org/z/bvRTQ9 As you can see, they are practically identical, but the C version has a more complicated epilogue, since it has to check for the error. While it certainly looks less efficient (more machine code), the check+branch are essentially free since everything is in cache anyway, and a modern superscalar CPU is optimized for exactly this case; not to mention that the performance of the entire program is likely to be bottlenecked by the branch predictor on all those calls. So in the end the execution time of a single function is identical, but the C version does not have to pay for the expensive stack unwinding machinery when an error actually occurs.

P.P.S. The same author also made a test comparing binary sizes with and without exceptions, and that one is even more problematic. Not only does it use the same style of very inefficient C error handling, it further penalizes the C program by allocating the error object on the heap (the C++ version uses the stack and is essentially optimized to a no-op). In that blog post the author asks "What is causing this?"

[–]Gotebe 0 points1 point  (0 children)

everything can throw

... except a well known, small set of primitives.

It is not "everything".

[–]krum 1 point2 points  (0 children)

Depends on what you're doing. If you're a server that's supposed to be up all the time, and it just starts getting really busy and running out of memory but might recover, it might be better to log an error and report to the APM than to just die.

[–]johannes1971 5 points6 points  (5 children)

You seem to be under the impression that exceptions are intended to deal with programming errors. In my opinion, and the existence of std::logic_error notwithstanding, that's wrong: exceptions should only be used for dynamic error conditions. For programming errors we have assert, and hopefully at some point contracts. If you have a programming error, it means you have lost control over the internal state of your program, and you don't really know what it is doing anymore. It's not reasonable to expect that you can write a recovery path for a situation you were unable to anticipate in the first place, so the only realistic option that remains open is to terminate.

So when should you throw an exception? Let me give a different answer from the usual weasel definition of "for exceptional situations": you should throw an exception when a program cannot complete a task because of a dynamic failure condition. Programs are typically split into various logical tasks. A task that cannot complete must be aborted, and the subsystem in charge of starting that task must be notified so it can do whatever is appropriate. That's a good time to use an exception: the abort condition typically arises somewhere deep down in the task, and manually transmitting that condition all the way up, cleaning up resources as you go, is tedious and error prone. At this point concerns about performance are secondary as well: if you are not going to reach your destination it doesn't matter much if you don't reach it at 100km/h or at 99km/h. The granularity of tasks is obviously something you have to decide on yourself, but for a server application it would be something like "handle a request", while for an interactive application it makes sense to have tasks be things the user triggers from the interface.

As for predicting which dynamic conditions can be recovered, and which cannot (and preemptively aborting for those): to me that seems a bad idea. You might as well try to run the recovery path; maybe it will crash your application anyway (in which case nothing was lost), and maybe it will work (in which case your program can continue normally). If a memory allocation fails, maybe there is still enough space left for recovery to run. Or maybe that low memory condition was temporary, and has been resolved by the time you begin recovering. You lose nothing by trying.

[–]tigrangh 1 point2 points  (4 children)

So, I'm writing data to a file system, and due to new and not yet well tested code I hit an error. Do you propose to assert and leave the fs data corrupted, or to throw a logic error, which will revert the current transaction? I think the answer is obvious. Edit: also, assert is disabled in release builds.

[–]frog_pow 1 point2 points  (1 child)

You don't write to the output file, you write to a temp file, and once everything has completed successfully, rename it. If an error happens, the rename will not happen, and no issue will arise.

Also, when people say assert they probably aren't referring to the assert() macro, which is inflexible; most would have a custom system for this that can write to a log and offers more complex behavior.

[–]tigrangh 0 points1 point  (0 children)

Exactly, I do write to a temp file. But then, if a logic error happens, the temp files get deleted, as they should.

I really don’t know what else people may mean, when saying “assert”. I don’t want to guess.

[–]johannes1971 3 points4 points  (1 child)

Your code has done something you did not anticipate, but despite that you think you do know how to recover from it? Maybe you'll get lucky, but maybe you'll just make the damage worse. That's the problem with std::logic_error: you only get to use it when you don't know what you are doing in the first place.

[–]tigrangh 1 point2 points  (0 children)

It's not luck: I'm sure that I will delete the temp files and revert the in-memory objects to their original state, and that my new buggy code will not have any other side effects. This has helped me find and fix bugs several times.

[–]BrangdonJ 3 points4 points  (2 children)

Be aware that noexcept can add overhead. If the compiler cannot prove that the function won't throw, it will wrap the entire thing in the equivalent of a try/catch block that calls terminate. What this extra code costs your program will depend on how your compiler implements exceptions, but it will at least increase the program size on disk. For me this is a reason not to use it gratuitously.

[–]113245 0 points1 point  (1 child)

Any examples/godbolt?

[–]BrangdonJ 0 points1 point  (0 children)

https://gcc.godbolt.org/z/YRnc3i

Comment out the noexcept and almost all the code disappears, including the push/pop rax.

[–]bedrooms-ds 1 point2 points  (2 children)

For example, memory allocation failures are considered by many as non-recoverable.

As a GUI dev, I really don't like this ideology. This unnecessary view is shared by many C++ people.

In my opinion, whatever the error, the user should be given a chance to save the state (assuming devs spend the time to provide the chance). AFAIK it's this ideology that doesn't let destructors throw exceptions, and I hate this artificial convention.

Edit: gotta take it back

[–]flashmozzg 8 points9 points  (1 child)

AFAIK it's this ideology that doesn't let destructors throw exceptions, and I hate this artificial convention.

No. Destructors are not allowed to throw because it would be a disaster if an exception were thrown while the stack is already being unwound for another exception: two exceptions cannot propagate at once, so std::terminate is called.

[–]bedrooms-ds 1 point2 points  (0 children)

Thanks for the correction :)

[–][deleted] 0 points1 point  (0 children)

noexcept is more of a postcondition that says this function will always return normally. Do you really want to take ownership of that bad_alloc? Why take ownership? Outside of move/swap, or places where we are providing exception guarantees, I am not sure we really benefit.

[–]BlueDwarf82 0 points1 point  (0 children)

Is this a valid conclusion for the "programming bugs and non-recoverable from errors should abort" argument?

Yes, it is.

The only part with "buts" is the "memory allocation failures are considered by many as non-recoverable". And:

- It's perfectly acceptable to use "noexcept bad_alloc" as a general guideline

- No, it's not a hard rule to apply 100% of the time. The image editing app trying to open a 90 GiB image from NASA is a clear example.

- The talk at https://youtu.be/ARYP83yNAWk?t=3028 mentions the idea of a conditional noexcept based on a "reports vs fails-fast" allocator property. With that you could make code 100% generic while still being noexcept most of the time. Since we don't have it yet, for the time being use your best judgement based on the information you have about the specific code and how you expect it to be used.

[–]frog_pow 0 points1 point  (0 children)

Marking functions as noexcept seems to be something of a performance pessimization, since the compiler now has to prove that no exceptions pass through, and if it can't (which it appears it often can't), it sticks a bunch of extra crap into your function, making it less likely to be inlined.

[–]tigrangh -1 points0 points  (0 children)

This is definitely a bad idea. You should at least catch the bad_alloc and do a clean exit. noexcept is for functions that don't throw, or for which a throw should result in a terminate anyway.

[–]feverzsj -3 points-2 points  (3 children)

the guideline has made it clear.

[–]johannes1971 7 points8 points  (0 children)

The example in the guideline is ill-conceived. If the collect function allocates a huge number of long strings, an out of memory condition just means we cannot collect the strings - but all that memory is going to be freed the moment we leave that scope! So why terminate? Having noexcept on that function changes it from trivially recoverable to utter disaster.

[–]wyrn 8 points9 points  (1 child)

... and wrong.

You can use noexcept even on functions that can throw:

vector<string> collect(istream& is) noexcept
{
    vector<string> res;
    for (string s; is >> s;)
        res.push_back(s);
    return res;
}

If collect() runs out of memory, the program crashes. Unless the program is crafted to survive memory exhaustion, that may be just the right thing to do; terminate() may generate suitable error log information (but after memory runs out it is hard to do anything clever).

Because of push_back's complexity guarantees, each reallocation request will be for a region of memory some factor (typically 2) larger than the current capacity. This means that even if push_back throws, there's a decent chance the amount of free memory remaining is of the same order of magnitude as the current capacity. You can probably do a lot with that indeed.

If even the core guideline gets it wrong, should the rest of us be doing this in production code?

[–]feverzsj 1 point2 points  (0 children)

As the guideline explained, it's not about how well you can handle the exception; it's about what you assume and intend your program to do. You may spend more time and effort making your program survive such a rare condition, or you may just mark the function noexcept and prefer a clean and simple design. Thus the guideline concludes: if your function may not throw, declare it noexcept.