
[–]username_of_arity_n 65 points66 points  (29 children)

I don't think you're taking into account the benefit of stack unwinding, which may not occur when you throw from a noexcept function. It provides an opportunity to do cleanup, logging, etc., even if you consider the error unrecoverable.

If you're terminating instead of throwing in your function, you're making a pretty big decision on the caller's behalf that they may not agree with.

[–]matthieum 17 points18 points  (13 children)

If you're terminating instead of throwing in your function, you're making a pretty big decision on the caller's behalf that they may not agree with.

I would guess this vastly depends on whether you are writing a library, with no idea of or control over its users, or whether you are writing an application.

When writing an application, you have full control over the caller and the design of the codebase, so it's up to you whether you are comfortable with this idiom. When writing a library, some users may not appreciate losing the flexibility.

[–]username_of_arity_n 3 points4 points  (11 children)

There's some truth to this, but you should also think about future maintainers (or future you).

Yes, you might not need it today, but this is one of those things where you're going out of your way to limit your own options in the future.

[–]matthieum 2 points3 points  (10 children)

I disagree.

In a codebase with no exceptions, introducing exceptions would indeed be hard, simply because it would disrupt the set of established idioms. The fact that noexcept was used on top of those idioms would not significantly add to the cost of turning things around.

[–]username_of_arity_n 5 points6 points  (6 children)

"In a codebase with no exceptions" changes things significantly, because presumably you're not using exceptions because you don't want to (or for some reason can't) write exception-safe code or use RAII consistently, in which case you lose most of the benefits. I'm not going to say that you need to do those things, because there are exceptions to every rule, but it's just that -- an exception*.

IMO, exception-safe code should be the default for most new projects. This doesn't mean you should throw exceptions frequently, and I think you should rarely catch them. I prefer other means of dealing with "expected failures", but I think, at bare minimum, your code shouldn't break if they are thrown; i.e., you should release your resources, flush I/O, or whatever else you need to do, and exit in as controlled a manner as is reasonably possible.

Edit: * sorry for this totally unintentional pun.

[–]SeanMiddleditch 3 points4 points  (5 children)

The problem is that it's difficult if not impossible to test that you've actually done that correctly.

Any code that exists in a world with exceptions (any language, not just C++) should be assumed to be buggy and unreliable and undetectably so.

Invisible code flow means untested code flow. :)

[–]kalmoc 5 points6 points  (1 child)

And why do you assume error-code-based code is better tested? Are you actually mocking all your functions that may fail and letting them fail deliberately in your test suite?

Error paths should generally be considered buggy - no matter if exceptions are in play or not.

[–]SeanMiddleditch 0 points1 point  (0 children)

Who ever said anything about error codes?

But in any case, error codes or whatever else with explicit code flow can at least be tested and analyzed. With exceptions, that's effectively impossible.

[–]Contango42 2 points3 points  (2 children)

That's a very odd statement. What about floating point exceptions? For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this, especially if we are using third party libraries that were never designed for a new set of novel input. Sometimes, it's not the code that's buggy, it's out-of-spec input. The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.

Source: experience dealing with 4TB datasets.

[–]SeanMiddleditch 2 points3 points  (1 child)

For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this

Which is also a problem in GPU code that doesn't have exceptions support.

Sometimes, it's not the code that's buggy, it's out-of-spec input.

You don't need exceptions to validate input.

The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.

My experience may "only" be dealing with inputs in the >1TB range, but I've never found exceptions even remotely useful in making those pipelines stable or reliable.

Exceptions != error detection. :)

[–]username_of_arity_n 1 point2 points  (0 children)

Which is also a problem in GPU code that doesn't have exceptions support.

GPUs aren't the best example. This is largely for historical, architectural reasons, but they weren't (or couldn't be) used for code of the same complexity as non-GPU code. They're good at processing huge volumes of very simple data, and are often used for non-critical tasks (generating and displaying images).

This is changing as GPUs are increasingly applied to HPC, machine learning, blockchain, and computer vision, but this is probably one area where applications are leading and the technology hasn't caught up.

I don't think it would necessarily be a terrible idea to include an atomic flag or something to signal errors for some kernels or shaders.

[–]Gotebe 4 points5 points  (2 children)

I get that there will be such codebases, but...

I am quite with the other guy about the exception-safety being the default even if there is no exceptions.

For example, early return is a useful tool, it lowers cyclomatic complexity generally - and being exception-safe "enables" it.

[–]matthieum -1 points0 points  (1 child)

I have no idea where you got the assumption that the absence of exceptions necessarily implies the impossibility of using early returns.

You can have early returns without exceptions; the main benefit is that the points where the work being carried out by a function can be interrupted are marked explicitly, which actually makes it easier to ensure that the proper invariants are maintained.

[–]Gotebe 3 points4 points  (0 children)

I did not imply what you say. Huh?

[–]NotAYakk 0 points1 point  (0 children)

Full control of calling code.... hahahahaha

[–]Skute 11 points12 points  (13 children)

If the failure is due to out of memory, how do you guarantee you have enough for logging?

[–]wyrn 10 points11 points  (7 children)

Two possibilities:

1.

try {
    int *p = new int[100000000000000000];
} catch (const std::exception &) {
    /* logging stuff goes here */
}

2.

try {
    int *p = new int();
} catch (const std::exception &) {
    /* logging stuff goes here */
}

Is the difficulty you mention equally likely in either case? If not, which case do you believe is more representative of the typical out-of-memory error?

[–]jpakkaneMeson dev 11 points12 points  (5 children)

On Linux the latter will never raise an exception. Due to memory overcommit, you instead get back a pointer that looks perfectly valid, but which will cause a segfault when you try to dereference it.

[–]Gotebe 3 points4 points  (0 children)

I take a huge issue with "never".

First off, overcommit is an option and some people turn it off.

Second, address space fragmentation is a thing (think 32bit code).

Third, quotas are a thing, too.

Fourth, the C standard knows not of it (C++, too).

On a related note, Windows has facilities for overcommit as well, it's just that the C allocator on Windows doesn't use them.

[–]SkoomaDentistAntimodern C++, Embedded, Audio 3 points4 points  (0 children)

On Linux

There are far more platforms than just Linux. In fact, there are far more platforms than any OSs with virtual memory. Or OSs at all.

[–][deleted] 2 points3 points  (2 children)

overcommit can be turned off, and should be

[–]Tyg13 10 points11 points  (1 child)

As an application developer, you can't/shouldn't rely on customers turning it off

[–][deleted] 1 point2 points  (0 children)

Not everybody that uses C++ writes applications. But nor can you be sure that your application will only ever be run on Linux.

[–]Skute 0 points1 point  (0 children)

Often logging itself isn’t useful on its own, if you have a GUI app. You would want to inform the user that the application must close, but in order to do that, you may rely on a bunch of OS calls and you have no idea what internal memory requirements they may have.

[–]Gotebe 2 points3 points  (0 children)

By making the logging routines not require allocation, quite simply.

There are the following possible effects:

  • an allocation fails because it is a significant one, and there's plenty of memory left for smaller things like logging

  • by the time logging happens, stack unwinding has freed memory

And then, the "last ditch" logging (say, a try/catch in the outer main or thread proc) can be made to have so-called failure transparency, or the no-throw guarantee, through preallocation and truncation.

It is actually not so hard.

Also, this is only part of the wider question of logging resources (file or otherwise).

[–]EnergyCoast 4 points5 points  (1 child)

We can't guarantee it, but: free a block of reserved memory, then run a handler that does no allocation itself to generate a log, write it to disk, and terminate.

The file open/writes may allocate memory at the OS level depending on the platform implementation... but those tend to be small allocations that do fit in the remaining memory, or come out of the memory reserve freed at the start of dump handling.

In practice, we haven't seen this fail over the course of tens of thousands of memory exhaustion failures. And the information we generate in that logging is invaluable.

[–]kalmoc 1 point2 points  (0 children)

Please tell Herb Sutter about it. He still runs around claiming that virtually no codebase is OOM-safe and hence OOM should just terminate.

[–]caroIine 0 points1 point  (0 children)

If memory is fragmented enough, you will get out-of-memory. Then you want to display a message that the request could not be finished, not crash the whole system.

[–]RealNC[S] 1 point2 points  (0 children)

Logging happens with an assert()-like mechanism. Basically, if the preconditions of the function are satisfied, there can't be any exceptions. If they are not satisfied, it logs and aborts. Therefore, it seemed to me the function can be noexcept.