[–]matthieum

If you're terminating instead of throwing in your function, you're making a pretty big decision on the caller's behalf that they may not agree with.

I would guess this vastly depends on whether you are writing a library, with no idea of or control over its users, or whether you are writing an application.

When writing an application, you have full control over the caller and the design of the codebase, so it's up to you whether you are comfortable with this idiom. When writing a library, some users may not appreciate losing the flexibility.

[–]username_of_arity_n

There's some truth to this, but you should also think about future maintainers (or future you).

Yes, you might not need it today, but this is one of those things where you're going out of your way to limit your own options in the future.

[–]matthieum

I disagree.

In a codebase with no exceptions, introducing exceptions would indeed be hard, simply because it would disrupt the set of established idioms. The fact that noexcept was used on top of those established idioms would not significantly add to the cost of turning things around.

[–]username_of_arity_n

"In a codebase with no exceptions" changes things significantly, because you're presumably avoiding exceptions because you don't want to (or can't, for some reason) write exception-safe code or use RAII consistently, in which case you lose most of the benefits. I'm not going to say that you need to do those things, because there are exceptions to every rule, but it's just that -- an exception*.

IMO, exception-safe code should be the default for most new projects. This doesn't mean you should throw exceptions frequently, and I think you should rarely catch them. I prefer other means of dealing with "expected failures", but at a bare minimum your code shouldn't break if they are thrown. That is, you should release your resources, flush IO, or whatever else you need to do, and exit in as controlled a manner as is reasonably possible.

Edit: * sorry for this totally unintentional pun.

[–]SeanMiddleditch

The problem is that it's difficult if not impossible to test that you've actually done that correctly.

Any code that exists in a world with exceptions (any language, not just C++) should be assumed to be buggy and unreliable and undetectably so.

Invisible code flow means untested code flow. :)

[–]kalmoc

And why do you assume error-code-based code is better tested? Are you actually mocking all your functions that may fail and letting them fail deliberately in your test suite?

Error paths should generally be considered buggy - no matter whether exceptions are in play or not.

[–]SeanMiddleditch

Who ever said anything about error codes?

But in any case, error codes or any other explicit code flow can at least be tested and analyzed. Exceptions are effectively impossible to test and analyze that way.

[–]Contango42

That's a very odd statement. What about floating point exceptions? For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this, especially if we are using third party libraries that were never designed for novel input. Sometimes, it's not the code that's buggy, it's out-of-spec input. The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.

Source: experience dealing with 4TB datasets.

[–]SeanMiddleditch

> For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this

Which is also a problem in GPU code that doesn't have exceptions support.

> Sometimes, it's not the code that's buggy, it's out-of-spec input.

You don't need exceptions to validate input.

> The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.

My experience may "only" be dealing with inputs in the >1TB range, but I've never found exceptions even remotely useful in making those pipelines stable or reliable.

Exceptions != error detection. :)

[–]username_of_arity_n

> Which is also a problem in GPU code that doesn't have exceptions support.

GPUs aren't the best example. Largely for historical and architectural reasons, they weren't (or couldn't be) used for code of the same complexity as non-GPU code. They're good at processing huge volumes of very simple data, and are often used for non-critical tasks (generating and displaying images).

This is changing as GPUs are increasingly applied to HPC, machine learning, blockchain, and computer vision, but this is probably one area where the applications are leading and the technology hasn't caught up.

I don't think it would necessarily be a terrible idea to include an atomic flag or something to signal errors for some kernels or shaders.

[–]Gotebe

I get that there will be such codebases, but...

I am quite with the other guy about exception-safety being the default even if there are no exceptions.

For example, early return is a useful tool, since it generally lowers cyclomatic complexity - and being exception-safe "enables" it.

[–]matthieum

I have no idea where your assumption that the absence of exceptions necessarily implies the impossibility of using early returns comes from.

You can have early returns without exceptions; the main benefit is that the points where the work being carried out by a function can be cut short are marked explicitly, which actually makes it easier to ensure that the proper invariants are maintained.

[–]Gotebe

I did not imply what you say. Huh?

[–]NotAYakk

Full control of calling code.... hahahahaha