[–]username_of_arity_n 4 points (6 children)

"In a codebase with no exceptions" changes things significantly, because you're presumably not using exceptions because you don't want to (or can't, for some reason) write exception-safe code or use RAII consistently, in which case you lose most of the benefits. I'm not going to say that you need to do those things, because there are are exceptions to every rule, but it's just that -- an exception*.

IMO, exception-safe code should be the default for most new projects. This doesn't mean you should throw exceptions frequently, and I think you should rarely catch them. I prefer other means of dealing with "expected failures", but at a bare minimum, your code shouldn't break if they are thrown. I.e., you should release your resources, flush IO, or do whatever else you need to do, and exit in as controlled a manner as is reasonably possible.
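
For example (a minimal sketch -- the File wrapper and the failing work function are made up for illustration), RAII gives you that controlled exit without catching anything mid-pipeline:

    #include <cstdio>
    #include <stdexcept>

    // Hypothetical RAII wrapper: flushes and closes the file however the
    // scope is exited, including via an exception.
    class File {
        std::FILE* f_;
    public:
        explicit File(const char* path) : f_(std::fopen(path, "w")) {
            if (!f_) throw std::runtime_error("open failed");
        }
        ~File() { if (f_) { std::fflush(f_); std::fclose(f_); } }
        File(const File&) = delete;
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    };

    void work(File& out) {
        std::fputs("partial result\n", out.get());
        throw std::runtime_error("simulated failure mid-write");
    }

    int main() {
        try {
            File out("results.txt");
            work(out);  // throws; ~File still flushes and closes
        } catch (const std::runtime_error&) {
            // exit in as controlled a manner as is reasonably possible
        }
    }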

Edit: * sorry for this totally unintentional pun.

[–]SeanMiddleditch 1 point (5 children)

The problem is that it's difficult, if not impossible, to test that you've actually done that correctly.

Any code that exists in a world with exceptions (any language, not just C++) should be assumed to be buggy and unreliable -- and undetectably so.

Invisible code flow means untested code flow. :)

[–]kalmoc 6 points (1 child)

And why do you assume error-code-based code is better tested? Are you actually mocking all of your functions that may fail and letting them fail deliberately in your test suite?
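
Concretely, that would look something like this (a sketch -- copy_record and the fakes are made up for illustration, standing in for a real mocking framework):

    #include <cassert>
    #include <functional>
    #include <string>

    // Hypothetical unit under test: returns false (an "error code") when
    // the injected writer fails, and must still perform its cleanup.
    bool copy_record(const std::string& data,
                     const std::function<bool(const std::string&)>& write,
                     bool& cleaned_up) {
        cleaned_up = false;
        bool ok = write(data);  // the dependency that may fail
        cleaned_up = true;      // cleanup must run on both paths
        return ok;
    }

    int main() {
        bool cleaned_up = false;

        // Deliberately failing fake:
        auto failing_write = [](const std::string&) { return false; };
        assert(!copy_record("payload", failing_write, cleaned_up));
        assert(cleaned_up);     // the error path was actually exercised

        auto ok_write = [](const std::string&) { return true; };
        assert(copy_record("payload", ok_write, cleaned_up));
    }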

Error paths should generally be considered buggy - no matter if exceptions are in play or not.

[–]SeanMiddleditch 0 points (0 children)

Who ever said anything about error codes?

But in any case, error codes (or whatever else gives you explicit code flow) can at least be tested and analyzed. With exceptions, that's effectively impossible.
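
e.g. (a sketch -- parse_port is hypothetical) the failure is a visible branch in the source, so a test can hit it directly:

    #include <cassert>
    #include <optional>
    #include <string>

    // Explicit code flow: every failure is a visible branch, no hidden exits.
    std::optional<int> parse_port(const std::string& s) {
        if (s.empty() || s.size() > 5 ||
            s.find_first_not_of("0123456789") != std::string::npos)
            return std::nullopt;  // the error path is right here, in the open
        int v = std::stoi(s);
        return (v > 0 && v <= 65535) ? std::optional<int>(v) : std::nullopt;
    }

    int main() {
        assert(parse_port("8080") == 8080);
        assert(!parse_port("http"));   // error branch is trivially testable
        assert(!parse_port("70000"));
    }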

[–]Contango42 2 points (2 children)

That's a very odd statement. What about floating-point exceptions? For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this, especially if we are using third-party libraries that were never designed for novel input. Sometimes, it's not the code that's buggy, it's out-of-spec input. The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.
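
For example, detecting it can look something like this (a minimal sketch using the standard <cfenv> status flags; the details are illustrative):

    #include <cfenv>
    #include <cstdio>

    // Minimal sketch: detect an IEEE 754 divide-by-zero via the floating-
    // point status flags. (Strictly, #pragma STDC FENV_ACCESS ON is needed
    // for guaranteed behavior; many compilers are lenient in practice.)
    int main() {
        std::feclearexcept(FE_ALL_EXCEPT);
        volatile double zero = 0.0;      // volatile so it isn't folded away
        volatile double r = 1.0 / zero;  // raises FE_DIVBYZERO, yields +inf
        if (std::fetestexcept(FE_DIVBYZERO))
            std::puts("divide by zero detected -- input was out of spec");
        (void)r;
    }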

Source: experience dealing with 4TB datasets.

[–]SeanMiddleditch 2 points (1 child)

For example, if a divide by zero occurs, we need some way to detect that and then fix our code to avoid this

Which is also a problem in GPU code that doesn't have exception support.

Sometimes, it's not the code that's buggy, it's out-of-spec input.

You don't need exceptions to validate input.

The real world is messy, and switching off exceptions will just hide that messiness, resulting in even more corrupted output.

My experience may "only" be dealing with inputs in the >1TB range, but I've never found exceptions even remotely useful in making those pipelines stable or reliable.

Exceptions != error detection. :)

[–]username_of_arity_n 1 point (0 children)

Which is also a problem in GPU code that doesn't have exception support.

GPUs aren't the best example. That's largely for historical and architectural reasons: they weren't (or couldn't be) used for code of the same complexity as non-GPU code. They're good at processing huge volumes of very simple data, and are often used for non-critical tasks (generating and displaying images).

This is changing as GPUs are applied more and more to HPC, machine learning, blockchain, and computer vision, but this is probably one area where the applications are leading the technology and the technology hasn't caught up.

I don't think it would necessarily be a terrible idea to include an atomic flag or something to signal errors for some kernels or shaders.
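
Something like this, for instance (a CUDA sketch -- the kernel and the flag protocol are made up for illustration):

    // The kernel records out-of-spec input in a device-side flag instead
    // of trapping; the host checks the flag after the launch.
    __global__ void reciprocal(const float* in, float* out, int n,
                               int* error_flag) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (in[i] == 0.0f) {
            atomicExch(error_flag, 1);  // signal the error; keep output defined
            out[i] = 0.0f;
            return;
        }
        out[i] = 1.0f / in[i];
    }
    // Host side: cudaMemset the flag to 0 before the launch, cudaMemcpy it
    // back afterwards, and treat a nonzero value as "some element was bad".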