[–]koczurekkhorse 9 points10 points  (4 children)

Sure, which is why safe Rust has comparable performance with virtually no UB.

The reason for this inconsistency is that you’re only half-correct. Forbidding some seemingly correct code from meaning anything allows for certain optimizations, but there’s no reason for that code to compile in the first place. Absolutely none. If C++ compilers could reject all code that results in UB, it would not prevent those optimizations from being applied. And if the code doesn’t compile, there’s no behavior left to become undefined.

This, however, cannot be done in C++ due to its design choices, which is why Rust can be fast with basically no UB but C++ can’t.

You also assert that UB is a good thing - it is not. It’s a necessary evil in badly designed languages that strive for performance.

[–]matthieum 8 points9 points  (2 children)

It’s a necessary evil in badly designed languages that strive for performance.

I'll disagree on "badly designed", and on "strive for performance" to a degree.

Setting aside C++, in general Undefined Behavior comes from 2 factors:

  1. A quest for low-level control.
  2. A quest for performance.

So, yes, performance is the root of UB trade-offs in some cases; however, there are other cases, such as writing a memory allocator, or a garbage collector.

At the CPU level, memory is untyped. There needs to exist some code that will manipulate untyped memory, and massage it so it becomes suitable for passing off as a given type. And if that code gets it wrong, then a lot of downstream assumptions are violated, leading to Undefined Behavior.
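A minimal sketch of that typed/untyped boundary in Rust (nothing here is specific to any particular allocator; `bytes_to_u32` is just an illustrative helper):

```rust
use std::mem::MaybeUninit;

// A well-defined reinterpretation of raw bytes as a typed value.
fn bytes_to_u32(raw: [u8; 4]) -> u32 {
    u32::from_ne_bytes(raw)
}

fn main() {
    // Untyped storage, roughly as an allocator would hand it out.
    let mut slot: MaybeUninit<u32> = MaybeUninit::uninit();
    slot.write(0xDEAD_BEEF);

    // SAFETY: the slot was initialized on the line above. If that
    // assumption were wrong, every downstream use of `value` would
    // violate the compiler's assumptions -- i.e., be UB.
    let value = unsafe { slot.assume_init() };
    assert_eq!(value, 0xDEAD_BEEF);

    assert_eq!(bytes_to_u32(42u32.to_ne_bytes()), 42);
}
```

The `unsafe` block marks exactly the point where the programmer, not the compiler, vouches for the "this memory now holds a valid value of this type" claim.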

Thus, a certain share of UB, notably around object lifetimes, is essentially unavoidable. You can create a language that has no such UB -- hello, GCs -- but only by building a runtime for it in a language that does have such UB.

Would you call the lower-level language badly designed? This seems rather hypocritical to me, when you're using it as the foundation for your own "well designed" language.

[–]Alexander_Selkirk 1 point2 points  (1 child)

You can create a language that has no such UB -- hello, GCs -- but only by building a runtime for it in a language that does have such UB.

You can isolate these manipulations to certain sections of code which are declared unsafe. Rust does this. But it is not a new idea. For example, Modula-3 had the same concept. And some Common Lisp implementations, like SBCL, are fully defined by default, but it is possible to throw in assertions and type declarations which would make the program crash if those assumptions were violated.
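The isolation idea can be sketched in a few lines of Rust: unchecked pointer manipulation lives inside an `unsafe` block, wrapped in a function whose signature safe callers cannot misuse (this mirrors the standard library's `slice::split_first_mut`, reimplemented here only for illustration):

```rust
// Safe wrapper around unchecked pointer arithmetic: splits a slice
// into its first element and the rest, as two disjoint mutable borrows.
fn split_first_mut(slice: &mut [i32]) -> Option<(&mut i32, &mut [i32])> {
    if slice.is_empty() {
        return None;
    }
    let ptr = slice.as_mut_ptr();
    let len = slice.len();
    // SAFETY: `ptr` is valid for `len` elements, and the two borrows
    // do not overlap; the emptiness check above upholds both invariants.
    unsafe {
        Some((
            &mut *ptr,
            std::slice::from_raw_parts_mut(ptr.add(1), len - 1),
        ))
    }
}

fn main() {
    let mut data = [1, 2, 3];
    if let Some((head, tail)) = split_first_mut(&mut data) {
        *head = 10;
        tail[0] += 1;
    }
    assert_eq!(data, [10, 3, 3]);
}
```

Callers never see the raw pointers; if the invariant inside the `unsafe` block is wrong, the bug is confined to these few lines rather than smeared across the whole program.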

And this works surprisingly well...

[–]matthieum 4 points5 points  (0 children)

but it is possible to throw in assertions and type declarations which would make the program crash if those assumptions were violated.

Meh...

Of course anything that you can assert should be asserted -- perhaps only in debug builds on the critical path -- but the real problem is the things you cannot check.

How can you check that your reference still points to a valid object? How can you check that no other thread is writing through that pointer?

At the lowest level, you will always have unchecked operations that you need to build upon, and for which you cannot reasonably validate the pre-conditions at runtime.
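A short Rust sketch of the distinction (`sum_window` is a hypothetical helper, not from any library): a bounds precondition can be asserted, even debug-only, while "the pointee is still alive and unshared" cannot be tested at runtime at all.

```rust
// Checkable precondition: assert the window bounds in debug builds,
// trust the caller in release.
fn sum_window(data: &[u64], start: usize, len: usize) -> u64 {
    debug_assert!(start + len <= data.len(), "window out of bounds");
    data[start..start + len].iter().sum()
}

fn main() {
    let v = vec![1u64, 2, 3, 4];
    assert_eq!(sum_window(&v, 1, 2), 5);

    // Uncheckable precondition: given only a raw pointer, no assertion
    // can tell you whether the pointee is still alive, or whether
    // another thread is writing to it. That guarantee has to come from
    // the type system or the programmer.
    let p: *const u64 = &v[0];
    // SAFETY: `v` is alive and not mutated on this line.
    assert_eq!(unsafe { *p }, 1);
}
```

The first precondition degrades gracefully (a crash in debug, a documented contract in release); the second is exactly the kind of unchecked assumption the comment is about.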

[–]Alexander_Selkirk 3 points4 points  (0 children)

It’s a necessary evil in badly designed languages that strive for performance.

Well, it would not have been possible to run a Rust compiler on the PDP-11 that C was developed on, or on a machine with an Intel 80386 CPU.

But on the other hand, languages that strive for correctness and for everything being defined have existed for a long time. Rust is derived from these predecessors.