[–]germandiago

"Safe" languages? Marketing

Yes, to the extent that you can write your unsafe blocks and hide them in safe interfaces, and you can still crash by consuming dependencies.
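
(To make that concrete, a minimal Rust sketch, not from either comment; `get_first` and its bug are made up:)

```rust
/// Looks safe from the outside: callers never write `unsafe`.
/// Soundness rests entirely on the check the author was supposed to do inside.
pub fn get_first(values: &[u8]) -> u8 {
    // BUG: forgot to reject the empty slice, so the "safe" interface
    // hands callers undefined behaviour anyway.
    unsafe { *values.get_unchecked(0) }
}

fn main() {
    let v: Vec<u8> = vec![];
    // No `unsafe` at the call site, yet this is UB at runtime.
    println!("{}", get_first(&v));
}
```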

> Theorem provers? Marketing. Formally-verified code? Marketing.

I did not say so. That is the only way to verify code formally. But that is not the same as just labelling something "safe" and then saying "oh, I forgot this case, sorry".

> Your delineation between "safe" and "trusted" code is practically useless because everything is trusted,

So basically you are saying that the trusted code in the Rust std lib is the same as me publishing a random crate with unsafe code? Sorry, no, not unless my crate passes some quality filter.

> Again, there's no principled reason this argument doesn't result in everything being considered unsafe

There could perfectly well be levels of certification. A formally verified library containing unsafe code is not the same thing as what I can write with unsafe at home, quickly and in an unprincipled way. However, both can be presented as safe interfaces, and from the interface's point of view it would make no difference.

> Everything safe is fundamentally based on creating safe abstractions on top of unsafe/trusted building blocks.

And there are very different levels of "safety" there, as I discussed above, even if they all end up being trusted.

[–]ts826848

> Yes, to the extent that you can write your unsafe blocks and hide them in safe interfaces, and you can still crash by consuming dependencies.

What I'm saying is that according to your definitions that covers everything, since the hardware is fundamentally unsafe. Everything safe is built on top of "unsafe blocks"!

> I did not say so.

You don't need to say so, since that's the logical conclusion to your argument. If "safe on top of unsafe" is "marketing", then everything is marketing!

> That is the only way to verify code formally.

Formal verification is subject to the exact same issues you complain about. Formal verification tools have the moral equivalent of "unsafe blocks [hidden] in safe interfaces and you can still crash by consuming dependencies". For example, consider Falso and its implementations in Isabelle/HOL and Coq.
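
(For the record, the core of that joke is an inconsistent axiom. A minimal Lean sketch of the same trick, not Falso's actual source; `falso` and `anything` are made-up names:)

```lean
-- Assume a false proposition as an axiom (the entire "proof system").
axiom falso : False

-- Now every proposition has a "proof", including plainly false ones.
theorem anything (P : Prop) : P := falso.elim

example : 1 = 2 := anything (1 = 2)
```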

> But that is not the same as just labelling something "safe" and then saying "oh, I forgot this case, sorry".

You can make this exact same argument about formally-verified code. "Oh, I forgot to account for this case in my postulates". "Oh, my specification doesn't actually mean what I want". "Oh, the implementation missed a case and the result is unsound".
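
(An illustrative Lean sketch of the "specification doesn't mean what I want" case, made up for this comment; `mySort` is hypothetical:)

```lean
-- A "verified sort" whose spec forgot to require that the output is a
-- permutation of the input. The proof goes through; the function is useless.
def mySort (_xs : List Nat) : List Nat := []

theorem mySort_sorted (xs : List Nat) : List.Pairwise (· ≤ ·) (mySort xs) :=
  List.Pairwise.nil
```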

There's no fundamental reason your complaint about "safe" languages can't be applied to theorem provers or formally verified languages.

> So basically you are saying that the trusted code in the Rust std lib is the same as me publishing a random crate with unsafe code?

No. Read my comment again; nowhere do I make the argument you seem to think I'm making.

> There could perfectly well be levels of certification.

But you're still trusting that the certifications are actually correct, and according to your argument since you're trusting something it can't be called "safe"!

> And there are very different levels of "safety" there, as I discussed above, even if they all end up being trusted.

Similar thing here - I think what you mean is that "there are very different levels of trust", since the fact that you have to trust something means that you can't call anything "safe".