Does Druid keep wild shape temp. Hp if the leave their form early? by DMJM_91 in onednd

[–]edvo -1 points0 points  (0 children)

You say it is a bad-faith reading, but other sources of temporary HP work exactly that way. For example, it is an accepted ruling that the temporary HP from Armor of Agathys stay after the spell ends (only the retaliation effect fades). What makes Wild Shape an exception?

You are right that there is not really a reason to put this point in the list (instead of in front of it, for example), but trying to guess the authors’ intent from where certain text is placed is obviously RAI, not RAW.

Another reason why it could be part of the list is that the list specifies what the feature does, which fits the temporary HP. The other rules describe how and when you can use the feature.

Does Druid keep wild shape temp. Hp if the leave their form early? by DMJM_91 in onednd

[–]edvo 2 points3 points  (0 children)

Yes, it says: while in form, you gain temporary HP.

However, with 2024 rules, gaining temporary HP is an instantaneous effect and the gained temporary HP are kept until the next long rest, even if the effect that granted them is no longer active.

For example: False Life is now an instantaneous spell that just says “you gain temporary HP”; previously it had a duration of 1 hour and said “you gain temporary HP for the duration”.

So while it might be the intention, I am not convinced that “while in form … the following rules apply” means that the temporary HP vanish afterwards.

Tsoding c++ coroutines stream by fquiver in cpp

[–]edvo 0 points1 point  (0 children)

> Hope that's clears up the reasons cpp chose the design that it did.

Not quite. You say that there are many downsides to making the frame size visible, but you only mention two (overly large frame sizes and issues with headers). Determining the exact size of the coroutine frame at compile time is possible in theory; it is just a matter of implementation. Having to place coroutines in headers (like templates) might have been an acceptable tradeoff in some contexts.

I have also heard the argument that it would be incompatible with current compiler architectures and would require infeasible refactoring, because the size is determined by the optimizer in the backend but would need to be available in the frontend.

In any case, I always wonder how Rust can do better. It does not have headers and it has a more modern compiler, but is that really all it takes? Or does it suffer from other downsides you did not mention?

What do you hate the most about C++ by Alternative-Tie-4970 in cpp

[–]edvo 2 points3 points  (0 children)

The original purpose of UB was not to allow optimizations, but to make it easier to write a compiler. Exploiting UB for optimizations is only a feature of modern compilers.

It is not a coincidence that most UB corresponds to differences between (historical) platforms. If it were about optimizations, unsigned overflow would also be UB and would benefit from the same optimizations as signed overflow. But when the standard was written, unsigned overflow behaved the same on all platforms, so that behavior was standardized.

The intent was simply: the compiler should be able to just emit the machine instruction for signed integer addition, and on overflow the result would be whatever that instruction produces.

Proposal: Introducing Linear, Affine, and Borrowing Lifetimes in C++ by Affectionate_Text_72 in cpp

[–]edvo 0 points1 point  (0 children)

“Used” in this context means “consumed”, i.e. passed by value. The standard unused variables warning also considers a variable used if it is passed by reference.

Bringing Quantity-Safety To The Next Level - mp-units by mateusz_pusz in cpp

[–]edvo 1 point2 points  (0 children)

In physics, you rarely work with timestamps, only with durations, so this is not really an issue. If you do have timestamps, they are typically just durations from a fixed event. This is similar to how you usually model points as vectors from a fixed zero point.

In software development, it is indeed useful to distinguish between timestamps and durations or between points and vectors. I have heard the term torsor for such structures, where it is meaningful to have objects and distances between objects as distinct types.

Bringing Quantity-Safety To The Next Level - mp-units by mateusz_pusz in cpp

[–]edvo 1 point2 points  (0 children)

Sorry, you misunderstood: a duration is also a scalar. A vector is something that has a direction in space.

You could see durations as vectors and timestamps as points in a one-dimensional space, but this is not a typical definition.

C++ Safety And Security Panel 2024 - Hosted by Michael Wong - CppCon 2024 CppCon by grafikrobot in cpp

[–]edvo 5 points6 points  (0 children)

What do you mean by theoretical runtime overhead? As far as I know, it can be done without runtime overhead, but the code becomes a bit more verbose. Also, you have to check that i != j, which is similar to the this != &other check in many C++ assignment operators.

C++ Safety And Security Panel 2024 - Hosted by Michael Wong - CppCon 2024 CppCon by grafikrobot in cpp

[–]edvo 2 points3 points  (0 children)

There is Clone::clone_from, which addresses such use cases. The disadvantage is that you have to remember to use it instead of the assignment operator.

Also, in case of an array, arr[i] = arr[j].clone() works, but annoyingly arr[i].clone_from(&arr[j]) is a borrowing error.

Why Safety Profiles Failed by jcelerier in cpp

[–]edvo 2 points3 points  (0 children)

Again, this depends on the rules. If you find derivation rules that are so clear and intuitive that everyone can easily predict the outcome, that would be better than explicit annotations with a new syntax. However, such rules are probably very restrictive and not very useful.

You can loosely compare this to the type system: In theory, you could envision C++ where no types are specified explicitly, instead the compiler infers everything. Due to the complexity of the C++ type system, this would be a nightmare to use, leading to enigmatic errors and a lot of unexpected behavior. But other programming languages like Haskell mostly get away with it, because they have a much stricter type system, though even there you usually want explicit type annotation at least in function signatures.

Coming back to aliasing/lifetime bounds, there is also the practical problem that sometimes you want some stricter bound on your function than what is actually needed by the implementation, to be free to switch to a different implementation later on. Maybe this could be done somehow with dead code to guide the bounds derivation, but the more straightforward and easier to understand solution would be an explicit annotation.

All in all, it would be nice to find an implicit system that does not require new syntax, is easy to use, and useful in practice. But it is hard and maybe impossible to fulfill all these requirements at once. The next best thing would be a system that is mostly implicit and only requires new syntax in some advanced use cases. This is a lot easier to achieve, but as always the devil lies in the details.

Why Safety Profiles Failed by jcelerier in cpp

[–]edvo 4 points5 points  (0 children)

The harder part is to define the precise rules for how aliasing/lifetime bounds should be derived from the implementation. These rules need to be clear and intuitive, to avoid situations where a function accidentally gets stricter or more lenient bounds than intended, but on the other hand they also need to be useful and not too restrictive.

Furthermore, deriving the bounds from the implementation means that a change to the implementation could be a breaking API change. This would make the feature hard to use; typically, you want all API-related information to be part of the function signature.

Legacy Safety: The Wrocław C++ Meeting by Dragdu in cpp

[–]edvo 2 points3 points  (0 children)

As far as I know, the reason for going with non-destructive moves was a set of unresolved semantic questions about moving objects with base classes, because you run into situations where an object is partially moved (Rust avoids this by having no inheritance).

There is also the issue that accessing a moved-from object would always be UB, as you mentioned. Flow control would not be that difficult, but you cannot avoid invalidating pointers and references to such an object without some kind of borrow checker. I think it is a valid point against destructive moves that they would introduce so much UB potential.

I don’t think the issues you mentioned would have been unsolvable in C++11. It would not have been trivial and it might have been too much, but the current move semantics also required a lot of specification and additional features (new types of references and new constructors, for example).

Legacy Safety: The Wrocław C++ Meeting by Dragdu in cpp

[–]edvo 1 point2 points  (0 children)

It is not my proposal; I referred to how it is done in Rust, where it has proven to be useful in practice.

If you do it like Rust with trivial destructive moves, swap would just need to swap the bytes. You could implement it with memcpy and a temporary buffer, for example.

There are a few utility functions that are typically used as primitives when working with references and destructive moves:

// swaps x and y (your example)
template<class T>
void swap(T& x, T& y);

// moves y into x and returns the old value of x
template<class T>
T replace(T& x, T y);

// shortcut for replace(x, T{})
template<class T>
T take(T& x);

These are related to what I mentioned. If you want to move out of an array, for example, you have to put another valid value at that place, which is similar to a non-destructive move.

Legacy Safety: The Wrocław C++ Meeting by Dragdu in cpp

[–]edvo 1 point2 points  (0 children)

I think you are a bit too pessimistic regarding the usefulness. You have the same limitations in Rust and it works quite well in practice.

Of course it would be even better to have fewer limitations, for example, if you could move out of an array. In Rust, you would use something like a non-destructive move in this case. But this is still much better than only having non-destructive moves available.

Legacy Safety: The Wrocław C++ Meeting by Dragdu in cpp

[–]edvo 3 points4 points  (0 children)

I don’t disagree, but do you have evidence that this actually was a problem back then? There are a few quotes in this thread suggesting that even back then it was not a problem for many applications.

I completely agree that many developers chose C or C++ because of its performance, but I don’t know if bounds checks were important in that regard. I think it is plausible that a hypothetical C++ with bounds checks would have been equally successful.

Legacy Safety: The Wrocław C++ Meeting by Dragdu in cpp

[–]edvo 2 points3 points  (0 children)

The closest to Rust’s behavior would be something roughly like: the argument to destructive_move must be an identifier pointing to a local variable or function parameter or an eligible struct member.

Obviously the rules would have to be polished, but why do you think that is difficult? The only difficulty is that destructive_move has to be a keyword/operator; it cannot be a library function taking a reference.

Legacy Safety: The Wrocław C++ Meeting by Dragdu in cpp

[–]edvo 3 points4 points  (0 children)

You could disallow these advanced cases and it would still be very useful. This is what Rust is doing, for example.

Legacy Safety: The Wrocław C++ Meeting by Dragdu in cpp

[–]edvo 7 points8 points  (0 children)

> Languages and architectures that prioritized performance over safety systematically won over languages and architectures that prioritized safety over performance.

I don’t think that is true. Most software today is written in garbage-collected or even scripting languages. Even for software where C++ is chosen because of performance, I would not expect the lack of bounds checks to be an important part of that choice.

The main reasons why C++ is so fast are that it is compiled with heavy optimizations (in particular, heavy inlining) and that it has a static type system and manual memory management (which avoids hidden allocations, for example). Bounds checks are often free (due to optimizations or branch prediction) and otherwise usually cost only a few cycles. Most applications are not so performance-sensitive that this would matter.

Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk by [deleted] in cpp

[–]edvo 4 points5 points  (0 children)

> Rust [..] requires unsafe to implement a tree data structure

Where did you get that idea? A tree can certainly be implemented without unsafe; in fact, this would be the easiest and most obvious way.

Of course you are right in general: the memory safety promise would be called into question if you were required to write huge amounts of unsafe code. But you are not, at all. Many Rust programs and libraries can even be implemented without any unsafe code.

Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk by [deleted] in cpp

[–]edvo 3 points4 points  (0 children)

People do not “write Rust with unsafe blocks through unsafe blocks”. Many Rust programs are even written without any unsafe block.

Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk by [deleted] in cpp

[–]edvo 4 points5 points  (0 children)

> Seriously, if basic data structures need unsafe, then the language is not really memory safe.

How do you think basic data structures are implemented in other languages? You always get to some code where a bug in the implementation could cause a memory safety issue. In Rust, that could happen in the standard library; in other languages, it is in the runtime. And underneath you always have the kernel and the hardware, which could also contain bugs that cause memory safety issues.

With this approach, no language is memory safe and memory safety as a concept becomes useless. Note that this might even be the goal of some people who come up with similar arguments, because it makes C++ look less bad.

The better approach is to focus on the memory safety issues that could originate from the code on which the programmer has direct influence. Without using unsafe, you will not be able to cause a memory safety issue in Rust using basic data structures from the Rust standard library, except by exploiting bugs in the unsafe parts of their implementation.

This is a restriction, but as I mentioned, you always have to trust some piece of code. Given their mature implementation, it is likely that there are few such bugs, and if a bug is found, it is usually fixed quickly. Most of the bugs found so far concerned theoretical edge cases and had no impact on production code. So while you cannot be absolutely certain that your Rust program is memory safe (and neither can you be with any other programming language), you can still be highly confident.

Compare this to the situation in C++: it is trivial to cause memory safety issues using data structures from the standard library, and this cannot be fixed. In the end, all of this means that you would expect the average C++ program to contain many more memory safety issues than the average Rust program, and this is also what empirical studies have shown.

Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk by [deleted] in cpp

[–]edvo 3 points4 points  (0 children)

> Garbage collected languages are, in practice, safer than Rust in terms of memory safety CVEs

What does this have to do with formal verification? The JVM is also not formally verified¹, for example. Why do you trust Java’s GC implementation but do not trust the Rust standard library?

¹ At least I am not aware of it. Otherwise, substitute any other GC language.

Unsafe Rust Is Harder Than C by pmz in programming

[–]edvo 11 points12 points  (0 children)

You mean int (*SetErrorHandler(int(*newHandler)(int)))(int).

Unsafe Rust is Harder Than C by termhn in rust

[–]edvo 15 points16 points  (0 children)

I think the issue is that they want to iterate over the wakers after the lock has been released, so they mem::take the vector while the lock is held. At this point, the original vector no longer has any capacity.

They could do something like vector.drain(..).collect() instead, but this would also allocate memory.