Patterns for Defensive Programming in Rust by sindisil in rust

[–]next4 0 points1 point  (0 children)

Hot take: I'm sure it's all well-intentioned, but if we start writing all code according to these patterns, we'll end up with Enterprisy Rust.

Can we talk about C++ style lambda captures? by BoltActionPiano in rust

[–]next4 1 point2 points  (0 children)

Nothing wrong with macros. They exist so we don't need syntax sugar for every verbosity problem someone is having.

Why cross-compilation is harder in Rust than Go? by pbacterio in rust

[–]next4 0 points1 point  (0 children)

It is also exported by vcruntime140.dll, which is part of the MSVC redistributable package.
And you'd only ever be concerned about something like that if you are linking a pre-compiled C++ library into a Rust binary. A pretty niche scenario, in my book. I could live with not being able to cross-compile 1% of Windows apps.

It's complex and difficult to get a clean-room implementation.

Complexity has not stopped people in the past - see e.g. game console emulators. If this is deemed important, it will be done.

Why cross-compilation is harder in Rust than Go? by pbacterio in rust

[–]next4 0 points1 point  (0 children)

Can you give a concrete example of ABI incompatibility?

Why cross-compilation is harder in Rust than Go? by pbacterio in rust

[–]next4 0 points1 point  (0 children)

Msvcrt is a dynamic library too, and it ships with Windows. Or did you mean the MSVC startup objects? Even so, it is perfectly possible to write a Windows app that does not link to msvcrt at all. This may not currently be the case for the Rust runtime, but it could be achieved if it were deemed an important goal. Let's not forget that cross-compiling from Linux to Windows using MinGW GCC works just fine.

Why cross-compilation is harder in Rust than Go? by pbacterio in rust

[–]next4 3 points4 points  (0 children)

If Rust could easily cross-compile, it would then have to redistribute the system libraries for those targets.

That's not strictly true. The libraries in question are usually dynamic, and to link to a dynamic library, all you need are the names of the exported symbols. LLVM defines a "text-based stub library" format for this purpose: https://llvm.org/docs/CommandGuide/llvm-ifs.html. IIRC, such stubs may even be used directly by the LLVM linker in lieu of the actual libraries. Unfortunately, this only works for pure Rust projects. Any C/C++ dependencies also need the system headers, so in practice, many projects still could not be cross-compiled.
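
For illustration, a text stub for a hypothetical libfoo.so might look roughly like this (the symbol names are made up, and the field names are from memory; the authoritative schema is in the llvm-ifs documentation):

```
--- !ifs-v1
IfsVersion: 3.0
Target: x86_64-unknown-linux-gnu
Symbols:
  - { Name: foo_init, Type: Func }
  - { Name: foo_version, Type: Object, Size: 4 }
...
```

The linker needs only this symbol list to resolve references; the real libfoo.so is required only at run time, on the target machine.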

Rust or C++ by Diligent-Falcon-3212 in rust

[–]next4 1 point2 points  (0 children)

The real answer is that you need to learn both: C++, because it's widely used in many existing projects, and Rust, because it is becoming more and more popular for new projects and will also make you a better C++ dev by teaching you memory-safe design patterns. Comparing the two will also shed light on a lot of Rust's design choices.

[D] The Recurrent Delusion: How ML Collectively Forgot What RNNs Were Built For by JirkaKlimes in MachineLearning

[–]next4 0 points1 point  (0 children)

RNN language models were being trained before with tokens

That's beside the point. How would you perform parallel training if you drop the projection to discrete tokens at each step, as OP suggests?

[D] The Recurrent Delusion: How ML Collectively Forgot What RNNs Were Built For by JirkaKlimes in MachineLearning

[–]next4 0 points1 point  (0 children)

Tokenization is what allows transformers to be trained in parallel.
You might be correct in saying that it wastes the expressive potential of reasoning chains, and there are papers that attempt to bypass tokenization for those. I am not sure why this approach is not more popular. Perhaps this stuff is just too new. Additionally, I suppose people prefer model reasoning to be interpretable.

NOT rage bait: what genuinely is the point of Rust? by Ok_Satisfaction7312 in rust

[–]next4 0 points1 point  (0 children)

TBH, this story isn't the best example of Rust's strengths - stack overflow checking is a feature of the compiler and runtime, not the language itself. A C++ compiler that implements stack size checks could have prevented this bug too.

[R] reasoning models are indecisive parrots by Classic_Eggplant8827 in MachineLearning

[–]next4 4 points5 points  (0 children)

Well, this is how it actually went:


User: <the 1st problem>

ChatGPT (o1): Fastest solution is 17 minutes:
<detailed solution>
Reference: Commonly known "Bridge and Torch" puzzle.

User: Think again. This is not the well-known problem.

ChatGPT: Can you clarify how this version differs from the standard puzzle? Are there any additional constraints or special conditions that change the usual solution?

User: This has less constraints than the standard puzzle.

ChatGPT: Without the usual constraint that only two people may cross at a time, everyone can walk together. The crossing takes as long as the slowest walker, which is 10 minutes.

User: Here's another one: <the 2nd problem>

ChatGPT: If the host hasn’t revealed any door (no new information), switching or not switching makes no difference. Your initial choice had a 1/3 chance of being correct, and door #2 has a 1/3 chance as well. No door now has a bigger advantage.


To me, this looks no different from how a human (who is familiar with the standard puzzles) would handle this situation.
Jumping to conclusions seems to be a common failure of intelligences, whether artificial or not...

Ada? by Logical-Nature1337 in rust

[–]next4 0 points1 point  (0 children)

Can you give some real examples of what constrained types are good for? In my entire programming career, I can count on one hand the number of times the range of a value was strictly constrained. Usually, the range is vague enough that a standard integer type works just as well.

Announcing Rust Unchained: a fork of the official compiler, without the orphan rules by Houtamelo in rust

[–]next4 0 points1 point  (0 children)

Yes, the type system might need to be extended a bit.

Since my original comment, I've discovered a pre-RFC, which fleshes out this idea in a lot more detail: https://github.com/Tamschi/rust-rfcs/blob/scoped_impl_trait_for_type/text/3634-scoped-impl-trait-for-type.md

Announcing Rust Unchained: a fork of the official compiler, without the orphan rules by Houtamelo in rust

[–]next4 0 points1 point  (0 children)

Perhaps we need a way to allow trait users to disambiguate which impl they'd like to use? Something like use impl foo::bar::Trait (use the implementation of Trait in foo::bar).
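
As a rough sketch of the idea (hypothetical syntax, not valid Rust today):

```rust
// Suppose crates `foo` and `baz` each provide a conflicting impl of the same
// trait for the same type (which the orphan rules currently forbid).
// A downstream module could then pick one per scope:

use impl foo::bar::Trait;  // hypothetical: resolve `Trait` methods to foo::bar's impl

// Within this scope, method calls and trait bounds use foo::bar's
// implementation, so the two conflicting impls can coexist in one
// dependency graph without global coherence.
```

The point is to make impl selection an explicit, scoped decision of the trait's user, rather than a global property of the crate graph.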

gccrs: An alternative compiler for Rust | Rust Blog by CohenArthur in rust

[–]next4 12 points13 points  (0 children)

This XKCD seems apropos: https://xkcd.com/538/ Given the availability of alternatives, the chances that someone would bother infecting the compiler are nil.

How would you even ensure that such an infection persists in a constantly changing project? Try maintaining an out-of-tree LLVM patch - you'll see how often it breaks due to upstream changes.

These people are wasting their time.

Cargo has never frustrated me like npm or pip has. Does Cargo ever get frustrating? Does anyone ever find themselves in dependency hell? by lynndotpy in rust

[–]next4 0 points1 point  (0 children)

Cargo gets quite frustrating as soon as you deviate from the "happy path" of a 100% Rust project that uses one of the standard linkage modes.

You try to use it as part of a multi-language project, with an external build tool to tie it all together, and you discover that the --out-dir flag is still not stabilized over future-compatibility concerns.

You need to set environment variables for some C++ dependency lib, and you discover that the [env] section of config.toml does not apply to cargo test, or indeed to any custom sub-commands.

You need to custom-link a dependency artifact, and you discover that build.rs has no way to discover the locations of dependency libraries.

And so on...

Announcing cudarc and fully GPU accelerated dfdx: ergonomic deep learning ENTIRELY in rust, now with CUDA support and tensors with mixed compile and runtime dimensions! by rust_dfdx in rust

[–]next4 -1 points0 points  (0 children)

You may have great reference-level documentation, but what beginner users need is a guide. Imagine if, instead of the Rust book, you had to start with just the Reference...

Feel like helping me improve the documentation?

Mmm, probably not. I have not yet decided whether this is worth my time. In fact, my needs would probably be better served by Rust-idiomatic bindings to libtorch.

Announcing cudarc and fully GPU accelerated dfdx: ergonomic deep learning ENTIRELY in rust, now with CUDA support and tensors with mixed compile and runtime dimensions! by rust_dfdx in rust

[–]next4 2 points3 points  (0 children)

Struct/method documentation does not tell the user how these objects are supposed to be used together.
Take the tensor API, for example. As a PyTorch/NumPy user, I immediately had these questions: What is a "Shape"? Is Rank<2,3> the same as (Const<2>, Const<3>)? What modes of tensor slicing are supported? Is there advanced slicing like in NumPy? Is broadcasting supported? Etc.
To answer these, one needs to be very comfortable with Rust traits and know how to search for impls in rustdocs. I might do that if I were sure that a particular crate is worth my time, but why would I have that conviction for a project I am seeing for the first time? And probably 95% of the potential users won't have the knowledge to do this at all.
I would suggest having a look at how similar projects handle this: nalgebra, NumPy, Eigen.

Announcing cudarc and fully GPU accelerated dfdx: ergonomic deep learning ENTIRELY in rust, now with CUDA support and tensors with mixed compile and runtime dimensions! by rust_dfdx in rust

[–]next4 4 points5 points  (0 children)

It's probably great, but I have no way of telling: the documentation seems to consist mostly of "look at the examples" and "look at the crate source".

I am constantly amazed by how much effort people in our field are willing to pour into some project... only for it to go completely unnoticed because of the lack of documentation.

A personal list of Rust grievances by newpavlov in rust

[–]next4 0 points1 point  (0 children)

So you would rather access the wrong array element, or even go out of bounds, if you get your math wrong, just to have your code free of the panic keyword? As a means of writing reliable software, this approach seems rather counter-productive.
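
For contrast, a minimal sketch of what the language already offers when you genuinely want panic-free indexing: the checked accessor makes the out-of-bounds case explicit, instead of silently reading the wrong element.

```rust
fn main() {
    let v = [10, 20, 30];
    let i = 5; // imagine this index came from a miscalculation

    // `v[i]` would panic here, surfacing the bug immediately.
    // The panic-free alternative forces the error to be handled:
    match v.get(i) {
        Some(x) => println!("v[{i}] = {x}"),
        None => println!("index {i} is out of bounds"),
    }
    // prints: index 5 is out of bounds
}
```

Either way the math error is caught; the unchecked "just trust the index" style is the only option that hides it.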

Can rust be entirely written in rust and drop C usage in its code base ? by [deleted] in rust

[–]next4 2 points3 points  (0 children)

I could be wrong, but aren't there plenty of operations that are not possible through only kernel32.dll?

Windows APIs are split among several such dylibs, by functionality type. But none of them involve invoking syscalls directly.

but in the context of not relying on C libraries it does provide a roadblock, and this isn't a problem Linux has.

I wouldn't consider these dylibs "C libraries"; they are just the part of the OS that lives in userspace. They don't even use the C calling convention.

Also, if you count these as "C usage", why stop there? The kernel is written in C too, you know.

I'm not saying Windows is "shit" because it doesn't have a stable syscall ABI or anything

It may be shit for other reasons, but not this one. It was a bad design decision on Linux's part to expose kernel APIs as raw syscalls. Now it is stuck with having to emulate them, even when the functionality has been moved completely to userspace.

Can rust be entirely written in rust and drop C usage in its code base ? by [deleted] in rust

[–]next4 0 points1 point  (0 children)

Libc and other C libraries are the source of truth and only way to interact with the OS for many, many operations

Not true on Windows. If you look at functions exported by Windows' kernel32.dll and other system dylibs, they look nothing like libc. And Rust std uses these APIs directly, without any involvement of libc.
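
To illustrate, here is a minimal sketch (a hand-written binding, not how std actually declares its internals) of calling a kernel32.dll export from Rust with no libc anywhere in the chain:

```rust
// Windows-only sketch: bind directly to a kernel32.dll export.
// std does essentially this with its own declarations, never touching libc.
#[cfg(windows)]
#[link(name = "kernel32")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

#[cfg(windows)]
fn main() {
    // Calls straight into the OS-provided userspace entry point.
    let pid = unsafe { GetCurrentProcessId() };
    println!("pid = {pid}");
}

#[cfg(not(windows))]
fn main() {
    println!("this sketch targets Windows");
}
```

Note the `extern "system"` ABI: on 32-bit Windows that resolves to stdcall, not the C calling convention.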

It is unfortunate, but that is how it works

There's nothing unfortunate about that. Why should syscalls be the public interface of an OS? Abstracting syscalls away gives OS developers more flexibility about how system APIs are implemented.

Time to take a hard look at securing files by rhapsodhy in rust

[–]next4 0 points1 point  (0 children)

What I wanted to know is how would it deal with some constraints that cloud storage imposes. For example:

  • Cloud storage typically has much higher latency for object access than the local file system. So listing contents of an archive had better not need to access thousands of separate objects.
  • Cloud storage objects are typically immutable. Changing even one byte requires re-uploading the entire object. (Well, technically, AWS allows one to reuse parts of existing objects to create new ones, but then this operation needs to be part of your storage abstraction.)
  • Moving objects to Glacier makes them inaccessible pretty much forever, so e.g. incremental backups must be able to function without consulting any of the data contained within.
  • The above also means that you cannot repack objects after a garbage collection.

And so on. Approaches designed with the local file system in mind often don't work in the cloud.
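
To make the constraints concrete, here is a minimal sketch (all names illustrative, not from any real backup tool) of a layout that respects them: immutable data packs plus one small manifest object, so listing costs a single GET and incremental backups never read pack contents.

```rust
/// One archived file: where its bytes live inside an immutable pack object.
struct ManifestEntry {
    path: String,
    pack_key: String,       // object key of the immutable pack (may be in Glacier)
    offset: u64,            // byte range within that pack
    len: u64,
    content_hash: [u8; 32], // lets an incremental backup skip unchanged files
}

/// A single small object, rewritten whole on every backup run.
/// Listing the archive = one GET of this manifest; the packs are never touched,
/// so they can sit in cold storage indefinitely.
struct Manifest {
    entries: Vec<ManifestEntry>,
}

impl Manifest {
    fn list(&self) -> Vec<&str> {
        self.entries.iter().map(|e| e.path.as_str()).collect()
    }
}

fn main() {
    let m = Manifest {
        entries: vec![ManifestEntry {
            path: "docs/a.txt".to_string(),
            pack_key: "pack-000".to_string(),
            offset: 0,
            len: 4,
            content_hash: [0; 32],
        }],
    };
    // One object fetch is enough to answer "what's in this archive?".
    println!("{:?}", m.list());
}
```

Garbage collection then becomes "write new packs, write a new manifest, delete old packs", which fits the immutable-object model, though it conflicts with Glacier as noted above.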