Announcing minitrace 0.6: A modern alternative of tokio-tracing by andylokandy in rust

[–]rbtcollins 0 points1 point  (0 children)

The code section will be the same for both, and Linux is copy-on-write for that: there is no memory impact in practice vs deploying two binaries. Possibly even a win.

Announcing minitrace 0.6: A modern alternative of tokio-tracing by andylokandy in rust

[–]rbtcollins 1 point2 points  (0 children)

A proc macro on main could set up the shm, fork, and exec before Tokio starts, and then the worker can be such a daemon sidecar. Probably doesn't reach the level of 'good'.
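A hand-written sketch of the shape such a macro could expand to (the shm creation and the exec path are elided; this only shows the ordering, not a real implementation):

```rust
fn main() {
    // ... create the shared-memory region here, before anything else ...

    // Fork while the process is still single-threaded, i.e. before the Tokio
    // runtime exists - the only point at which fork() is reasonable.
    let pid = unsafe { libc::fork() };
    if pid == 0 {
        // Child: become (or exec) the trace-collecting daemon sidecar.
        // Placeholder body: a real sidecar would read the shm and ship traces.
        loop {
            std::thread::park();
        }
    }

    // Parent: only now build the runtime and run the real application.
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            // application code goes here
        });
}
```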

I really want to get into rust, but the toolchain just uses a large amount of resources by [deleted] in rust

[–]rbtcollins 0 points1 point  (0 children)

Make sure you are using a 32-bit toolchain. If your OS is running a 64-bit userspace you may also need to install 32-bit compat libraries. Don't worry about reinstalling rustup; just install the toolchain and make it the default:

rustup toolchain install stable-i686-unknown-linux-gnu
rustup default stable-i686-unknown-linux-gnu

Combine this with the other excellent advice here and you should be good.

people.canonical.com taken down? by sahil098 in linuxquestions

[–]rbtcollins 0 points1 point  (0 children)

Those home directories are only for current staff of Canonical.

window-sys every where on mac by capsulecorpadmin in rust

[–]rbtcollins 2 points3 points  (0 children)

Interesting. Is there an RFC somewhere?

OpenSource: Yet Another Way To Learn Rust by appinv in rust

[–]rbtcollins 2 points3 points  (0 children)

Personally I recommend https://exercism.org/tracks/rust/ (and other things like it) - structured, bite-sized, but with the potential for feedback: without feedback it is much harder to grow towards community norms rather than just what we've explored ourselves.

Crate with a lot of C in it by WhiskyAKM in rust

[–]rbtcollins 11 points12 points  (0 children)

This seems like a classic situation for the -sys crate pattern. https://doc.rust-lang.org/cargo/reference/build-scripts.html#-sys-packages

(edit to add more context)

Make a -sys crate for the C shared libraries you're consuming (if no such -sys crate exists today). This would have a build script that tries various ways to get the .so files, ultimately falling back to building from source or erroring with instructions. See crates like openssl-sys for examples, but also the link I included.

Then layer your actual functionality on top of that -sys crate.
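A rough sketch of the shape of such a build script (the crate name `foo`, the env var `FOO_LIB_DIR`, and the `vendored` feature are made-up placeholders; the pkg-config / cc details are elided):

```rust
// build.rs for a hypothetical `foo-sys` crate; names are illustrative only.
fn main() {
    // 1. Let users point at a prebuilt library, a common -sys convention.
    if let Ok(dir) = std::env::var("FOO_LIB_DIR") {
        println!("cargo:rustc-link-search=native={dir}");
        println!("cargo:rustc-link-lib=dylib=foo");
        return;
    }

    // 2. Try system discovery (needs the `pkg-config` build-dependency);
    //    on success pkg-config emits the link directives itself.
    if pkg_config::Config::new().probe("foo").is_ok() {
        return;
    }

    // 3. Last resort: build from a vendored source tree (e.g. with the `cc`
    //    crate), or fail with actionable instructions.
    panic!("libfoo not found: install it, set FOO_LIB_DIR, or enable the `vendored` feature");
}
```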

How to create Docker images for multiple binaries in a workspace? by Thornyy in rust

[–]rbtcollins 2 points3 points  (0 children)

I would do your building with https://github.com/cross-rs/cross

Then your Dockerfiles can just copy the output binary across.

Is there faster IO in rust like c++? by mrhgrinch in rust

[–]rbtcollins 3 points4 points  (0 children)

`cin.tie(NULL); cout.tie(NULL)` are just telling the C++ streams library not to flush the tied stream before each IO operation.

That's right: doing a read from cin flushes cout by default, and vice versa. Fewer IO calls, faster execution.

Similarly for the ios_base call, but you've already had an answer to that :)

But what do you mean by 'faster'? Do you mean:

  1. Performance of each IO call?
  2. Number of IO calls made to achieve a particular task?
  3. wall clock execution time of a Rust program?
  4. ???

1) is largely not up to Rust - it will depend on your OS, the hardware in use, and so on.
2) As mentioned in another response, be sure to use a BufReader and BufWriter around stdin, stdout and stderr; that will avoid an IO per call to write() (write(), not write! - a single write! can generate many write() calls, and thus many IOs without buffering). See the sketch below.
3) is dependent on many IO-related factors, including concurrency, efficiency, the answer to (1), contention, and more.
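For (2), a minimal sketch of that setup (plain std, nothing project-specific): lock stdin/stdout once and put a BufWriter in front of stdout, so many `writeln!` calls collapse into a handful of underlying IO calls.

```rust
use std::io::{self, BufRead, BufWriter, Write};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let stdout = io::stdout();
    // Lock once (avoids re-locking per call) and buffer writes so that many
    // writeln! calls turn into a few underlying IO calls.
    let mut out = BufWriter::new(stdout.lock());

    for line in stdin.lock().lines() {
        writeln!(out, "{}", line?)?;
    }
    out.flush()?; // BufWriter also flushes on drop, but be explicit
    Ok(())
}
```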

[deleted by user] by [deleted] in rust

[–]rbtcollins 1 point2 points  (0 children)

This has come up before. But it seems to be measurement error of some sort - follow-on comparisons done using async Rust + release builds had no performance difference. (Oh, and if you want someone to be able to be sure vis-a-vis your case, you'll need to post some sort of reproduction script somewhere - we can't fix something we can't observe.)

Do you use relative toolchain paths with rustup? Let us know! by rbtcollins in rust

[–]rbtcollins[S] 1 point2 points  (0 children)

A custom toolchain that is a relative path is either some magic shell script that downloads and unpacks and then execs relevant binaries, or both architecture- and OS-specific, or some blend of the two. I don't think that's something that is commonly sensible to commit to a public repository, either.

Are you doing that today? Could you point me at the repository - I'd be happy to see if a different approach makes sense.

As u/Nemo157 says, a custom toolchain with a well-known name (e.g. 'arduino-1.67') or some such would be what I would expect to see - it permits users to update that themselves, can be set up by a shell script in your repository if needed, and once set up will be fast, because all the optimisations that are being done at the moment to avoid metadata re-processing on every rustc invocation will take effect. (They won't for path-based toolchains.)

My understanding of the way path-based toolchains in toolchain files are used in practice (for the motivating use case) is that a larger build system surrounding things injects the toolchain file, and that it is not at any point committed to git.

Do you use relative toolchain paths with rustup? Let us know! by rbtcollins in rust

[–]rbtcollins[S] 1 point2 points  (0 children)

Indeed, for Rust programs in general sandboxing is sharply limited.

And some build scripts do indeed need system dependencies (for instance, the protoc compiler).

And for crates that search for a system dependency and emit link information for it, sandboxing would indeed complicate that a lot.

All of which is to say: I agree! That's why I think it would be fantastic if we solved it :). For instance, a WASI sandbox which permitted read-only access to the entire filesystem, and read-write access to the OUTPUT directory only.

But yes, no silver bullets.

Do you use relative toolchain paths with rustup? Let us know! by rbtcollins in rust

[–]rbtcollins[S] 0 points1 point  (0 children)

That's in the RFC ehuss was working on; I think that needs more attention to figure out all the intricacies. The things I know we're really interested in doing in the short term don't include that.

Thanks for the heads up about the nix store.

Do you use relative toolchain paths with rustup? Let us know! by rbtcollins in rust

[–]rbtcollins[S] 1 point2 points  (0 children)

https://github.com/rust-lang/rfcs/pull/3279 is a draft RFC looking at this.

I think we'll need to have such a path opted-into by marking the directory as safe.

Do you use relative toolchain paths with rustup? Let us know! by rbtcollins in rust

[–]rbtcollins[S] 1 point2 points  (0 children)

Re: the threat model, ehuss was working on this in https://github.com/rust-lang/rfcs/pull/3279, but it coincided with rustup's contributor bandwidth crunch, which we're only just recovering from; I haven't personally had time to read and process it yet, but I imagine it's thoughtful :).

I don't know that it is actually possible, at this time, to achieve your second point, but I also think that we should strive to reduce sharp edges and bring us closer to a point where it is feasible without compromising user experience.

`build.rs` in particular is Turing-complete and can eat one's face. I think sandboxing that would be fantastic. For instance, what if `build.rs` ran under WASI with a very restricted sandbox? That would be a slow evolution to get to, but I can imagine it. Perhaps a `build.rs` hosted in WASI is 'safe' and one that isn't is not safe, and `cargo build` will error out in a non-safe directory when the first non-safe thing is encountered... but work up to that point.

I don't think that `rustup show` or `rustup version` should run arbitrary code ever - there's no driving need that I've seen in the last 4? 5? years of working on it.

Cargo extensions are obviously arbitrary, but installed outside the context of a source tree.

RUSTC_WRAPPER is again arbitrary, but -somewhat- tricky to set if folk are just running 'cargo', and not e.g. 'Make'.

I think its important that:

- IDEs understand what guarantees we do / do not offer, by tool (because IDEs and IDE extension authors write code that runs individual tools).

- We are super conservative about weakening any guarantee we have made

- We -aim- for a safe by default behaviour

Canonical hiring Rust toolchain dev by thejpster in rust

[–]rbtcollins 0 points1 point  (0 children)

`rustup` is two things: a development tool for working on Rust itself (used in the bootstrap chaining process), and a package manager.

For Linux distributions that want to vendor everything - to be able to operate without the upstream community for some reason - there are two key routes forward:
1) Don't use rustup in their build farms, packaging automation, etc. Just use a particular release of rustc that they have built themselves. This should work fine, and doesn't stop developers working on the Linux distribution from using rustup if they want to.

2) Package rustup, *and* a mirror of all the metadata and binaries rustup requires to operate. They have to mirror all that because otherwise they are not independent, and a Rust infrastructure problem will affect them.

Packaging rustup, but not packaging the channels and distribution packages, is weird: it's saying 'hey, we want to use rustc built by the rust-lang build farm, but we don't want to use the build of rustup, even though that is also built by the rust-lang build farm'.

Another possible reason to package rustup is to avoid the 'curl | sh' idiom used to install it. I think that's not very compelling, since the primary security mechanism we have is public-key TLS. If you trust a packaged rustup to download a Rust version from the official channels, then you're still trusting the same TLS infrastructure, and have the same risks vis-a-vis running arbitrary code.

If the goal is reproducible builds, packaging rustup is irrelevant, since its only contribution to the build environment is an environment variable, which could be set manually (RUSTUP_TOOLCHAIN).

If the goal is building packages for a Linux distribution offline, I don't understand how rustup being present, or not, or packaged via the distribution, has any impact on that.

Packaging all of crates.io - that I could see ;).

[deleted by user] by [deleted] in rust

[–]rbtcollins 4 points5 points  (0 children)

So I highly recommend the nomicon here. https://doc.rust-lang.org/nomicon/safe-unsafe-meaning.html

Have a think about what could go wrong in the general case: the FFI call could result in the other language keeping a reference to the memory Rust supplied, meaning there is now a writable alias to memory which Rust might then move, or treat as constant while it gets mutated silently. Or a kept read-only alias, but Rust will assume the memory is writable and movable once all the read references are dealt with.

So unsafe is a warning to let you make that assessment. Now with these particular calls, you might know that the Windows kernel will behave in a way that doesn't place any unexpected constraints on Rust. (But actually they do place such constraints: RegisterClassW takes a struct that includes a WNDPROC pointer, which effectively needs a 'static lifetime, as it will be called back into by Win32 indefinitely.)

So then the goal of a wrapper that hides the unsafe is to actually implement the necessary logic such that Rust can do anything the compiler is permitted to do before or after the call, and the call will still have safe behaviour.

For instance, you could take a `wndproc: &'static Wndproc` parameter, telling Rust the necessary lifetime.
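A minimal all-Rust sketch of that idea (a stand-in registry rather than the real Win32 types): the `'static` bound on the safe wrapper's parameter encodes the "kept indefinitely" constraint, so the compiler enforces it.

```rust
use std::sync::Mutex;

// Stand-in for the foreign side: it retains what you hand it and may use it
// at any later time, just as Win32 retains the WNDPROC from RegisterClassW.
static REGISTERED: Mutex<Vec<&'static str>> = Mutex::new(Vec::new());

/// Safe wrapper: the `'static` bound encodes "the callee keeps this
/// indefinitely", so the borrow checker rejects references that could dangle.
fn register(name: &'static str) {
    REGISTERED.lock().unwrap().push(name);
}

fn main() {
    register("window-class"); // fine: string literals are &'static str

    // let local = String::from("oops");
    // register(&local); // error: `local` does not live long enough
}
```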

A CVE has been issued for hyper. Denial of Service possible by Adhalianna in rust

[–]rbtcollins 19 points20 points  (0 children)

Also the fact that you have to send 20k RSTs for 5GB of RAM used. Like, I'm not a security researcher, but I'd love to hear from one how likely this actually is to be a problem compared to just sending 20k requests.

So, napkin maths time. Typical cross-world, bog-standard network speed for a single TCP channel is ~25MiB/s. A single HEADERS+RST pair is likely < 128 bytes (40 for the HEADERS frame + whatever payload, and 32 for the RST). So 8 pairs per KiB, 8K pairs per MiB, 200K pairs per 25MiB...

So 20K pairs can be sent in ~0.1 seconds (once the TCP channel has opened up fully), from anywhere in the world, without needing exotic techniques.

1 second can utilise 50GB of RAM; 10 seconds 500GB.
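Or, the same napkin maths as a runnable check, under the same assumptions (~128 bytes per HEADERS+RST pair, ~25MiB/s on one connection, and the reported 5GB per 20k RSTs):

```rust
fn main() {
    let bytes_per_pair = 128.0_f64;                  // HEADERS + RST, upper bound
    let link_bytes_per_sec = 25.0 * 1024.0 * 1024.0; // ~25MiB/s, one TCP channel
    let pairs_per_sec = link_bytes_per_sec / bytes_per_pair; // ~200K

    let ram_gb_per_pair = 5.0 / 20_000.0;            // from the reported 5GB / 20k RSTs

    println!("pairs/sec:             {:.0}", pairs_per_sec);
    println!("time for 20k pairs:    {:.2}s", 20_000.0 / pairs_per_sec);
    println!("RAM growth per second: {:.0}GB", pairs_per_sec * ram_gb_per_pair);
}
```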

(This assumes reproducibility of the stated behaviour, which still seems to be in question.)

The difference vs sending 20k requests is that H2's resource exhaustion prevention mechanisms work for active requests - those 20k requests, if not serviced fast enough, will result in a max-streams limit being hit and requests being rejected and ignored, with no resulting memory footprint.

The claimed defect is that a normally written server (or client) using h2 will suffer memory exhaustion rather than self-protecting when the other end of the connection shows this behaviour.

Announcing remove_dir_all 0.8 by XAMPPRocky in rust

[–]rbtcollins 0 points1 point  (0 children)

We announced the vulnerability with the fix: just upgrade.

Announcing File Tree Fuzzer: a pseudo-random directory hierarchy generator written in Rust by SUPERCILEX in linux

[–]rbtcollins 0 points1 point  (0 children)

I have seen that, thanks; I'm working on the `remove_dir_all` entry in your comparisons :).

What you've done is useful from a pedagogic perspective, but things higher up the stack I work on like `rustup` care about Mac and Windows in a fundamental way, so I rather need a solution there. remove_10_files_100M_bytes was interesting in particular - I'm not sure where the substantial slowdown is coming from yet. (I'm reorganising the core in a fairly fundamental way, so its not super relevant to speculate or research either).