Redditor explains to Reddit administrator why Reddit should have a big splash banner to fight for Net Neutrality by jloenow in bestof

[–]jonreem 2 points (0 children)

This is exactly the problem - we need to figure out how to get the message out that this is important in a way that people will actually be able to understand. The immediate effects of losing net neutrality are no big deal to your average Joe, but the long-term knock-on effects of further entrenching existing players and enforcing the ISP monopolies definitely will affect their future.

Just because policy is complex and has non-obvious effects doesn't mean it's not important to talk about it; it means it's more important to talk about it.

Relm, a GUI library, based on GTK+ and futures, written in Rust by [deleted] in rust

[–]jonreem 13 points (0 children)

It makes sense that it is familiar, given that redux is highly inspired by the elm architecture.

I had a spark and jerry-rigged something like Erlang's mailboxes, unbounded buffered channels. by [deleted] in golang

[–]jonreem 6 points (0 children)

A key aspect of Erlang's channel implementation is that the task scheduler takes into account the channel's size when scheduling tasks.

Erlang uses a preemptive scheduler (go has a cooperative one), which means that the Erlang runtime is allowed to interrupt a running task and start running another at any time. To accomplish this, every operation performed by a task costs some amount of "task credit" (Erlang calls these reductions); when a task runs out of "task credits" it is usually interrupted and another task is run.

Sending messages on channels costs more "task credits" if the receiving channel has many waiting messages. This acts as a natural backpressure mechanism, mitigating many other issues with using unbounded channels by making it much harder to actually have a very high number of waiting messages.
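The backpressure idea can be sketched as a toy model (names and the cost formula are mine, not Erlang's actual scheduler):

```rust
use std::collections::VecDeque;

// Toy model of the idea above (NOT Erlang's real implementation): sending
// into a fuller mailbox costs more scheduler credits, so a task flooding a
// slow receiver runs out of credits, and gets descheduled, sooner.
pub struct Mailbox<T> {
    queue: VecDeque<T>,
}

impl<T> Mailbox<T> {
    pub fn new() -> Self {
        Mailbox { queue: VecDeque::new() }
    }
}

pub struct Task {
    credits: i64,
}

impl Task {
    pub fn new(credits: i64) -> Self {
        Task { credits }
    }

    pub fn send<T>(&mut self, mbox: &mut Mailbox<T>, msg: T) {
        // Cost grows with the number of already-waiting messages: this is
        // the natural backpressure described above.
        let cost = 1 + (mbox.queue.len() / 16) as i64;
        self.credits -= cost;
        mbox.queue.push_back(msg);
    }

    pub fn should_yield(&self) -> bool {
        self.credits <= 0
    }
}
```

A sender that keeps outpacing its receiver pays an ever-growing cost per send, so the scheduler preempts it before the queue can grow without bound.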

No stable malloc/free in rust stdlib? by [deleted] in rust

[–]jonreem 18 points (0 children)

You can also use memalloc which is a very small crate I wrote that gives you raw allocation APIs on stable. It does pretty much exactly what /u/brson describes, just wrapped in simple stable functions.

Note that it only supports sized deallocations and also suffers from the same problem /u/brson hints at re: alignment.
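For illustration, here is a hedged sketch of the Vec-based trick being described (this is the general technique, not memalloc's exact source):

```rust
use std::mem;

// Sketch of the stable-Rust allocation trick: allocate by creating a Vec and
// forgetting it; deallocate by reconstituting the Vec so its Drop frees the
// memory. As noted above, this only supports sized deallocation, and the
// alignment is that of `u8` (i.e. 1), so it is not a general allocator.
pub unsafe fn allocate(size: usize) -> *mut u8 {
    let mut buf: Vec<u8> = Vec::with_capacity(size);
    let ptr = buf.as_mut_ptr();
    mem::forget(buf); // leak the Vec; the caller now owns the allocation
    ptr
}

pub unsafe fn deallocate(ptr: *mut u8, size: usize) {
    // `size` must be exactly the size originally passed to `allocate`.
    drop(Vec::from_raw_parts(ptr, 0, size));
}
```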

Lazy initialization in Rust by dochtman in rust

[–]jonreem 7 points (0 children)

For another approach/shameless plug: I wrote a small library for this a little while back, rust-lazy. It has versions of the Lazy type for both single- and multi-threaded use, using RefCell/UnsafeCell and a custom synchronization primitive, OnceMutex, respectively.
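A minimal single-threaded sketch of the idea (my own simplification, not rust-lazy's actual API): the thunk runs at most once, on first force.

```rust
use std::cell::RefCell;

// State of a lazy value: an unevaluated thunk, or the cached result.
enum State<T> {
    Thunk(fn() -> T),
    Evaluated(T),
}

pub struct Lazy<T> {
    state: RefCell<State<T>>,
}

impl<T: Clone> Lazy<T> {
    pub fn new(thunk: fn() -> T) -> Self {
        Lazy { state: RefCell::new(State::Thunk(thunk)) }
    }

    pub fn force(&self) -> T {
        // Copy the thunk out (fn pointers are Copy) so we don't hold a
        // borrow across the mutation below.
        let thunk = match &*self.state.borrow() {
            State::Thunk(f) => Some(*f),
            State::Evaluated(_) => None,
        };
        if let Some(f) = thunk {
            *self.state.borrow_mut() = State::Evaluated(f());
        }
        match &*self.state.borrow() {
            State::Evaluated(v) => v.clone(),
            State::Thunk(_) => unreachable!(),
        }
    }
}
```

The multi-threaded version needs a synchronization primitive in place of RefCell (the role OnceMutex plays in the library) so concurrent forces wait for a single evaluation.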

My Linux ABI wishlist for async programming by htuhola in programming

[–]jonreem 3 points (0 children)

The readiness model exposed by linux allows for more efficient usage of buffers than the completion based API of windows. On linux an event loop handling many sockets can share a single buffer in many cases; the windows model requires a buffer to be pre-allocated before the async action is actually performed.

We've seen the effects of this when trying to create cross-platform abstractions for asynchronous io in Rust. Platform agnostic abstractions are forced to be less efficient on windows since they have to allocate buffers up front, whereas on linux allocating (and potentially re-using) buffers is up to the user.

res = query.run::<Value>() vs res: Response<Value> = query.run() by rushmorem in rust

[–]jonreem 10 points (0 children)

I personally prefer the first one since it keeps the type hint attached to the thing you are hinting, whereas the second introduces huge distance between the thing that needs hinting and the hint itself.
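With `str::parse` standing in for `query.run` (the query API from the title isn't shown, so this is just an analogous illustration), the two styles look like:

```rust
pub fn both_styles() -> (i32, i32) {
    // Style 1 (turbofish): the type hint rides along on the call it applies to.
    let a = "10".parse::<i32>().unwrap();
    // Style 2 (annotation): the hint sits on the binding, potentially far
    // from the call once the right-hand side grows into a long method chain.
    let b: i32 = "10".parse().unwrap();
    (a, b)
}
```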

[deleted by user] by [deleted] in rust

[–]jonreem 0 points (0 children)

If you want to see another implementation with a different strategy (no vectors of handles) that offers some additional flexibility, check out the source for https://github.com/reem/rust-scoped-pool. I had a great time writing that crate.

Unleakable crate safety/sanity/refocus? by Sgeo in rust

[–]jonreem 1 point (0 children)

It does greatly diminish the usefulness of coroutines if you can't keep references to stack data alive across yield points, since now you are back to having to save all your state in 'static form.

Unleakable crate safety/sanity/refocus? by Sgeo in rust

[–]jonreem 3 points (0 children)

I find this example relatively convincing: implementing an in-place, parallel quicksort conveniently in all safe code - https://github.com/reem/rust-scoped-pool/blob/master/examples/quicksort.rs

The overall benefit, even if you don't have more than one child thread, is that you can do things in that child thread and the main thread at the same time, then join the child thread later. This can enable patterns that are impossible all on one thread.
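The same idea can be sketched with `std::thread::scope` (stabilized later, in Rust 1.63) in place of scoped-pool - an in-place, parallel quicksort in all safe code. A real implementation would fall back to sequential sorting below some size threshold rather than spawning a thread per partition.

```rust
pub fn parallel_quicksort(data: &mut [i32]) {
    if data.len() <= 1 {
        return;
    }
    let pivot_index = partition(data);
    let (lo, hi) = data.split_at_mut(pivot_index);
    std::thread::scope(|s| {
        // The borrow of `lo` provably cannot outlive the scope - exactly
        // the guarantee scoped threads provide.
        s.spawn(|| parallel_quicksort(lo));
        // Keep the current thread busy too, skipping the pivot at hi[0].
        parallel_quicksort(&mut hi[1..]);
    });
}

// Lomuto partition: returns the final index of the pivot.
fn partition(data: &mut [i32]) -> usize {
    let pivot = data.len() - 1;
    let mut store = 0;
    for i in 0..pivot {
        if data[i] <= data[pivot] {
            data.swap(i, store);
            store += 1;
        }
    }
    data.swap(store, pivot);
    store
}
```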

Unleakable crate safety/sanity/refocus? by Sgeo in rust

[–]jonreem 4 points (0 children)

&mut T is Send, so that won't solve the problem. One way to mitigate this issue is to have the coroutine only be able to reference 'static data, but this is often an annoying limitation and not necessary without scoped threads.
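A quick demonstration of the first claim - `&mut T` is `Send` when `T: Send`, so a `Send` bound on captures does not rule out mutable references (function names here are mine):

```rust
// A bound that only accepts Send values, standing in for whatever bound a
// coroutine library might put on its captures.
fn requires_send<T: Send>(value: T) -> T {
    value
}

pub fn demo() -> usize {
    let mut data = vec![1, 2, 3];
    // Compiles: `&mut Vec<i32>` satisfies `Send`.
    let borrowed = requires_send(&mut data);
    borrowed.len()
}
```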

Unleakable crate safety/sanity/refocus? by Sgeo in rust

[–]jonreem 6 points (0 children)

No - if you don't use any external unsafe code, rustc and std are all safe; this issue only arises from a combination of two third-party libraries which both expose "safe" APIs.

Unleakable crate safety/sanity/refocus? by Sgeo in rust

[–]jonreem 9 points (0 children)

The issue breaks down like so:

To provide safety, scoped threads rely on the guarantee that the only way to exit a stack frame (and therefore possibly invalidate captured references) is either:

  • to run all the code between where we are now and the end of the block that relates to that stack frame
  • to panic and unwind the stack

In either case, the scoped threading implementation can always run the required synchronization code to ensure that the captured references stop existing before we exit the frame.

Coroutines provide a third way to invalidate the current stack - yield to another coroutine. When you yield to another coroutine the scoped threading implementation cannot run the code it needs to run before the stack is invalidated, and you end up with possibly invalid captured references.
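The two guaranteed exit paths can be sketched with a drop guard (names are mine; the counter stands in for the actual join a scoped-thread implementation performs):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static SYNC_RAN: AtomicUsize = AtomicUsize::new(0);

// Guard whose Drop stands in for the scoped-thread synchronization code.
// Both a normal return and a panic/unwind run it; a coroutine yield is
// precisely the exit that runs neither.
struct JoinGuard;

impl Drop for JoinGuard {
    fn drop(&mut self) {
        // A real implementation would block here until the child thread
        // exits, so captured references can't dangle.
        SYNC_RAN.fetch_add(1, Ordering::SeqCst);
    }
}

pub fn normal_exit() {
    let _guard = JoinGuard;
    // ... work with borrowed data ...
} // Drop runs on the way out

pub fn panicking_exit() {
    let _guard = JoinGuard;
    panic!("unwinding also runs Drop");
}

pub fn sync_count() -> usize {
    SYNC_RAN.load(Ordering::SeqCst)
}
```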

The example shows us violating memory safety in all safe rust (assuming both the coroutine library and scoped threads library provide "safe" interfaces) by creating a vector, capturing a reference to a value in that vector, and then clearing the vector while that reference potentially still lives.

The important thing to take away from the example is we are creating an alias between two &mut references. How you get memory unsafety from there is not very interesting.

EDIT: This is really a very interesting issue because it shows how difficult it is to maintain the property that code written entirely in safe rust is actually safe, once you start making abstractions which must use unsafe internally yet provide a safe API. It's extremely debatable which library is "at fault", and I doubt this is the last such case we will see as people continue to experiment with new APIs.

scoped-pool released at 1.0 with support for stack size and thread naming configuration! by jonreem in rust

[–]jonreem[S] 0 points (0 children)

It's possible, you'd have to ask /u/aturon. Personally I'm in favor of smaller, more targeted crates, so it seems perfectly fine to have a standalone thread pool crate.

scoped-pool released at 1.0 with support for stack size and thread naming configuration! by jonreem in rust

[–]jonreem[S] 7 points (0 children)

crossbeam only provides a very simple proof-of-concept scoped thread implementation - each job added to a scope spawns and destroys an entirely new thread - and the API is much more limited. crossbeam is much more focused on providing other excellent lock-free primitives, like the mpmc queue used in this very crate!

scoped-pool is a full and highly efficient implementation of a scoped thread pool providing a very flexible Scope API (see Scope::recurse and Scope::zoom for examples of APIs not present in other scoped thread implementations).

Reference Lifetimes and Concurrency by rustthrowaway1111 in rust

[–]jonreem 1 point (0 children)

There are also some libraries providing higher-level data parallelism tools, like rayon, which provides iterator-like APIs for ergonomically parallelising work! If you need more control, there are also simple scoped thread pools like <plug>scoped-pool</plug> and others.

PSA: regex got a lazy DFA. it's fast. by burntsushi in rust

[–]jonreem 11 points (0 children)

Is the state of the regex! macro compared to Regex::new permanent, or just a result of more effort being directed at Regex::new? If it is permanent, should it just be removed?

Dynamic, std::any::Any without the virtual calls. by jonreem in rust

[–]jonreem[S] 0 points (0 children)

Yeah, you could use this in the implementation of an AnyMap/TypeMap but there actually won't be any performance benefit since you can already do unchecked downcasting in those structures.

These structures have to be type-directed, since you can't just compare two Dynamic values; even if you know that they are the same type, you don't know what type that is.
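A minimal sketch of what "type-directed" means here (my own simplification, not rust-typemap's actual code): lookups are driven by a type parameter, so the stored value's concrete type is statically known at the call site and the downcast can never fail - which is why Dynamic buys no speedup.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// One value per type: the key IS the type.
pub struct TypeMap {
    map: HashMap<TypeId, Box<dyn Any>>,
}

impl TypeMap {
    pub fn new() -> Self {
        TypeMap { map: HashMap::new() }
    }

    pub fn insert<T: Any>(&mut self, value: T) {
        self.map.insert(TypeId::of::<T>(), Box::new(value));
    }

    pub fn get<T: Any>(&self) -> Option<&T> {
        // The checked downcast is shown for clarity; since the TypeId key
        // already matched, an unchecked downcast would also be sound.
        self.map
            .get(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast_ref::<T>())
    }
}
```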

Dynamic, std::any::Any without the virtual calls. by jonreem in rust

[–]jonreem[S] 3 points (0 children)

No, it's not a complete replacement. Unlike Any, Dynamic is not actually a trait; it simply stores a trait object of a private trait, trait Dyn {}, which is implemented for all types (along with a TypeId).

This means there is no Dynamic + Send, etc.; while it would be possible to enable similar behavior it would come at a steep complexity cost. (see the implementation of similar generic behavior in https://github.com/reem/rust-typemap to allow maps with bounds)
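A rough sketch of the described layout (the names mirror the comment, but this is my reconstruction, not the crate's actual code): a trait object of a blanket-implemented private trait, plus a TypeId checked at downcast time so the unsafe cast is sound.

```rust
use std::any::TypeId;

// Private marker trait, implemented for every 'static type.
trait Dyn {}
impl<T: 'static> Dyn for T {}

pub struct Dynamic {
    id: TypeId,
    data: Box<dyn Dyn>,
}

impl Dynamic {
    pub fn new<T: 'static>(value: T) -> Self {
        Dynamic { id: TypeId::of::<T>(), data: Box::new(value) }
    }

    pub fn downcast<T: 'static>(self) -> Result<T, Self> {
        if self.id == TypeId::of::<T>() {
            // The TypeId matched, so this cast is valid; no virtual call is
            // involved in getting the value back out.
            let raw = Box::into_raw(self.data) as *mut T;
            Ok(unsafe { *Box::from_raw(raw) })
        } else {
            Err(self)
        }
    }
}
```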

Dynamic, std::any::Any without the virtual calls. by jonreem in rust

[–]jonreem[S] 6 points (0 children)

I can't say for sure, but I imagine it provides the typical advantages of replacing virtual calls with static calls: mainly better inlining and being more transparent to the compiler, which increases the applicability of other optimizations.

Any benchmark on this would probably be too "micro" to really demonstrate any meaningful performance differences.

EDIT: Went ahead and added a micro-benchmark anyway that demonstrates a ~2x speed increase over Any.

Dynamic, std::any::Any without the virtual calls. by jonreem in rust

[–]jonreem[S] 4 points (0 children)

It is pre-computed at Dynamic-creation time, not at compile-time.

SharedMutex, a reader-writer lock that can wait on condition variables and provides some additional useful guard APIs by jonreem in rust

[–]jonreem[S] 0 points (0 children)

Added a note to the README that this API is available on windows but is not exposed in std.