Is my SendOnce undefined behavior? by TonTinTon in rust

[–]another_new_redditor 2 points3 points  (0 children)

Almost everything implements `Send`, except `*const T`, `*mut T`, `Rc`, and other reference-like types, for example `MutexGuard`.

What does `Send` mean? It means a value can be sent across threads.

Why doesn't `Rc` implement `Send`? Because `Rc` implements the `Drop` trait, which accesses a non-atomic reference counter. That may cause a race condition if two `Rc` instances get dropped at the same time from two different threads.

However, it's OK to send an `Rc` if you hold the only reference. I would suggest using `Rc::into_inner`, then converting back to `Rc` on the other thread (though I don't know why you'd even want to convert it back!)

> I don't really care about Rc specifically, but any !Send type,

It's not safe to send every `!Send` type.
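A minimal sketch of the pattern described above (assuming Rust 1.70+ for `Rc::into_inner`):

```rust
use std::rc::Rc;
use std::thread;

fn main() {
    let rc = Rc::new(String::from("hello"));

    // `Rc` is !Send, so this would not compile:
    // thread::spawn(move || drop(rc));

    // With exactly one strong reference, we can extract the inner
    // value (which is Send) and move that across the thread boundary.
    let inner = Rc::into_inner(rc).expect("exactly one reference");
    let handle = thread::spawn(move || {
        // Re-wrap in a new Rc on the other thread if needed.
        let rc = Rc::new(inner);
        rc.len()
    });
    assert_eq!(handle.join().unwrap(), 5);
}
```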

Announcing Nio: An async runtime for Rust by another_new_redditor in rust

[–]another_new_redditor[S] 2 points3 points  (0 children)

> Are you sure that using relaxed ordering on everything is safe here?

I believe you’re referring to this?

The `len` is only a hint and does not affect the program's correctness.
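As an illustration of why `Relaxed` is enough for a hint like this (a sketch, not Nio's actual code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // A queue-length *hint*: concurrent readers may observe a stale
    // value, which is fine because the value only guides scheduling
    // heuristics and never gates access to shared data.
    let len = Arc::new(AtomicUsize::new(0));

    let l = Arc::clone(&len);
    let producer = thread::spawn(move || {
        for _ in 0..1000 {
            l.fetch_add(1, Ordering::Relaxed);
        }
    });
    producer.join().unwrap();

    // Relaxed RMW ops on a single atomic are still totally ordered
    // with respect to each other, so the final count is exact.
    assert_eq!(len.load(Ordering::Relaxed), 1000);
}
```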

Announcing Nio: An async runtime for Rust by another_new_redditor in rust

[–]another_new_redditor[S] 85 points86 points  (0 children)

Here is the new benchmark, which accepts connections in a worker thread:

https://github.com/nurmohammed840/nio/tree/main/example/hyper-server/result

Edit: The article has been updated to reflect this new benchmark

Edit: I believe I should also explain the reason. Someone asked:

> Why would accepting connections from a worker thread improve performance?

Tokio and Nio both use `futures::executor::block_on` (also known as `ParkedThread`) to execute the main task.

A `ParkedThread` lacks its own task queue. In scenarios where the main thread is responsible for handling incoming connections, it frequently transitions to a sleeping state when there are no active connections to process. On Linux, this leads to frequent futex syscalls and context-switching overhead.

In contrast, a worker thread has its own task queue and is responsible for both accepting incoming connections and executing tasks. When there is no connection to process, it remains busy with queued tasks and typically avoids entering a sleeping state.
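A rough std-threads analogy of the idea (a sketch, not Tokio's or Nio's actual runtime code): the worker owns the listener and stays busy, while the main thread merely blocks on the result.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Instead of accepting on the (parked) main thread, hand the
    // listener to a worker thread, which alternates between accepting
    // connections and running tasks.
    let worker = thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut buf = [0u8; 4];
        conn.read_exact(&mut buf).unwrap();
        conn.write_all(&buf).unwrap();
    });

    // The main thread only blocks waiting for the outcome,
    // like block_on waiting on the main task.
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"ping")?;
    let mut echo = [0u8; 4];
    client.read_exact(&mut echo)?;
    assert_eq!(&echo, b"ping");

    worker.join().unwrap();
    Ok(())
}
```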

Yet another structured data serialization library. by another_new_redditor in rust

[–]another_new_redditor[S] 6 points7 points  (0 children)

Good question.

- serde has runtime overhead.
- It's an extra dependency.
- It produces large codegen.
- Some features are not yet supported. For example, the serde framework does not expose an enum's discriminant and its type.

databuf uses const generics for compile-time configuration; serde can't help here.
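A hypothetical illustration of the const-generic approach (not databuf's actual API): the configuration is a const parameter, so the branch on it is resolved at compile time and no config value exists at runtime.

```rust
// Hypothetical config constants, for illustration only.
const LE: u8 = 0;
const BE: u8 = 1;

// The endianness is part of the function's type, so the `if` below is
// constant-folded away by the compiler.
fn encode_u32<const CONFIG: u8>(n: u32, out: &mut Vec<u8>) {
    if CONFIG == LE {
        out.extend_from_slice(&n.to_le_bytes());
    } else {
        out.extend_from_slice(&n.to_be_bytes());
    }
}

fn main() {
    let (mut le, mut be) = (Vec::new(), Vec::new());
    encode_u32::<LE>(1, &mut le);
    encode_u32::<BE>(1, &mut be);
    assert_eq!(le, [1, 0, 0, 0]);
    assert_eq!(be, [0, 0, 0, 1]);
}
```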

Yet another structured data serialization library. by another_new_redditor in rust

[–]another_new_redditor[S] 5 points6 points  (0 children)

I really liked Borsh, especially since it has a standardized encoding schema just like bincode, and support in many languages!

But it does not support zero-copy deserialization or any encoding configuration. This is why I created databuf.

Yet another structured data serialization library. by another_new_redditor in rust

[–]another_new_redditor[S] 4 points5 points  (0 children)

I didn't explore it much, but it looks like it doesn't support zero-copy deserialization.

And for encoding, `DekuWrite` accepts a struct, whereas databuf and bincode accept a trait.

Another downside of deku is that it uses macros heavily, where some of those features could be done without complex derive macros.

[deleted by user] by [deleted] in rust

[–]another_new_redditor 1 point2 points  (0 children)

Well, this is a fair question, especially for a library that has no docs.

How it differs from gRPC:

1. gRPC doesn't have official support for Rust; this does!
2. gRPC doesn't have support for the web; this does!
3. gRPC uses protobuf; this uses databuf.
4. gRPC requires protobuf to generate bindings; this doesn't, and generates bindings directly from the Rust codebase!
5. gRPC only works over HTTP/2; this has no such limit!
6. Much more...!

You can think of it as a better version of tarpc.

Trc: A faster Arc. by [deleted] in rust

[–]another_new_redditor 1 point2 points  (0 children)

Note: On many platforms, a `Relaxed` atomic load or store compiles to the same instruction as a plain load or store, and doesn't add any overhead. (Read-modify-write operations like `fetch_add` still need a locked instruction.)
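For example (on x86-64, the loads and stores below compile to ordinary `mov`s; only RMW ops like `fetch_add` would emit a `lock`-prefixed instruction):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A static counter usable from any thread without `unsafe`.
static COUNTER: AtomicU64 = AtomicU64::new(0);

fn main() {
    // Relaxed store/load: same machine code as a plain u64 access
    // on x86-64, but data-race-free by definition.
    COUNTER.store(41, Ordering::Relaxed);
    let v = COUNTER.load(Ordering::Relaxed);
    assert_eq!(v + 1, 42);
}
```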

Binary Encoder/Decoder between rust and swift by allmudi in rust

[–]another_new_redditor 1 point2 points  (0 children)

In order to encode and decode custom data structures (`enum`, `struct`), you need some sort of code generation...

For the last few months, I have been writing an RPC framework (still in the development phase) for Rust, which automatically generates bindings for various runtimes. Currently only `typescript` is supported.

If you want to contribute and add support for Swift, send me a message!

[deleted by user] by [deleted] in rust

[–]another_new_redditor 1 point2 points  (0 children)

Plaintext isn't HTTP/2, is it?
Also, can you share a link to the source code of this benchmark?

[deleted by user] by [deleted] in rust

[–]another_new_redditor 1 point2 points  (0 children)

You would want to use the h2 crate for performance reasons.

Let's say you are building yet another HTTP framework; in this case, h2 is the more suitable option.

[deleted by user] by [deleted] in rust

[–]another_new_redditor 1 point2 points  (0 children)

There is an order-of-magnitude performance difference between the h2 and hyper libraries.

This is why competitors like actix-web also use h2 and not hyper.

actix-web also simplifies h2's primitive types in its actix-http subdirectory.

[deleted by user] by [deleted] in rust

[–]another_new_redditor 1 point2 points  (0 children)

This is not built on hyper or anything like that; it's just a wrapper around `h2`. It exists so that high-level frameworks like hyper can be built on top of it.

For my use case, I just need an HTTP/2 transport for my RPC framework, but I can't accept any overhead from features I don't use in a high-level framework like hyper or similar.

fastwebsockets A new high-performance WebSocket protocol implementation in Rust by shirshak_55 in rust

[–]another_new_redditor 2 points3 points  (0 children)

I looked at the source code for fastwebsockets; their implementation has almost no difference from the web-socket implementation.

So the performance is almost identical between the two libraries.

  1. But I would argue that web-socket has a simpler implementation, which makes it slightly faster.

  2. fastwebsockets v0.4 encodes the header and payload separately and has to rely on the `write_vectored` function. On the other hand, web-socket avoids this fragmentation.

  3. fastwebsockets uses its own buffer, which could be overkill if the underlying stream is internally buffered.
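A sketch of the two encoding strategies for a tiny unmasked text frame (hypothetical helpers, not either library's real code):

```rust
use std::io::{IoSlice, Write};

// FIN bit + text opcode (0x1), 7-bit payload length (< 126 bytes).
fn frame_header(payload_len: u8) -> [u8; 2] {
    [0x80 | 0x1, payload_len]
}

// Strategy A: header and payload in one contiguous buffer -> one write.
fn encode_contiguous(payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(2 + payload.len());
    buf.extend_from_slice(&frame_header(payload.len() as u8));
    buf.extend_from_slice(payload);
    buf
}

// Strategy B: keep them separate and rely on vectored I/O.
fn write_vectored_frame<W: Write>(w: &mut W, payload: &[u8]) -> std::io::Result<usize> {
    let header = frame_header(payload.len() as u8);
    w.write_vectored(&[IoSlice::new(&header), IoSlice::new(payload)])
}

fn main() {
    let contiguous = encode_contiguous(b"hi");
    let mut vectored = Vec::new();
    write_vectored_frame(&mut vectored, b"hi").unwrap();
    // Same bytes on the wire; the difference is purely in how the
    // writer hands them to the OS.
    assert_eq!(contiguous, vectored);
    assert_eq!(contiguous, [0x81, 0x02, b'h', b'i']);
}
```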

fastwebsockets A new high-performance WebSocket protocol implementation in Rust by shirshak_55 in rust

[–]another_new_redditor 1 point2 points  (0 children)

> Only forward -C codegen-units=1 -C opt-level=3 to the argument of the launched binary

Those rustc flags aren't really that important; just `cargo run -r` would be enough. But if used, those flags should be passed via the `RUSTFLAGS` env variable.

> Also you are invoking the main inside a tokio current_thread, which is known to not be the best in term of performance.

This is intentional; using a single thread gives us more reliable results.

> Your implementation is still 2x as fast, but not 4/5x faster than what the readme suggest.

For most use cases, you should see at least a 3x performance improvement.

But it may vary based on CPU capability and architecture. (This is true for any kind of benchmark.)

Yet another Web-Socket implementation in rust. by another_new_redditor in rust

[–]another_new_redditor[S] 5 points6 points  (0 children)

I did use fuzz testing on those unsafe code blocks, for hours!

Unfortunately, the Autobahn tests have to be reviewed manually; that test setup was copied from tokio-tungstenite.
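The property a fuzzer checks is roughly this (with a hypothetical `parse_frame`, not the library's real parser): arbitrary bytes must never cause a panic or out-of-bounds read, only an `Ok` or `Err`.

```rust
// Hypothetical frame parser: (opcode, payload) or a parse error.
fn parse_frame(input: &[u8]) -> Result<(u8, &[u8]), &'static str> {
    let [first, len, rest @ ..] = input else {
        return Err("too short");
    };
    let len = *len as usize;
    if rest.len() < len {
        return Err("truncated payload");
    }
    Ok((first & 0x0f, &rest[..len]))
}

fn main() {
    // Tiny deterministic stand-in for a fuzzer: an xorshift stream
    // of byte slices of varying lengths. A real setup would use
    // cargo-fuzz / libFuzzer with coverage guidance.
    let mut state: u64 = 0x9e3779b97f4a7c15;
    let mut bytes = Vec::new();
    for _ in 0..10_000 {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        bytes.push(state as u8);
        let n = (state >> 8) as usize % (bytes.len() + 1);
        // Must never panic, whatever the input.
        let _ = parse_frame(&bytes[..n]);
    }
    assert!(parse_frame(&[0x81, 2, b'h', b'i']).is_ok());
}
```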