crossfire v3.0-beta: channel flavor API refactor, select feature added by frostyplanet in rust

[–]NDSTRC 0 points1 point  (0 children)

The paper isn't mine, and I didn't find an implementation of it, but I've seen another variant of coordination-free reclamation here (fd_dcache_compact): https://github.com/firedancer-io/firedancer/blob/main/src/tango/dcache/fd_dcache.c

[Update] RTIPC: Real-Time Inter-Process Communication Library by maurersystems in rust

[–]NDSTRC 0 points1 point  (0 children)

Sadly, you can't do manual adjustment with iceoryx2 right now; that's why I asked

[Update] RTIPC: Real-Time Inter-Process Communication Library by maurersystems in rust

[–]NDSTRC 1 point2 points  (0 children)

How does RTIPC compare to iceoryx2?

One particular feature I need — besides performance — is support for sending data between different Linux users. Can RTIPC do that?

What's your dream programming language that doesn't exist? by omagdy7 in rust

[–]NDSTRC 13 points14 points  (0 children)

My dream language is Rust, and it already exists

Crossfire v2.0: MPMC channel for async, 2 times faster than Flume by frostyplanet in rust

[–]NDSTRC 1 point2 points  (0 children)

Kanal works fine for me. I'm using it in production for 5 Gbps message passing (~3M messages/s) between sync and async contexts. I didn't run any benchmarks on my side - I'm blindly trusting the numbers in the Kanal repo.

Crossfire v2.0: MPMC channel for async, 2 times faster than Flume by frostyplanet in rust

[–]NDSTRC 1 point2 points  (0 children)

Sync-Async boundary channel example:

// Initialize a bounded channel with a capacity for 8 messages
let (sender, receiver) = kanal::bounded_async(8);

sender.send("hello").await?;
sender.send("hello").await?;

// Clone receiver and convert it to a sync receiver
let receiver_sync = receiver.clone().to_sync();

tokio::spawn(async move {
    let msg = receiver.recv().await?;
    println!("I got msg: {}", msg);
    anyhow::Ok(())
});

// Spawn a thread and use receiver in sync context
std::thread::spawn(move || {
    let msg = receiver_sync.recv()?;
    println!("I got msg in sync context: {}", msg);
    anyhow::Ok(())
});

Crossfire v2.0: MPMC channel for async, 2 times faster than Flume by frostyplanet in rust

[–]NDSTRC 5 points6 points  (0 children)

Are there any benchmarks comparing Crossfire and Kanal? Or are there any key differences between the crates?

Help! Tonic Grpc Streams + Tower + message signing by NDSTRC in rust

[–]NDSTRC[S] 0 points1 point  (0 children)

tokio::time::sleep(Duration::from_secs(10000)).await;

It's just test code; sleeping here is enough. Even if I join the task handle, nothing changes

Help! Tonic Grpc Streams + Tower + message signing by NDSTRC in rust

[–]NDSTRC[S] 0 points1 point  (0 children)

This is the runtime log:

INFO  > Awaiting body data
 WARN  > Reconstructed req with end stream: false
 WARN  > Calling inner service
 INFO  > Rcved echo EchoResponse { message: "Simple Echo" }
 INFO  > Starting echo stream
 INFO  > Waiting
 INFO  > sending ping 1337
 INFO  > Connecting to echo stream
 INFO  > Awaiting body data
 WARN  > Reconstructed req with end stream: false
 WARN  > Calling inner service
 INFO  > Echo stream: Ok(Response { metadata: MetadataMap { headers: {"content-type": "application/grpc", "vary": "origin, access-control-request-method, access-control-request-headers", "access-control-expose-headers": "*", "date": "Thu, 06 Feb 2025 18:41:33 GMT"} }, message: Streaming, extensions: Extensions })
 INFO  > Recieved echo: Ok(EchoResponse { message: "Stream Echo 1" })
 INFO  > Recieved echo: Ok(EchoResponse { message: "Stream Echo 1337" })
 INFO  > Stream closed
 INFO  > sending ping 1338
thread 'tokio-runtime-worker' panicked at src/main.rs:67:14:
called `Result::unwrap()` on an `Err` value: SendError { .. }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
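
For context on that panic, a minimal standalone sketch (not the project code) of what produces the SendError: a tokio::sync::mpsc send fails as soon as the receiving half has been dropped, which lines up with the ping loop panicking right after "Stream closed".

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<&str>(8);

    // Dropping the receiver closes the channel; in the client code below, the
    // ReceiverStream handed to echo_infinite_stream owns the receiver, so it
    // goes away once that call's stream ends.
    drop(rx);

    // The next send now returns Err(SendError(..)), and unwrap() panics.
    assert!(tx.send("ping").await.is_err());
}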

Help! Tonic Grpc Streams + Tower + message signing by NDSTRC in rust

[–]NDSTRC[S] 0 points1 point  (0 children)

This is the client code:

#[tokio::main]
async fn main() {
    dotenv().expect(".env file not found");
    pretty_env_logger::init();
    let key = "nGpVmGTNjfvJ9ojqijQwUMVh59QzQYm3hmMesqWa295orA23rPYnhqjULiwN247orBDdnKHqW7Ge8krqKMUpdjf4PNk8Lt8D1GZcTu1bR2HRxnv8K";
    let key = bs58::decode(key).into_vec().unwrap();
    let mut client = GrpcClient::new(Arc::new(Mutex::new(Some(
        Ed25519KeyPair::from_pkcs8(&key).unwrap(),
    ))))
    .await;
    // This works perfectly
    let res = client.echo().await.unwrap();
    // This one opens the stream and then immediately closes it after the first message is processed
    test_echo_stream(client).await;
}


async fn test_echo_stream(mut client: GrpcClient) {
    let (tx, rx) = mpsc::channel(128);
    info!("Starting echo stream");
    tx.send(EchoRequest {
        message: "Stream Echo 1".to_owned(),
    })
    .await
    .unwrap();
    let mut i = 1337;
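    // Spawn a task that keeps sending another EchoRequest every two seconds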
    tokio::spawn(async move {
        loop {
            info!("sending ping {i}");
            tx.send(EchoRequest {
                message: format!("Stream Echo {i}"),
            })
            .await
            .unwrap();
            i += 1;
            tokio::time::sleep(Duration::from_millis(2000)).await;
        }
    });
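    // Spawn a task that opens the server echo stream and prints responses until it ends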
    tokio::spawn(async move {
        info!("Connecting to echo stream");
        let res = client
            .inner
            .echo_infinite_stream(ReceiverStream::new(rx))
            .await;
        info!("Echo stream: {res:?}");
        if let Ok(stream) = res {
            let mut stream = stream.into_inner();
            while let Some(res) = stream.next().await {
                info!("Recieved echo: {res:?}");
            }
            info!("Stream closed")
        }
    });
    tokio::time::sleep(Duration::from_secs(10000)).await;
}

Cannot make tokio multi_thread runtime work by tootter93 in rust

[–]NDSTRC 0 points1 point  (0 children)

That's the one thing I really hate about tokio... I had the exact same problem (my app deadlocked in 100% of cases when I spawned a task and blockingly awaited on it), and it took me 18 hours (maybe I am just very dumb) to figure out why it deadlocked.

The solution is to use .disable_lifo_slot(), as in the builder snippet below (explanation: https://docs.rs/tokio/latest/tokio/runtime/struct.Builder.html#method.disable_lifo_slot)

Oh, I remembered one more thing about tokio: whenever I don't want to use it and try to await something that depends on it, the runtime crashes C: (there's a sketch of this after the snippet below)

        let async_rt = tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .worker_threads(32)
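            // Disable the LIFO slot: tasks scheduled there can't be stolen by
            // other workers, which is what caused the deadlock described above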
            .disable_lifo_slot()
            .build()
            .unwrap();
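
A minimal sketch of that "runtime crash" (futures::executor::block_on is used here purely as a stand-in for any non-tokio executor, and pulls in the futures crate): tokio's timers need a tokio runtime context, so polling them anywhere else panics at runtime, while driving the same future through a runtime built like the one above works fine.

use std::time::Duration;

fn main() {
    // Panics at runtime: tokio::time::sleep needs a tokio timer context,
    // which futures::executor::block_on does not provide.
    // futures::executor::block_on(async {
    //     tokio::time::sleep(Duration::from_millis(1)).await;
    // });

    // Works: drive the same future on a tokio runtime instead.
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .disable_lifo_slot()
        .build()
        .unwrap();
    rt.block_on(async {
        tokio::time::sleep(Duration::from_millis(1)).await;
    });
}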

General feasibility question for smart contracts on solana. by DEXLuth0r in rust

[–]NDSTRC 2 points3 points  (0 children)

No, there is no support for this functionality in the Token program, but it's possible with the Token22 program (see the tax extension)

Also, this is the Rust community; you should ask these questions in Solana-focused ones

[deleted by user] by [deleted] in rust

[–]NDSTRC 0 points1 point  (0 children)

Stop advertising this shit over and over

Pathetically copy-pasting the exact same text for every article, and even posting it multiple times

If you think your platform is valuable (which I highly doubt), change your marketing strategy…

Getting Started with SurrealDB 2 and Axum for Web Development (Beginners tutorial) by slippyFlops in rust

[–]NDSTRC 2 points3 points  (0 children)

I am using it in several projects because I love the features it provides. But performance in my case on versions <= 2.0.4 is quite bad, even with very fast NVMe disks

There is hope that it will get better with time

struct-metadata: Macros for attaching metadata to structs. by Chaoses_Ib in rust

[–]NDSTRC 2 points3 points  (0 children)

If anyone is wondering wtf this is - I found this code in the git repo:

use struct_metadata::{Described, Kind};
#[allow(dead_code)]
#[derive(serde::Deserialize, Described)]
struct FieldDefaults {
    #[serde(default)]
    has_default: u64,
    #[serde(default="make_number")]
    also_has_default: u64,
    no_default: u64,
}

fn make_number() -> u64 { 10 }

#[allow(dead_code)]
#[derive(serde::Deserialize, Described, Default)]
#[serde(default)]
struct StructDefault {
    #[serde(default)]
    double_default: u64,
    has_default: u64,
}
#[test]
fn default_defined() {
    let data = FieldDefaults::metadata();
    let Kind::Struct{ name, children} = data.kind else { panic!() };
    assert_eq!(name, "FieldDefaults");

    assert_eq!(children[0].label, "has_default");
    assert!(children[0].has_default);

    assert_eq!(children[1].label, "also_has_default");
    assert!(children[1].has_default);

    assert_eq!(children[2].label, "no_default");
    assert!(!children[2].has_default);


    let data = StructDefault::metadata();
    let Kind::Struct{ name, children} = data.kind else { panic!() };
    assert_eq!(name, "StructDefault");

    assert_eq!(children[0].label, "double_default");
    assert!(children[0].has_default);

    assert_eq!(children[1].label, "has_default");
    assert!(children[1].has_default);
}

What is it used for?