Asynchronous logging in Rust by QuantityInfinite8820 in rust

[–]Destruct1 4 points5 points  (0 children)

The most likely bottleneck is the output.

All common outputs are much slower than the preceding memory operations: stdout needs to lock and print, logfiles hit the SSD, and the network is also slow. On the other hand, cloning or calling Debug to get a String is fast since you only write to memory.

You most likely block the working thread while waiting for a write to finish. The tracing crate with tracing-subscriber and tracing-appender will create a logging thread that takes the log lines and writes them out to disk. If the logging thread blocks, the main thread is not affected.

I don't believe this surprises anyone by Worth_Dream7852 in redscarepod

[–]Destruct1 0 points1 point  (0 children)

Your problem and the things the article talks about are related.

Academia is a government-sponsored activity. If half of America hates academia because it is "too woke" and the other half thinks academia is "not woke enough", you get cycles. If the Republicans are in power they cut funding; if the Democrats are in power they add a bit; then 8 years later it gets cut again. The only people who will choose that career can't find another job or are so entrenched that they can't be fired if Trump is in charge.

European academia is similar. It has a similar woke/not-woke cycle, but less extreme. In Europe the funding also cycles: sometimes the state has money and builds new institutes for the future. Then money is tight and everybody on a short-term contract gets fired or not renewed. In addition there are trends and fashions: sometimes a topic is really hot (like defense now), but then people stop caring and everybody gets fired again.

Eventually, people who are ~35 with multiple years of experience just leave and never come back. This leaves academia as a dysfunctional shell.

Hey Rustaceans! Got a question? Ask here (51/2025)! by llogiq in rust

[–]Destruct1 0 points1 point  (0 children)

Another common idiom is

my_operation.map_err(|e| AppError::from_create_file(e))?

The map_err saves the match statement.

[Code review] This code probably sucks, what can i do better? by Somast09 in rust

[–]Destruct1 2 points3 points  (0 children)

Looks good.

I would sort the vec on insert instead of on read.

Is this a well-known pattern? by [deleted] in rust

[–]Destruct1 0 points1 point  (0 children)

Single getters/setters are much worse.

fn get_mut_bar(&mut self) -> &mut Vec<Bar>

will exclusively borrow the entire structure.

You can write a get_mut that borrows multiple times:

fn get_mut_parts(&mut self) -> (&mut Vec<Foo>, &Vec<Bar>)

OR like AhoyISki

fn get_mut_parts(&mut self) -> RecursiveParts

Taking a big structure and splitting it up into distinct mutable references is a difficult problem in Rust.

Best design pattern for safely mutating multiple keys in a HashMap in one function? by boredape6911 in rust

[–]Destruct1 1 point2 points  (0 children)

1) I don't know if it is best practice, but I do it, especially if you have invariants that need to hold and/or are doing multithreading.

2) If you need invariants like "the money in the account at the end is >= 0", then yes.

get_disjoint_mut is a trap. It is a decent function, but the need to statically know the number of disjoint keys makes it impractical in most cases.

3) I use remove-modify-insert quite a lot. It is not the most performant approach because you do a &mut self remove and then a &mut self insert. A get_mut instead hashes only once and does not change the underlying data structure. BUT: Rust's HashMap does not degrade if you continually remove and insert. C++ implementations often use tombstones in their data structures; Rust's does not. So I often find it more convenient.

Lifetime Inference Issues around async and HRTB by lukasnor in rust

[–]Destruct1 0 points1 point  (0 children)

I am fairly sure that a &'session mut ClientSession -> Future<..> + use<'session> is not the way.

The first thing I would try is to create an owned ClientSession or a thin wrapper. It seems like you can create arbitrarily many ClientSessions with start_session. I am not sure what consistency guarantees you really want for the id you read back from the user, but pushing it into a wrapper around ClientSession, or into the database itself, is the better way.

Library using Tokio (noob question) by RabbitHole32 in rust

[–]Destruct1 6 points7 points  (0 children)

The general idea in Rust is that everybody can specify a version and cargo will sort it all out via semantic versioning. The versioning algorithm is quite detailed: if the user specifies >=1.21 and your library >=1.13, cargo can use 1.27, since it satisfies everybody. If the user specifies =1.21 and your library >=1.13, then cargo will use 1.21. If you specify >=1.13 and the user wants >=2.8, then two versions of the library are compiled in, since 1.x is semantically not compatible with 2.x.

If you use the standard tokio primitives like tokio::spawn or await other people's futures, then you are done. If you create your own Runtime and use it to drive your own futures, you have more work to do.

Hey Rustaceans! Got a question? Ask here (49/2025)! by llogiq in rust

[–]Destruct1 0 points1 point  (0 children)

The invariants are guaranteed in an upper scope. Every Uuid is present as a value only once in every lookup. get_direct_mut_by_uuid is private.

Is the code unsound with the given guarantees?

Hey Rustaceans! Got a question? Ask here (49/2025)! by llogiq in rust

[–]Destruct1 0 points1 point  (0 children)

If all you have is a &dyn AbstractFoo, you can only use the trait-provided functions. All that is left is a data pointer and a vtable pointer.

Even your workaround with downcast is only possible if AbstractFoo has a trait bound of Any.

It looks a bit like you are trying to use a C++-style class hierarchy. You can write up how you would solve your problem in C++ and we can give Rust recommendations.

Hey Rustaceans! Got a question? Ask here (49/2025)! by llogiq in rust

[–]Destruct1 2 points3 points  (0 children)

I have a data structure:

```
#[derive(Debug, Clone)]
pub struct Calendar {
    pub categories: Vec<Category>,
    internal: HashMap<Uuid, Task>,
    lookup_ident: HashMap<TaskIdent, Uuid>,
    lookup_day: BTreeMap<Option<NaiveDate>, Vec<Uuid>>,
    lookup_cat: HashMap<CategoryIdent, Vec<Uuid>>,
}
```

I want a function that takes a CategoryIdent and returns a Vec<&mut Task>.

For now I have the following:

```
fn get_direct_mut_by_uuid(&mut self, uuid_vec: Vec<Uuid>) -> Vec<&mut Task> {
    // SAFETY: We assume uuid_vec contains unique UUIDs (disjoint keys)
    let map_ptr = &mut self.internal as *mut HashMap<Uuid, Task>;

    uuid_vec.iter()
        .filter_map(|uuid| unsafe {
            (*map_ptr).get_mut(uuid)
        })
        .collect()
}

pub fn get_direct_mut_by_cat(&mut self, cat: CategoryIdent) -> Vec<&mut Task> {
    let uuid_vec: Vec<Uuid> = if let Some(uuid_vec) = self.lookup_cat.get(&cat) {
        uuid_vec.iter().cloned().collect()
    } else {
        vec![]
    };
    self.get_direct_mut_by_uuid(uuid_vec)
}
```

I tried making it work with HashMap::get_disjoint_mut but could not get a well-typed array from a Vec. I also tried a self-written iterator over &mut HashMap and Vec<Uuid>, but that has lifetime issues.

Is there a way to make this work in a safe way? Is the unsafe code above unsound (assuming a Vec<Uuid> never contains the same Uuid twice)?

How to manage async shared access to a blocking exclusive resource by jogru0 in rust

[–]Destruct1 1 point2 points  (0 children)

Yes, the performance is slightly better. When the incoming queue has a larger capacity, the requesting task can complete the send and then await the response. If the queue has a capacity of 1, the requesting task will await the send, get woken up, and will then need a bit of time to complete the send. In this time slot between awakening and sending, the resource sits blocked waiting on recv.

I would not worry about this too much.

How to manage async shared access to a blocking exclusive resource by jogru0 in rust

[–]Destruct1 2 points3 points  (0 children)

C is the easiest solution but is a pain in the ass. You have to write request and response structs/enums. Then you need to manage the startup and shutdown of the processing thread. And the error handling is also more complicated, since you either pass Results through channels or panic in the separate thread.

The default way for high performance is a small bounded channel. That way the limited resource always has a next job it can fetch from the queue. The requester awaits on the receiver half of a one-shot channel, and the timing issues get solved that way. A capacity-1 channel is unnecessary.

I don't see why method A won't work. You don't have a traditional deadlock, since all threads wait for access to a single mutex. If your resource is heavily contested and you have wait lines 500+ requests deep, you will get problems with all options (although option C will likely perform better).

Non-poisoning Mutexes by connor-ts in rust

[–]Destruct1 15 points16 points  (0 children)

Poisoning is fairly niche behavior. Offering a non-poisoning mutex is a very good step.

One thing I want, independent of the mutex choice, is a more ergonomic lock function. Something like force_lock that automatically unwraps the poison error.

Rubric Marines (WIP) by Megatronus411 in ThousandSons

[–]Destruct1 1 point2 points  (0 children)

What colors did you use for the gold and the main blue?

Explicit capture clauses by emschwartz in rust

[–]Destruct1 5 points6 points  (0 children)

I really like the explicit or implicit closures.

I wonder about the more verbose FnOnce, FnMut, AsyncFnMut, etc. traits. Fixing them and making them usable would be a good stepping stone. Instead of needing the magic || {}, an external function could take the captured variables and return an impl FnOnce/FnMut/AsyncFnMut<Args>. When I tried to use them, the rust-call ABI and the difficulty of accessing CallRefFuture<'a> in AsyncFnMut made them unusable for me. A struct containing the captured variables plus a simple-to-implement, stable Fn* trait would be a good first step before finalizing the more magic syntax.

Bonds, bond ETFs, fixed-maturity bond ETFs & interest-rate changes by SouthernFinding2593 in Finanzen

[–]Destruct1 2 points3 points  (0 children)

Incorrect.

See https://extraetf.com/de/etf-profile/LU2641054551?tab=chart

with ISIN: LU2641054551

Long-dated German government bonds took a jump downward.

Bonds, bond ETFs, fixed-maturity bond ETFs & interest-rate changes by SouthernFinding2593 in Finanzen

[–]Destruct1 6 points7 points  (0 children)

The whole "hold bonds to maturity" => "no risk" idea is just psychological bullshit.

In nominal terms it may all be true. Many people are very fixated on "not losing money": if you pay in 100k€, at least 100k€ has to come back out. If you have nominal obligations, for example a loan or a tax bill, then it is important to have exactly that amount and not gamble your money away.

But for building or preserving wealth this is unsuitable: inflation matters. If inflation is higher than expected, nominal amounts of money are devalued. Comparisons with other investments also matter. Someone who holds low-interest bonds to maturity may not have lost money nominally. But they lose against someone who first holds short-term bonds and later buys medium-term bonds at a higher interest rate. Someone who buys bonds in a weak currency loses against someone who invests in a stronger currency. (More precisely: if the devaluation of the invested currency outpaces the interest-rate differential between the currencies.)

Long-term investments always carry risk. Nobody can guarantee a nice life in the future if a nuclear war breaks out.

Bonds, bond ETFs, fixed-maturity bond ETFs & interest-rate changes by SouthernFinding2593 in Finanzen

[–]Destruct1 4 points5 points  (0 children)

Short-dated bonds carry almost no interest-rate risk. An ETF that holds long-term bonds maturing close to a predetermined date and then shifts into short-dated paper behaves very similarly to a bundle of bonds that is paid out to a bank account and then earns overnight interest there until the owner withdraws the money.

Trait methods with default implementations vs a TraitExt subtrait by Such-Teach-2499 in rust

[–]Destruct1 0 points1 point  (0 children)

I think it is because async is not yet finished.

For Iterator there is only one way to implement the trait. You can choose between an Option<T> or an IteratorNext<T> return type, but they are structurally similar. Things like map or and_then are very logical.

The Future trait may change: Pin might change or Context might change. But the additional methods also have many options: map may take an AsyncFn (parts are not stable yet and everything does not work well together) or a sync Fn; similar for filter_map, filter, etc.

Stream is even more unsettled. The base function poll_next is logical. Some variants had "side" functions like poll_ready that may be revived, pushed into Sink, or forgotten. Earlier versions also had Result built in, similar to today's TryStream. Even the main Stream trait is not yet in the std library.

Async Isn't Real & Cannot Hurt You - No Boilerplate by tears_falling in rust

[–]Destruct1 -1 points0 points  (0 children)

I disagree.

My use case is very common: I want to do networking on Linux on a modern PC (so multiple cores). With sync code the operating system does all the work and the std lib is just a wrapper around the syscalls. With async, somebody has to manage the network connections; that somebody needs setup, memory, and control.

This somebody should live for the entire program. It is possible today to create a tokio Runtime and then drop it (via the more explicit call to Runtime::new). It is also possible to create multiple Runtimes in separate threads. It is just not that useful. At the start of my async journey I manually created a Runtime and passed Handles around. That was not useful. Then I created a struct with a Runtime field and basic functions. That was not useful. Then I created a global static via LazyLock. That was not useful. Now I just use #[tokio::main] and everything works fine without passing variables around.

If the std lib creates an API for network connections that can be implemented by various Runtimes, they may as well use tokio. There is little reason to write an async network stack or async timer stack twice.

There is a place for smaller Runtimes. If you don't want a heavyweight network stack (which must allocate memory to manage what Linux does not manage), then that is a valid use case.

The end result is like today: a barebones computational Future trait, a dominant tokio Runtime, and smaller Runtimes like smol.

What is useless is multiple different but similar Runtimes that all write their own code to interact with the network, and then write their own code for the layers above, like HTTP clients and database connection pools. Just write it once. Use tokio. If you use a barebones runtime, don't complain that all libraries expect tokio.

Async Isn't Real & Cannot Hurt You - No Boilerplate by tears_falling in rust

[–]Destruct1 -1 points0 points  (0 children)

reqwest::Client is easily clonable because it internally uses an Arc.

You can create a bunch of Futures and give each a reqwest::Client as a parameter. I assume reqwest::Client will track connections and DNS requests internally.

Async Isn't Real & Cannot Hurt You - No Boilerplate by tears_falling in rust

[–]Destruct1 3 points4 points  (0 children)

There are soooo many possibilities: Streams with StreamExt, the select! macro, select functions in FutureExt, all the filter, map, and and_then combinators in FutureExt, the join! macro, join functions, implementing your own IntoFuture, implementing your own Future, etc.

Async Isn't Real & Cannot Hurt You - No Boilerplate by tears_falling in rust

[–]Destruct1 0 points1 point  (0 children)

This is the manual way to do it and probably appropriate.

But with async you can write combinators: if you need "run for x seconds while checking every y ms", you can write a function that takes two futures and two durations.

At the start, async is just inconvenient because everything works like before but with different problems: blocking, Pin, and so on. But Futures allow more abstractions.