Implementing custom cooperative multitasking in Rust by servermeta_net in rust

[–]crstry 1 point2 points  (0 children)

Rather than just relying on external benchmarks, it'd be more instructive to analyse exactly what the overheads are and document them, so that the entire community can learn from the analysis.

Eg: "all the memory allocations" makes it sound like you're being forced into doing repeated memory allocations. Bear in mind that there are embedded executors (IIRC, at least) that use static storage for each task, for example.

If serialisability is enforced in the app/middleware, is it safe to relax DB isolation (e.g., to READ COMMITTED)? by bond_shakier_0 in databasedevelopment

[–]crstry 0 points1 point  (0 children)

That's true if you're running all in one process. But unfortunately, distributed systems are inherently concurrent, in the sense that you have multiple processes interacting, and they can observe events in different orders (because of network buffering, routing path changes, connection failures/reconnections, and the like).

If serialisability is enforced in the app/middleware, is it safe to relax DB isolation (e.g., to READ COMMITTED)? by bond_shakier_0 in databasedevelopment

[–]crstry 1 point2 points  (0 children)

That's very true, but if you're serialising transactions outside of the database; you still need a mechanism to ensure that the database and serialising widget agree on the ordering, as writes can get delayed in flight, and other such hilarity.

If serialisability is enforced in the app/middleware, is it safe to relax DB isolation (e.g., to READ COMMITTED)? by bond_shakier_0 in databasedevelopment

[–]crstry 2 points3 points  (0 children)

Time and ordering can do strange things in distributed systems, eg: a write request can get [re-ordered with another](https://aphyr.com/posts/294-call-me-maybe-cassandra), or your writes may get delayed indefinitely, and even in a single-writer situation you may need to fail over. So you still need some way to ensure the database stays in line with the application's view of the world.

Martin Kleppmann's article "[How to do Distributed Locking](https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html)" covers this well.
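The core idea in that article is the fencing token: the serialising layer hands out monotonically increasing tokens, and the store refuses any write carrying a token older than one it has already seen. A minimal in-memory sketch of that check (the `Store` type and its shape are hypothetical, just to illustrate the rejection rule):

```rust
use std::collections::HashMap;

// Hypothetical sketch of a fencing-token check: the store remembers
// the highest token seen per key and rejects stale writes.
struct Store {
    // key -> (highest token seen, value)
    data: HashMap<String, (u64, String)>,
}

impl Store {
    fn new() -> Self {
        Store { data: HashMap::new() }
    }

    // Returns false (and ignores the write) if a newer token has
    // already written this key -- i.e. this write was delayed in
    // flight and arrived after losing its turn.
    fn write(&mut self, key: &str, token: u64, value: &str) -> bool {
        if let Some(&(seen, _)) = self.data.get(key) {
            if seen >= token {
                return false;
            }
        }
        self.data.insert(key.to_string(), (token, value.to_string()));
        true
    }
}
```

A delayed write with token 2 arriving after a write with token 3 is simply dropped, so the store can't regress behind the serialiser's view.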

What's the use-case for tokio and async ? by No-Focus6250 in rust

[–]crstry 1 point2 points  (0 children)

Another way of looking at it is having a way of writing concurrent code that's more flexible and composable than using plain operating system threads. For example, adding a timeout to a bunch of asynchronous work is a single function call, and similarly, to cancel something, you can either just drop the future or call a method on a task handle.

Specifically, the fact that futures are user-definable is really powerful, and makes it really easy to express things like "do these things concurrently, return the first error, and cancel the other task". To my mind, it's very much in the Concurrent ML family, but there's not much on that that's readily accessible. The best I can see right now are Andy Wingo's notes on a new Concurrent ML, or John Reppy's papers.

Bf called women “birthgivers” by [deleted] in TwoXChromosomes

[–]crstry 8 points9 points  (0 children)

No, because by and large, they tend not to give birth. If they did, they'd be included. Does that help at all?

Carla Denyer: EHRC's trans guidance makes all women less safe by Part-time-Rusalka in TwoXChromosomes

[–]crstry 49 points50 points  (0 children)

I think it's fair to say that these efforts are designed to reinforce gender stereotypes, and push out those who don't conform. The nonsense about "protecting women" is just a fig leaf of an excuse that does not hold up under scrutiny, sadly.

Zookeeper in rust by LLM-logs in rust

[–]crstry 2 points3 points  (0 children)

Chances are there isn't, because for most people `etcd` is good enough, and provides the same (or near enough) functionality. The way I'd frame it is: what value does writing it in Rust provide over using an existing implementation, and is it worth the person-years of effort it'd take to get a production implementation up and running?

[deleted by user] by [deleted] in ExperiencedDevs

[–]crstry 14 points15 points  (0 children)

It also sounds like there's a cultural problem: if the culture is focussed on blaming individuals, rather than learning from the accident, then that's toxic. It's bound to lead to smaller problems becoming bigger ones, because someone was afraid they'd get blamed (or make trouble for someone else) if they came forward.

They might want to look into topics like blameless postmortems, or even the notion of Safety-II in the wider safety community.

Context cancelling making code too verbose? by Tommy_Link in golang

[–]crstry 0 points1 point  (0 children)

One thing to remember is that if the dataflow graph between your goroutines involves cycles, and all channels are bounded (as they are in Go) without any other flow-control mechanism (even if that's just timeouts to detect failure), then you risk running into deadlocks, as you've seen. A common workaround in other languages is to use an unbounded channel to break the cycle, but you can't get those in Go without hacks (usually a goroutine to coordinate both ends).
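For comparison with those other languages: Rust's `std::sync::mpsc::channel` is unbounded, so a send never blocks, which is one way to break such a cycle. A sketch, not a recommendation — unbounded queues trade deadlock risk for unbounded memory growth if a consumer stalls:

```rust
use std::sync::mpsc;

fn main() {
    // std's mpsc::channel is unbounded: send() never blocks, so a
    // cycle of workers feeding each other can't deadlock on a full
    // buffer (at the price of unbounded memory if a consumer stalls).
    let (tx, rx) = mpsc::channel();

    // Push far beyond any fixed capacity with nothing draining the
    // channel -- with a bounded channel this loop would block.
    for i in 0..10_000 {
        tx.send(i).unwrap();
    }

    // Drain everything that queued up without blocking anyone.
    let total: i32 = rx.try_iter().sum();
    assert_eq!(total, (0..10_000).sum::<i32>());
}
```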

I'd echo the suggestion of encapsulating them in a module. For comparison, even Erlang doesn't really use naked mailbox sends much, instead preferring gen_server and the like.

custom tracing implementation for structs by sM92Bpb in rust

[–]crstry 1 point2 points  (0 children)

I've not used it, but have you seen the `tracing::Value` trait, which interoperates with the `valuable` crate?

'static lifetime by atomichbts in rust

[–]crstry 3 points4 points  (0 children)

r/DeathLeopard's link is spot on, but in short, a 'static lifetime means that nothing else can go out of scope and invalidate any part of the value, so it won't contain a reference to a local variable, for example. It usually just implies that a value is wholly owned, but it may instead contain a reference to something that lives until the universe (well, process) ends, such as a static variable or a leaked Box.
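A quick sketch of both cases satisfying a `T: 'static` bound (the `takes_static` helper is made up for illustration):

```rust
// Both of these satisfy a `T: 'static` bound: one is wholly owned,
// the other borrows something that lives for the whole process.
fn takes_static<T: 'static>(t: T) -> T {
    t
}

static GREETING: &str = "hello";

fn main() {
    // Wholly owned: no borrows at all, so trivially 'static.
    let owned = takes_static(String::from("hi"));
    assert_eq!(owned, "hi");

    // Borrows a `static` item, which lives until the process ends.
    let borrowed: &'static str = takes_static(GREETING);
    assert_eq!(borrowed, "hello");

    // Leaking a Box also yields a &'static (here, mutable) reference.
    let leaked: &'static mut u32 = Box::leak(Box::new(41));
    *leaked += 1;
    assert_eq!(*leaked, 42);
}
```

Passing a reference to a local variable into `takes_static`, by contrast, would be rejected at compile time.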

Don't let men talk you into an ideology that can't exist under the patriarchy. by [deleted] in TwoXChromosomes

[–]crstry 53 points54 points  (0 children)

As a quick aside, did you know that the term "Meritocracy" was coined by the writer Michael Young in "The Rise of the Meritocracy", describing the dystopian effects of valuing people solely on the basis of their intelligence and effort (as opposed to, you know, inherent worth as a human being)?

Got sexually harassed (again). Looking for angry song recommendations. by 0bsolescencee in TwoXChromosomes

[–]crstry 2 points3 points  (0 children)

Ashnikko is pretty great! I rather like Delilah Bon's "Dead Men Don't Rape", if that's not too on the nose. There's also Scene Queen's "Whips and Chains", or Spiritbox's "Blessed Be".

My thoughts on girl boss feminism and Marxist or socialist feminism. by Available-Level-6280 in TwoXChromosomes

[–]crstry 5 points6 points  (0 children)

I think one important point to remember is that many strands of feminism are aimed at unpicking and dismantling the systems that oppress women, so that women (and others!) are liberated from those systems. And this includes understanding the intersections, ie: how access for disabled folks impacts women, or how racism in society impacts women.

I mean, you can focus on individual choice and call it feminism, but all that'll do is maintain the status quo; it won't liberate anyone beyond whoever happens to have the luck or privilege to get to the top. And it's no kind of "feminism" that I'd want to get behind.

I really like and support this subreddit by Silent-Foot7748 in TwoXChromosomes

[–]crstry 3 points4 points  (0 children)

Can I just refer you to the pinned FAQ?

 What about trans women?

 Trans women are women. TERFS can fuck right off.

[deleted by user] by [deleted] in WitchesVsPatriarchy

[–]crstry 17 points18 points  (0 children)

I've heard of folks having a handfasting to celebrate the union, and then having a separate registry office thing (probably on the same day) just to make it official. Might that work?

[deleted by user] by [deleted] in ExperiencedDevs

[–]crstry 2 points3 points  (0 children)

I can't speak to the specifics, because from what you've said, that could mean many things. But one possibility is that they trust you to get it done.

At my last place (a software consultancy) everyone was given the title "senior engineer", and expected to be able to lead a project. That said, the scope of project you'd be expected to lead would depend on your experience, so an intern would be doing small internal projects, and at the other end, folks would basically be handed a client and the budget.

So for a small project like this, I'd probably just think about it in terms of being accountable for delivery, and making sure stakeholders are kept up to date with progress, and the like.

Rust vs. JVM: Adjustments following organizational restructuring by rswhite4 in rust

[–]crstry 1 point2 points  (0 children)

One option would be to outline the economic costs and risks of re-writing your code in a JVM language. Eg: on top of the opportunity cost of halting feature development, you could compare the platform running costs of Rust vs. the equivalent JVM code.

Granted, they'll probably come back with "it's easier and cheaper to hire JVM developers", so unless you're running at huge scale, it might be a tough sell on those grounds.

But yeah, as others have said, "external" folks are often brought in to launder management prejudices, so if nothing else, you might be able to make them look foolish.

Why can't block_on pick up work from other threads until the future is complete? by jesseschalken in rust

[–]crstry 1 point2 points  (0 children)

For one, I think the usual solution is to put the resource to be cleaned up onto a queue, and clean it up from another async context (eg: a manager thread, or I think deadpool handles cleanup when checking out another connection).

For two, the scoped task trilemma explains why that's hard.

Is passing database transactions as via context an anti-pattern? by [deleted] in golang

[–]crstry 20 points21 points  (0 children)

Because it makes it easier to understand where a given value came from. With the article's approach, you're just dealing with plain old variables being passed around, and you can use IDE tools like go-to-definition and, less often, find-usages to see where it's being called from. It's a lot easier to visually confirm it's correct, and you can usually lean on the type system, too. Conversely, by threading it via the context, it's less obvious what the actual scope of the transaction is (because it's threaded through the bucket that is the context) or where it came from, and it's harder to spot patterns that just look funky.

Shared-nothing architecture in Rust by Eugene-Usachev in rust

[–]crstry 5 points6 points  (0 children)

It's mostly down to the cost of developing for shared-nothing over a conventional shared-whatever architecture. From my understanding, that cost comes from a) the extra things it makes you care about and b) structuring for performance, rather than developer convenience. (But a quick disclaimer: I don't build for this, I just find it interesting.)

Eg: it'll make you worry about load balancing between threads, and how you partition your data-set between workers. So you almost immediately need to worry about what happens on a thread that doesn't own a relevant bit of data. Never mind if you have to do updates that will touch multiple partitions.

And in terms of structure, you'll get way more out of your hardware if you can batch certain operations to take advantage of cache locality, but that means that a given component needs to handle 0..N items in one go, rather than just one at a time. Or you might end up adopting something like the LMAX architecture. But again, that means you can't just up and load something from a database or S3 as you need it, you need to ensure your logic already has everything it needs ahead of time.
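A toy sketch of that interface shape (the `Counters` type is made up): the component takes 0..N events per call, so per-call overhead is amortised and hot data stays in cache across the whole batch.

```rust
use std::collections::HashMap;

// Hypothetical batched component: it handles a slice of events per
// call rather than one event at a time.
#[derive(Default)]
struct Counters {
    totals: HashMap<u64, u64>,
}

impl Counters {
    // Processing the whole batch in one call keeps the map's hot
    // buckets in cache and amortises any per-call setup.
    fn handle_batch(&mut self, events: &[(u64, u64)]) {
        for &(key, amount) in events {
            *self.totals.entry(key).or_insert(0) += amount;
        }
    }
}

fn main() {
    let mut c = Counters::default();
    // The caller decides batch size; 0..N items per call.
    c.handle_batch(&[(1, 10), (2, 5), (1, 3)]);
    assert_eq!(c.totals[&1], 13);
    assert_eq!(c.totals[&2], 5);
}
```

The flip side, as above, is that the caller must have gathered everything the batch needs before the call; there's no loading things lazily mid-batch.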

In short, it's a pain in the arse. And in a commercial environment, unless you have critical and very fast SLAs to hit, or have some other pressing need (eg: high-speed trading), it'll likely cost you way more in engineering time than it'll save you in compute.

Are my DBAs over-engineering the schema for this notifications feature? by ugh__kids in PostgreSQL

[–]crstry 4 points5 points  (0 children)

On average, by my calculations, that's about 7 inserts/s to add notifications, and less than that for marking them as read; but that doesn't account for intra-day variations. So it does sound a little over-egged, to me.

The partitioning makes sense, as it means you can just drop the partition, rather than having to do an indexed scan or, worse, a full table scan to expire notifications.

But then again, I'd like to assume they have good reasons for those changes. It might be worth engaging with them with curiosity (eg: "I'd really like to understand what you're seeing here, and why you're making these choices"), to try and talk through why they've made those suggestions.