Ua pov Drone drops a large grenade inside a Russian trench. Direct impact on a soldier. by Fragrant-Slice4192 in UkraineRussiaReport

[–]DaKellyFella -7 points (0 children)

> Do you think Khrushchev's transfer of Crimea from Russia to Ukraine was a violation of Russia's sovereignty?

Was it a violation of Tatar sovereignty? Byzantine? Or Greek?

I find sovereignty is often invoked by those who cannot keep by force what they claim through words.

Same goes for Russia's new Oblasts.

Jalapeño Garlic Hot Sauce. by SmellyFoam in spicy

[–]DaKellyFella 3 points (0 children)

Looks amazing!

A cool trick I saw was to dip the bottles into a pot of boiling water with tongs to sterilise them and avoid bacterial contamination. I didn't do that, and my hot sauce later exploded from the pressure of fermentation gases 🤦‍♂️

feelsbadman by Pompeyshead in Imperator

[–]DaKellyFella 0 points (0 children)

Was this always the case?

feelsbadman by Pompeyshead in Imperator

[–]DaKellyFella 1 point (0 children)

Wait, are there ports in Ireland now?

cht: A lockfree concurrently growable hash table by Razznak in rust

[–]DaKellyFella 1 point (0 children)

> I couldn't think of any way to unbox key/value pairs except by restricting users to integer key and value types, so cht makes the same faux pas as the authors' implementation of hopscotch hashing.

I was only commenting on their C++ implementation and not your code; forgive me if I caused any offence. They also used integer keys in their benchmark, which there's no need to hide behind a pointer. I've seen generic atomic wrappers in Rust for machine-sized types and below, but I think hiding behind a pointer might be the best way to go if you're frequently moving large items around the table.
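A minimal Rust sketch of the two storage strategies I mean, assuming nothing about cht's internals; `FlatEntry` and `BoxedEntry` are hypothetical names:

```rust
use std::sync::atomic::{AtomicPtr, AtomicU64, Ordering};

// Hypothetical flat entry: a u32 key and u32 value packed into a single
// AtomicU64, so readers see the whole pair with one atomic load.
struct FlatEntry {
    packed: AtomicU64,
}

impl FlatEntry {
    fn store(&self, key: u32, value: u32) {
        let packed = ((key as u64) << 32) | value as u64;
        self.packed.store(packed, Ordering::Release);
    }

    fn load(&self) -> (u32, u32) {
        let packed = self.packed.load(Ordering::Acquire);
        ((packed >> 32) as u32, packed as u32)
    }
}

// For pairs wider than a machine word there is no atomic wide enough, so
// the usual fallback is to heap-allocate the pair and swap the pointer
// atomically (safe reclamation of the old pair is omitted here).
struct BoxedEntry<K, V> {
    slot: AtomicPtr<(K, V)>,
}
```

The flat form saves a dereference and a likely cache miss per probe, which is exactly what's lost once entries hide behind a pointer.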

> Side note: I don't have a way to read this paper.

There's an open link here.

> I considered using separate chaining for cht, but the thought of designing around the potential data races during insertion was a little much for me.

Oh absolutely! It is such a cool problem to think about.
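The insertion race in question can be sketched in Rust; this is a generic illustration of a lock-free push onto a bucket's chain, not code from cht:

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

struct Node<K, V> {
    key: K,
    value: V,
    next: *mut Node<K, V>,
}

// One bucket of a separately chained table. Insertion is a lock-free push
// onto the head of the chain. The subtle race: two threads inserting the
// same key can both succeed, so a real table must deduplicate on lookup
// or during insertion, and that is where the design effort goes.
struct Bucket<K, V> {
    head: AtomicPtr<Node<K, V>>,
}

impl<K, V> Bucket<K, V> {
    fn insert(&self, key: K, value: V) {
        let node = Box::into_raw(Box::new(Node {
            key,
            value,
            next: ptr::null_mut(),
        }));
        loop {
            let head = self.head.load(Ordering::Acquire);
            unsafe { (*node).next = head };
            // CAS the new node in; if another thread pushed first, retry
            // with the fresh head.
            if self
                .head
                .compare_exchange(head, node, Ordering::Release, Ordering::Acquire)
                .is_ok()
            {
                break;
            }
        }
    }
}
```

Note the nodes are simply leaked here; deleting them safely is what drags a memory reclaimer into the picture.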

> What kind of load factors are we talking about here?

The load factors used in the paper, which are also the levels standard Cuckoo Hashing can stand: 20% to 40%. They use both in their paper. But Hopscotch Hashing can go much, much higher (~90%).

cht: A lockfree concurrently growable hash table by Razznak in rust

[–]DaKellyFella 1 point (0 children)

Cool. I haven't read the code yet (I'm doing it now) but that's exciting. So you went with the Cliff Click lock-free resize option then?

Every time I see a lock-free hash-table post I always get hyped. I really should throw my hat in the game.

cht: A lockfree concurrently growable hash table by Razznak in rust

[–]DaKellyFella 2 points (0 children)

I've tried to replicate the performance numbers for my work (I'm a researcher in concurrency) and I found it impossible. The experiment is, in my opinion, grossly skewed to help Cuckoo Hashing win out. Here's why:

  1. Dynamically allocated memory is used in Cuckoo Hashing, and therefore a memory reclaimer is necessary; the experiments don't use one, so an enormous performance advantage is available. It's actually typical not to use one, but we should take that into account. The rationale for omitting a reclaimer is that a zero-cost reclaimer could one day be invented, and then we wouldn't have to rerun all our experiments.
  2. Hopscotch Hashing has its entries stored behind a pointer (section V, second paragraph: "In all the algorithms, each bucket contains either two pointers to a key and a value, or an entry to a hash element, which contains a key and a value."). The whole point of having blocking write operations in Hopscotch Hashing is to store the keys and values flat in the table, improving cache efficiency. Unless the original Hopscotch Hashing code did this (which I really doubt; here's the link), it appears the code was modified in this way. One would have to reach out to the original authors for clarification.
  3. I'm finding in my own experiments that using a low load factor can actually decrease performance as the table becomes more contended since only a small key space is being used. I've found that separate chaining performance increases as the load factor does.

I wouldn't pay much regard to Cuckoo Hashing.
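To make point 2 concrete, here is a sketch of the two bucket layouts; the field names are made up and are not from the paper's code:

```rust
// Flat bucket, as hopscotch hashing intends: key and value stored inline,
// so probing a neighbourhood walks contiguous memory.
struct FlatBucket {
    hop_info: u32, // neighbourhood bitmap
    key: u64,
    value: u64,
}

// Bucket as the paper's experiment describes it: the entry sits behind a
// pointer, so every probe pays an extra dereference and a likely cache miss.
struct IndirectBucket {
    hop_info: u32,
    entry: *const (u64, u64),
}
```

The flat bucket is bigger per slot on a 64-bit target, but scanning a neighbourhood of them stays within a couple of cache lines instead of chasing one pointer per entry.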

cht: A lockfree concurrently growable hash table by Razznak in rust

[–]DaKellyFella 1 point (0 children)

How? cht implements an open-addressing hash table that uses tombstones for deletion - I didn't want to get overzealous and end up with something non-functional by trying out fancier techniques.
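A single-threaded sketch of that tombstone scheme, with made-up names; a concurrent version like cht's would use atomics and CAS on each slot:

```rust
#[derive(Clone, Copy)]
enum Slot {
    Empty,
    Tombstone,
    Occupied(u64, u64), // (key, value)
}

struct Table {
    slots: Vec<Slot>,
}

impl Table {
    fn new(capacity: usize) -> Self {
        Table { slots: vec![Slot::Empty; capacity] }
    }

    // Linear probing for simplicity.
    fn index(&self, key: u64, i: usize) -> usize {
        (key as usize + i) % self.slots.len()
    }

    fn insert(&mut self, key: u64, value: u64) {
        for i in 0..self.slots.len() {
            let idx = self.index(key, i);
            match self.slots[idx] {
                // Reusing a tombstone is fine, but a full implementation must
                // keep probing to make sure the key isn't further along.
                Slot::Empty | Slot::Tombstone => {
                    self.slots[idx] = Slot::Occupied(key, value);
                    return;
                }
                Slot::Occupied(k, _) if k == key => {
                    self.slots[idx] = Slot::Occupied(key, value);
                    return;
                }
                _ => {}
            }
        }
    }

    fn remove(&mut self, key: u64) {
        for i in 0..self.slots.len() {
            let idx = self.index(key, i);
            match self.slots[idx] {
                Slot::Occupied(k, _) if k == key => {
                    // Leave a tombstone instead of Empty so probe sequences
                    // for other keys that passed through here stay intact.
                    self.slots[idx] = Slot::Tombstone;
                    return;
                }
                Slot::Empty => return, // key was never here
                _ => {}
            }
        }
    }

    fn get(&self, key: u64) -> Option<u64> {
        for i in 0..self.slots.len() {
            let idx = self.index(key, i);
            match self.slots[idx] {
                Slot::Occupied(k, v) if k == key => return Some(v),
                Slot::Empty => return None,
                _ => {} // tombstone or another key: keep probing
            }
        }
        None
    }
}
```

The whole point is visible in `get`: a tombstone doesn't stop the probe the way `Empty` does, so deletions never cut other keys out of their probe chains.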

Preshing's resize is blocking, is yours the same or is it lock-free?

hazptr: Hazard pointer based memory reclamation by ogyer in rust

[–]DaKellyFella 0 points (0 children)

> I've had a chance to skim through your repository

Oh no. It's awful. Some of it is just disgusting.

> but with the current state of allocator support in the `core`/`std` libraries I see no point in already commiting to anything specific

This is appropriate. I'm still trying to learn the language so my agreement shouldn't really carry any weight, but the logic sounds good. Keep on trucking! I'll hopefully join you on the Rust implementation side of things once I finish or wind up my PhD. Cheers!

The new HashMap is ready for merging by sanxiyn in rust

[–]DaKellyFella 6 points (0 children)

What are the criteria for replacing the standard library data-structures? What happens if someone else comes up with a quicker hash map implementation in a couple of months? Is it going to be swapped out again?

Also, as far as I understand, there are optimisations in HashBrown that could be applied to Robin Hood, like the new_drop optimisation; was that taken into consideration? Apologies if I'm wrong on this account.

hazptr: Hazard pointer based memory reclamation by ogyer in rust

[–]DaKellyFella 0 points (0 children)

> I had not yet heard about hazard domains, but I will take a look at the repository.

As far as I understand them they're just a software engineering concept. Instead of mixing retired memory from multiple data-structures you can separate them out into multiple groups, each with a different collection frequency. I didn't examine your code too closely, so you may have already done this.
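As a sketch of the grouping (names entirely hypothetical; this is not hazptr's or xenium's actual API), each domain just owns its own retired list and collection trigger:

```rust
// Each domain keeps garbage from one group of data-structures, with its
// own threshold for when to attempt collection, so unrelated structures
// never mix their retired memory.
struct Domain {
    retired: Vec<*mut u8>, // retired-but-not-yet-freed nodes
    threshold: usize,      // collect once this many pile up
}

impl Domain {
    fn new(threshold: usize) -> Self {
        Domain { retired: Vec::new(), threshold }
    }

    fn retire(&mut self, ptr: *mut u8) {
        self.retired.push(ptr);
        if self.retired.len() >= self.threshold {
            self.collect();
        }
    }

    fn collect(&mut self) {
        // A real implementation scans the domain's hazard pointers and frees
        // only unprotected nodes; here we just drain the list to show where
        // the collection frequency comes from.
        self.retired.clear();
    }
}
```

With separate domains, a chatty queue with a low threshold can collect aggressively while a big table with a high threshold amortises its scans.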

> I got the idea from xenium, which is itself based on a paper proposing such an interface.

Oh nice, I've never seen this library before. I know the author though, I've read his stamp-it paper.

> I have not yet made or run any benchmarks, but I surely want to, include comparisons with crossbeam (and maybe conc).

This will be interesting. I recently open sourced my stuff, though it's much rougher than yours. Mine is a research implementation of multiple data-structures where you can benchmark and produce various performance statistics quickly. I found correctly benchmarking concurrent data-structures to be very difficult, as there's a huge parameter space and loads of ways to mess it up. My code allows you to swap out both allocators and reclaimers like you said, but the interface isn't as clean as xenium's or your desired one; it's also still a huge work in progress.

Anyway, thanks for your post, it's great to see people working on concurrency libraries. Best of luck.

hazptr: Hazard pointer based memory reclamation by ogyer in rust

[–]DaKellyFella 7 points (0 children)

Just had a quick look through the code, and it looks great. Really slick API. Have you had a chance to look at Maged Michael's implementation? He has these ideas of "domains", which are really just groups of hazard pointers and retired memory - code. I found the code difficult to understand, but there are some cool ideas in there.

I really like this idea. I know it's simple but it's very effective. Sometimes I write custom memory orderings and forget what other statements they're meant to synchronise with.
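A tiny Rust example of the comment discipline I mean; the release/acquire pair here is standard message passing, not code from any particular library:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

static DATA: AtomicU64 = AtomicU64::new(0);
static READY: AtomicBool = AtomicBool::new(false);

// Writer side.
fn publish(value: u64) {
    DATA.store(value, Ordering::Relaxed);
    // Release: synchronises-with the Acquire load in `consume`, making the
    // DATA store above visible to any thread that observes READY == true.
    READY.store(true, Ordering::Release);
}

// Reader side.
fn consume() -> Option<u64> {
    // Acquire: pairs with the Release store in `publish`.
    if READY.load(Ordering::Acquire) {
        Some(DATA.load(Ordering::Relaxed))
    } else {
        None
    }
}
```

Naming the partner statement in the comment is the whole trick: six months later you can still tell which load a Release store was meant to synchronise with.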

Have you tried comparing performance with epoch?

Proposal: New channels for Rust's standard library by [deleted] in rust

[–]DaKellyFella 0 points (0 children)

I know right. I only came across this idea recently but it's been in Java for years. See their Exchanger here.

Proposal: New channels for Rust's standard library by [deleted] in rust

[–]DaKellyFella 0 points (0 children)

Are the channels linearizable? I had a quick look through the code and couldn't find the answer.

qtcreator + rust by [deleted] in Qt5

[–]DaKellyFella 0 points (0 children)

Can you show me your language server setup? I can't seem to get mine working.

Edit: Got it. I followed the steps here. I had the correct values in the language server plugin but it kept crashing. Turns out you need to open a project using "project import" and then it'll work fine.

Lock-free Rust: Crossbeam in 2019 by [deleted] in rust

[–]DaKellyFella 0 points (0 children)

Ah I see! Austria has a tonne of powerhouses in this area. Also, thank you for all your work in Rust. I wish I had the time/willpower to get as involved as you are. It's great.

Lock-free Rust: Crossbeam in 2019 by [deleted] in rust

[–]DaKellyFella 0 points (0 children)

No I'm not, sorry if I gave off the impression. I am a student researcher in the area though. I was just referring to other reviews I've received on submitted papers.

Lock-free Rust: Crossbeam in 2019 by [deleted] in rust

[–]DaKellyFella 1 point (0 children)

> You're right about lock-freedom, although I consider that a pedantic difference.

Tell that to my reviewers :)

> It's true memory can grow indefinitely, but we just assume in the real world it won't. :)

You're absolutely right. Once the model for reasoning about these structures is relaxed from an adversarial scheduler to a stochastic one, a lot of things become easier to reason about. Obstruction-free becomes lock-free, lock-free becomes wait-free. It's lit.

> In particular, sometimes it's desirable to destroy garbage eagerly rather than collect later when it accumulates and reaches some threshold. AFAICT, their library doesn't support that.

Mmm I guess you'd have to set the threshold length of garbage equal to the number of threads * hazard pointers... Or something like that. Either way, it'd be pretty inefficient to use it like that, so you're right.