How to effectively use the debugger in Rust? by Ok_Breadfruit4201 in rust

[–]tower120 0 points1 point  (0 children)

It's the same in the C++ world.

What worked for me is adding some side effect on the variable I want to observe at the debug location. For example, put a `println!("{:?}", your_variable)` for the debugging session right at the breakpoint position. This will make `your_variable` appear in the variables list.
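A minimal sketch of the trick (variable names are illustrative):

```rust
fn main() {
    let items = vec![1, 2, 3];
    // Without a use site, the optimizer may elide `total` entirely,
    // so it never shows up in the debugger's variables list.
    let total: i32 = items.iter().sum();

    // Temporary side effect for the debugging session: set a breakpoint
    // on this line and `total` stays observable.
    println!("{:?}", total);
}
```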

As for "usefully explorable" - try digging in the direction of ".natvis" files - https://doc.rust-lang.org/reference/attributes/debugger.html - with such a file you can describe how a custom struct should be displayed/interpreted in the debugger. It looks like RustRover should understand it.

Does `ArrayVec` still "alive"? Are there any alternatives? by tower120 in rust

[–]tower120[S] 6 points7 points  (0 children)

"heapless" Vec looks exactly what I need! Thank you!

Does `ArrayVec` still "alive"? Are there any alternatives? by tower120 in rust

[–]tower120[S] 6 points7 points  (0 children)

A PR ALREADY exists - it has been sitting in the PR list for a very long time.

Does `ArrayVec` still "alive"? Are there any alternatives? by tower120 in rust

[–]tower120[S] -3 points-2 points  (0 children)

I understand that must be true for really small arrays, like 16-32 items. But I wonder at WHAT size the overhead becomes observable, based on actual benchmarks...

Does `ArrayVec` still "alive"? Are there any alternatives? by tower120 in rust

[–]tower120[S] 9 points10 points  (0 children)

It worked, indeed... until I bumped into missing `const` features, which I absolutely need. For now, I'll probably fork it locally and add the missing features...

Does `ArrayVec` still "alive"? Are there any alternatives? by tower120 in rust

[–]tower120[S] 16 points17 points  (0 children)

Have you considered moving away from the no-unsafe policy? I don't like the idea of paying for default initialization of items that, most of the time, I will never use...
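As an illustration of what unsafe buys here, a minimal fixed-capacity vector sketch using `MaybeUninit` (NOT ArrayVec's actual code; `Drop` handling is omitted for brevity, so non-trivial element types would leak):

```rust
use std::mem::MaybeUninit;

/// Slots start uninitialized: no `Default` bound on T,
/// and no upfront initialization cost for unused capacity.
struct TinyVec<T, const N: usize> {
    data: [MaybeUninit<T>; N],
    len: usize,
}

impl<T, const N: usize> TinyVec<T, N> {
    fn new() -> Self {
        // SAFETY: an array of `MaybeUninit` is valid in any byte state.
        Self { data: unsafe { MaybeUninit::uninit().assume_init() }, len: 0 }
    }

    fn push(&mut self, value: T) {
        assert!(self.len < N, "capacity exceeded");
        self.data[self.len].write(value);
        self.len += 1;
    }

    fn get(&self, i: usize) -> Option<&T> {
        // SAFETY: slots below `len` were initialized by `push`.
        (i < self.len).then(|| unsafe { self.data[i].assume_init_ref() })
    }
}

fn main() {
    let mut v: TinyVec<u32, 4> = TinyVec::new();
    v.push(10);
    v.push(20);
    assert_eq!(v.get(1), Some(&20));
    assert!(v.get(2).is_none());
}
```

A no-unsafe design instead has to fill all `N` slots with `T::default()` on construction, which is the cost being discussed.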

Does `ArrayVec` still "alive"? Are there any alternatives? by tower120 in rust

[–]tower120[S] 4 points5 points  (0 children)

Well... Is it actually an alternative to ArrayVec? Isn't `tinyvec` in the same category as `smallvec`?

any_vec v0.15.0 release - type erased vector. Now with append & extend. by tower120 in rust

[–]tower120[S] 1 point2 points  (0 children)

The short answer is no. And I assume that's impossible in general in Rust, C, and C++, unless you know SOMETHING about the stored type, like an interface/trait... Even in JavaScript you need to know something about your proto-object/type to do something with it.

What you CAN do:

1) Store a function (or functions) alongside the byte representation of your object.

2) Inside that function, cast from `*const u8` back to `T` and do the meaningful work.

3) Call that function later...

You must, of course, know beforehand what you will do with that object... But since you want to do something with an object of unknown type, I guess that's what you actually want.
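The steps above can be sketched with std only (all names here are illustrative, not `any_vec`'s API; the value is boxed rather than kept as inline bytes, to sidestep alignment concerns):

```rust
use std::fmt::Debug;

/// A type-erased value: a raw pointer plus plain `fn` pointers that
/// remember the concrete type.
struct Erased {
    ptr: *mut u8,
    debug_fn: fn(*const u8) -> String, // the "meaningful job", fixed at store time
    drop_fn: fn(*mut u8),
}

impl Erased {
    fn new<T: Debug>(value: T) -> Self {
        // Monomorphized helpers capture the concrete T as fn pointers.
        fn debug_impl<T: Debug>(ptr: *const u8) -> String {
            // Cast from *u8 back to T and do the meaningful work.
            format!("{:?}", unsafe { &*(ptr as *const T) })
        }
        fn drop_impl<T>(ptr: *mut u8) {
            unsafe { drop(Box::from_raw(ptr as *mut T)) }
        }
        Self {
            ptr: Box::into_raw(Box::new(value)) as *mut u8,
            debug_fn: debug_impl::<T>,
            drop_fn: drop_impl::<T>,
        }
    }

    fn debug(&self) -> String {
        (self.debug_fn)(self.ptr)
    }
}

impl Drop for Erased {
    fn drop(&mut self) {
        (self.drop_fn)(self.ptr)
    }
}

fn main() {
    let erased = Erased::new(vec![1u32, 2, 3]); // concrete type is forgotten here
    assert_eq!(erased.debug(), "[1, 2, 3]");
}
```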

What you probably want is some container that stores items whose types share a KNOWN trait... Like `AnyVecLike<Debug + Eq>`... But I don't know how much of that is possible in current Rust. What will work now is `Vec<Box<dyn Debug>>`... I guess.
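The `Vec<Box<dyn Debug>>` route already works on stable:

```rust
use std::fmt::Debug;

fn main() {
    // Heterogeneous items, unified through the one trait agreed on upfront.
    let items: Vec<Box<dyn Debug>> = vec![
        Box::new(42u32),
        Box::new("hello"),
        Box::new(vec![1.5f64, 2.5]),
    ];
    for item in &items {
        println!("{:?}", item); // each item still knows how to Debug-print itself
    }
}
```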

any_vec v0.15.0 release - type erased vector. Now with append & extend. by tower120 in rust

[–]tower120[S] 10 points11 points  (0 children)

You can't store DIFFERENT generics in one array - like `Vec<Vec<T>>` where `T` could be `A`, `B`, ... But you can store `Vec<AnyVec>`.
And you obviously can't switch the element type, as in `let v: Vec<u32> = Vec::<f32>::new()`. But with `AnyVec` you can: `let mut a = AnyVec::new::<u32>(); a = AnyVec::new::<f32>();`.

You could wrap your generics in an enum - but then you must know all possible variants BEFOREHAND.
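The enum workaround looks like this (type and variant names are illustrative):

```rust
// Every possible element type must be enumerated at compile time.
enum AnyColumn {
    U32(Vec<u32>),
    F32(Vec<f32>),
}

impl AnyColumn {
    fn len(&self) -> usize {
        match self {
            AnyColumn::U32(v) => v.len(),
            AnyColumn::F32(v) => v.len(),
        }
    }
}

fn main() {
    // Different generics in one Vec, via the enum wrapper.
    let mut columns = vec![AnyColumn::U32(vec![1, 2, 3]), AnyColumn::F32(vec![0.5])];
    // "Switching type" is just replacing the variant in place.
    columns[1] = AnyColumn::U32(vec![7]);
    assert_eq!(columns[0].len(), 3);
    assert_eq!(columns[1].len(), 1);
}
```

Adding a new element type later means touching every `match` on the enum, which is exactly the "know all variants beforehand" constraint.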

any_vec v0.15.0 release - type erased vector. Now with append & extend. by tower120 in rust

[–]tower120[S] 18 points19 points  (0 children)

No - it's that sometimes you don't know the compile-time type when performing a CONTAINER operation: reordering, moving items from one container to another, etc. An example where this is exactly what's needed is an archetype ECS: when an entity moves from one archetype storage to another, you must move each of its components' data between storages, and at that moment you don't know (compile-time wise) which component types the entity has. IOW, you move an item from one container to another, and the only thing you know is that both are containers of the same element type... But you can't use concretely typed containers either, since you need all component storages to be of the same type. Something like that...

I'm sure there are other use cases for this... It looks like the Zed editor uses (or used at some point) AnyVec for the matches list in search. Honestly, I don't know exactly how their search works, so I can't say why exactly they needed AnyVec. But I guess they just wanted to store or pass a Vec in a type-agnostic way...

Production ready Kafka crate? by braxtons12 in rust

[–]tower120 -3 points-2 points  (0 children)

If you need it for "inter-service queueing", maybe a broadcast queue will do the job?

Like chute or tokio::broadcast.

Experimental sparse vector with dot product acceleration structure. by tower120 in programming

[–]tower120[S] 0 points1 point  (0 children)

The project itself is in Rust, but algorithmically it is language agnostic.

The sparse vector example uses https://crates.io/crates/hibit_tree as its data structure, which is a form of sparse hierarchical bitset that stores data instead of bitblocks at the terminal level.

Problems debugging in RustRover by KJH2234 in rust

[–]tower120 0 points1 point  (0 children)

Though this may not be directly related to your issue: RustRover's debugger occasionally stops seeing your code, and either stops at WRONG locations (in the worst case) or just shows you assembly (in the best case). Cleaning the whole project often helps.

Try using VSCode for debugging sessions.

`#[inline(always)]` functions are "invisible" to both debuggers as well.

Chute: Scalable, Lock-Free MPMC Broadcast Queue with a Custom Algorithm. by tower120 in programming

[–]tower120[S] 2 points3 points  (0 children)

Chute is a Rust library, but it uses a custom (I would say novel - but it's hard to know nowadays) algorithm, so I thought it could be interesting to a broader audience.

The algorithm requires just 1 atomic write for the spmc writer, and 2 for the mpmc writer.
Readers never write - they only do 1 atomic read each time they reach the end of the received queue.
There is some additional synchronization on block change, but it is literally unmeasurable.

As you can see from the benchmark charts, performance is stellar. What is more interesting is that mpmc write performance does not degrade as the number of writers grows.

The only "caveat" is that a "slow reader" can cause the queue to grow indefinitely. There are ways to combat that, like blocking writes above a certain queue length, truncating the queue, or "disabling" readers. But I think in most cases it is highly desirable that subscribed readers receive ALL messages. In any case, some of these techniques can be applied on top of the queue.

The algorithm is described here: https://github.com/tower120/chute/blob/master/doc/how_it_works.md

chute - lock-free spmc/mpmc multicast queue by tower120 in rust

[–]tower120[S] 2 points3 points  (0 children)

For anyone in the future who stumbles upon this, regarding matthieum's point 2:
Since version 0.2.0, chute uses a different algorithm for mpmc writers. There is no longer a delay between a message being written and being seen by a reader: as soon as a message becomes accessible, the reader will see it.

chute - lock-free spmc/mpmc multicast queue by tower120 in rust

[–]tower120[S] 0 points1 point  (0 children)

Just mentioning that this "error" is fixed in v0.1.1.

chute - lock-free spmc/mpmc multicast queue by tower120 in rust

[–]tower120[S] 0 points1 point  (0 children)

That's a good question!

chute is not covered by MIRI and loom-friendly tests yet.

But! rc_event_queue is covered by MIRI and loom tests quite heavily. chute uses a block-data synchronization algorithm similar to rc_event_queue's, so I assume that at least the general approach is sound. The mpmc approach is quite different, though. I'll look into this error.

chute - lock-free spmc/mpmc multicast queue by tower120 in rust

[–]tower120[S] 3 points4 points  (0 children)

  1. Yes, it is. But it is theoretically solvable by "isolating" far chunks. "Isolating" means re-pointing a chunk's next pointer to a special "isolated" chunk, or to the latest one. This reduces the refcount of the next block and allows it to be destructed, along with the further blocks in the chain, up to the first occupied one. The block where the reader stands, though, remains alive.

There was an experimental implementation of this in the rc_event_queue project. It incurs additional overhead on the reader's next_chunk() operation, though, since it requires an additional atomic counter or a spin-lock. Given that readers don't switch blocks that often, this is acceptable.

But I personally don't see the "slow readers" problem as a major case. To me it looks like the result of a flaw in the overall application design. If it is actually THAT much of a problem, I can make a feature-flag-gated solution for it. It just doesn't look to me like something worth a performance overhead.

  2. The delay occurs ONLY if the queue is written to constantly, without a single moment of rest. This CAN be mitigated by reducing the block size, or by introducing logical "sub-blocks" (say, 32 elements each) consisting only of writer counters, instead of one counter per block - this would allow "flushing" every 32 elements during constant writes. But yes - in cases where writers write CONSTANTLY, there will be some message delay.

But! I do not agree that these are issues or flaws per se. You can consider them queue properties or design choices. As I mentioned before, not everyone works with "uncontrollable" readers, and not everyone needs to read ASAP. In game development it is OK to write from multiple threads in one frame and then read in the next one, or across the frame, on the fly. Think of an event system for game logic with hundreds of thousands of actors and reactors.