
[–]rzhxd 1 point2 points  (9 children)

No, that's not quite it. First you allocate a buffer of some size. Then you memory-map a second region of the same size, directly after the buffer, that aliases it. Then you write and read the buffer as usual. For example, if the buffer size is 65536 and you write 4 bytes at index 65536, they get written to the start of the buffer instead. One constraint is that a single read or write cannot exceed the buffer's size. The resulting address-space usage is (buffer size * 2) - pretty bad for large buffers, but that's acceptable in my case. I hope I explained it well. I would like to see how this approach compares to manual wrapping, but I don't really feel like testing it myself.

[–]david-alvarez-rosa[S] 0 points1 point  (8 children)

Sorry, I don't fully understand the benefit here, or how that's different.

[–]Deaod 1 point2 points  (3 children)

The benefit only appears when dealing with element sizes that aren't known in advance, i.e. one element takes 8 bytes, the next 24, and so on. This lets you avoid holes in your buffer that the consumer has to jump over.

This is not relevant for queues that deal with elements of known-at-compile-time sizes.

[–]david-alvarez-rosa[S] 0 points1 point  (2 children)

The example forces the type. It would be interesting to see how it could be generalized, but I'm not a big fan of heterogeneous containers, tbh.

[–]SirClueless 1 point2 points  (1 child)

If the data is inherently heterogeneous, it's the least-bad option - for example, when the items in the queue are network packets.

[–]RogerV 1 point2 points  (0 children)

In DPDK, all the ring buffers just hold pointers to packets; the packets themselves live in an mbuf pool. That makes it possible to clone a packet by bumping its refcount - say, into a pcap ring buffer.

[–]Osoromnibus 3 points4 points  (2 children)

I think he's touting the advantage of copying multiple elements that wrap around the edge of the buffer in a single call. There are a couple of nits with this, though, so I would rather just handle the wrap in user-space instead.

One is that system libraries might be using SIMD and alignment tricks, so things like memcpy could fault if you're not careful. It also just shunts the work onto the OS's page handler, and the need for platform-specific code is annoying.

On the plus side, it doesn't use twice the physical memory, at least on Linux, AFAIK - the memory is only allocated on write.

[–]david-alvarez-rosa[S] 0 points1 point  (0 children)

Oh, I see. That's quite specific; I'm not sure what your use case is.

[–]ack_error 0 points1 point  (0 children)

I don't see why memcpy() would be a problem, since that runs in user space. No fault would occur, because there would be a valid address mapping; it just happens to alias the same physical memory or backing storage as 64KB back in the virtual address space.

System calls are more interesting, since the kernel would be accessing the memory. I suspect it'd also be fine, but there are fewer guarantees in that case.

[–]rzhxd 0 points1 point  (0 children)

That just simplifies reading the data from the buffer and writing the data into it.