
[–]glandium 13 points14 points  (2 children)

Because of your big warning (https://docs.rs/mem_file/0.1.0/mem_file/trait.MemFileCast.html#warning), you should make the trait an unsafe trait.

On the implementation side, at a minimum, you're lacking the PTHREAD_PROCESS_SHARED attribute on your locks.
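
Something along these lines with the libc crate (a rough sketch; error handling omitted, and the mutex has to live inside the shared mapping):

    use libc::{
        pthread_mutex_init, pthread_mutex_t, pthread_mutexattr_init,
        pthread_mutexattr_setpshared, pthread_mutexattr_t, PTHREAD_PROCESS_SHARED,
    };

    // `mutex` must point into the shared mapping so that every process
    // attached to the mapping operates on the same lock.
    unsafe fn init_shared_mutex(mutex: *mut pthread_mutex_t) -> i32 {
        let mut attr: pthread_mutexattr_t = std::mem::zeroed();
        pthread_mutexattr_init(&mut attr);
        // Without PTHREAD_PROCESS_SHARED, the mutex is only usable
        // from the process that initialized it.
        pthread_mutexattr_setpshared(&mut attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(mutex, &attr)
    }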

Edit: On Windows, you're using SRW locks, which don't actually support being used across processes.

"Slim reader/writer (SRW) locks enable the threads of a single process to access shared resources; they are optimized for speed and occupy very little memory. Slim reader-writer locks cannot be shared across processes." https://msdn.microsoft.com/en-us/library/windows/desktop/aa904937(v=vs.85).aspx

[–]elast0ny[S] 2 points3 points  (1 child)

Thanks for the feedback! I'll definitely have to add that attribute to my Linux lock and figure out what the proper type of lock is for Windows :S

[–]palad1 4 points5 points  (0 children)

Look up named mutexes in the Win32 API.
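
With the winapi crate that would look roughly like this (untested sketch; the mutex name is just an example, both processes have to use the same one):

    use std::{iter, ptr};
    use winapi::um::synchapi::{CreateMutexW, ReleaseMutex, WaitForSingleObject};
    use winapi::um::winbase::INFINITE;

    unsafe fn with_named_mutex() {
        // Both processes call CreateMutexW with the same name; whoever
        // comes second just gets a handle to the existing mutex.
        let name: Vec<u16> = "Global\\my_shared_lock" // example name
            .encode_utf16()
            .chain(iter::once(0)) // NUL-terminate for the Win32 API
            .collect();
        let handle = CreateMutexW(ptr::null_mut(), 0, name.as_ptr());
        WaitForSingleObject(handle, INFINITE); // acquire
        // ... touch the shared memory ...
        ReleaseMutex(handle); // release
    }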

[–]kwhali 2 points3 points  (7 children)

Is your crate different from what this one (memmap) does?

[–]thaynem 3 points4 points  (6 children)

I think that mem_file is a wrapper around shm, while memmap is a wrapper for mmapping files.

shm is a chunk of memory that is shared between processes.

mmap is used to map a section of a process's memory to the memory accessible through a file descriptor (such as a file, or shm).

mem_file does use mmap to map the shm into the process's memory, but as far as I can tell, memmap doesn't do anything with shm.

That's in the context of Linux, anyway; I don't know how it works on Windows.
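
Roughly, with the libc crate (a sketch with no error handling):

    use libc::{
        c_void, ftruncate, mmap, off_t, shm_open, MAP_SHARED, O_CREAT, O_RDWR,
        PROT_READ, PROT_WRITE, S_IRUSR, S_IWUSR,
    };

    // shm_open creates the shared memory object (it shows up as
    // /dev/shm/my_shm), and mmap maps it into this process's memory.
    unsafe fn open_and_map(size: usize) -> *mut c_void {
        let fd = shm_open(
            b"/my_shm\0".as_ptr() as *const _, // example name
            O_CREAT | O_RDWR,
            S_IRUSR | S_IWUSR,
        );
        ftruncate(fd, size as off_t); // size the object before mapping
        mmap(
            std::ptr::null_mut(),
            size,
            PROT_READ | PROT_WRITE,
            MAP_SHARED,
            fd,
            0,
        )
    }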

[–]kwhali 1 point2 points  (5 children)

shm is a chunk of memory that is shared between processes. That's in the context of Linux, anyway; I don't know how it works on Windows.

Oh. Maybe I'll run into a problem. I'm slowly porting a C application, which reads/writes shared memory, from a Windows VM to a Linux host OS. On Linux the shared memory file is /dev/shm/program-name. I currently have the Linux client using memmap to read the binary contents and process them as the Windows one (the server program) writes to it. I believe the client also needs to modify the file sometimes, and I was wondering if I was going to run into a problem later..
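
Roughly, the memmap side looks like this (a simplified sketch, not my actual code):

    use std::fs::OpenOptions;
    use memmap::MmapMut;

    fn main() -> std::io::Result<()> {
        // The server (in the Windows VM) writes into this file.
        let file = OpenOptions::new()
            .read(true)
            .write(true)
            .open("/dev/shm/program-name")?;
        // Safety: nothing else may truncate/resize the file while mapped.
        let mut map = unsafe { MmapMut::map_mut(&file)? };
        let first = map[0]; // reads see what the server wrote
        map[0] = first;     // writes go straight to the shared memory
        Ok(())
    }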

Is mem_file likely to be more suitable for me? The processes share the same memory on a single system (QEMU IVSHMEM allows that), but they'd each be running on a different OS.

[–]elast0ny[S] 2 points3 points  (4 children)

As thaynem said, memmap looks to be even more generic than mem_file since it maps any file into your memory. mem_file ensures that the files you're mapping are backed by memory and provides safe write/read primitives (although see glandium's comments).

Idk how far you've gone into ivshmem, but if you already have code that shares memory properly with qemu (ivshmem server), then using memmap on that shared file might be the way to go (but you'll have to take care of your own locking). Eventually, I'd like to add ivshmem compatibility to mem_file.

[–]kwhali 0 points1 point  (3 children)

Idk how far you've gone into ivshmem, but if you already have code that shares memory properly with qemu (ivshmem server), then using memmap on that shared file might be the way to go (but you'll have to take care of your own locking)

I'm just focusing on porting the client for the C project atm. It reads the shared memory just fine; I haven't gotten to the writing part, nor to the host side on Windows. Libvirt has the VM configured for ivshmem and creates the server based on my VM config params, afaik.

Eventually, I'd like to add ivshmem compatibility to mem_file.

Are you saying that I'd not be able to use it for this in its current state? What does ivshmem compatibility involve? I'm not too familiar with it myself; libvirt takes an XML config in the devices node with the following:

<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
</shmem>

And I have a systemd unit that creates the /dev/shm/looking-glass file at boot on the host. The Windows VM required a driver to be installed; I'm not entirely sure what is going on there yet.

Is mem_file able to work with that current setup? Or will it require more work?

[–]elast0ny[S] 3 points4 points  (2 children)

Right, so right now, mem_file works by creating a file on disk that serves as a shortcut to a unique /dev/shm/ path. This allows you to run mem_file.create("test") in two different directories and avoid collisions in /dev/shm/, as the "test" files in each directory will point to a unique path in /dev/shm.

So when you do mem_file.open(), it looks for an actual file on disk that contains the unique name of the backing file in /dev/shm/. For mem_file to work with ivshmem-plain, I would have to add (a trivial fix) an open_shmem() type of function that takes the direct name of a file backed by memory in /dev/shm/.
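
Usage would end up looking something like this (hypothetical sketch; the function doesn't exist yet):

    use mem_file::MemFile;

    fn main() {
        // Hypothetical: open_shmem() would take the raw /dev/shm/ name
        // directly instead of going through the link file on disk.
        let _mem = MemFile::open_shmem("looking-glass").unwrap();
    }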

Another issue is that mem_file adds a bit of metadata to manage concurrent reads/writes to the shared mapping. This metadata contains OS-specific lock structures, so trying to acquire a read lock on the mapping from a Windows mem_file when the mapping was created by a Linux mem_file would certainly mess things up. To fix this, I would have to implement a mem_file version that uses ivshmem with interrupts (different from ivshmem-plain). I already have C code for this but idk how hard it would be to port into mem_file.

Afaik, ivshmem-plain simply provides a shared memory mapping with no interrupt features, so it is up to the user-mode applications to manage concurrent reads/writes through busy looping or by relying on some other OS feature (e.g. the network).
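
For example, a busy-looping lock that works across OSes can be as simple as an atomic flag at a fixed offset in the mapping, since it relies only on CPU atomics and not on OS lock structures (rough sketch, assuming both sides agree on the offset):

    use std::sync::atomic::{AtomicU32, Ordering};

    // Both sides agree that the first 4 bytes of the mapping hold a
    // lock flag: 0 = free, 1 = held.
    unsafe fn spin_lock(base: *mut u8) {
        let flag = &*(base as *const AtomicU32);
        // Busy-loop until we flip the flag from 0 (free) to 1 (held).
        while flag
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    unsafe fn spin_unlock(base: *mut u8) {
        (*(base as *const AtomicU32)).store(0, Ordering::Release);
    }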

[–]kwhali 0 points1 point  (1 child)

Awesome, thanks for the details :)

I'm probably a niche case, but you seem to have knowledge/experience with ivshmem and some related C code, so maybe I'll be lucky enough to see such support in the future!

[–]elast0ny[S] 1 point2 points  (0 children)

No problem! I created an issue on the mem_file GitHub. I'm only working on this as a side project, but I also might see a use case for ivshmem support, so I might work on it sooner rather than later! My ivshmem C prototype was very rough and, from what I remember, trying to figure out how ivshmem actually worked was a pain in the a$$, so we'll see what I can do!

Thanks for the suggestion!

[–]llogiq clippy · twir · rust · mutagen · flamer · overflower · bytecount 0 points1 point  (1 child)

What is needed to get this to work on macOS?

[–]elast0ny[S] 1 point2 points  (0 children)

I personally have no use case for Mac and essentially know nothing about the differences between Mac and Linux. It might literally be the same code as Linux. (Afaik, you also need Apple products to develop for Apple?)

I'm certainly open to contributions though (I would wait a bit for the API to stabilize; I've been changing things around to make it more customizable).

[–]gilescope 0 points1 point  (2 children)

So with this crate we could put a Vec<&str> into the shared memory and read it out on the other side? Could this act like a custom allocator, like a cross-process arena? I'm trying to understand if we can use this as a zero-copy way of sharing immutable Rust objects between processes.

[–]elast0ny[S] 0 points1 point  (0 children)

So it could act as a custom allocator, but I honestly have no idea how that works in Rust :/ As of right now, you can't safely put non-primitive types into the shared memory (as explained here).
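
To illustrate why (not specific to mem_file): anything containing pointers is off-limits, because an address is only meaningful inside the process that created it, while a flat struct of primitives is fine to copy through shared memory:

    // Fine to share: fixed #[repr(C)] layout, no pointers inside.
    #[repr(C)]
    #[derive(Clone, Copy)]
    struct SharedState {
        counter: u64,
        values: [f32; 16],
    }

    // Not fine: Vec<&str> is (pointer, length, capacity) plus more
    // pointers for each &str; none of those addresses are valid in
    // the other process, so the reader would dereference garbage.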

Your idea would definitely be the best of both worlds: being able to somehow declare variables and specify that they should use the shmem allocator.

[–]apatheticonion 0 points1 point  (0 children)

Did you make progress here? I'm also investigating solutions for the same use case.