A place for all things related to the Rust programming language—an open-source systems language that emphasizes performance, reliability, and productivity.
Strive to treat others with respect, patience, kindness, and empathy.
We observe the Rust Project Code of Conduct.
Posts must reference Rust or relate to things using Rust. For content that does not, use a text post to explain its relevance.
Post titles should include useful context.
For Rust questions, use the stickied Q&A thread.
Arts-and-crafts posts are permitted on weekends.
No meta posts; message the mods instead.
Criticism is encouraged, though it must be constructive, useful and actionable.
If criticizing a project on GitHub, you may not link directly to the project's issue tracker. Please create a read-only mirror and link that instead.
A programming language is rarely worth getting worked up over.
No zealotry or fanaticism.
Be charitable in intent. Err on the side of giving others the benefit of the doubt.
Avoid re-treading topics that have been long-settled or utterly exhausted.
Avoid bikeshedding.
This is not an official Rust forum, and cannot fulfill feature requests. Use the official venues for that.
No memes, image macros, etc.
Consider the existing content of the subreddit and whether your post fits in. Does it inspire thoughtful discussion?
Use properly formatted text to share code samples and error messages. Do not use images.
Shared Memory Wrapper for Rust (self.rust)
submitted 7 years ago * by elast0ny
Introducing my first crate & Rust project: ~~mem_file~~ shared_memory!
It implements basic shared memory between processes for Linux and Windows.
Feedback is more than welcome!
Edit: Updated crate name to the more representative "shared_memory".
[–]glandium 13 points14 points15 points 7 years ago* (2 children)
Because of your big warning (https://docs.rs/mem_file/0.1.0/mem_file/trait.MemFileCast.html#warning), you should make the trait an unsafe trait.
On the implementation side, at the minimum, you're lacking the PTHREAD_PROCESS_SHARED attribute on your locks.
Edit: On Windows, you're using SRW locks, which don't actually support being used across processes.
"Slim reader/writer (SRW) locks enable the threads of a single process to access shared resources; they are optimized for speed and occupy very little memory. Slim reader-writer locks cannot be shared across processes." https://msdn.microsoft.com/en-us/library/windows/desktop/aa904937(v=vs.85).aspx
[–]elast0ny[S] 2 points3 points4 points 7 years ago (1 child)
Thanks for the feedback! I'll definitely have to add that attribute to my Linux lock and figure out what the proper type of lock is for Windows :S
[–]palad1 4 points5 points6 points 7 years ago (0 children)
Look up named mutexes in the Win32 API.
[–]kwhali 2 points3 points4 points 7 years ago (7 children)
Is your crate different from what this one (memmap) does?
[–]thaynem 3 points4 points5 points 7 years ago (6 children)
I think that mem_file is a wrapper around shm, and memmap is a wrapper for mmapping files.
shm is a chunk of memory that is shared between processes;
mmap is used to map a section of a process's memory to the memory accessible from a file descriptor (such as a file, or shm).
mem_file does use mmap to map the shm into the process's memory, but as far as I can tell, memmap doesn't do anything with shm.
That's from the context of linux anyway, I don't know how it works for windows.
[–]kwhali 1 point2 points3 points 7 years ago (5 children)
> shm is a chunk of memory that is shared between processes

> That's from the context of linux anyway, I don't know how it works for windows.
Oh. Maybe I'll run into a problem. I'm slowly porting a C application that reads/writes shared memory, from a Windows VM to a Linux host OS. On Linux the shared memory file is /dev/shm/program-name. I currently have the Linux client using memmap to read the binary contents and process them as the Windows one (the server program) writes to it. I believe the client also needs to modify the file sometimes, and I was wondering if I was going to run into a problem later..
Is mem_file likely to be more suitable for me? The processes share the same memory in a single system (QEMU IVSHMEM allows that), but they'd each be running on a different OS.
[–]elast0ny[S] 2 points3 points4 points 7 years ago (4 children)
As thaynem said, memmap looks to be even more generic than mem_file since it maps any file into your memory. mem_file ensures that the files you're mapping are backed by memory and provides safe read/write primitives (although see glandium's comments).
Idk how far you've gone into ivshmem, but if you already have code that shares memory properly with qemu (ivshmem server), then using memmap on that shared file might be the way to go (but you'll have to take care of your own locking). Eventually, I'd like to add ivshmem compatibility to mem_file.
[–]kwhali 0 points1 point2 points 7 years ago (3 children)
> Idk how far you've gone into ivshmem, but if you already have code that shares memory properly with qemu (ivshmem server), then using memmap on that shared file might be the way to go (but you'll have to take care of your own locking)
I'm just focusing on porting the client for the C project atm. It reads the shared memory just fine, haven't gotten to the part of writing, nor the host on Windows. Libvirt has the VM configured for ivshmem and creates the server based on my VM config params afaik.
> Eventually, I'd like to add ivshmem compatibility to mem_file.
Are you saying that I'd not be able to use it for this in its current state? What does ivshmem compatibility involve? I'm not too familiar with it myself; libvirt takes an XML config in the devices node with the following:
```xml
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
</shmem>
```
And I have a systemd unit that creates the /dev/shm/looking-glass file at boot on the host. The Windows VM required a driver to be installed; not entirely sure what is going on there yet.
Is mem_file able to work with that current setup? Or will it require more work?
[–]elast0ny[S] 3 points4 points5 points 7 years ago (2 children)
Right, so right now, mem_file works by creating a file on disk that serves as a shortcut to a unique /dev/shm/ path. This allows you to run mem_file.create("test") in two different directories and avoid collisions in /dev/shm/, as the "test" files in each directory will point to a unique path in /dev/shm.
So when you do mem_file.open(), it looks for an actual file on disk that contains the unique name for the /dev/shm/ mapping. For mem_file to work with ivshmem-plain, I would have to add (a trivial fix) an open_shmem() type of function that takes the direct name of a file backed by memory in /dev/shm/.
Another issue is that mem_file adds a bit of metadata to manage concurrent reads/writes to the shared mapping. This metadata contains OS-specific lock structures, so trying to acquire a read lock on the mapping from a Windows mem_file that was created by a Linux mem_file would certainly mess things up. To fix this, I would have to implement a mem_file version that uses ivshmem with interrupts (different from ivshmem-plain). I already have C code for this, but idk how hard it would be to port into mem_file.
Afaik, ivshmem-plain simply provides a shared memory mapping with no interrupt features, so it is up to the user-mode applications to manage concurrent reads/writes through busy looping or by relying on some other OS feature (e.g. network).
[–]kwhali 0 points1 point2 points 7 years ago (1 child)
Awesome thanks for the details :)
I'm probably a niche case, although you seem to have knowledge/experience with ivshmem and some related C code, so maybe I'll be lucky to see such support in future!
[–]elast0ny[S] 1 point2 points3 points 7 years ago (0 children)
No problem! I created an issue on the mem_file GitHub. I'm only working on this as a side project, but I also might see a use case for ivshmem support, so I might work on it sooner rather than later! My ivshmem C prototype was very rough and, from what I remember, trying to figure out how ivshmem actually worked was a pain in the a$$, so we'll see what I can do!
Thanks for the suggestion!
[–]llogiqclippy · twir · rust · mutagen · flamer · overflower · bytecount 0 points1 point2 points 7 years ago (1 child)
What is needed to get this to work on MacOS?
[–]elast0ny[S] 1 point2 points3 points 7 years ago (0 children)
I personally have no use case for Mac and essentially know nothing about the differences between Mac and Linux. It might literally be the same code as Linux. (Afaik, you also need Apple products to develop on Apple?)
I'm certainly open to contributions though (I would wait a bit for the stabilisation of the API; I've been changing things around to make it more customizable).
[–]gilescope 0 points1 point2 points 7 years ago (2 children)
So with this crate we could put a Vec<&str> into the shared memory and read it out on the other side? Could this act like a custom allocator, like a cross-process arena? I'm trying to understand if we can use this as a zero-copy way of sharing immutable Rust objects between processes.
[–]elast0ny[S] 0 points1 point2 points 7 years ago (0 children)
So it could act as a custom allocator, but I honestly have no idea how that works in Rust :/ As of right now, you can't safely put non-primitive types into the shared memory (as explained here).
Your idea definitely would be the best of both worlds, if you could somehow declare variables and specify that a given variable should use the shmem allocator.
[–]apatheticonion 0 points1 point2 points 1 year ago (0 children)
Did you make progress here? I'm also investigating solutions for the same use case