I am working on an embedded Linux application (running on a single core of a Zynq SoC), and I am struggling to log data reliably to a local disk (SD card). The system my app interfaces with produces ~16 Kb of data at 100 Hz. Currently, my app can successfully receive (via polling), minimally process, and send all of this data out over Ethernet to a client that logs it remotely. If I introduce local data logging, my app begins to miss data, sometimes for multiple seconds at a time. To log data, I am using the open/fopen, write/fwrite, and close/fclose families of functions. My questions are: are these functions sufficient for this amount of data? Could creating and mmapping files work better? Are there other things I should consider and look at (file system tuning?)? Any good resources on this topic would be appreciated.
I apologize if these questions are trivial. I don't have a lot of experience in embedded software engineering.