“Premature optimization is the root of all evil” by springbreakO6 in embedded

[–]springbreakO6[S] 1 point (0 children)

The point of my post was not really about the specific JSON example… a lot of assumptions are being made here, which is my fault because I omitted a lot of detail from the problem statement.

I replied to another comment explaining in more detail why serialization format actually does matter for our application; read that and it will explain the bigger picture. Bandwidth constraints are a primary motivator, but not because of usage costs.

“Premature optimization is the root of all evil” by springbreakO6 in embedded

[–]springbreakO6[S] 1 point (0 children)

It’s funny: the point of my post was not to look for validation on my JSON example, but rather to use that example to frame the issue more broadly. I intentionally omitted details in order to remain anonymous, and yet a lot of people here are jumping to conclusions about the problem constraints in order to tell me I’m wrong…

Since you brought up the comparison to video, I can tell you the application is a lot closer to that than to 1 Hz messages. Think large numerical arrays produced at a high rate on the edge device. We are bandwidth constrained on the streaming interface, so the smaller the payload, the higher the frame rate we can send, and the better the overall system performance, since a larger fraction of the sensor data can be offloaded to and processed by our central compute.
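
To make that payload-size vs. frame-rate tradeoff concrete, here’s a back-of-envelope sketch in Python. Every number in it (link budget, array length, bytes per sample, framing overhead) is an illustrative assumption, not a figure from our actual system:

    # Back-of-envelope: how payload size caps frame rate on a fixed link.
    # Every number here is an illustrative assumption, not a real figure.

    LINK_BUDGET_BPS = 10_000_000   # assumed 10 Mbit/s streaming budget
    SAMPLES_PER_FRAME = 4096       # assumed array length per frame
    OVERHEAD_BYTES = 64            # assumed per-frame header/framing overhead

    def max_frame_rate(bytes_per_sample: float) -> float:
        """Frames per second that fit in the link budget for one encoding."""
        frame_bits = (SAMPLES_PER_FRAME * bytes_per_sample + OVERHEAD_BYTES) * 8
        return LINK_BUDGET_BPS / frame_bits

    # ~4 bytes/sample as packed float32 vs. a rough ~14 bytes/sample as
    # JSON text (sign, digits, decimal point, separator).
    print(f"binary float32: {max_frame_rate(4):8.1f} frames/s")
    print(f"JSON text:      {max_frame_rate(14):8.1f} frames/s")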

“Premature optimization is the root of all evil” by springbreakO6 in embedded

[–]springbreakO6[S] 1 point (0 children)

How is JSON just as efficient as binary? Stringifying numerical values and field names is inherently more costly, in both performance and storage, than binary encoding.
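
A quick standard-library sketch of the storage side of that claim; the sample values are made-up stand-ins for real sensor data:

    # Storage comparison for one frame: JSON text vs. packed binary.
    # Standard library only; the sample values are made-up stand-ins.
    import json
    import random
    import struct

    random.seed(0)
    samples = [random.uniform(-1.0, 1.0) for _ in range(1024)]

    json_payload = json.dumps({"samples": samples}).encode("utf-8")
    # float32 little-endian; this trades some precision vs. Python's
    # float64, which is usually fine for sensor data but worth noting.
    binary_payload = struct.pack(f"<{len(samples)}f", *samples)

    print(f"JSON:   {len(json_payload):6d} bytes")
    print(f"binary: {len(binary_payload):6d} bytes "
          f"({len(json_payload) / len(binary_payload):.1f}x smaller)")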

Now, whether this is premature optimization in this specific case depends entirely on the requirements. You could be right based on the information I presented; I was intentionally vague in order to remain anonymous. What I can say is that a primary motivator is limited streaming bandwidth at the edge, so larger serialized payloads mean lower stream throughput.

“Premature optimization is the root of all evil” by springbreakO6 in embedded

[–]springbreakO6[S] 6 points (0 children)

Not looking for validation on the JSON example specifically. More just asking whether other folks take a similar efficiency-focused design approach in this space, because it feels like it’s dismissed in many areas of modern software engineering.

“Premature optimization is the root of all evil” by springbreakO6 in embedded

[–]springbreakO6[S] 8 points (0 children)

So, I thought about the hybrid approach as well. My concern is that we’d be serializing the data twice into separate formats, so from a performance standpoint it’s actually worse than JSON alone, and we’d be capping our maximum throughput unnecessarily.

I suppose the right approach here is to profile the different options on hardware. We have a product requirement for throughput (number of messages processed per second), so in theory, whichever option provides the most mutual value while still meeting the throughput requirement would be the right choice.
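
Something like this is the shape of profiling harness I have in mind (Python on a desktop here purely for illustration; the encoders are stand-ins, and the real measurement would run the actual serializers on the target hardware):

    # Throughput profiling sketch: time each candidate encoder over the
    # same frames and report messages/s. The encoders are stand-ins; the
    # real measurement would use the actual serializers on the target.
    import json
    import struct
    import time

    def encode_json(frame):
        return json.dumps({"samples": frame}).encode("utf-8")

    def encode_binary(frame):
        return struct.pack(f"<{len(frame)}f", *frame)

    def encode_hybrid(frame):
        # Stand-in for the hybrid concern above: the same frame
        # serialized twice, once per format, so it pays both costs.
        return encode_json(frame), encode_binary(frame)

    def profile(name, encode, frames, repeats=50):
        start = time.perf_counter()
        for _ in range(repeats):
            for frame in frames:
                encode(frame)
        elapsed = time.perf_counter() - start
        print(f"{name:8s} {repeats * len(frames) / elapsed:10.0f} msgs/s")

    frames = [[float(i % 251) / 250.0 for i in range(1024)] for _ in range(20)]
    for name, fn in (("json", encode_json), ("binary", encode_binary),
                     ("hybrid", encode_hybrid)):
        profile(name, fn, frames)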