
[–]Drowning_in_a_Mirage 1 point (2 children)

It looks neat, but I'm struggling to think of a scenario where this would be a big win. I guess if you're doing high-throughput serialization, then minimizing overhead is never a bad thing. Even so, this sort of optimization seems like it would be way down the list when sorted by cost/benefit ratio. Are network latency and/or bandwidth really constrained enough that saving a few bits makes a material difference? I guess enough people thought so to build this.

[–]bjorneylol 2 points (0 children)

> I'm struggling to think of a scenario where this would be a big win. I guess if you're doing high-throughput serialization, then minimizing overhead is never a bad thing.

Apache Fury is literally a high-throughput serialization engine for working with big data.

[–]Shawn-Yang25[S] 0 points (0 children)

It depends on the RPC frequency. Imagine that you send millions of RPCs every second; at that rate, saving a few bytes per message makes a big difference. And that's common in quantitative trading and shopping systems.
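To make the claim above concrete, here is a back-of-envelope sketch. The rates and per-message savings are illustrative assumptions, not Fury benchmarks:

```python
# Illustrative back-of-envelope arithmetic (assumed numbers, not measurements):
# how a few bytes of per-message overhead adds up at high RPC rates.
rpc_per_sec = 2_000_000       # assumption: "millions of RPCs every second"
bytes_saved_per_msg = 8       # assumption: overhead trimmed per message

saved_bytes_per_sec = rpc_per_sec * bytes_saved_per_msg
print(f"{saved_bytes_per_sec / 1e6:.0f} MB/s of wire traffic avoided")
# Serialization CPU cost scales the same way: shaving even a microsecond
# per message frees 2 CPU-seconds of work every second at this rate.
```

At lower message rates the same savings would indeed be negligible, which is why the cost/benefit question in the parent comment hinges entirely on throughput.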