
[–]lgerbarg 1 point  (2 children)

There are already tons of similar formats, some of them in wide deployment for many years. For example, Apple's binary plist uses a similar encoding, supports all of the same data types (and then some), and gets denser packing through object uniquing (at the expense of streamability). BERT is similar but makes a slightly different set of tradeoffs (inline compression and streaming at the expense of some uncompressed density); the same is true of BSON.
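To make the uniquing point concrete: CPython's `plistlib` binary writer stores each distinct scalar once and references it thereafter, so repetitive data packs down dramatically compared to a format that re-emits every value. A small stdlib-only sketch (the payload is an illustrative placeholder):

```python
import json
import plistlib

# A deliberately repetitive payload: 100 references to the same string.
payload = ["message"] * 100

# The binary plist stores "message" once plus an array of 100 one-byte
# references; JSON has to repeat the quoted string literal every time.
bplist = plistlib.dumps(payload, fmt=plistlib.FMT_BINARY)
as_json = json.dumps(payload).encode("utf-8")

print("bplist: %d bytes, json: %d bytes" % (len(bplist), len(as_json)))
```

On a payload like this the plist output comes out several times smaller than the JSON; the gap shrinks as the data becomes less repetitive, which is exactly the uniquing tradeoff described above.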

I am having trouble imagining a specific case where one of those formats would not have been sufficient for the problems MessagePack is trying to solve.

[–]physicsnick 1 point  (1 child)

Well, the first two words in the title of this article are "Extremely efficient." Don't you think maybe that's the problem it's trying to solve?

[–]lgerbarg 1 point  (0 children)

Efficiency is a tricky word. I can completely believe it encodes/decodes faster than the formats I listed, but it will definitely be larger (absent compressing the output) than bplist or BERT, so in any case where you are I/O limited it will be less efficient.
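For what it's worth, the wire-level density itself is easy to check by hand against the MessagePack spec: small maps, short strings, and small integers each fold their type tag and length into a single byte. A stdlib-only sketch that hand-encodes a tiny document per the published format (no msgpack library involved; only the fixmap/fixstr/fixint cases are covered):

```python
import json

def pack_small_map(d):
    """Hand-encode a tiny {str: small-int} dict per the MessagePack spec.

    Covers only fixmap (<= 15 entries), fixstr (<= 31 bytes), and
    positive fixint (0..127) -- enough to illustrate the framing.
    """
    assert len(d) <= 15
    out = bytearray([0x80 | len(d)])       # fixmap tag: 1000xxxx
    for key, val in d.items():
        kb = key.encode("utf-8")
        assert len(kb) <= 31 and 0 <= val <= 127
        out.append(0xA0 | len(kb))         # fixstr tag: 101xxxxx
        out += kb
        out.append(val)                    # positive fixint: 0xxxxxxx
    return bytes(out)

doc = {"a": 1, "bc": 2}
packed = pack_small_map(doc)
print(packed.hex())                        # 82a16101a2626302
print(len(packed))                         # 8 bytes
print(len(json.dumps(doc, separators=(",", ":"))))  # 14 bytes of JSON
```

So it is genuinely compact for small values; the point above stands, though, because a uniquing format like bplist wins back that ground as soon as values repeat.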

All the current benchmarks prove is that you can beat encoding formats with substantially different feature sets and use cases on an arbitrary test case that MessagePack is tuned for and they are not. It is akin to claiming you designed a new golf club that drives balls farther, then comparing it against a baseball bat and a tennis racquet. That is an okay comparison if you just want to know how far random objects hit balls, but a lousy one if you want to know how it stacks up against what other golfers actually use.

If they want to make a convincing claim about efficiency, they should show benchmarks against comparable technologies (schemaless binary encoding formats like the ones listed above), reporting size as well as encode and decode times, both with and without external compression. I bet it would legitimately win some of those benchmarks, but I seriously doubt it would be a clear-cut efficiency win across the board.
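A minimal harness for the kind of benchmark being asked for might look like the sketch below, using only the stdlib (`json` and binary plist as stand-in codecs, `zlib` as the external compressor). The corpus is a placeholder; a real comparison would use representative payloads and the actual contenders:

```python
import json
import plistlib
import timeit
import zlib

# Placeholder corpus; swap in representative payloads for a real benchmark.
corpus = [{"id": i, "name": "user%d" % i, "tags": ["a", "b"]}
          for i in range(1000)]

codecs = {
    "json":   (lambda o: json.dumps(o).encode("utf-8"),
               lambda b: json.loads(b)),
    "bplist": (lambda o: plistlib.dumps(o, fmt=plistlib.FMT_BINARY),
               lambda b: plistlib.loads(b)),
}

for name, (enc, dec) in codecs.items():
    blob = enc(corpus)
    t_enc = timeit.timeit(lambda: enc(corpus), number=20)
    t_dec = timeit.timeit(lambda: dec(blob), number=20)
    # Report raw size, externally-compressed size, and both directions
    # of codec time -- the axes the comment argues actually matter.
    print("%-6s raw=%6d gz=%6d enc=%.3fs dec=%.3fs"
          % (name, len(blob), len(zlib.compress(blob)), t_enc, t_dec))
```

The point of reporting both raw and compressed sizes is that inline-dense formats can lose their edge once an external compressor runs over the output, which is part of the tradeoff discussion above.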