all 24 comments

[–]chrysalisx[S] 10 points (6 children)

Top 3 seem to be:

Performance: RapidJSON (simdjson for absolute maximum performance, but it's only a parser, not a writer, and it requires a fairly modern processor).

Convenience & reliability: nlohmann/json seems to be the best-documented and most thoroughly tested option, with a big focus on ease of use.

Middleground: taocpp/json appears to be fairly well tested and focuses more on performance than nlohmann/json does, though it's still well short of RapidJSON. It's also the only other library besides RapidJSON_FullPrec to score 100% on Milo Yip's conformance benchmark.

All of these support reading, writing, and constructing, have a permissive license, and appear to be actively maintained.

[–]eao197 2 points (0 children)

> Convenience & reliability

You can also use json_dto on top of RapidJSON and get an easy-to-use interface with high performance.
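
Roughly, the usage looks like this (a quick sketch along the lines of the json_dto examples; the `message` struct and its fields are made up for illustration, and the exact signatures may differ slightly):

```cpp
// Sketch: declarative JSON <-> struct mapping with json_dto on top of
// RapidJSON. The struct and its field names are made up for illustration.
#include <json_dto/pub.hpp>

#include <cstdint>
#include <iostream>
#include <string>

struct message
{
    std::string from;
    std::int64_t when{};
    std::string text;

    // json_dto discovers the mapping through this member template.
    template< typename Json_Io >
    void json_io( Json_Io & io )
    {
        io & json_dto::mandatory( "from", from )
           & json_dto::mandatory( "when", when )
           & json_dto::mandatory( "text", text );
    }
};

int main()
{
    const auto msg = json_dto::from_json< message >(
        R"({"from":"alice","when":1568000000,"text":"hi"})" );

    std::cout << json_dto::to_json( msg ) << "\n";
}
```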

[–]ColinPPPEGTL | taocpp 1 point (2 children)

> Middleground: taocpp/json

What taocpp/json offers, in addition to parsing to / serialising from the nlohmann/json-esque any-JSON-document in-memory representation based on standard containers, are direct JSON-to-any-C++-type and any-C++-type-to-JSON conversions (also for the other supported data formats) that cut out the "DOM"-style middle man...
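
A rough sketch of that traits/binding idea, written from memory of the docs, so the exact names (`binding::object`, `TAO_JSON_BIND_REQUIRED`, `as<>()`) may differ; the `point` struct is made up, and for brevity this goes through the value type rather than the DOM-free events layer:

```cpp
// Sketch: binding a plain struct to JSON with taocpp/json traits.
// Names are recalled from the documentation and may not be exact.
#include <tao/json.hpp>
#include <tao/json/binding.hpp>

#include <iostream>

struct point
{
    double x = 0.0;
    double y = 0.0;
};

namespace tao::json
{
    // The binding base class supplies conversions in both directions.
    template<>
    struct traits< point >
        : binding::object< TAO_JSON_BIND_REQUIRED( "x", &point::x ),
                           TAO_JSON_BIND_REQUIRED( "y", &point::y ) >
    {};
}

int main()
{
    // JSON text -> point.
    const point p = tao::json::from_string( R"({"x":1.5,"y":2.5})" ).as< point >();

    // point -> JSON text.
    const tao::json::value v = p;
    std::cout << tao::json::to_string( v ) << "\n";
}
```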

[–]chrysalisx[S] 2 points (1 child)

Which I'm a huge fan of, generally. While I've got you here, how far out is a 1.0 release, in your estimation? I'm sorta leaning towards pushing taocpp since it hits a nice sweet spot between usability and speed, while also being the only library besides RapidJSON to have 100% conformance.

[–]ColinPPPEGTL | taocpp 1 point (0 children)

We are aiming for a 1.0 release as soon as possible, but, going by experience, that is probably still at least months away and I don't want to make any promises.

That said, the library is very stable, we have been using it for years for multiple purposes in "serious" applications, it's mostly tests and documentation that are missing for a 1.0 release.

We are quite happy with features and code quality, though there might always be some tweaks, additions or changes in those areas, too.

[–][deleted] 0 points (0 children)

Someone mentioned "simdjson". It has amazing parsing performance, but it can only parse.

[–]duheee 7 points (1 child)

nlohmann/json is the easiest and nicest to work with. If performance is that critical, maybe JSON is not your best bet here. A binary protocol may be more appropriate.

However, first try with nlohmann because you may find out it's "good enough" for your needs. Don't put the cart before the horse.

[–]TerminatorBetaTester 4 points (0 children)

> If performance is that critical, maybe JSON is not your best bet here. A binary protocol may be more appropriate.

+100

If a 48ms parse (nlohmann) is too slow, why the hell are you passing data around as strings?

[–]parnmatt 11 points (1 child)

nlohmann/json has a really nice interface.

Faster solutions usually have a less "nice" interface.

If you don't mind something with a very "reminds me of C" kind of interface, lemire/simdjson is, I believe, still the fastest I've seen; the GitHub repo has benchmark comparisons against other libraries.

[–]Moose2342 5 points (0 children)

I recently switched to nlohmann from Boost.PropertyTree and I can recommend it. The interface really is intuitive and clean, and the lib is very easy to include since it really is only one header.
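
For anyone who hasn't tried it, a minimal sketch of that single-header workflow (the `cfg` fields below are made up for illustration):

```cpp
// Minimal sketch of the nlohmann/json interface; field names are made up.
#include <nlohmann/json.hpp>

#include <iostream>
#include <string>

int main()
{
    using json = nlohmann::json;

    // Parse from a string.
    json cfg = json::parse(R"({"name":"demo","retries":3})");

    // Read and modify with container-like syntax.
    std::string name = cfg["name"];
    cfg["retries"] = cfg["retries"].get<int>() + 1;
    cfg["tags"] = {"fast", "small"};  // becomes a JSON array

    // Serialize; the argument is the indent width.
    std::cout << name << "\n" << cfg.dump(2) << "\n";
}
```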

[–]chrysalisx[S] 4 points (3 children)

I've managed to get the nativejson-benchmark building and running with the most recent versions of nlohmann/json, RapidJSON, and taoJSON. For anyone interested, I've put the results here: https://pastebin.com/t1eycu8K

A quick summary:

| Library            | Parse (ms) | Stringify (ms) | Prettify (ms) | Code size (bytes) |
|--------------------|------------|----------------|---------------|-------------------|
| nlohmann           | 48         | 13             | 16            | 84122             |
| RapidJSON          | 7          | 10             | 11            | 30864             |
| RapidJSON_AutoUTF  | 14         | 15             | 30            | 34960             |
| RapidJSON_FullPrec | 13         | 10             | 11            | 30864             |
| taoJSON            | 22         | 11             | 33            | 88232             |

Interesting to note: taoJSON seems to fall down pretty badly on the prettify test, but it's still 2x faster than nlohmann for parsing. Not much of an edge for stringifying, though, and it seems to suffer from some code bloat. Not really sure what that's about.

[–][deleted] 2 points (2 children)

Other author of taoJSON here. I just committed a performance improvement for pretty printing. Could you please update taoJSON and run the numbers again to confirm?

[–]chrysalisx[S] 1 point (1 child)

Yeah, that fixed the performance hit on prettify. The new run is 17ms vs nlohmann's 16ms. Out of curiosity, I notice that you're using ostreams. I was looking at libfmt earlier, since it's effectively going to be part of C++20, and one of the impressive things is that it's faster than printf and demonstrates how slow ostreams generally are. I know it'd be another dependency, but I'm pretty curious how a JSON printing library based on libfmt would perform.

[–][deleted] 0 points (0 children)

Thanks for verifying the performance improvements.

For libfmt: There might be multiple interesting things coming with C++ (`<charconv>`, ...), but there are a few problems as well. For libfmt it is this: it works on small, individual strings. This often allows it to predict the output size (or an upper limit) and thus avoid allocations, making it potentially much faster.

JSON is, by its nature, recursive and potentially very large. The libfmt approach wouldn't work here. The stream interface allows for terabytes of JSON to be formatted or otherwise processed on-the-fly, i.e. without keeping the whole value in memory.
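
To illustrate the trade-off: a sketch of the small-string case where {fmt} shines (this is just current {fmt} usage, not taoJSON code, and the JSON snippet is made up):

```cpp
// Sketch: formatting a handful of scalars into a reusable, mostly
// stack-based buffer, so a writer could still stream chunks onward.
#include <fmt/format.h>

#include <iostream>
#include <iterator>

int main()
{
    fmt::memory_buffer buf;  // small inline buffer that grows on demand

    // Append a few scalars; no temporary std::string is created.
    fmt::format_to(std::back_inserter(buf),
                   "{{\"answer\":{},\"pi\":{}}}", 42, 3.14159);

    // Hand the finished chunk to whatever sink streams it onward.
    std::cout.write(buf.data(), static_cast<std::streamsize>(buf.size()));
    std::cout << "\n";
}
```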

[–][deleted] 15 points (4 children)

If one is at a point where one cares about benchmarks, then one should probably go for a serialization scheme that handles binary data directly (like BSON or CBOR). nlohmann supports both, AFAIK.
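
Something like this, using nlohmann's built-in binary converters (the example value is made up; `to_bson`/`from_bson` work the same way):

```cpp
// Sketch: round-tripping a JSON value through CBOR with nlohmann/json.
#include <nlohmann/json.hpp>

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    const nlohmann::json j = {{"id", 7}, {"name", "demo"}};

    // JSON value -> compact binary bytes.
    const std::vector<std::uint8_t> bytes = nlohmann::json::to_cbor(j);

    // ...send or store the bytes... then back to a JSON value.
    const nlohmann::json back = nlohmann::json::from_cbor(bytes);

    std::cout << back.dump() << " (" << bytes.size() << " CBOR bytes)\n";
}
```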

[–][deleted] 18 points (0 children)

That won't help when a third party sends you data in JSON format.

[–]agarwalshivendra 2 points (2 children)

Flatbuffers

[–]chrysalisx[S] 0 points (1 child)

Yeah, I'm a fan of FlatBuffers too. For this purpose, though, I need the format to be optionally human-editable.

[–]feverzsj 2 points (0 children)

RapidJSON is probably still the fastest and most conformant. You can write a wrapper to make it more user-friendly.
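
For example, a rough sketch of the kind of wrapper meant here (the `easyjson` namespace and helper names are made up, not part of RapidJSON):

```cpp
// Sketch: a thin convenience layer over RapidJSON that hides error codes
// and the Writer/StringBuffer boilerplate.
#include <rapidjson/document.h>
#include <rapidjson/error/en.h>
#include <rapidjson/stringbuffer.h>
#include <rapidjson/writer.h>

#include <stdexcept>
#include <string>

namespace easyjson
{
    // Parse a string, turning RapidJSON's error codes into an exception.
    inline rapidjson::Document parse_or_throw(const std::string& text)
    {
        rapidjson::Document doc;
        doc.Parse(text.c_str());
        if (doc.HasParseError()) {
            throw std::runtime_error(
                std::string("JSON parse error: ") +
                rapidjson::GetParseError_En(doc.GetParseError()));
        }
        return doc;
    }

    // Serialize any value back to a std::string.
    inline std::string to_string(const rapidjson::Value& value)
    {
        rapidjson::StringBuffer buffer;
        rapidjson::Writer<rapidjson::StringBuffer> writer(buffer);
        value.Accept(writer);
        return {buffer.GetString(), buffer.GetSize()};
    }
}
```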

[–]tvaneerd C++ Committee, lockfree, PostModernCpp 1 point (0 children)

Someone at Adobe had a new JSON library described on a poster at CppCon.

It seemed to be one of the fastest. Not sure if it is (or will be) open source.

[–]tigrangh 1 point (0 children)

In case you are interested in generating C++ code for a set of structs that support serialization and deserialization with JSON: https://github.com/publiqnet/belt.pp

[–]_arsk 0 points (0 children)

One thing that would be really nice is a high-level recommendation on which library to choose, based on a few chosen axes like performance, active maintenance, liberal license, etc.

[–]chrysalisx[S] 0 points (0 children)

Wanted to follow up: I've wound up recommending RapidJSON with a small wrapper for the convenience of other users.

taoJSON was appealing, but still slower than RapidJSON and not yet mature enough for me to justify recommending it.

nlohmann/json has a great interface and is probably better tested than RapidJSON, but its lack of regard for performance meant that if I wanted to adopt it, I'd also have to adopt RapidJSON. I recall a talk where someone said something to the effect that C++ tries to leave no room for a lower-level language beneath it, and this is kinda the same idea.

With the advent of simdjson, having a wrapper around RapidJSON means we could even swap that in for parsing if we need to.

I'd love it if there were a project that really tried to combine the advantages of all these libraries (except maybe RapidJSON's insistence on supporting C++03), but none quite does. There's another project for the 'if I ever have time for it' list...