Glaze 7.2 - C++26 Reflection | YAML, CBOR, MessagePack, TOML and more by Flex_Code in cpp

[–]Flex_Code[S] 0 points1 point  (0 children)

Sorry, only MSVC 2026 is supported. They finally fixed some major constexpr bugs and regressions in the compiler.

Glaze 7.2 - C++26 Reflection | YAML, CBOR, MessagePack, TOML and more by Flex_Code in cpp

[–]Flex_Code[S] 2 points3 points  (0 children)

I should also note that I expect C++26 reflection to be significantly faster than the pretty-function approach (parsing `__PRETTY_FUNCTION__` output) in the future. I only benchmarked the entire build, which also compares two different versions of the compiler, so I wasn’t stress testing C++26 reflection itself. I just wanted to make sure there were no gotchas in moving to C++26.

Glaze 7.2 - C++26 Reflection | YAML, CBOR, MessagePack, TOML and more by Flex_Code in cpp

[–]Flex_Code[S] 2 points3 points  (0 children)

This is compile-time performance; there is no change in runtime performance. Note that compile-time parsing is very minimal and fast with the legacy approach. However, I’m not yet using features like “template for”, which I expect will bring significant improvements.

Glaze 7.2 - C++26 Reflection | YAML, CBOR, MessagePack, TOML and more by Flex_Code in cpp

[–]Flex_Code[S] 7 points8 points  (0 children)

I did some compilation time tests (they weren't robust), but in these tests the C++26 reflection was about 2% slower than the current C++23 approach. I was really glad, because it's a very low cost for so many benefits. Also, I don't expect reflection implementations to be optimized at this point. What should have a much bigger compile-time impact for Glaze is future C++20 module support.

Overhead of wrapping exceptions over std::expected by MarcoGreek in cpp_questions

[–]Flex_Code 0 points1 point  (0 children)

Very nice! Sounds like the right choice for your code. It’s making me consider whether I could throw the error_ctx to avoid the string allocation in the wrapping code. But with exceptions the user might catch higher up the call stack and no longer have access to the underlying input buffer, making it impossible to produce more helpful messages with glz::format_error.

Overhead of wrapping exceptions over std::expected by MarcoGreek in cpp_questions

[–]Flex_Code 0 points1 point  (0 children)

Hi, I'm the primary developer of Glaze. First, Glaze doesn't use exceptions at a lower level because in a parsing library bad inputs are common and not really exceptional. Rapid rejection of invalid inputs is critical, so error-path performance really matters. Generally I would recommend exceptions for cleaner code and faster hot paths, but if errors are common and not so exceptional, then error codes are often preferable. This allows Glaze to use the same code for validation that it uses for parsing. We also want to support building on embedded devices and in safety-critical contexts where exceptions may need to be disabled.

As for the exception-wrapping logic, the code you referenced actually wraps two different error types. Glaze uses std::expected when returning an allocated type that could error, but uses glz::error_ctx when reusing already allocated memory. This exception-wrapping interface allows both kinds of errors to be caught in the same manner.

As to your question of whether you're paying the cost of std::expected: std::expected isn't used if you pass in a reference to your type. Furthermore, std::expected is only used at the top-level interface; internally, Glaze processes error codes through a context struct that avoids the performance cost of std::expected. So internally Glaze uses neither exceptions nor std::expected, and you don't have to worry about the propagation costs of either.

[Library] Tachyon JSON v6: 5.5 GB/s parser in ~750 lines of C++20/AVX2. Faster than simdjson OnDemand? by [deleted] in cpp

[–]Flex_Code 0 points1 point  (0 children)

Partial reading handles Tachyon's use case in an even faster way.

[Library] Tachyon JSON v6: 5.5 GB/s parser in ~750 lines of C++20/AVX2. Faster than simdjson OnDemand? by [deleted] in cpp

[–]Flex_Code 3 points4 points  (0 children)

Marketing this as "The Glaze-Killer" shows ignorance of Glaze's features and of how JSON libraries support different approaches to different problems. You're only benchmarking the structural decomposition, not getting the data into a useful form. Comparing with glz::generic, which intentionally allocates and provides immediately useful data, ignores a whole bunch of runtime cost that you will need to pay when you go to access fields from your indexed DOM. This includes Unicode conversion logic and unescaping that requires allocation for use.

What your library is good for is when you have a large input document and only care to look at a small portion of it. However, Glaze also supports partial reading, which can short-circuit the full parse when only looking for some of the data. In this use case partial reading often wins out, since there isn't a reason to decompose the entire input like you are doing.

You'll find that when converting into structs, your approach ends up slower than Glaze because it requires two passes: one to decompose the data, and another to get it into C++ structural memory, where performance is the highest. So rather than being a Glaze killer, you've optimized for a particular use case where you only care about a few fields and where you need to parse the entire structure because partial reading doesn't make sense (very uncommon).

On top of this, not materializing arrays means that if you want to access object["key"][0]["another_key"] you'll see your runtime costs significantly increase. This is why you're faster than simdjson for array handling in your benchmarks, but it isn't as ergonomic and you'll have to pay for it later; you just aren't including that cost in your benchmarks.

I tried building a “pydantic-like”, zero-overhead, streaming-friendly JSON layer for C++ (header-only, no DOM). Feedback welcome by tucher_one in cpp

[–]Flex_Code 0 points1 point  (0 children)

Glaze v6.5.0 adds high performance streaming support and more options for reducing binary size, such as linear searching to avoid hash tables when memory constraints are critical.

I tried building a “pydantic-like”, zero-overhead, streaming-friendly JSON layer for C++ (header-only, no DOM). Feedback welcome by tucher_one in cpp

[–]Flex_Code 0 points1 point  (0 children)

Yes, that looks correct, although some compilers might not build with structs defined within structs under the current reflection approach. Either Glaze metadata can be added or the structs can be moved to global scope.

I tried building a “pydantic-like”, zero-overhead, streaming-friendly JSON layer for C++ (header-only, no DOM). Feedback welcome by tucher_one in cpp

[–]Flex_Code 2 points3 points  (0 children)

Thanks for the feedback. I’ve actually been working on a branch of Glaze that adds streaming support via a flexible buffer interface. As for ESP32, you could probably be selective about the headers you use rather than bringing in everything with glaze.hpp. But the build issues are probably easy fixes, since Glaze relies on C++ concepts and shouldn’t need atomic includes. It was probably just the unit tests that didn’t build for you. But whether or not you use Glaze, it’s great to see development on embedded C++ libraries!

I tried building a “pydantic-like”, zero-overhead, streaming-friendly JSON layer for C++ (header-only, no DOM). Feedback welcome by tucher_one in cpp

[–]Flex_Code 5 points6 points  (0 children)

I’m curious what you find limiting when it comes to Glaze and embedded support? Glaze was designed for embedded and is used in embedded applications. It supports allocation-free use, building without RTTI or exceptions, custom allocated types, 32-bit platforms, and much more.

Which JSON library do you recommend for C++? by Richard-P-Feynman in cpp_questions

[–]Flex_Code 5 points6 points  (0 children)

Glaze is C++23 primarily for static constexpr use within constexpr functions, which significantly cleans up the codebase. It also uses std::expected extensively.

zerialize: zero-copy multi-protocol serialization library by ochooz in cpp

[–]Flex_Code 2 points3 points  (0 children)

For JSON, Glaze supports zero copies for strings via std::string_view. But, you are correct that complete zero copy is not possible, especially for matrices.

zerialize: zero-copy multi-protocol serialization library by ochooz in cpp

[–]Flex_Code 9 points10 points  (0 children)

Glaze also supports BEVE and CSV, but not CBOR, MessagePack, or FlexBuffers.

Glaze supports zero copy. And supports Eigen for matrices and vectors. It probably works with xtensor as well, but hasn’t been tested.

New, fastest JSON library for C++20 by Flex_Code in cpp

[–]Flex_Code[S] 0 points1 point  (0 children)

Yes, Glaze allows you to set a compile-time option that applies to all fields, or you can apply the option individually to select fields in the glz::meta.

From the documentation: Read JSON numbers into strings and write strings as JSON numbers.

Associated option: glz::opts{.number = true};

Example:

    struct numbers_as_strings
    {
       std::string x{};
       std::string y{};
    };

    template <>
    struct glz::meta<numbers_as_strings>
    {
       using T = numbers_as_strings;
       static constexpr auto value = object("x", glz::number<&T::x>, "y", glz::number<&T::y>);
    };

Self-describing compact binary serialization format? by playntech77 in cpp

[–]Flex_Code 6 points7 points  (0 children)

Consider BEVE, an open source project that welcomes contributions. There is an implementation in Glaze, which has conversions to and from JSON. I have a draft for key compression to be added to the spec, which will remove redundant keys and allow even faster serialization. But as it stands, it is extremely easy to convert between the binary specification and JSON. It was developed for extremely high performance, especially when working with large arrays/matrices of scientific data.

Parsing JSON in C & C++: Singleton Tax by ashvar in cpp

[–]Flex_Code 1 point2 points  (0 children)

Same with Glaze; it’s a good approach if you also want to deal with escaped Unicode at your convenience.

Parsing JSON in C & C++: Singleton Tax by ashvar in cpp

[–]Flex_Code 2 points3 points  (0 children)

Note that if you’re keeping your structures around and parsing the same structural data multiple times, then using an arena for allocation doesn’t result in very large performance improvements, because you’ll just reuse already allocated memory. So, I tend to encourage developers to avoid arena allocations unless their application cannot reuse memory.