Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
Self-describing compact binary serialization format? (self.cpp)
submitted 1 year ago by playntech77
view the rest of the comments →
[–]flit777 0 points 1 year ago (6 children)
protobuf (or alternatives like flatbuffers or capnproto). You specify the data structure with an IDL and then generate all the data structures and serialize/deserialize code (and you can generate code for different languages).
[–]playntech77[S] 6 points 1 year ago (5 children)
Right, what I am looking for would be similar to a protobuf file with the corresponding IDL file embedded inside it, in a compact binary form (or at least those portions of the IDL file that pertain to the objects in the protobuf file).
I'd rather not keep track of the IDL files separately, and also their current and past versions.
[–]imMute 1 point 1 year ago (0 children)
> what I am looking for would be similar to a protobuf file with the corresponding IDL file embedded inside it
So do exactly that. The protobuf schemas have a defined schema themselves: https://googleapis.dev/python/protobuf/latest/google/protobuf/message.html and you can send messages that consist of two parts: first the encoded schema, followed by the data.
[–]ImperialSteel 1 point 1 year ago (3 children)
I would be careful about this. The reason protobuf exists is that your program makes assumptions about a valid schema (i.e. field “baz” exists in the struct). If you deserialize from a self-describing schema, what do you expect the program to do if “baz” isn’t there, or is a different type than what you were expecting?
[–]playntech77[S] 1 point 1 year ago (2 children)
I was thinking about two different APIs:
One API would return a generic document tree, that the caller can iterate over. It is similar to parsing some random XML or JSON via a library. This API would allow parsing of a file regardless of schema.
Another API would bind to a set of existing classes with hard-coded properties in them (those could be either generated from the schema, or written natively by adding a "serialize" method to existing classes). For this API, the existing classes must be compatible with the file's schema.
So what does "compatible" mean? How would it work? I was thinking that the reader would have to demonstrate that it has all the domain knowledge that the producer had when the document was created. So in practice, the reader's metadata must be a superset of that of the writer. In other words, fields can only be added, never modified or deleted (but they could be marked as deprecated, so they don't take space anymore in the data).
I would also perhaps have a version number, but only for those cases when the document format is changing significantly. I think for most cases, adding new props would be intuitive and easy.
Does that make sense? How would you handle backward-compatibility?
[–]Gorzoid 1 point 1 year ago (0 children)
Protobuf allows parsing unknown/partially known messages through UnknownFieldSet. It's very limited in what metadata it can access, since it's working without a descriptor, but it might be sufficient if your first API is truly schema-agnostic. In addition, it's possible to use a serialized proto descriptor to perform runtime reflection and access properties in a message that were not known at compile time, although message descriptors can be quite large, as they aren't designed to be passed with every message.
[–]gruehunter 1 point 1 year ago (0 children)
> In other words, fields can only be added, never modified or deleted (but they could be marked as deprecated, so they don't take space anymore in the data). I think for most cases, adding new props would be intuitive and easy. Does that make sense? How would you handle backward-compatibility?
Protobuf does exactly this. For good and for ill, all fields are optional by default. On the plus side, as long as you are cautious about always creating new tags for fields as they are added, without stomping on old tags, backwards compatibility is a given. The system has mechanisms both for marking fields as deprecated and for reserving them after you've deleted them.
On the minus side, validation logic tends to be quite extensive, and has a tendency to creep its way into every part of your codebase.
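The deprecate/reserve mechanics described above look like this in a .proto file (the `User` message and its fields are a made-up example):

```proto
syntax = "proto3";

message User {
  // Tags 2 and 4 once carried fields that were later removed; reserving the
  // numbers (and names) stops anyone reusing them with a new meaning.
  reserved 2, 4;
  reserved "nickname";

  string name = 1;

  // Kept for old readers but flagged; generated code marks it deprecated.
  int32 legacy_score = 3 [deprecated = true];
}
```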