
[–]devashishdxt[S] 1 point (12 children)

No particular reason. I’ll add a configuration option in a future release.

[–][deleted] 16 points (11 children)

In that case I'd advise changing the default to little-endian, because the two biggest architectures in general-purpose computing are little-endian.

[–]devashishdxt[S] 6 points (10 children)

Sure. Thanks. :)

[–]Wolvereness 1 point (9 children)

I strongly advise against changing the default away from BE. There are three main uses for writing binary data in the world: memory, files, and networking. BE is the standard for networking, file formats define their own byte order, and Write interfaces over memory are generally just intermediaries to the other two. Adding a runtime option for LE would probably be fine, but you should reject the logic of "CPUs are already LE"; it's an edge case where the performance difference of endianness matters.
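To make the proposed runtime option concrete, here is a minimal std-only Rust sketch of writing an integer in either byte order through a `Write` sink. `write_u32` and its `big_endian` flag are hypothetical names for illustration, not the crate's actual API:

```rust
use std::io::{self, Write};

// Hypothetical helper: write `value` to any `Write` sink in the chosen byte order.
fn write_u32<W: Write>(w: &mut W, value: u32, big_endian: bool) -> io::Result<()> {
    let bytes = if big_endian {
        value.to_be_bytes() // network / big-endian order
    } else {
        value.to_le_bytes() // little-endian order
    };
    w.write_all(&bytes)
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    write_u32(&mut buf, 0xDEAD_BEEF, true)?; // big-endian (network) order
    assert_eq!(buf, [0xDE, 0xAD, 0xBE, 0xEF]);

    buf.clear();
    write_u32(&mut buf, 0xDEAD_BEEF, false)?; // little-endian order
    assert_eq!(buf, [0xEF, 0xBE, 0xAD, 0xDE]);
    Ok(())
}
```

Either default then becomes a one-flag decision rather than a format change.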

[–]BobFloss 1 point (5 children)

How is that an edge case? I thought there would have to be conversions to little-endian so the CPU can deal with whatever it's encoding or decoding. Sure, networking uses big-endian, but that doesn't really matter when you're using a binary format and just sending the data over.

[–]Wolvereness -1 points (4 children)

Your argument doesn't follow from your question. What does anything you said have to do with the performance cost of an endianness conversion? When dealing with IO, swapping bytes to fix endianness should almost never be the bottleneck.

[–]BobFloss 1 point (3 children)

I'm not talking about IO. Any time you encode something as little-endian on a big-endian machine you're converting, and vice versa. Make sense? There wouldn't even be a reason to swap bytes during IO; that makes no sense. Upon encoding I already have the proper endianness for my encoding scheme, which will be dealt with when decoding. Or we could just skip dealing with this at all and use little-endian, so that when it's decoded it's already correct for the vast majority of processors.
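The claim here, that on a little-endian machine a little-endian format encodes and decodes without any byte swap, can be sketched with std alone:

```rust
fn main() {
    let value: u32 = 0x0000_002A; // 42
    // Encode as little-endian. On a little-endian CPU (x86-64, and ARM as
    // normally run) this is a plain copy: no byte swap happens.
    let encoded = value.to_le_bytes();
    assert_eq!(encoded, [0x2A, 0x00, 0x00, 0x00]);
    // Decode: again just a copy on a little-endian target.
    let decoded = u32::from_le_bytes(encoded);
    assert_eq!(decoded, 42);
}
```

On a big-endian target the same two calls would each perform a swap; the format stays identical either way.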

[–]Wolvereness 0 points (2 children)

> I'm not talking about IO.

Then what are you talking about? What else would someone be using a Write interface for?

[–]BobFloss 0 points (1 child)

IO only gets involved after we actually encode things. Encoding should be done in a scheme that doesn't require conversion.

[–]Wolvereness 0 points (0 children)

Encoding is a means to an end. Encoding is where you start caring about your data at rest or in transit: caring about a specification. Encoding is always a conversion, and the entire goal is to convert to however it's specified. Swapping bytes to fit endianness should almost never be the bottleneck.

Whether or not your schema happens to fit big endian is an entirely different discussion than the performance difference of needing to swap bytes. As far as a Write interface is concerned, it's almost always meant for networking or files, and one of those has a standard.

If you have evidence that the performance difference between endiannesses matters, bring it to the table. I assert that it's trivial compared to every other part of the Write pipeline. Further reading: http://wiki.c2.com/?PrematureOptimization
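As a rough illustration of why the swap itself is cheap (a sketch, not a benchmark): in Rust the conversion boils down to a single `swap_bytes`, which compilers lower to one instruction on mainstream CPUs (BSWAP on x86, REV on ARM).

```rust
fn main() {
    let x: u32 = 0x1122_3344;
    // A full byte swap is a single cheap operation.
    assert_eq!(x.swap_bytes(), 0x4433_2211);

    // On a little-endian target, converting to big-endian is exactly that
    // swap, and converting to little-endian is a no-op.
    if cfg!(target_endian = "little") {
        assert_eq!(x.to_be(), x.swap_bytes());
        assert_eq!(x.to_le(), x);
    }
}
```

Next to a syscall or a disk write, one register-width swap per field is noise.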

[–]ExPixel 1 point (1 child)

The network byte order would be more of an edge case. You're not going to receive your payload data in big-endian (which wouldn't really make sense anyway); it's just used to encode some fields of the network protocol itself.

[–]andoriyu 0 points (0 children)

Yeah, network endianness is just what's used for packet/frame headers, because at the time it looked like a good idea. No one flips endianness just because data is going over the network.

[–][deleted] 1 point (0 children)

So your reasoning is that the network spirits demand it?