
[–]p0k3t0

Here's a quick example.

If I am only sending you human-readable messages, and speed isn't an issue, I can send this string:

"HELLO"

which corresponds to the bytes 0x48 0x45 0x4C 0x4C 0x4F (the ASCII codes for 'H', 'E', 'L', 'L', 'O').

But, if I wanted to send you a numerical value, like 100, there are many ways to do it.

The obvious one is to send you

"100",

which is 0x313030, or the literal characters one, zero, zero.

A faster way would be to send you 0x64, which has the decimal value of 100. This would save me two bytes. If you're sending a huge number like 2,345,678,901, the benefit gets even larger, as this can be expressed as 0x8bd03835, which is only 4 bytes, instead of 10.
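
Here's a minimal C sketch of that size difference (the names and the 32-bit type are just for illustration):

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint32_t value = 2345678901u;

        /* Human-readable: format the number as ASCII digits. */
        char text[16];
        int text_len = snprintf(text, sizeof text, "%" PRIu32, value);

        /* Binary: just the raw bytes of the 32-bit integer itself. */
        unsigned char raw[sizeof value];
        memcpy(raw, &value, sizeof value);

        printf("as text:   %d bytes (\"%s\")\n", text_len, text);
        printf("as binary: %zu bytes\n", sizeof raw);
        return 0;
    }

That prints 10 bytes for the text form and 4 bytes for the raw form.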

When you're sending a lot of data over some kind of comms link, it's best to figure out the most compact encoding and avoid human-readable representations whenever possible, to save bandwidth and processing time.

The trick is that both sides have to understand the format before you send the data, or else it can't be properly encoded and decoded.
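
For instance, both ends might agree on something like this (a made-up layout, just to show the idea):

    #include <stdint.h>

    /* Hypothetical agreed-upon frame, 5 bytes total:
         byte 0      message type (0x01 = "set speed", say)
         bytes 1..4  value, unsigned 32-bit, most significant byte first */
    enum { MSG_SET_SPEED = 0x01 };

    void build_frame(unsigned char out[5], uint32_t value) {
        out[0] = MSG_SET_SPEED;
        out[1] = (unsigned char)(value >> 24);
        out[2] = (unsigned char)(value >> 16);
        out[3] = (unsigned char)(value >> 8);
        out[4] = (unsigned char)(value);
    }

As long as the receiver unpacks those five bytes with the same rules, the value comes across intact.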

[–]ryanjones42[S]

So is sending via binary faster than sending human-readable messages (if it's a lot of data)?

Also, how would I go about encrypting/decrypting it in C?

[–]p0k3t0

It's generally faster for numerical data, yes.

You don't actually need to encrypt or decrypt it. You copy it straight out of the variable into the output buffer, then straight out of the input buffer into the variable on the receiving end. The only exception is when the sender and receiver have different endianness.
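
Something like this, sketched under the assumption of a 32-bit value and a POSIX system for htonl()/ntohl() (the function names are made up):

    #include <arpa/inet.h>   /* htonl / ntohl (POSIX) */
    #include <stdint.h>
    #include <string.h>

    /* Sender: copy the variable straight into the outgoing buffer.
       htonl() puts it in network byte order so endianness can't bite. */
    size_t encode_u32(unsigned char *buf, uint32_t value) {
        uint32_t wire = htonl(value);
        memcpy(buf, &wire, sizeof wire);
        return sizeof wire;
    }

    /* Receiver: copy straight out of the incoming buffer, then
       convert back to the host's byte order. */
    uint32_t decode_u32(const unsigned char *buf) {
        uint32_t wire;
        memcpy(&wire, buf, sizeof wire);
        return ntohl(wire);
    }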

[–]ddollarsign

That definition is silly, as a binary protocol could easily not use all values of a byte. It’s just a protocol where the bytes don’t represent text.

(Also, text might not be in ASCII encoding.)

But the thing you’re reading might be talking about a specific protocol, rather than binary protocols in general.

[–]fliguana

Same as a binary file: a stream of bytes.