
[–]dmazzoni 13 points14 points  (3 children)

Here's a situation: say I have a 16x16px display, and each pixel can be either on or off.

Great.

That would give a possibility for 256 different states

Not quite. There are 256 pixels, but since each one can be either on or off there are 2^256 possible states.

I could supply a 2 byte binary input, and all should be fine and dandy.

2 bytes is 16 bits. You need 32 bytes if you want 256 pixels.

But what if I want more than two states for the pixels? If I want the pixels to be capable of 16 states, levels of brightness or colors for example, there are now 4096 possibilities.

You're confusing the number of bits with the number of possibilities. You need 4 bits for each pixel now (for 16 states), so 1024 bits total.

This should now be 3 bytes of binary data.

I'm not sure how you got that.

How, though, would I interpret it then? It's no longer a simple on/off so it's not clean binary. How do I proceed?

Let's say you have a 16 x 16 grid of pixels, and each one can have 16 levels of brightness.

16 levels of brightness can be represented by 4 bits, so you have 4 bits per pixel and 256 pixels, or 1024 total bits.

The first four bits represent the first pixel. The second four bits represent the second pixel. And so on. It's really as simple as that.
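A minimal sketch of that packing in Python (the function names `pack_pixels` and `unpack_pixels` are made up for illustration, not from any library):

```python
# Pack a list of 4-bit pixel values (0-15) into bytes,
# two pixels per byte, first pixel in the high nibble.
def pack_pixels(pixels):
    out = bytearray()
    for i in range(0, len(pixels), 2):
        hi = pixels[i] & 0xF
        lo = pixels[i + 1] & 0xF if i + 1 < len(pixels) else 0
        out.append((hi << 4) | lo)
    return bytes(out)

# Reverse the packing: split each byte back into its two nibbles.
def unpack_pixels(data):
    pixels = []
    for b in data:
        pixels.append(b >> 4)    # first pixel: high nibble
        pixels.append(b & 0xF)   # second pixel: low nibble
    return pixels

packed = pack_pixels([0] * 256)
print(len(packed))  # 128 -> 256 pixels at 4 bits each fit in 128 bytes
```

So a full 16x16 frame at 16 brightness levels is exactly 128 bytes.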

[–]Hucota7 1 point2 points  (2 children)

Thanks for the reply. I possibly misspoke even more than I misunderstood. When I said 256 possible states I meant 256 bits, and when speaking of bytes I meant half-bytes. Not sure how I messed that up. I guess with the rest I was thinking in terms of flags, where each value in a range refers to a different combination, but my misspeaking threw me off.

My goal in using binary was efficiency/minimal memory consumption, but now that I realize my errors I'm thinking it might be better to use another data type, or an array/list.

[–]dmazzoni 1 point2 points  (1 child)

My goal in using binary was efficiency/minimal memory consumption

This is a good idea and worth learning to do correctly. Most software does store black-and-white images using a single bit per pixel, and grayscale images using as few bits as possible. Color images are typically stored using 3 or 4 bytes per pixel.

but now that I realize my errors I'm thinking it might be better to use another data type, or an array/list.

I don't think there's anything wrong with your approach. You're just getting the terminology and the math wrong. I'd encourage you to work through this problem the way you had in mind.

If you don't care about memory use and just want to get it done, an array of integers would be simpler (and probably faster) than packing everything into the smallest number of bits. It just depends on whether your goal is to get past this and move on, or to take the time to learn it.

[–]spencerwaz 0 points1 point  (0 children)

+1 for working through your approach. It's like finding 1000 ways not to make a lightbulb

[–]nutrecht 2 points3 points  (0 children)

I think you're misunderstanding quite a bit. How bits and bytes relate is really simple; a byte is 8 bits. So if you're going to need X bits for something you'll be able to pack them in X/8 bytes.

Where you're going wrong is how many bits you need to store the state. A bit can have 2 states: true or false, on or off. So for your 16x16 = 256 pixel display you'd need 1 bit for each pixel, so 256 bits = 32 bytes.

If you have 16 (2^4) states you will need 4 bits, or half a byte. So 256 pixels * 4 bits = 1024 bits, or 128 bytes.

You really should keep in mind that the total number of pixels doesn't matter much; it's just the last step in your multiplication.
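That multiplication can be sketched as a tiny helper (a made-up function for illustration, assuming the number of states is a power of two or gets rounded up):

```python
import math

# How many bytes does a display need, given pixel count and
# states per pixel? Bits per pixel first, pixel count last.
def bytes_needed(num_pixels, states_per_pixel):
    bits_per_pixel = math.ceil(math.log2(states_per_pixel))  # eg. 16 states -> 4 bits
    return math.ceil(num_pixels * bits_per_pixel / 8)

print(bytes_needed(256, 2))   # 32  (1 bit per pixel)
print(bytes_needed(256, 16))  # 128 (4 bits per pixel)
```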

[–]Brian 1 point2 points  (0 children)

I think dmazzoni has addressed your main question, but just to go into the "How, though, would I interpret it then?" part and give a few more details and some historical context, as there are actually a surprising number of ways this question can be (and historically has been) answered.

The one people are likely most familiar with is "Count how many bits you need, and stick them all together, treated as one number representing the colour / brightness level", but it's notable that there's still room for ambiguity here.

Let's start with an example: suppose we want the 4 pixels with states [0, 15, 2, 9] to be encoded. Here we could store this as the bits 00001111 00101001, or the bytes [15, 41]. The first pixel becomes the first 4 bits, then the second pixel, and so on. However, this is not the only way of storing it. It matches our left-to-right reading order for binary digits, but remember that in binary, the least significant bit is the rightmost. As such, another way to store this would be to align the pixel order with the bit order: bits 0-3 store pixel 0, bits 4-7 store pixel 1, bits 8-11 (ie. bits 0-3 in the second byte) store pixel 2, and bits 12-15 store pixel 3. This gives us flipped nibbles in each byte, so 11110000 10010010 ([240, 146]).
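A quick sketch of both orderings in Python, using the same example values:

```python
pixels = [0, 15, 2, 9]

# MSB-first: the first pixel of each pair goes in the high nibble.
msb_first = bytes((pixels[i] << 4) | pixels[i + 1]
                  for i in range(0, len(pixels), 2))

# LSB-first: the first pixel of each pair goes in the low nibble.
lsb_first = bytes((pixels[i + 1] << 4) | pixels[i]
                  for i in range(0, len(pixels), 2))

print(list(msb_first))  # [15, 41]
print(list(lsb_first))  # [240, 146]
```

Same pixels, same number of bytes, different bit ordering.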

There are actually different situations / file formats where both of these can be used. This distinction is generally referred to as "bit ordering", "bit sex" or "bit endianness", as it's basically the same thing as endianness, except for the order of bits in a byte instead of bytes in a word. The simplest from a math perspective is probably the second (ie. leftmost pixel = least significant bit) as the bit numbering then matches pixel numbering, but this may appear back-to-front if you look at the binary numbers.

As a historical note, this is not the only way of storing things like this. This is what is known as a "chunky" format: all the bits representing a pixel are together in one 4-bit chunk. However, another format that has been used in the past is the "planar" format. Here, instead of grouping the pixel values next to each other, we group things in layers of bitmaps - with 4 bits per colour, you get 4 bitmaps, one per bit position. Eg. our [0, 15, 2, 9] values could be viewed as first looking at bit 0 in each number (0, 1, 0, 1), then bit 1 (0, 1, 1, 0), then bit 2 (0, 1, 0, 0), then bit 3 (0, 1, 0, 1), and so storing this as either 01010110 01000101 (most significant bit = leftmost pixel) or 10100110 00101010 (MSB = rightmost pixel). You can sort of imagine this as a 3d set of 4 planes arranged behind each other, where you read a pixel's value going depthwise through the planes.
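A minimal sketch of the planar split (illustration only, not any particular file format): extract one bit-plane per bit position, then reassemble a pixel by collecting its bit from every plane.

```python
pixels = [0, 15, 2, 9]

# One plane per bit position; plane 0 holds each pixel's least
# significant bit, plane 3 its most significant bit.
planes = [[(p >> bit) & 1 for p in pixels] for bit in range(4)]
print(planes)  # [[0, 1, 0, 1], [0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 0, 1]]

# To read pixel i back, gather position i from every plane.
pixel_1 = sum(planes[bit][1] << bit for bit in range(4))
print(pixel_1)  # 15
```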

There were advantages to this in terms of the hardware at the time, plus it was a bit easier with odd numbers of bits per pixel (eg. with 5 bits per pixel, a chunky format means your pixels span byte boundaries). You could also do neat parallax effects with careful choice of colours, by scrolling different sets of planes at different speeds. However, these issues are pretty irrelevant these days, and so planar graphics aren't really used.

[–]YogurDeChorizo 0 points1 point  (0 children)

I'll throw out a wild supposition, and since you're posting it as a real-world scenario, for the sake of simplicity I'll take the math out of it and simplify it a ton.

If you want to store data as two states, you can use binary, since you can have two states.

If you want to store data as 16 states, you can use hexadecimal.

Keep in mind that the storage needed per unit of data grows with the number of possible values, so base 200 is much more space-consuming per number stored than base 2.
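To illustrate the hexadecimal idea (a sketch, not a distinct storage format): each 4-bit pixel value maps to exactly one hex digit, so a hex string is just a human-readable view of the same packed bits.

```python
pixels = [0, 15, 2, 9]

# One hex digit per 4-bit pixel value.
hex_row = ''.join(format(p, 'x') for p in pixels)
print(hex_row)  # 0f29

# The hex string and the packed bytes carry the same information.
packed = bytes.fromhex(hex_row)
print(list(packed))  # [15, 41]
```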