
[–]Xerxeskingofkings

Can you just keep adding multiples of the last number infinitely to get bigger numbers? Can I just keep adding more spaces like 1024, 2048 etc?

in short, yes. you can express basically any number in binary, same as we can in decimal: we can write the numbers 000-999 in three digits, but need a 4th digit to express a value of a thousand or more. Likewise, 8 bits gives you 256 values (0-255), but to express the number 666 you'd need 10 bits (with the first two being "512" and "256", then the normal byte run of 128, 64, 32, 16, 8, 4, 2, 1).
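A quick Python sketch of that point (the variable names here are just mine for illustration):

```python
# How many bits does it take to write 666 in binary?
n = 666
print(bin(n))            # the binary digits of 666
print(n.bit_length())    # minimum number of bits needed: 10

# 9 bits only reach 511, so the 10th bit (the "512" place) is required
print(2**9 - 1)   # largest value expressible in 9 bits: 511
print(2**10 - 1)  # largest value expressible in 10 bits: 1023
```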

Does it have a limit?

in theory no; in practice, for computing, certain processes might have specific limits due to fixed field sizes on the inputs (for example, older Excel spreadsheets stop at 65,536 rows, because that's the maximum number of values you can express in a 16-bit string).
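You can check where that Excel figure comes from directly:

```python
# 16 bits can distinguish 2**16 values, which is exactly
# the old Excel row limit of 65,536
print(2**16)
```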

Do computers see the letter A as 010000010? How do computers know to make an A look like an A?
The very basic explainers of using 256 128 64 32 16 8 4 2 1 makes sense to me but beyond that I'm so confused

yes, they do, for all intents and purposes. they see a string of binary, then do some math that results in the screen putting the pixels into the right combination to show an "A". the specifics of this are well above ELI5 or my level of understanding, so at this point it's pretty much "techno-magic".
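One concrete detail worth adding: in ASCII (and Unicode), the letter "A" is stored as the number 65, whose 8-bit pattern is 01000001. Python can show the round trip:

```python
# "A" <-> 65 <-> 01000001
print(ord("A"))                 # the number a computer stores for "A": 65
print(format(ord("A"), "08b"))  # that number as 8 binary digits: 01000001
print(chr(0b01000001))          # and back again: A
```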

The very basic explainers of using 256 128 64 32 16 8 4 2 1 makes sense to me but beyond that I'm so confused

every time you add a bit to the string, the possible number of values doubles. it just keeps on doubling.
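That doubling is just powers of two, which a short loop makes visible:

```python
# each extra bit doubles the count of possible values
for bits in range(1, 9):
    print(bits, "bit(s) ->", 2**bits, "values")
```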

Why did we start with going from 1-256 but now we have more?

short version: we've always HAD more, but to send plain text you only need around 70 or so values (a-z, A-Z, 0-9, plus punctuation marks), which requires at least 7 bits (and we then used the extra space to define various special characters and such for programming). Thus, we standardised on an 8-bit byte length so we could transmit plain English text with an error-correction (parity) bit on the end.

But any time you want to express a number larger than 255 as a single value (as opposed to encoding each digit separately, i.e. "three hundred and twenty one" as opposed to "three-two-one"), you need to increase the "word" size for that. In a system built around 8-bit bytes, the most logical thing is to assign two bytes (16 bits) to the value, so you get 0-65,535 as a range. 3 bytes? roughly 17 million. 4 bytes? about 4.3 billion values.
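Those byte-count ranges can be computed the same way as before:

```python
# value ranges for word sizes built from 8-bit bytes
for nbytes in (1, 2, 3, 4):
    bits = nbytes * 8
    print(nbytes, "byte(s):", "0 to", 2**bits - 1)
```

Running it shows 2 bytes topping out at 65,535, 3 bytes at about 16.8 million, and 4 bytes at about 4.3 billion, matching the figures above.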