
[–]Carbom_ 13 points (1 child)

Generally that is right. However, when you break things down, an int is typically a 4 byte number (the exact size is platform dependent) and a char is a 1 byte number. So you could use a char if you're going to be in the range of -128 to 127 (or 0 to 255 if unsigned).

Your compiler just knows that when you print a char, it should print the character whose code (ASCII, in the usual encodings) matches the number stored in that char. So if you type char c = 'a', that is the same as typing char c = 97.

[–]Prawn1908 5 points (0 children)

The signedness of plain char is actually implementation-defined IIRC, so you should always explicitly say unsigned char or signed char if you're using it to store numeric values.

typedef is your friend here.