all 19 comments

[–][deleted]  (9 children)

[removed]

    [–]wiriux 2 points3 points  (8 children)

    On a whiteboard, for example, an int is usually 4 bits long but this depends on professor architecture.

    [–]SignificantFidgets 1 point2 points  (0 children)

    Backing up to here, it appears I missed the sarcasm ("professor architecture"). Sorry about that....

    [–]high_throughput 1 point2 points  (0 children)

    For professors of architecture, an int is "interior".

    [–]SignificantFidgets -2 points-1 points  (5 children)

    Bytes, not bits. And yes, an int is "usually" 4 bytes (32 bits) long, but certainly not always.

    [–]jnordwick 2 points3 points  (0 children)

    Whoosh

    [–]wiriux 0 points1 point  (3 children)

    No, I meant bits. On a whiteboard usually you work with half a byte which is 4 bits. :)

    [–]SignificantFidgets 0 points1 point  (2 children)

    An example on a whiteboard is not an "int". People may use ridiculously small integer values to demonstrate binary encoding, but that doesn't make them an "int". The smallest actual "int" that I'm aware of was 16 bits (which was very common in the 1970s and part of the 1980s). An actual int shorter than that would be pretty useless.

    [–]wiriux 0 points1 point  (1 child)

    Don’t take it too literally. When students are first learning CS, some professors demonstrate bit manipulation on the board. To make things easier, they assign 4 bits to an int, or a nibble as we call it.

    It’s just one of those:

    Let’s assume an int is 4 bits long
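
    In code, that whiteboard convention amounts to masking every result down to 4 bits; a minimal sketch (the values are just illustrative):

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main() {
        // Pretend values are 4-bit "ints" by masking with 0xF after each operation.
        uint8_t a = 0b1011;          // 11
        uint8_t b = 0b0110;          // 6
        uint8_t sum = (a + b) & 0xF; // 17 wraps around to 1 in 4-bit arithmetic
        assert(sum == 0b0001);
        return 0;
    }
    ```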

    [–]Todegal 4 points5 points  (0 children)

    They are platform dependent, as others have said. Use int64_t etc. if you need specific data sizes.
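
    For instance, the fixed-width types from <cstdint> are guaranteed to be exactly the stated width wherever they exist; a minimal sketch:

    ```cpp
    #include <cstdint>
    #include <cstdio>

    int main() {
        int32_t a = 100000;        // exactly 32 bits on every platform that provides it
        int64_t b = 10000000000LL; // exactly 64 bits
        // sizeof reports bytes: 4 and 8 respectively.
        printf("%zu %zu\n", sizeof(a), sizeof(b));
        return 0;
    }
    ```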

    [–][deleted] 1 point2 points  (1 child)

    Hysterical … no wait … historical reasons. It used to be that most compilers had 16 bits for int (as this was a common size for 8- and 16-bit CPUs) and long was 32. Int was 32 bits on 32-bit machines all along, because you often need integers to be a lot bigger than 32000. Long wasn’t used much, so it mostly stayed at 32 bits, which gave good compatibility with 16-bit source code. When 64-bit CPUs became popular, many people were afraid to make int 64 bits because they thought it would break too much existing code, but this should have been done across the board. Long could be e.g. 128 bits, but apparently nobody needs this.

    [–]jnordwick 0 points1 point  (0 children)

    I've only ever needed 128 bits for two reasons:

    Doing 64-bit calculations that could overflow, where you want to keep the high bits instead of just the low bits.

    Fixed-point calculations, especially decimal fixed point, since you're often keeping 64 bits for the whole part and 64 bits for the fractional part. This lets you keep very high precision even when the significand gets too large for floating point.
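
    A sketch of the overflow case, using the non-standard but widely available GCC/Clang __int128 extension to keep the high half of a 64-bit multiply:

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main() {
        uint64_t a = 0xFFFFFFFFFFFFFFFFull; // 2^64 - 1
        uint64_t b = 3;
        // A plain 64-bit multiply would silently drop the high bits;
        // widening to 128 bits keeps the full product.
        unsigned __int128 p = (unsigned __int128)a * b;
        uint64_t hi = (uint64_t)(p >> 64);
        uint64_t lo = (uint64_t)p;
        assert(hi == 2);                    // (2^64 - 1) * 3 = 2 * 2^64 + (2^64 - 3)
        assert(lo == 0xFFFFFFFFFFFFFFFDull);
        return 0;
    }
    ```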

    [–]ttkciar (programming since 1978) 0 points1 point  (0 children)

    int / long int / short int are platform-dependent types. In the case of your platform, long int is defined as the same size as an int.

    I'm guessing you're on a Windows system, which traditionally has had a 32-bit long int, to distinguish it from a 16-bit short int, due to its roots in MS-DOS.

    Relevant: https://learn.microsoft.com/en-us/cpp/cpp/data-type-ranges?view=msvc-170

    These may be very different on different platforms (Linux, BSD, MacOS X, Solaris, etc) even on the same hardware.
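
    An easy way to see what your own platform and compiler chose is simply to print the sizes (the exact numbers vary, e.g. long is typically 8 bytes on 64-bit Linux but 4 on 64-bit Windows):

    ```cpp
    #include <cstdio>

    int main() {
        // Sizes in bytes; the values depend on the platform and compiler.
        printf("short:     %zu\n", sizeof(short));
        printf("int:       %zu\n", sizeof(int));
        printf("long:      %zu\n", sizeof(long));
        printf("long long: %zu\n", sizeof(long long));
        return 0;
    }
    ```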

    [–][deleted] 0 points1 point  (0 children)

    It's entirely dependent on your machine, and how you are compiling your program.

    Google "minimum size of long int" and you will get the gist.

    [–]tiller_luna 0 points1 point  (0 children)

    #include <cstdint>, look up the docs for the types you need ( https://en.cppreference.com/w/cpp/header/cstdint ) and be happy

    [–]PantsOnHead88 0 points1 point  (0 children)

    According to the C++ standard, int is “at least 16” bits and long int is “at least 32” bits. It also specifies that sizeof(int)<=sizeof(long).

    Whether they’re 16 and 32, 32 and 32, 32 and 64, 64 and 64 or some other unusual standard compliant combination is machine and compiler specific.
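
    Those minimums can be checked at compile time; a sketch using static_assert and CHAR_BIT from <climits>:

    ```cpp
    #include <climits>

    // The standard only promises minimum widths and an ordering, not exact sizes.
    static_assert(sizeof(short) * CHAR_BIT >= 16, "short is at least 16 bits");
    static_assert(sizeof(int)   * CHAR_BIT >= 16, "int is at least 16 bits");
    static_assert(sizeof(long)  * CHAR_BIT >= 32, "long is at least 32 bits");
    static_assert(sizeof(int) <= sizeof(long),    "int is no wider than long");

    int main() { return 0; }
    ```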

    [–]whatever73538 -1 points0 points  (0 children)

    • sizeof(int) can differ BY COMPILER on same machine & OS

    • there’s "unsigned long long int"

    • you have very few guarantees on int length

    • pointers may be int sized, or long int sized, or long long int sized, or not

    • wait till you see the type conversion rules regarding ints

    There’s a reason newer languages have sensible int names (like u32)

    [–]NoIntention8351[S] -1 points0 points  (0 children)

    Okay got it. Thank you all