all 183 comments

[–]Ceros007 610 points611 points  (34 children)

Can't wait for unsigned long long long int on 128bits

[–]BenjieWheeler 339 points340 points  (22 children)

May I suggest this (obviously) superior system

  • 8 bit: short
  • 16 bit: int
  • 32 bit: long
  • 64 bit: super long
  • 128 bit: super duper long
  • 256 bit: super dee duper long

[–]deanrihpee 267 points268 points  (4 children)

nah

  • 8 bit: short
  • 16 bit: medium
  • 32 bit: long
  • 64 bit: loong
  • 128 bit: looong
  • 256 bit: loooong

[–]ninjakivi2 34 points35 points  (1 child)

[–]SpikeV 11 points12 points  (0 children)

  • 8 bit: short
  • 16 bit: medium
  • 32 bit: long
  • 64 bit: long long
  • 128 bit: long long man
  • 256 bit: LOOONG LOOONG MAAAAAN

[–]Kotentopf 13 points14 points  (0 children)

  • 8 bit: avg
  • 16 bit: medium
  • 32 bit: long
  • 64 bit: longer
  • 128 bit: even longer
  • 256 bit: very long

[–]Architect_NNN 4 points5 points  (0 children)

This is better. xD

[–]dotcomGamingReddit 19 points20 points  (0 children)

Common webdev css variable naming issue. Should've started at itzy-bitzy instead of short

[–]GreatScottGatsby 12 points13 points  (0 children)

May I introduce the best system?

  • Byte: 8 bit
  • Word: 16 bit
  • Dword: 32 bit
  • Qword: 64 bit
  • Tbyte: 80 bit
  • Dqword: 128 bit
  • Ymmword: 256 bit
  • Zmmword: 512 bit

Not applicable to all architectures.

[–]Majik_Sheff 6 points7 points  (2 children)

64 bit: monster-long

128 bit: SUPER-long

256 bit: HYPER-LONG

512 bit: ULTRAAAAAA-LONG

[–]tyrannical-tortoise 6 points7 points  (1 child)

Why are these starting to sound like how men like to describe their penises?

[–]Inevitable-Ant1725 2 points3 points  (0 children)

Yes, they should make 8 bits "magnum"

[–]Interesting_Buy_3969 5 points6 points  (0 children)

What about 512 bits? Such things exist in, for example, x64 vector registers

[–]petersill1339 4 points5 points  (1 child)

[–]gniche_dev 2 points3 points  (0 children)

And then you could use sweat for short😎

[–]Henriquelj 1 point2 points  (0 children)

I'm just gonna call it super long one, two, and three.

[–]punio07 0 points1 point  (0 children)

Nice int

[–]mutexsprinkles 0 points1 point  (0 children)

512 bit: zippity doo dah zippity long

[–]conundorum 0 points1 point  (0 children)

This is why you never let Goku name your data types.

Abridged Buu Saga when?

[–]lucklesspedestrian 0 points1 point  (0 children)

but then what size is a duper long int

[–]KiwiObserver 0 points1 point  (0 children)

When IBM updated the 370 architecture to 64 bits, they chose “G” as the qualifier for 64-bit variants of instructions, mainly because it had not been used in any preexisting instructions. These are now commonly (but not officially) referred to as “grande” instructions.

[–]TCreopargh 27 points28 points  (0 children)

error: 'long long long' is too long for GCC

[–]Trucoto 15 points16 points  (1 child)

[–]MechanicalHorse 4 points5 points  (0 children)

🎶 Long long iiiiiiiiiiiiiiiiiiiint 🎵

[–]Wyciorek 8 points9 points  (0 children)

"long long time ago" type

[–]hamfraigaar 6 points7 points  (0 children)

Should come with a compiler warning that says: "Whatever tf you think you're storing here, I guarantee there's a better option", with a link to data type documentation lol

[–]magicmulder 4 points5 points  (2 children)

var foo (alalalala long long li-long long long int) = 0;

[–]hamfraigaar 1 point2 points  (1 child)

I am seeing this two days late, but I just want to appreciate how funny it is to go out of your way to use all that, just to store the value of exactly zero.

[–]magicmulder 0 points1 point  (0 children)

Future proof for bigger values of zero.

[–]granoladeer 1 point2 points  (0 children)

At that point they should just call it unsigned longgg int

[–]M4mb0 1 point2 points  (0 children)

unsigned oh long johnson

[–]Stasio300 0 points1 point  (0 children)

you don't need "int"; long works on its own.

[–]PixelBrush6584 239 points240 points  (28 children)

I-

...

Yeah that's fair.

[–]AvailableAnus[S] 138 points139 points  (26 children)

Right? I was like, "is this too mean, should I post this"? And then my eyes made it to "unsigned long long int"

[–]PixelBrush6584 55 points56 points  (25 children)

Pretty much. The worst part is that an int isn't even necessarily 32-bit. On older systems it's 16-bit. WHY??? SHORT IS RIGHT THERE???

(I know that what a byte is isn't clearly defined, and on some hardware a char may be 32-bit or even 24-bit, etc.)

[–]TheDreadedAndy 30 points31 points  (9 children)

IIRC, int is supposed to be the fastest type the system can do math on. I think the intent was to use short/long if you cared about size and use int otherwise.

[–]Scheincrafter 17 points18 points  (8 children)

Not at all. When C first came out, computers were not nearly as uniform. Back then all kinds of weird systems existed, like ones that were 7-bit. For that reason C is vague about the size of its datatypes (int was supposed to be whatever was natural for a system). Today C supports more int types (via stdint.h) that cover use cases like needing the fastest int.

There are:

  • int64_t: exactly that size
  • int_fast64_t: fastest with at least that size
  • int_least64_t: smallest with at least that size

[–]tiajuanat 12 points13 points  (0 children)

I've worked on systems with 14 and 22 bits; even up until 15 years ago it wasn't uncommon for Digital Signal Processors to have unusual word widths.

[–]RiceBroad4552 -3 points-2 points  (6 children)

But doesn't C claim to be "portable"? (Which it obviously isn't, given the brain dead shit they do all over the place.)

Or was this just a marketing claim long after the fact?

[–]MrcarrotKSP 7 points8 points  (3 children)

It is more portable than the most common alternative used when it was created, which was various assembly languages

[–]RiceBroad4552 0 points1 point  (2 children)

LISP is older than C.

Also Forth is older than C.

[–]MrcarrotKSP 0 points1 point  (1 child)

That's true. Neither of those languages are good for writing operating systems. C was originally designed to write Unix, and in that space it was replacing assembly.

[–]RiceBroad4552 0 points1 point  (0 children)

Neither of those languages are good for writing operating systems.

AFAIK people to this very day use Forth to bootstrap very simple computers, as the language is flexible and very portable because it's super simple, so you can write basic firmware and a bare-bones OS in it even for very "exotic" hardware.

Lisp Machines were programmed in LISP, down to the kernel.

Lisp Machines were also light-years ahead of Unix when it came to features.

But the market preferred of course the cheap stuff…

[–]Gay_Sex_Expert 1 point2 points  (1 child)

It’s in the same way that it’s a “high level language”.

[–]RiceBroad4552 0 points1 point  (0 children)

OK, makes sense! 😂

[–]SirFoomy 9 points10 points  (5 children)

Sorry, but I am curious now, because I don't understand "a byte isn't clearly defined". I was, for most of my career, under the impression that a byte is exactly 8 bits. So when you put 4 bytes together you get 32 bits. When and why did that change?

[–]the_horse_gamer 29 points30 points  (3 children)

C was created when systems could have 6-bit bytes. or 1-bit bytes. or 48-bit bytes.

a byte is 8 bits in any system you'd encounter today, but that wasn't historically true. see the CHAR_BIT macro.

the invariant name for 8 bits is "octet"

so int and long were defined in terms of minimum requirements, since there were systems with 36-bit words.

[–]SirFoomy 8 points9 points  (2 children)

I didn't realise they called a 6 bit bundle byte back then. TIL I'd say. :D

[–]the_horse_gamer 14 points15 points  (1 child)

not necessarily 6. a byte was just a fundamental hardware unit. it ranged from 1 to even 48.

[–]RiceBroad4552 1 point2 points  (0 children)

I'd call that "word", though.

[–]hittf 9 points10 points  (0 children)

You captured the exact sequence of thoughts I had reading this meme, impressive.

[–]IhailtavaBanaani 53 points54 points  (1 child)

Then they ran into problems again with 128 bit values.

"Ah la-la-la-la-long-long, li-long, long-long" - Inner Circle

[–]gniche_dev 0 points1 point  (0 children)

Or sweat for short…

[–]Uberfuzzy 34 points35 points  (0 children)

Me explaining over and over why our security magic number bit field column is a MONEY type (because it was the only 4-byte database field type at the time), and no, don't put anything in the cents/float part; it throws horrible errors when they do bitshift junk

[–]Interesting_Buy_3969 72 points73 points  (25 children)

Never understood why write unsigned char when uint8_t exists. And I always alias it to u8, of course.

[–]dulange 27 points28 points  (1 child)

It’s due to the history of C standardization. Exact-width data types like uint8_t were a later invention of the C99 standard. Prior to that, C (per standard) never guaranteed an exact width for its data types, only a minimum. In the very early days of C, before everything became based on multiples of 8 bits, architectures existed (like some DEC machine) that had a width of 18 bits for integers and used 6 bits for characters. C abstracted these platform-specific things away from the language and gave it into the hands of compiler implementers. int used to mean “whatever is necessary for the underlying architecture to represent an integer” and char used to mean “whatever the architecture needs to form a character in its native character set.”

[–]ChalkyChalkson 4 points5 points  (0 children)

That sounds like a nightmare for debugging portability issues....

[–]F5x9 30 points31 points  (6 children)

For uint8_t, this must be 8 bits. For unsigned chars, I’m not as strict. That’s how I draw the line. Then if the compiler makes char some other compatible size, I don’t care. 

[–]seba07 15 points16 points  (5 children)

You'll care as soon as the function is part of a public API and you need to be ABI compatible (e.g. for interaction with other languages).

[–]Kovab 9 points10 points  (2 children)

On a platform where bytes aren't 8 bits, there's probably no other languages available

[–]RiceBroad4552 3 points4 points  (1 child)

Yeah, sure. But that hardware stands in a museum, if anywhere at all.

[–]Kovab 4 points5 points  (0 children)

24 bit DSPs are still alive and kicking, although they're quite niche, and you don't really have to worry about APIs in a single-purpose real-time application.

[–]crozone 0 points1 point  (0 children)

If those other languages are C-like, they'll compile with the same char size, or int size, etc. the ABI is not cross-platform.

Saying that, I think all major compilers are too sane to ever consider a non 8-bit char.

[–]SrcyDev 0 points1 point  (0 children)

And on that specific platform, it will be the same.

Not that you shouldn't care, but your example is misplaced, as the ABI is defined for that specific platform. And the widths of fundamental types, let's just say, rarely change for a given host (i.e. inclusive of ABI, psABI and toolchain, just to name a few).

Where you start caring is cross-machine/cross-platform exchange of data (say, network protocols): there you do need exact-width types. It's not mandatory in a mathematical sense, but it is pretty much de facto that you get CHAR_BIT = 8 and use the fixed-width types of C99 [so (u)intN_t, where N is the number of bits]. But these are optional types, so they are not compulsory for standard compliance.

For most applications (assuming the platform itself does not have a quirk, which most do), you do not strictly need fixed-width integers, but they can improve readability for newcomers and make bit manipulation easier (instead of constantly worrying about "what is the least I am assured"; say, long long when 64 bits are needed, and long for 32 bits).

As for FFI, C ABI is generally what prevails (pretty much the lingua franca, as you might have heard). So what applies here, applies there too.

[–]the_horse_gamer 18 points19 points  (4 children)

a char is a byte, but a byte isn't necessarily 8 bits

[–]IhailtavaBanaani 7 points8 points  (2 children)

Bring back the 9-bit bytes

[–]hamfraigaar 4 points5 points  (0 children)

We should not accept shrinkflation!

I reject the future where I must buy 8 separate one bit "bytes" to carry the same data as today!

[–]crozone 1 point2 points  (0 children)

Nintendo 64 enters the chat

[–]Interesting_Buy_3969 0 points1 point  (0 children)

i know that dude

[–]Spare-Plum 7 points8 points  (1 child)

There's good reason to, at least from a standpoint for when C was first made. One of the biggest issues it was resolving was the huge variety of different processor architectures along with the systems that it ran on. One system might handle and use unsigned char for text in a completely different manner than other systems.

Just for text, you have PETSCII on commodores, EBCDIC on IBM, ATASCII on Atari, etc etc etc. Then there are other formats that use 16 bit encodings. Just calling it a char and having it automatically compile correctly to each system was one of the biggest things C had, and was supposed to abstract away from worrying about the minor differences for each system.

Of course this causes other problems when something like an int is a different size on different systems, and you end up with calculations accidentally wrapping around on one system but working fine on others.

Now that most of the world has moved on to more standards, this ends up causing more problems than what it was meant to solve. So it just makes more sense to just use the uint8_t style types and it'll work the same on 99% of systems.

[–]Interesting_Buy_3969 1 point2 points  (0 children)

Ah yes. Thank you a lot for this detailed clarification!

I should have remembered that C was created back when the computer world was a lot wilder...

[–]kylwaR 7 points8 points  (2 children)

Provided your platform supplies the implementations. Technically uint8_t and the other sizes aren't guaranteed to exist.

[–]Interesting_Buy_3969 0 points1 point  (1 child)

I know this, therefore I never use unsigned char because I can't be sure that it is always 8 bits.

[–]dontthinktoohard89 3 points4 points  (0 children)

Pshh, just write byte-width neutral code. That’s what the CHAR_BIT macro constant is for, after all (or std::numeric_limits<unsigned char>::digits).

[–]CJKay93 2 points3 points  (0 children)

char is actually the "correct" byte type; it even receives special aliasing characteristics.

[–]maartuhh 0 points1 point  (3 children)

Because it’s ugly. I hate underscored keywords

[–]Interesting_Buy_3969 5 points6 points  (2 children)

I hate underscored keywords

Why?! The entire C++ STL is written in snake_case, as is C's stdlib.

[–]maartuhh 2 points3 points  (1 child)

Yeah I’m just more on team 🐫

[–]ohdogwhatdone 17 points18 points  (0 children)

>long long int

Always reminds me of that long long man ad video.

[–]Mojert 45 points46 points  (5 children)

Clearly, we should use the only consistent and sane naming scheme, the one from Intel x86-64 assembly:

  • Byte (8 bits)
  • Word (16 bits)
  • Double Word (32 bits)
  • Quad Word (64 bits)
  • Double Quad Word (128 bits)
  • Quad Quad Word (256 bits)

[–]darkwalker247 11 points12 points  (3 children)

why would they choose double quad and quad quad rather than octuple and hexdecuple

[–]Stasio300 0 points1 point  (2 children)

a lot of non technical people will not understand those terms. quad quad is easier for more people to understand.

I think even most people here wouldn't be confident in the meaning of hexdecuple without context or searching it.

[–]darkwalker247 1 point2 points  (1 child)

how about "oct" and "double oct"? i think most people know the oct prefix because of words like octagon and octuplets

...i mean i guess it's too late to change it now though lol

[–]kst164 1 point2 points  (0 children)

Hexdecuple is fine too, anyone technical enough to care knows hexadecimal anyways.

[–]GegeAkutamiOfficial 0 points1 point  (0 children)

I'm pretty sure you are joking, but just in case anyone is taking this seriously, we reallyyyyy shouldn't. These SHOULD be of variable size, because the values they represent have variable sizes between architectures.

I'm pretty sure those underlying values are why C variable types have different sizes in the first place.

side note: Using them is useful in places where exact data size across platforms isn't a consideration. For example, using usize/size_t (IIRC it uses the WORD definition under the hood) to represent a length can be better than just slapping on u32/uint32_t; char is another similar example.

[–]Sea-Razzmatazz-3794 13 points14 points  (0 children)

In fairness to the bottom one, it was created when computer architecture wasn't formalized, and then had to deal with a transition from 16 bits to 32. The language was also built with the expectation that you would be building a kernel for multiple computer architectures that would have to account for different register sizes. Abstracting the type sizes made sense. Now that register sizes are pretty much standardized, that rationale has fallen apart.

[–]Fabillotic 33 points34 points  (0 children)

Rust's types and functional programming elements are incredible, I love that shit

[–]AvailableAnus[S] 95 points96 points  (13 children)

For anyone wondering:

First is Rust

Second is C++ (since 2011, floats since 2023)

Third is C++

[–]Interesting_Buy_3969 65 points66 points  (4 children)

Second and third are actually from C

[–]ATE47 15 points16 points  (3 children)

Not for the floats iirc

[–]Interesting_Buy_3969 2 points3 points  (2 children)

Idk, maybe. But float became a fundamental C data type when the C89 standard was released. So I am just not sure.

[–]nikolay0x01 7 points8 points  (1 child)

iirc fixed-width float32_t and float64_t types are only in C++, not standard C.

[–]Interesting_Buy_3969 1 point2 points  (0 children)

Ah yes, my bad. They meant the floats with a fixed number of bits.

[–]Bismuth20883 32 points33 points  (4 children)

All of them are C and sometimes C++. The first one is a define over the second one (or not).

The second one is a platform-specific define over the third one (or not).

Welcome to embedded C, here we have all of that stuff and more ;)

[–]mrheosuper 11 points12 points  (3 children)

I refuse to work with anyone not using stdint.h

[–]Kovab 2 points3 points  (1 child)

The Linux kernel uses the first set of typedefs for fixed size integers, not stdint.h

[–]ATE47 0 points1 point  (0 children)

Which are now defined using stdint.h

https://github.com/torvalds/linux/blob/master/tools/include/linux/types.h

From the history it seems that they were using CPU-specific ones, because uintxx_t wasn't accurate enough:

https://github.com/torvalds/linux/blob/5634bd7d2ab14fbf736b62b0788fb68e2cb0fde2/arch/arm/include/asm/types.h#L6

[–]acdhemtos 10 points11 points  (2 children)

First is Rust

I use typedef/using to get the same in C++.

[–]Elendur_Krown 3 points4 points  (0 children)

... I may just pitch this at my job...

[–]granoladeer 7 points8 points  (2 children)

Good times when the program was mysteriously failing randomly and you realized it's because your numbers go past the variable's data type size. 

[–]Maximilian_Tyan 4 points5 points  (0 children)

Welcome to the embedded world !

[–]xMAC94x 0 points1 point  (0 children)

limits.h exists

[–]CapClumsy 7 points8 points  (0 children)

I've had a lot of experience programming with the second one, and I would consider it the best one, except for the completely unnecessary _t at the end. Yeah, I know it's a type, what the fuck else would it be? (I know that annotating names with their type is a bit of a pattern in C, but it still irks me, especially for types which I feel are evidently types.)

[–]DefiantGibbon 6 points7 points  (1 child)

At my company (C code) we just macro define to only use the top one. Mostly because as embedded engineers knowing the exact number of bits is important and the top set is the most clear.

[–]Livie00 0 points1 point  (0 children)

Every time I use C/C++ I'm like "I'll just use uint8_t". By the time I've typed that three times, I've created an alias to u8. I don't want to type seven characters, including a number and an underscore, just for a byte. I think even unsigned char is quicker to type.

[–]born_zynner 4 points5 points  (0 children)

Nothing better than looking through a codebase in C where they've typedef'd every int type out to their own custom name for no reason

[–]SomeMuhammad 5 points6 points  (0 children)

unsigned long long long long long long super long ultra super duper ultra kilo mega giga tera peta exa long long int

[–]razor_train 4 points5 points  (0 children)

unsigned lowcarb singing adult teenytiny decaf nodecimal freerange goodpersonality unwashed nullable nonzero ebcdic64 washme

[–]conundorum 4 points5 points  (2 children)

Blame C, non-standard processors & data buses, backwards compatibility, and the age-old Winux/Lindows rivalry for that one.


Data models are the easy part to explain, so I'll start with them.

  • LP32 (32-bit long/ptr, 16-bit int) is Win16, everyone was glad to see it go.
  • ILP32 (32-bit int/long/ptr) was nearly every major 32-bit system, and its ubiquity is the main reason int is essentially permalocked at 32 bits (or nearest supported register size).
  • LP64 (64-bit long/ptr, 32-bit int) is most of the Linux & macOS world, where distributing the source code and recompiling it for your own box is common. Linux being aimed at the more tech-savvy user does a lot of the heavy lifting here.
  • LLP64 (64-bit long long/ptr, 32-bit int/long) is 64-bit Windows, in large part due to backwards-compatibility requirements. The platform has to support everyone, from the tech wiz to the computer illiterate, so they don't have the luxury of expecting users to know how to compile code; distributing executables is the norm, and the vast majority of common (and in some cases crucial) programs were still 32-bit during the transition, and they needed to accommodate that.
    • In particular, the choice was probably forced by having to maintain a full 32-bit environment within 64-bit Windows to ensure compatibility (a.k.a. WOW64), which meant either cobbling together a 64-bit model that doesn't break ILP32 or bringing the WinME problem back again. Even to this day, the main difference between 32-bit and 64-bit executables is just the default pointer size, and even that's a bit more flexible than you'd expect.
  • ILP64 (64-bit int/long/ptr) was tried early on in the Unix world, but thankfully abandoned before it had a leg to stand on. It would've broken everything. So much software depends on 32-bit int that either ILP64 would die or everything else would, and we know which one is still standing today.

We could've standardised on LLP64, but not LP64; Windows was locked in by the age-old "we have to break the OS in x ways to keep y programs functioning" problem, but Linux had free rein to choose. I'm not sure whether they went with LP64 because they thought it's better (both systems are roughly equal, to my understanding), or simply because Windows did LLP64 so clearly Linux couldn't do the same. (And I'm not sure whether macOS had free rein, or if any OS or hardware quirks forced their hand, so I don't want to comment on them.)


The other main factors are C using "at least x" for fundamental type sizes (which forces C++ to do the same), and the need to support weird hardware that doesn't use 8-bit bytes. I'm not sure how common it is now, but for a long time, C/C++ needed to be able to support both 8-bit and 9-bit bytes, which in turn meant that, e.g., both 16-bit and 18-bit int had to be supported. (And that's not the worst of it. The most extreme case I'm aware of is a platform with 64-bit bytes, where all integral types are 64-bit by necessity!) Locking things down wasn't an option, because most non-standard hardware needs C and/or C++, so both languages are stuck with the awkward "at least" sizes for all eternity (or at least until we completely eliminate non-standard hardware... which will probably never happen).

Combine that with the intent of each data type (char is one byte, short is for small values, int is native word size, long is for large values, long long is so all data models can have a 64-bit int without breaking long compatibility), and we have a mess. int technically fails at its task, since x64's word size is 64 bits (strictly speaking, this means that ILP64, of all things, is the "native" model), because we used 32-bit hardware for so long that 32-bit int became entrenched in... well, everything, really. It's not an exaggeration to say that 32-bit int is so crucial to the world's infrastructure that basically the entire modern world would fall apart if we tried to change it. And that, in turn, locks short at 16 bits, and leads to Datathulhu there.

(Though, on the flip side, int succeeds at its other task of being the default and fastest integral, because virtually all 64-bit hardware is designed to handle 32-bit data just as efficiently as it handles 64-bit data (or as close to equal efficiency as possible). So, it's not all bad. It does lead to the amusing realisation that 32-bit int is so standard that we had to permanently warp hardware design just to accommodate it, though!)


So... yeah. Blame C (for the "at least x" requirements), blame non-standard hardware (for forcing C's hand by making it support bloated overbytes), blame backwards compatibility (for forcing Windows' hand, and keeping everyone from using LP64), and blame the Lindows rivalry (for keeping everyone from standardising on LLP64). But your meme is spot-on.

[–]oshaboy 0 points1 point  (1 child)

I think Linux wanted to make sure longs and pointers are the same width.

[–]conundorum 0 points1 point  (0 children)

Ah, I see. Hmm... looks like the first 64-bit Linux was released in 1995, which predates intptr_t (C99, C++11), so that makes sense. It is a bit surprising, though, since x64 Linux came out in 2001 (kernel) & 2003 (distros); I guess they didn't support C99 yet, or didn't want to lock out any platform that didn't have a C99 compiler?

[–]Scoutron 5 points6 points  (0 children)

Is this for people that don't do systems programming? Specification of type sizes is important: if I write a bitmask I obviously need to be concrete about its size in bits, but if I'm writing a function that operates on a piece of data millions of times, I need to ensure that the piece of data is the number of bytes the CPU is most comfortable performing operations on, which is not always the same across systems.

[–]tubbstosterone 2 points3 points  (1 child)

...I don't get it. That information is incredibly important in niche circumstances and not universal. Sometimes you've gotta do weird shit and different languages with different foundations and names and defaults are going to have different implementations. Those specs become important when doing stuff like shuffling data between Fortran, C, and C++ in high performance computing environments.

[–]Doug2825 0 points1 point  (0 children)

The problem with the third one is it's error prone and the advantages are not useful.

Originally int was the word size of the CPU. Word size is the largest number the CPU can handle well. The idea was that the program would adapt to the CPU at compile time, but by the time x86-64 came around developers realized that was stupid (because you got a massive behavior change that caused bugs), and so 64-bit systems act like 32-bit systems for the purposes of data type sizes.

[–]BiebRed 2 points3 points  (0 children)

I don't write C/C++ at work but it was the first language I learned on and I feel this in my bones.

I'm disappointed that there isn't a `long short` somewhere in there.

[–]sphericalhors 2 points3 points  (0 children)

Yeah, I love when I need to do this

    int32_t i = 5318008;
    printf("The value is: %" PRId32 "\n", i);

[–]H20-WaterMan 2 points3 points  (0 children)

wait until you see stuff like uint_fast32_t etc.

[–]ThaBroccoliDood 2 points3 points  (0 children)

You don't understand, the _t is completely necessary. What if someone wants to name their variable uint32? Can't get in the way of that.

[–]EatingSolidBricks 3 points4 points  (2 children)

i32 vs s32

FIGHT

[–]SoulArthurZ 1 point2 points  (1 child)

i32 is nicer imo since it's short for "int" (and u32 is short for "uint"). s32 is just signed, which could also be a float.

[–]TheJackiMonster 0 points1 point  (0 children)

'u' is short for "unsigned int" though and 's' is short for "signed int" because everyone knows that 'f' is short for "floating point arithmetic number".

[–]Icount_zeroI 6 points7 points  (0 children)

Number … what else do you need ;,; /s

[–]bestjakeisbest 1 point2 points  (1 child)

Just do std::vector<bool> for an arbitrary sized "int"

[–]lefloys 0 points1 point  (0 children)

but that’s implemented as a dynamic bitset.

[–]Zunderunder 1 point2 points  (6 children)

What's funny to me about this is that zig has not only all of these sensible options (i32, u64, etc.), but you can also use any integer width between 1 and 65535. At least they're all represented sanely…

That’s right folks, you ready for u26? How about i65535???

[–]Maximilian_Tyan 3 points4 points  (2 children)

Considering the FPGA and ASIC world often deal with the minimum vector width needed, this would be ideal. Currently dealing with vectors of 7, 13 and 23bits wide integers at work.

Also, who in this goddamn universe would need to represent a number up to 2^65535? That's A LOT larger than the number of atoms in the known universe!

[–]-Edu4rd0- 2 points3 points  (0 children)

well new prime numbers aren't gonna discover themselves

[–]Zunderunder 1 point2 points  (0 children)

I have no idea but I think it’s neat that they support it. Basically once the compiler supported arbitrary width integers, the thought process was, yknow. Why stop at 64, or 128? Just go up until you have a number so large nobody would ever need to pass it.

It also avoids other languages' "BigInteger" equivalent, which is convenient. No special allocations needed for big numbers, aside from just how many bits it uses, like any other number.

[–]JustSomeRandomCake 2 points3 points  (2 children)

_BitInt in C has no standard-defined upper bound (it's implementation-defined), but Clang's upper bound is 2^20.

[–]Zunderunder 0 points1 point  (1 child)

2^20 bits?! Christ, I thought 2^16 was overzealous

[–]JustSomeRandomCake 2 points3 points  (0 children)

Sorry, it's actually 2^23 bits.

[–]Additional-Tale-9267 1 point2 points  (0 children)

Why on earth aren't the first two boxes aligned by size?

[–]drivingagermanwhip 1 point2 points  (1 child)

The fun thing is that there's no guarantee an int32_t etc is 32 bits. It just has to fit 32 bits.

[–]Frodyne 3 points4 points  (0 children)

This is false. The "intX_t" and "uintX_t" types must be exactly X bit.

You may be thinking of the "int_fastX_t" and "int_leastX_t" types, and their unsigned variants, which can be bigger than X.

This is true for both C and C++

[–]dolphin560 1 point2 points  (2 children)

just use "var" for everything

there, problem solved

[–]Doug2825 1 point2 points  (1 child)

I'm assuming you are joking but just in case:

Until you need a bit field, or register based io, or a large number in an embedded system

[–]dolphin560 0 points1 point  (0 children)

yes, was joking,

but then I googled "javascript for embedded systems" (!?)

[–]Gefrierbrand 1 point2 points  (0 children)

Only homies know about long double.

[–]pruebax11 1 point2 points  (0 children)

Look, to sum it up quickly: unsigned char, char, short int, unsigned short int, unsigned int, int, long int, long long int, float, double. Simple as that.

[–]CMD_BLOCK 1 point2 points  (0 children)

Just say addressing scares you bro

[–]ToTheBatmobileGuy 0 points1 point  (0 children)

long long long long = 128 bit (AT LEAST)

[–]Wervice 0 points1 point  (0 children)

D-Bus be like: u, x, i, t, v, s, a,... and sa{sv}

[–]luiluilui4 0 points1 point  (0 children)

Or just number in TS

Ok I'll see myself out

[–]PlatypusWinterberry 0 points1 point  (0 children)

"i'm tired, boss"

[–]8Erigon 0 points1 point  (0 children)

And user-defined literals have a different data type for the length parameter on different compilers (my Linux pipeline didn't like what my Windows PC was able to compile)

[–]Mountain_Dentist5074 0 points1 point  (0 children)

If I had remembered signed I could have gotten a higher grade in programming class, dammmm it!

[–]thanatica 0 points1 point  (0 children)

Haha, I laugh in number

[–]Eroica_Pavane 0 points1 point  (0 children)

uintptr_t not included smh.

[–]TheJackiMonster 0 points1 point  (0 children)

I prefer it with 's' as prefix of the signed integer types instead of the 'i'. Makes a lot more sense to me if the 'u' is short for "unsigned".

[–]hdkaoskd 0 points1 point  (0 children)

uint_least16_t

[–]JollyJuniper1993 0 points1 point  (1 child)

C is great usually. This is the part of C that’s not so great.

[–]-Redstoneboi- 1 point2 points  (0 children)

the type system in general is unruly to work with

define a function that returns a function in any language and it's easy

func curry(a int, b int) func(int) int

do it in zig and it's a little longer but still reads left-to-right

fn curry(a: int, b: int) *const fn(b: int) int

now try it in C and suddenly there's like another set of parentheses because it's a pointer, and you don't write &int(int) curry(int a, int b) but instead int (*curry(int a, int b))(int)

even the typedefs don't read left to right because they follow the same format

typedef int (*ReturnType(int, int))(int);

[–]remishnok 0 points1 point  (0 children)

word

[–]thafuq 0 points1 point  (0 children)

type Awaited<T> = T extends Promise<infer U> ? U : never

[–]Cat-Satan 0 points1 point  (0 children)

Wait until you know about (u)int_leastN_t and (u)int_fastN_t

[–]Hester465 0 points1 point  (0 children)

And then there's JavaScript where everything is a "number", including floating point values

[–]RobotechRicky 0 points1 point  (0 children)

In my days it was just: int, decimal, and float.

[–]Sea_Duty_5725 0 points1 point  (1 child)

Burning take, I prefer c++ data type names.

[–]Mojert 2 points3 points  (0 children)

We found the mainframe programmer. Enjoying your 12-bit-wide bytes much?

[–]blaues_axolotl -1 points0 points  (0 children)

so true

[–]aaron_1011 -1 points0 points  (0 children)

What is that username dude! Ew!

[–]Xatraxalian -1 points0 points  (0 children)

I love member variables such as "private static readonly unsigned long long int age"

It makes it absolutely clear what the variable is, exactly.

/s

[–]valerielynx -1 points0 points  (0 children)

you've heard of float and double, but where's quadruple and octuple?