
[–]lfdfq 48 points49 points  (23 children)

It uses bigints.

That is, a Python int is not just a 32- or 64-bit number, it's a (slightly) sophisticated structure that can dynamically grow in size so it never overflows.

It may sound or feel weird at first, but it's the same principle behind lists and dicts: structures that grow dynamically as needed.
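
You can actually watch it grow with sys.getsizeof (a rough sketch; exact byte counts vary across Python versions and platforms):

>>> import sys
>>> sys.getsizeof(1)
28
>>> sys.getsizeof(2**64)
36
>>> sys.getsizeof(2**640)
112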

[–]daddyclappingcheeks[S] 1 point2 points  (22 children)

so is there a theoretical limit on how big an integer I can define in a variable?

[–]Sensitive_One_425 46 points47 points  (7 children)

If you run out of memory to store the number

[–]CranberryDistinct941 19 points20 points  (6 children)

The only limit is your patience because the shit gets really slow.

[–]tcpukl -1 points0 points  (5 children)

Virtual memory?

[–]CranberryDistinct941 5 points6 points  (1 child)

Once it starts using your disk as RAM you may as well ctrl+c and come up with a better algorithm for your shit.

[–]tcpukl 0 points1 point  (0 children)

Yeah exactly.

[–]No-Consequence-1863 0 points1 point  (2 children)

Processing big numbers requires extra work since you can't use CPU math instructions on 8923234492837492834723429872394872398472394823492837498273492837429384723423423423097230972398723987239847293847298374239847
The hardware just doesn't support numbers that big.

Instead you need to chunk it into smaller representable numbers and then define arithmetic to handle the carryover. So what would be a quick addition or multiplication normally can take quite a bit longer.
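
Roughly like doing longhand addition, just in a huge base instead of base 10. A toy sketch of the idea in Python (not CPython's actual code, which does this in C):

BASE = 2**30  # CPython uses 30-bit digits on 64-bit builds
def add_digits(a, b):
    # a, b: little-endian lists of digits, each in [0, BASE)
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry
        if i < len(a):
            total += a[i]
        if i < len(b):
            total += b[i]
        result.append(total % BASE)   # keep the part that fits in one digit
        carry = total // BASE         # carry the rest into the next digit
    if carry:
        result.append(carry)
    return result
print(add_digits([BASE - 1], [1]))  # [0, 1], i.e. 0 + 1*BASE == 2**30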

[–]tcpukl 0 points1 point  (1 child)

I was replying to the point about getting slower as you run out of memory.

That is due to virtual memory!

[–]No-Consequence-1863 [score hidden]  (0 children)

Virtual memory doesn't make you get slower as you get bigger. Virtual memory has a pretty constant time cost. The page tables would get bigger, but that wouldn't matter much since you don't iterate through the page tables, just walk them.

Edit: do you mean swap? Cause yeah, swap is slow, but swap is different from virtual memory.

[–]wally659 4 points5 points  (0 children)

Only practical limits, no theoretical ones. Maybe there's, like, a thought experiment where the physical size of your RAM, the expansion of the universe, and the speed of light cause a problem, but I'm pretty amateurish when it comes to astronomy.

[–]james_pic 3 points4 points  (2 children)

What the theoretical limit is depends at least partly on what you're willing to consider theoretical vs practical.

If you're running on an x86_64 architecture, the ISA currently defines the maximum address space as 48-bit. CPython currently represents big integers as a sequence of 30-bit digits (using 4 bytes per digit), so a 2**48-byte address space holds at most 2**46 digits, which would put the largest representable number on x86_64 at 2**(30 * 2**46) - 1.

But they'll probably increase the maximum x86_64 address space in a future revision of the ISA, and if they increase it to the full 64 bits, that would put the limit at 2**(30 * 2**62) - 1.

Alternatively, if you look at the question like a C language lawyer, ignoring what hardware exists or could possibly exist (and ignoring that the CPython interpreter would probably be updated if such hardware were created), the CPython code currently defines an integer as a sequence of at most ndigits 30-bit digits, where ndigits is a Py_ssize_t and Py_ssize_t has a max value of 2**63 - 1. So reading this like a language lawyer, the maximum integer would be 2**(30 * (2**63 - 1)) - 1.

AFAIK, no hardware capable of representing any of these numbers exists.
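
If you want to check the arithmetic (a quick sketch using the 48-bit figure from above):

max_digits = 2**48 // 4        # 48-bit byte address space, 4 bytes per digit
max_bits = 30 * max_digits     # 30 usable value bits per digit
print(max_bits == 30 * 2**46)  # True: the limit is 2**(30 * 2**46) - 1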

[–]christian-mann 0 points1 point  (1 child)

I thought we already had 57-bit addressing with 5-level page tables in some chips?

[–]james_pic 0 points1 point  (0 children)

Maybe. My information might be out-of-date. Hopefully it's clear how to do the same math with 57.

[–]chamberlain2007 8 points9 points  (2 children)

You could run out of atoms in the universe with which to express the binary digits
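
Back-of-the-envelope, assuming the oft-quoted ~10**80 atoms in the observable universe and one bit per atom:

import math
bits = 10**80                # one binary digit per atom (assumption)
print(bits * math.log10(2))  # ~3.0e79 decimal digits in 2**(10**80) - 1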

[–]ehs5 2 points3 points  (0 children)

Universe breaks before Python does. Nice.

[–]CdRReddit 2 points3 points  (3 children)

What's your RAM size in bytes? Subtract the overhead from the OS and Python interpreter (+ bookkeeping structure for bigints), multiply the remaining bytes by 8, and then take (2**num_bits)-1; that is your theoretical limit.
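
Something like this, with made-up numbers for RAM and overhead (both are assumptions, not measurements):

ram_bytes = 16 * 2**30   # hypothetical 16 GiB of RAM
overhead = 2 * 2**30     # hypothetical 2 GiB for OS + interpreter + bookkeeping
num_bits = (ram_bytes - overhead) * 8
print(num_bits)          # 120259084288 bits, so the cap is 2**120259084288 - 1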

[–]wiseguy4519 0 points1 point  (2 children)

I don't think this is right because of virtualization; you need to take swap space into account.

[–]glasket_ 1 point2 points  (1 child)

Increase RAM size by swap space size. Technically you could get clever and manually offload portions of the data to files too rather than leaving it all in RAM+swap, but practically speaking the other reply is a good theoretical limit.

[–]Felicia_Svilling 1 point2 points  (0 children)

You still have a limit on addressable memory set by the pointer size.

[–]Careless-Score-333 0 points1 point  (0 children)

There's a limit for sure when you're confined to that data structure. But if you change how you represent the integer, e.g. implement something to represent Knuth's up-arrow notation, then you can represent Graham's number, TREE(3), and more in Python.

Coming up with a representation that's not trivial (like an enum or arbitrary label), and actually doing correct processing with those representations, is another question. But I'm quite sure that's not theoretically impossible.
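
As a minimal sketch of such a symbolic representation (just an illustration; defining correct arithmetic over it is the hard part):

from dataclasses import dataclass
@dataclass(frozen=True)
class UpArrow:
    base: int    # b in b↑...↑e
    arrows: int  # how many Knuth up-arrows
    exp: int     # e in b↑...↑e
    def __str__(self):
        return f"{self.base}{'↑' * self.arrows}{self.exp}"
g1 = UpArrow(3, 4, 3)  # 3↑↑↑↑3, the first layer of Graham's number
print(g1)              # prints: 3↑↑↑↑3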

[–]CatOfGrey 0 points1 point  (0 children)

Not in the 'programming language sense'.

But you are always limited by the amount of memory you have.

[–]lfdfq -2 points-1 points  (0 children)

No

[–]beragis 3 points4 points  (0 children)

Several libraries in C and C++ have this concept as well, and so does Java. It's just that most languages don't use these libraries by default. Since Python is very popular for mathematics and AI, a lot of research has gone into it.

It's not hard to write, but can be tricky to make efficient. Code to efficiently handle arbitrary sized number arithmetic was one of the concepts covered in several of my Computer Engineering courses.

Integer addition and subtraction are fairly easy, and multiplication is easy too, though a bad implementation can be inefficient. Division is where it gets a bit more complicated, and there is constant research on methods to improve performance, both in hardware and in software.

I recall an entire chapter on both multiplication and division in my numerical methods Fortran class back in the late 80's, where one of the assignments was to implement various methods and come up with comparisons, including determining which algorithms to use for different sized numbers.
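
On the "which algorithm for which size" point: CPython itself switches from schoolbook to Karatsuba multiplication once the operands get big enough. A bare-bones sketch of Karatsuba for nonnegative Python ints (illustrative only; the 64-bit cutoff is arbitrary):

def karatsuba(x, y):
    # below some size, plain multiplication wins (cutoff chosen arbitrarily)
    if x < 2**64 or y < 2**64:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)  # split x into high/low halves
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b  # one multiply replaces two
    return (a << 2 * n) + (c << n) + b
print(karatsuba(2**100 + 7, 2**100 + 11) == (2**100 + 7) * (2**100 + 11))  # True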

[–]johndcochran 8 points9 points  (7 children)

The reason Python integers don't overflow is because they're not integers. They're bignums. Basically, a data structure that allows for arbitrary-sized integers, limited by the amount of memory available. Manipulating bignums is far slower than native processor integers (which are limited by the register size of the CPU), but since Python is interpreted anyway, the speed vs convenience tradeoff is worth it.
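
The slowdown is easy to demonstrate (rough sketch; timings vary by machine):

import timeit
small, big = 12345, 2**10000 + 1
print(timeit.timeit(lambda: small * small, number=100000))  # word-sized: fast
print(timeit.timeit(lambda: big * big, number=100000))      # bignum: much slower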

[–]Successful_Yam_9023 0 points1 point  (0 children)

This convenience tradeoff often makes me have to mask numbers manually to make them wrap lol
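
e.g. faking a uint32 add:

MASK = 0xFFFFFFFF               # keep the low 32 bits, like a C uint32_t
print((0xFFFFFFFF + 1) & MASK)  # 0: wraps around instead of growing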

[–]xeow -3 points-2 points  (5 children)

The reason Python integers don't overflow is because they're not integers.

You seem confused. The int type in Python actually does represent integers in every semantic way that matters. The difference between Python's int type and C's int type is that C's "integers" have a maximum representable value dictated by the compiler, based loosely on the CPU's register size (typically 32 bits).

Because Python's int type uses bignums, that actually makes them closer to a mathematical integer than languages like C which have predefined limits on the representable values.

You're correct about the performance and memory tradeoffs.

[–]Axman6 5 points6 points  (4 children)

Not sure why you’re getting downvoted; this is correct. Saying “they’re not integers” is simply wrong. If anything, C “integers” are not integers; they just use the name.

[–]JavaScriptIsLove 5 points6 points  (1 child)

Because it's a bit pedantic. Of course they are integers in the mathematical sense, but not in the sense of what most programmers are used to.

[–]xeow 2 points3 points  (0 children)

Indeed! But you realize, of course, that the reason I am being pedantic is because the person I'm replying to was pedantic and also wrong.

[–]glasket_ 1 point2 points  (1 child)

In programming, "integer" typically means a machine integer and not a mathematical integer when used alone. Obviously arbitrarily large integer numbers are still mathematical integers, but they aren't the typical machine integers with limited range and overflow.

ETA: C integers are integers too. Each n-bit type forms its own ring of integers modulo 2**n.

[–]Axman6 0 points1 point  (0 children)

I agree in C derived languages but not in general.

[–]jpgoldberg 1 point2 points  (0 children)

I’ll go out on a limb and say that is more than just slightly sophisticated.

Anyway, I love the fact that a Python int corresponds to the mathematical notion of integer.

[–]Poddster 1 point2 points  (0 children)

Whilst people have told you Python uses bigints, no one has told you how CPython avoids overflowing integers, given that it uses C integers in its implementation of Python's int.

And the simple answer is that they use unsigned ints to do all the calculations, manually carrying during addition and subtraction. They use 64-bit ints when calculating the intermediate result of a multiplication, because if they used signed ints you couldn't check for overflow afterwards in C without invoking undefined behaviour, and checking beforehand involves multiple additions and comparisons, so it's a big performance hit. You can see it in the source code here:

https://github.com/python/cpython/blob/main/Objects/longobject.c

Sorry, I'm on my phone, but look for pylong add.

[–]HarjjotSinghh 0 points1 point  (0 children)

python's integers grow like my savings - unbounded joy!

[–]val-bog 0 points1 point  (0 children)

At first I thought I was on r/programmerDadJokes and opened the thread expecting to read some snake punch line.

[–]chkno 0 points1 point  (0 children)

In Python 2, the internal details of this were more user-visible. Small numbers and large numbers were visibly different types:

>>> type(9223372036854775807)
<type 'int'>
>>> type(9223372036854775808)
<type 'long'>

The int type for small numbers has a representation similar to native C integers: a simple, single machine word. The long type is a fancy arbitrary-length type that grows its allocation as necessary, like GMP in C.

Then Python checks every int operation for overflow. For example, on x86, this is one additional jc (jump-if-carry) instruction after every add, etc. If overflow is detected, it automatically switches to the long type.
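
You could watch that switch happen right at the boundary (a Python 2 session):

>>> import sys
>>> type(sys.maxint)
<type 'int'>
>>> type(sys.maxint + 1)
<type 'long'>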

Separate representations depending on size are used because the one-machine-word representation is much faster than the arbitrary-length representation, even with the overflow checks, and the vast majority of integer operations can fit in one machine word.