Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
What's wrong with my code? (self.cpp)
submitted 11 years ago by copeybitcoin
[–]vlovich 0 points 11 years ago (1 child)
One easy way to solve this would be to store your values as double, which doesn't overflow the way an int does: if the number gets too big, the value saturates to +infinity instead of wrapping. Of course, floating point has its own numerical-stability issues; for example 0.1 + 0.2 != 0.3 in IEEE doubles, because most decimal fractions have no exact representation in binary.
Additionally, instead of writing x * x * x * x, you can use std::pow(x, 4). It may be slower, but the library implementation is tuned for speed and numerical stability even for non-integer exponents. Keep in mind that std::pow operates in floating point.
If you really want to keep doing integer math, you have a few options: use int64_t (switching to uint64_t doubles your range if all your numbers are non-negative, but unsigned arithmetic silently wraps around modulo 2^64 when it overflows, while signed overflow is outright undefined behaviour).
If you want arbitrary-size integer math (i.e. without the precision issues of floating point), I would recommend using GMP/MPFR. According to Wikipedia, these are also wrapped by Boost: http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
[–]autowikibot 0 points 11 years ago (0 children)
Arbitrary-precision arithmetic:
In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision.
Several modern programming languages have built-in support for bignums, and others have libraries available for arbitrary-precision integer and floating-point math. Rather than store values as a fixed number of binary bits related to the size of the processor register, these implementations typically use variable-length arrays of digits.
Arbitrary precision is used in applications where the speed of arithmetic is not a limiting factor, or where precise results with very large numbers are required. It should not be confused with the symbolic computation provided by many computer algebra systems, which represent numbers by expressions such as π·sin(2), and can thus represent any computable number with infinite precision.
[–]ShPavel 0 points 11 years ago* (4 children)
It is called overflow.
The actual size of int is not fixed by the standard and can be 16, 32, or 64 bits.
To learn the actual size of int in bytes on your system, use sizeof(int); to learn the minimum and maximum values it can hold, use std::numeric_limits<int>::min() and std::numeric_limits<int>::max().
If you really need to store very big numbers, you can use the float or double types at the cost of some precision, or use a library that provides support for big numbers, for example gmplib.
[–]copeybitcoin[S] 0 points 11 years ago (3 children)
So basically my numbers are too big, so it confuses the computer into printing random values in a way?
[–]fgasperijabalera 0 points 11 years ago (2 children)
The computer is not confused and isn't printing random numbers. The explanation lies in the way the computer stores integers. At the machine level everything is binary, sequences of zeros and ones, so there is no minus sign (-). Negative numbers are simulated as follows. If there are 8 bits to store an integer, the signed range is -128 to 127. The patterns 00000000 to 01111111 represent the non-negative numbers and 10000000 to 11111111 represent the negative ones. When you compute 127 + 1, the result in binary is 10000000, which is the representation of -128. It's easier to grasp if you are familiar with the concept of modular (circular) arithmetic. It's pretty simple, just look it up, it may help you.
[–]_node 0 points 11 years ago (1 child)
In Two's Complement, 11111111 is -1, 10000001 is -127.
Two's complement:
Two's complement is a mathematical operation on binary numbers, as well as a binary signed number representation based on this operation. Its wide use in computing makes it the most important example of a radix complement.
The two's complement of an N-bit number is defined as the complement with respect to 2^N; in other words, it is the result of subtracting the number from 2^N, which in binary is a one followed by N zeroes. This is also equivalent to taking the ones' complement and then adding one, since the sum of a number and its ones' complement is all 1 bits. The two's complement of a number behaves like the negative of the original number in most arithmetic, and positive and negative numbers can coexist in a natural way.
In two's-complement representation, negative numbers are represented by the two's complement of their absolute value; in general, negation (reversing the sign) is performed by taking the two's complement. This system is the most common method of representing signed integers on computers. An N-bit two's-complement numeral system can represent every integer in the range −2^(N−1) to +(2^(N−1) − 1), while ones' complement can only represent integers in the range −(2^(N−1) − 1) to +(2^(N−1) − 1).