Answer from Bozho on Stack Overflow:
Because the integer overflows. When it overflows, the next value is Integer.MIN_VALUE. Relevant JLS:

If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
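To see the "low-order bits" rule in action, here is a minimal sketch (the class name is made up for this illustration):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // Integer.MAX_VALUE is 0x7FFFFFFF. The true mathematical sum of
        // MAX_VALUE + MAX_VALUE is 0xFFFFFFFE, which does not fit in a signed
        // 32-bit int; keeping only the low 32 bits gives -2 in two's complement.
        int sum = Integer.MAX_VALUE + Integer.MAX_VALUE;
        System.out.println(sum);                         // -2
        System.out.println(Integer.toBinaryString(sum)); // 31 ones followed by a 0

        // Adding 1 to MAX_VALUE wraps straight around to MIN_VALUE.
        System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE); // true
    }
}
```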
The integer storage overflows, and that is not indicated in any way, as stated in the JLS, 3rd Ed.:
The built-in integer operators do not indicate overflow or underflow in any way. Integer operators can throw a NullPointerException if unboxing conversion (§5.1.8) of a null reference is required. Other than that, the only integer operators that can throw an exception (§11) are the integer divide operator / (§15.17.2) and the integer remainder operator % (§15.17.3), which throw an ArithmeticException if the right-hand operand is zero, and the increment and decrement operators ++ (§15.15.1, §15.15.2) and -- (§15.14.3, §15.14.2), which can throw an OutOfMemoryError if boxing conversion (§5.1.7) is required and there is not sufficient memory available to perform the conversion.
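The contrast the spec draws can be demonstrated directly: overflow wraps silently, while integer division by zero really does throw. A small sketch:

```java
public class SilentOverflow {
    public static void main(String[] args) {
        // Overflow raises nothing; the value just wraps around.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println(wrapped); // -2147483648 (Integer.MIN_VALUE)

        // Division by zero is one of the few integer operations that throws.
        int zero = 0;
        try {
            System.out.println(1 / zero);
        } catch (ArithmeticException e) {
            System.out.println("threw ArithmeticException");
        }
    }
}
```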
Example in a 4-bit storage:
MAX_INT: 0111 (7)
MIN_INT: 1000 (-8)
MAX_INT + 1:
0111+
0001
----
1000
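The same 4-bit wraparound can be simulated in Java by masking to the low 4 bits; `toSigned4` below is a hypothetical helper written just for this illustration:

```java
public class FourBitWrap {
    // Hypothetical helper: reinterpret the low 4 bits of x as a signed
    // 4-bit two's-complement value (range -8 .. 7).
    static int toSigned4(int x) {
        x &= 0xF;                   // keep only the low 4 bits
        return x >= 8 ? x - 16 : x; // bit patterns 8..15 mean -8..-1
    }

    public static void main(String[] args) {
        System.out.println(toSigned4(7));     // 7  (0111, the 4-bit MAX)
        System.out.println(toSigned4(7 + 1)); // -8 (1000, the 4-bit MIN): the wrap above
    }
}
```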
In C, the language itself does not determine the representation of certain datatypes. It can vary from machine to machine; on embedded systems the int can be 16 bits wide, though usually it is 32 bits.
The only requirement is that short int <= int <= long int by size. Also, there is a recommendation that int should represent the native word size of the processor.
Integer types are signed by default. The unsigned modifier lets you use the highest bit as part of the value (otherwise it is reserved for the sign bit).
Here's a short table of the possible values for the possible data types:
                 width   minimum                      maximum
signed           8 bit   -128                         +127
signed           16 bit  -32 768                      +32 767
signed           32 bit  -2 147 483 648               +2 147 483 647
signed           64 bit  -9 223 372 036 854 775 808   +9 223 372 036 854 775 807
unsigned         8 bit   0                            +255
unsigned         16 bit  0                            +65 535
unsigned         32 bit  0                            +4 294 967 295
unsigned         64 bit  0                            +18 446 744 073 709 551 615
In Java, the Java Language Specification determines the representation of the data types.
The order is: byte 8 bits, short 16 bits, int 32 bits, long 64 bits. All of these types are signed; there are no unsigned versions. However, bit manipulations treat the numbers as if they were unsigned (that is, handling all bits correctly).
The character data type char is 16 bits wide, unsigned, and holds characters using UTF-16 encoding (however, it is possible to assign a char an arbitrary unsigned 16-bit integer, even one that does not represent a valid character codepoint).
        width   minimum                      maximum
SIGNED
byte:   8 bit   -128                         +127
short:  16 bit  -32 768                      +32 767
int:    32 bit  -2 147 483 648               +2 147 483 647
long:   64 bit  -9 223 372 036 854 775 808   +9 223 372 036 854 775 807
UNSIGNED
char:   16 bit  0                            +65 535
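The Java limits in the table are exposed as constants on the wrapper classes, so the ranges can be checked directly:

```java
public class JavaRanges {
    public static void main(String[] args) {
        System.out.println(Byte.MIN_VALUE + " .. " + Byte.MAX_VALUE);       // -128 .. 127
        System.out.println(Short.MIN_VALUE + " .. " + Short.MAX_VALUE);     // -32768 .. 32767
        System.out.println(Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE); // -2147483648 .. 2147483647
        System.out.println(Long.MIN_VALUE + " .. " + Long.MAX_VALUE);
        // char is the one unsigned type; cast to int to print its numeric range.
        System.out.println((int) Character.MIN_VALUE + " .. " + (int) Character.MAX_VALUE); // 0 .. 65535
    }
}
```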
In C, the integer(for 32 bit machine) is 32 bit and it ranges from -32768 to +32767.
Wrong. A 32-bit signed integer in two's-complement representation has the range -2^31 to 2^31 - 1, which is equal to -2,147,483,648 to 2,147,483,647.
Java will overflow and underflow int values.
max_int = 2147483647 (01111111111111111...1)
min_int = -2147483648 (10000000000000000...0) //first bit is -2^31
underflow:
-2147483648 - 1 = 2147483647
so....
min_int- max_int = -2147483648 - 2147483647
= -2147483648 -(1 + 2147483646)
= (-2147483648 - 1) - 2147483646
= 2147483647 - 2147483646
= 1;
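Each step of that derivation can be checked in Java itself; a minimal sketch:

```java
public class MinMinusMax {
    public static void main(String[] args) {
        int min = Integer.MIN_VALUE; // -2147483648
        int max = Integer.MAX_VALUE; //  2147483647

        // Underflow: MIN_VALUE - 1 wraps around to MAX_VALUE.
        System.out.println(min - 1 == max); // true

        // Therefore MIN_VALUE - MAX_VALUE = (MIN_VALUE - 1) - (MAX_VALUE - 1)
        //                                 = MAX_VALUE - (MAX_VALUE - 1) = 1
        System.out.println(min - max); // 1
    }
}
```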
An easy way to wrap your head around it: we have an even number of possible bit patterns to store a number in, and we place 0 at the "middle" of the range. Because 0 itself takes up one pattern, the remaining count is odd and cannot split evenly, so either the positive or the negative side of 0 must hold one more value than the other (in two's complement, it is the negative side).
Imagine having 8-bit integers. Where is the "most middle" point of that? Either the fourth or the fifth bit:
00010000 //3 bits on the "positive side", 4 bits on the "negative side"
00001000 //4bits positive, 3 bits negative
I know that an int is a 32 bit number with a range of -2,147,483,648 to 2,147,483,647. I'm learning about overflow, and I am trying to figure out why 2 * Integer.MAX_VALUE returns -2.
Would anyone mind explaining overflow and why this is calculated this way, please?
Thank you in advance! :)
Edit: Thanks for the replies, everyone. I still don't completely understand it, but at least I have a good start to go down the rabbit hole of binary and hexadecimal numbers! :D
Edit #2: I think I get it now, and if anyone is curious this stack overflow question explains it well. Thanks again for the responses.
I tried this (Java 1.4):
int result = Integer.MAX_VALUE + Integer.MAX_VALUE;
System.out.println(result); // -2
What I understand so far about Integer in Java:
Integer.MAX_VALUE is 2 ^ 31 - 1 and Integer.MIN_VALUE is -(2 ^ 31)
-Integer.MIN_VALUE will result in Integer.MIN_VALUE, because 2 ^ 31 is simply greater than 2 ^ 31 - 1, which makes the integer overflow back to Integer.MIN_VALUE
But I still cannot wrap my head around why (2^31 - 1) + (2^31 - 1) = -2 ?
What I'm trying to achieve here is to understand why Integer.MAX_VALUE and Integer.MIN_VALUE are sometimes used in a Comparator to implement a > b > c:
a.compareTo(b) + b.compareTo(c) <= a.compareTo(c)
Thank you. :)