Note that a is Integer.MAX_VALUE - 1 and b is Integer.MIN_VALUE + 1. So yes, it is indeed subtracting and adding 1 twice in each case. The book is not wrong, but it's a stupid way of teaching about wrap-around overflow. Just printing Integer.MIN_VALUE - 1 and Integer.MAX_VALUE + 1 would have made the point.
int min = Integer.MIN_VALUE -1; // min is set to Integer.MAX_VALUE by underflow
int max = Integer.MAX_VALUE +1; // max is set to Integer.MIN_VALUE by overflow
From the Java Language Specification, §15.18.2:
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format.
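A quick way to see that rule in action (a minimal sketch; the class name is arbitrary): compute the true sum in a long, then keep only the low-order 32 bits with a cast. The result matches what plain int addition produces.

```java
public class LowOrderBits {
    public static void main(String[] args) {
        int a = Integer.MAX_VALUE;
        long trueSum = (long) a + 1;           // 2147483648L: no overflow in long
        int wrapped = a + 1;                   // int addition wraps
        int lowBits = (int) trueSum;           // the cast keeps the low 32 bits
        System.out.println(wrapped == lowBits);            // true
        System.out.println(wrapped == Integer.MIN_VALUE);  // true
    }
}
```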
The JLS is the ultimate authority when it comes to questions like this, but I don't recommend reading it as a way to learn Java. You'd be better off going through the Java Language Tutorial. It's fairly comprehensive and the content is very high quality.
Answer from Ted Hopp on Stack Overflow
I was searching for a way to find the min and max value within a number of integers and I came across this code:
import java.util.Scanner;

Scanner scanner = new Scanner(System.in);
int maxNum = Integer.MIN_VALUE;
int minNum = Integer.MAX_VALUE;
while (scanner.hasNextInt()) {
    int num = scanner.nextInt();
    maxNum = Math.max(maxNum, num);
    minNum = Math.min(minNum, num);
}
System.out.println("The maximum number: " + maxNum);
System.out.println("The minimum number: " + minNum);

I am struggling to wrap my mind around Integer.MAX_VALUE and Integer.MIN_VALUE. Why are they used as the initial values, and how does that work with the Math.min and Math.max methods? Why is the variable maxNum initialized to Integer.MIN_VALUE and minNum to Integer.MAX_VALUE?
Thanks.
The short answer is that this is how twos-complement negation works. It's not overflow, and it wouldn't be detectable without special circuitry in the processor (or equivalent checks in the language runtime).
How Twos-Complement Arithmetic Works
I'll start with the number line, in binary, limiting my wordsize to 3 bits:
011 = 3
010 = 2
001 = 1
000 = 0
111 = -1
110 = -2
101 = -3
100 = -4
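This 3-bit number line can be reproduced in Java by sign-extending each 3-bit pattern into a full int (a small illustrative sketch; the helper name is made up): shift the pattern up to the top of the word, then arithmetic-shift it back down, which copies the sign bit.

```java
public class ThreeBitLine {
    // Sign-extend the low 3 bits of 'bits' into a full int.
    static int valueOf3Bits(int bits) {
        return (bits << 29) >> 29;   // >> is arithmetic: it copies the sign bit
    }

    public static void main(String[] args) {
        for (int bits = 0; bits < 8; bits++) {
            System.out.printf("%3s = %d%n",
                    Integer.toBinaryString(bits), valueOf3Bits(bits));
        }
    }
}
```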
Inside the computer, there is an adder circuit that combines two bits and produces a result bit plus a carry bit. These circuits are chained together so that the processor can add two entire words. The carry bit from this addition is exposed via a processor status register, but is not normally available to high-level languages.
Some examples:
   2  010        2  010        2  010
 + 1  001     + -1  111      + 2  010
 == ====      == ====        == ====
   3  011        1 C001       -4  100
Let's look at those examples individually:
- 2 + 1 = 3, just like you'd expect. Both the addends and the sum are within the range of positive integers for our word size.
- 2 - 1 = 1, again just like you'd expect. Internally this operation sets the carry bit, indicating that the addition overflowed the word size. If we were using unsigned numbers, this would be a problem, but with twos-complement numbers it's OK.
- 2 + 2 = -4, which is definitely not what you'd expect. However, note that the carry bit remains unset. To detect overflow in this case, you'd have to check that two signed inputs resulted in an output with a different sign.
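The sign-comparison check described in the last bullet can be written directly in Java (an illustrative sketch, not a library API): addition overflowed exactly when both operands share a sign and the result's sign differs from it.

```java
public class OverflowCheck {
    // True iff a + b overflows the int range.
    static boolean addOverflows(int a, int b) {
        int r = a + b;
        // Overflow iff the result's sign differs from BOTH operands' signs.
        return ((a ^ r) & (b ^ r)) < 0;
    }

    public static void main(String[] args) {
        System.out.println(addOverflows(2, 1));                  // false
        System.out.println(addOverflows(2, -1));                 // false
        System.out.println(addOverflows(Integer.MAX_VALUE, 1));  // true
    }
}
```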
So why isn't overflow checked? The simplest answer is cost.
At the level of the hardware, addition is unsigned (at least on the three processors that I've programmed). Detecting signed overflow would require a separate set of opcodes for signed math, which would mean more transistors, which could be more profitably used elsewhere. In the earlier days of computing that was a huge concern; today, maybe not so much, but by now almost everyone is OK with how integer math is implemented.
At the level of the language, cost is still a factor. There is the runtime cost of checking every signed operation for overflow, but there is also a programmer cost: imagine having to wrap all expressions (even a for loop) with a try/catch. The .Net runtime apparently gives you the option of enabling this, while Java explicitly does not.
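Java has no language-wide checked mode, but since Java 8 the standard library does offer per-operation checked arithmetic via Math.addExact (and its siblings such as subtractExact and multiplyExact), which throw ArithmeticException instead of wrapping:

```java
public class CheckedAdd {
    public static void main(String[] args) {
        System.out.println(Math.addExact(2, 1));   // 3

        try {
            Math.addExact(Integer.MAX_VALUE, 1);   // would wrap; throws instead
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

Note the programmer cost the answer mentions: every call site that wants the check must opt in explicitly and be prepared for the exception.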
How Twos-Complement Negation Works, and why -MIN_VALUE equals itself
In prose: twos-complement negation flips all of the bits in a number and then adds one.
I use a prose definition because that's almost certainly how it actually works in the hardware (although I'm not a hardware engineer, so can't say for sure, plus different architectures might use different techniques).
Let's see what happens with our 3-bit words:
100 = MIN_VALUE
011 = all bits flipped
100 = after adding 1
Note that there's no carry involved, although you could compare the sign of the input value with the sign of the result. However, that again would require special circuitry and/or runtime-level checks, to catch a case that almost never occurs.
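Both facts are easy to verify in Java (a small sketch; Java guarantees 32-bit two's-complement int semantics, so the results are the same on every platform):

```java
public class Negation {
    public static void main(String[] args) {
        int x = 42;
        // Negation really is "flip the bits, then add one".
        System.out.println((~x + 1) == -x);                           // true
        // MIN_VALUE is a fixed point of negation: it negates to itself.
        System.out.println(-Integer.MIN_VALUE == Integer.MIN_VALUE);  // true
    }
}
```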
What Are Some Alternatives, and Why Aren't They Used
One alternative is ones-complement, in which negation is simply inverting all bits. The ones-complement number line for a 3-bit word looks like this:
011 = 3
010 = 2
001 = 1
000 = 0
111 = -0
110 = -1
101 = -2
100 = -3
According to the linked Wikipedia article, there were machines using ones-complement arithmetic; I never used one. Again, I'm not a hardware engineer, but I believe that you need separate operations for addition and subtraction with ones-complement (in addition to separate operations for unsigned math), which again runs into the problem of cost.
The Wikipedia article mentions the problem of "end-around borrow," which may have been an issue with the actual computers that used ones-complement math, but I don't think is a necessary problem. I believe that the carry bit could also serve as a borrow bit.
The bigger problem is that you have two values for zero, which is going to cause programmers to create a lot of off-by-one errors when counting, or is going to require a lot of special-case code in the language runtime (e.g., a for loop that knows that when it crosses 0 it has to skip from 1 to -1).
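A quick sketch of ones-complement negation on the 3-bit words above (the helper name negate3 is made up for illustration): negation is just a bit flip, and flipping 000 produces the second zero, 111.

```java
public class OnesComplement {
    // Hypothetical 3-bit ones-complement negation: flip all three bits.
    static int negate3(int bits) {
        return ~bits & 0b111;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(negate3(0b001)));  // 110, i.e. -1
        System.out.println(Integer.toBinaryString(negate3(0b000)));  // 111, i.e. -0
    }
}
```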
Another alternative is to use the high-order bit just as a sign bit, with the low-order bits being the same between positive and negative:
011 = 3
010 = 2
001 = 1
000 = 0
100 = -0
101 = -1
110 = -2
111 = -3
This is how IEEE-754 floating point works. It makes sense when your primary operations are assumed to be multiplication and division, not so much for addition and subtraction. And it still has the issue of two zeros.
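Java's double type is IEEE-754, so the two zeros are directly observable (a small sketch): == treats them as equal, but they behave differently, and Double.compare can tell them apart.

```java
public class TwoZeros {
    public static void main(String[] args) {
        System.out.println(0.0 == -0.0);               // true
        System.out.println(1.0 / 0.0);                 // Infinity
        System.out.println(1.0 / -0.0);                // -Infinity
        System.out.println(Double.compare(0.0, -0.0)); // 1: -0.0 sorts below 0.0
    }
}
```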
Commentary
To me, this question is identical to questions that express outrage over the fact that 0.10 cannot be represented by a floating point number: both indicate a belief that digital computers should be able to exactly represent the real world. Or, in other words, that computers operate according to the laws of mathematics.
I can understand this belief; what I can't understand is the outrage that people express when the belief is shown to be false. A few moments' reflection should make it apparent that the belief cannot be true: computers work with finite quantities, whereas mathematics deals with continuous relations (I was about to say that everything in the real world is continuous, but figured that someone would bring up quantum mechanics).
Faced with this fundamental truth, computer designers -- and language designers, and application programmers -- have to make trade-offs. You might not like the particular tradeoff, but you should seek to understand it rather than simply complain about it. And once you understand the tradeoff, you can look for an environment that made a different tradeoff.
The mathematical reason is that Java implements "arithmetic modulo 2^32". (Or, rather, the CPU implements arithmetic modulo 2^32, and Java exposes the implementation.)
What this means is that, as far as Java's int type goes, numbers that differ by a multiple of 2^32 are considered the same. This means:
- If a number is too big, you subtract 2^32 until it's not too big.
- If a number is too small, you add 2^32 until it's not too small.
- The numbers 2^31 and -2^31 are considered the same, since they differ by 2^32.
Now, Integer.MIN_VALUE is -2^31, so its negation is -(2^31), which is 2^31. However, in arithmetic modulo 2^32, this is considered the same as -2^31, so that's what you get out.
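A sketch of this congruence in Java: long values that differ by a multiple of 2^32 collapse to the same int when cast, because the cast simply discards the high-order bits.

```java
public class Modulo32 {
    public static void main(String[] args) {
        long twoTo32 = 1L << 32;
        long big = 3_000_000_000L;                  // does not fit in an int
        System.out.println((int) big);              // -1294967296
        System.out.println((int) (big - twoTo32));  // -1294967296: same int
        // 2^31 and -2^31 differ by 2^32, so they are the same int.
        System.out.println((int) (1L << 31) == (int) -(1L << 31));  // true
    }
}
```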
So what are the advantages of arithmetic modulo 2^32? Some of them are...
- It's easier to implement in hardware than any alternative.
- It allows applications to use the same instructions for both signed and unsigned arithmetic.
- It's frequently useful in mathematical applications.
The reason that Java uses arithmetic modulo 2^32 is presumably that Java is simply exposing the way that the CPU implements arithmetic. This is vastly easier and more efficient than any alternative.
Java will overflow and underflow int values.

max_int = 2147483647   (0111111...1)  // thirty-one 1s
min_int = -2147483648  (1000000...0)  // the leading bit contributes -2^31

Underflow:

-2147483648 - 1 = 2147483647

so...

min_int - max_int = -2147483648 - 2147483647
                  = -2147483648 - (1 + 2147483646)
                  = (-2147483648 - 1) - 2147483646
                  = 2147483647 - 2147483646
                  = 1
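The derivation can be checked directly (a minimal sketch):

```java
public class MinMinusMax {
    public static void main(String[] args) {
        // The underflow step: MIN_VALUE - 1 wraps to MAX_VALUE.
        System.out.println(Integer.MIN_VALUE - 1);                  // 2147483647
        // The full expression from the derivation.
        System.out.println(Integer.MIN_VALUE - Integer.MAX_VALUE);  // 1
    }
}
```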
Easy way to wrap your head around it: we have an even number of possible bit patterns to store a number in, and we define 0 as the "middle" of the range. One side of 0 then has to hold one more value than the other, because once a pattern is reserved for 0, an odd number of patterns remains to be split between the positives and the negatives.
Imagine having 8-bit integers. Where is the "most middle" bit of that word? Either the fourth or the fifth:
00010000 // 3 bits on the "positive side", 4 bits on the "negative side"
00001000 // 4 bits positive, 3 bits negative
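One practical consequence of this uneven split (a small sketch): there are 2^31 negative ints but only 2^31 - 1 positive ones, so the most negative int has no positive counterpart and Math.abs wraps it back to itself.

```java
public class AbsPitfall {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);            //  2147483647
        System.out.println(Integer.MIN_VALUE);            // -2147483648
        // abs(-2147483648) would be 2147483648, which doesn't fit in an int,
        // so the result wraps back to MIN_VALUE.
        System.out.println(Math.abs(Integer.MIN_VALUE));  // -2147483648
    }
}
```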