A BigDecimal is an exact way of representing numbers. A Double has a fixed precision of roughly 15–16 significant decimal digits. Working with doubles of sufficiently different magnitudes (say d1=1.0e16 and d2=0.001) can result in the 0.001 being dropped altogether when summing, because the difference in magnitude exceeds what a double can represent. With BigDecimal this would not happen.
The disadvantage of BigDecimal is that it's slower, and it's a bit more difficult to program algorithms that way (due to + - * and / not being overloaded).
If you are dealing with money, or precision is a must, use BigDecimal. Otherwise Doubles tend to be good enough.
I do recommend reading the javadoc of BigDecimal as they do explain things better than I do here :)
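A minimal sketch of the magnitude problem described above (the 1.0e16 threshold is just illustrative of where double's ~15–16 significant digits run out):

```java
import java.math.BigDecimal;

public class MagnitudeDemo {
    public static void main(String[] args) {
        // double carries about 15-16 significant decimal digits,
        // so adding 0.001 to 1.0e16 changes nothing at all.
        double big = 1.0e16;
        double small = 0.001;
        System.out.println(big + small == big); // true: the 0.001 is dropped

        // BigDecimal keeps every digit of both operands.
        BigDecimal sum = new BigDecimal("10000000000000000")
                .add(new BigDecimal("0.001"));
        System.out.println(sum); // 10000000000000000.001
    }
}
```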
My English is not good so I'll just write a simple example here.
double a = 0.02;
double b = 0.03;
double c = b - a;
System.out.println(c);

BigDecimal _a = new BigDecimal("0.02");
BigDecimal _b = new BigDecimal("0.03");
BigDecimal _c = _b.subtract(_a);
System.out.println(_c);
Program output:
0.009999999999999998
0.01
Does anyone still want to use double? ;)
A BigDecimal is defined by two values: an arbitrary precision integer (the unscaled value) and a 32-bit integer scale. The value of the BigDecimal is defined to be unscaledValue × 10^(-scale).
Precision:
The precision is the number of digits in the unscaled value. For instance, for the number 123.45, the precision returned is 5.
So, precision indicates the length of the arbitrary precision integer. Here are a few examples of numbers with the same scale, but different precision:
- 12345 / 100000 = 0.12345 // scale = 5, precision = 5
- 12340 / 100000 = 0.1234 // scale = 4, precision = 4
- 1 / 100000 = 0.00001 // scale = 5, precision = 1
In the special case that the number is equal to zero (i.e. 0.000), the precision is always 1.
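The figures in the list above can be checked directly with the precision() and scale() accessors; a small sketch:

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // precision() counts the digits of the unscaled value;
        // scale() counts digits to the right of the decimal point.
        BigDecimal a = new BigDecimal("0.12345");
        System.out.println(a.precision() + " " + a.scale()); // 5 5

        BigDecimal b = new BigDecimal("0.1234");
        System.out.println(b.precision() + " " + b.scale()); // 4 4

        BigDecimal c = new BigDecimal("0.00001");
        System.out.println(c.precision() + " " + c.scale()); // 1 5

        // Zero is the special case: precision is always 1.
        System.out.println(new BigDecimal("0.000").precision()); // 1
    }
}
```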
Scale:
If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. For example, a scale of -3 means the unscaled value is multiplied by 1000.
This means that the unscaled integer value of the BigDecimal is multiplied by 10^(-scale).
Here are a few examples of the same precision, with different scales:
- 12345 with scale 5 = 0.12345
- 12345 with scale 4 = 1.2345
- …
- 12345 with scale 0 = 12345
- 12345 with scale -1 = 123450
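The pairs above can be constructed directly with BigDecimal.valueOf(unscaledValue, scale); a short sketch:

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        // BigDecimal.valueOf(unscaledValue, scale) builds the value
        // unscaledValue x 10^(-scale) directly.
        System.out.println(BigDecimal.valueOf(12345, 5));  // 0.12345
        System.out.println(BigDecimal.valueOf(12345, 4));  // 1.2345
        System.out.println(BigDecimal.valueOf(12345, 0));  // 12345
        // Negative scale: the value is 123450, but toString
        // uses E-notation for negative scales.
        System.out.println(BigDecimal.valueOf(12345, -1)); // 1.2345E+5
    }
}
```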
BigDecimal.toString:
The toString method for a BigDecimal behaves differently based on the scale and precision. (Thanks to @RudyVelthuis for pointing this out.)
- If scale == 0, the integer is just printed out, as-is.
- If scale < 0, E-notation is always used (e.g. 5 with scale -1 produces "5E+1")
- If scale >= 0 and precision - scale - 1 >= -6, a plain decimal number is produced (e.g. 10000000 with scale 1 produces "1000000.0")
- Otherwise, E-notation is used, e.g. 10 with scale 8 produces "1.0E-7", since precision - scale - 1 equals -7, which is less than -6.
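Each of the four toString cases can be exercised with BigDecimal.valueOf(unscaledValue, scale); a quick sketch:

```java
import java.math.BigDecimal;

public class ToStringDemo {
    public static void main(String[] args) {
        // scale == 0: printed as a plain integer
        System.out.println(BigDecimal.valueOf(5, 0));        // 5
        // scale < 0: E-notation is always used
        System.out.println(BigDecimal.valueOf(5, -1));       // 5E+1
        // scale >= 0 and precision - scale - 1 >= -6: plain notation
        System.out.println(BigDecimal.valueOf(10000000, 1)); // 1000000.0
        // otherwise: precision 2, scale 8 -> exponent -7 < -6 -> E-notation
        System.out.println(BigDecimal.valueOf(10, 8));       // 1.0E-7
    }
}
```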
More examples:
- 19/100 = 0.19 // integer=19, scale=2, precision=2
- 1/10000 = 0.0001 // integer=1, scale = 4, precision = 1
Precision: Total number of significant digits
Scale: Number of digits to the right of the decimal point
See BigDecimal class documentation for details.
Everyone says that BigDecimal should be used when dealing with money, but it's much slower and takes more memory than double. I would think this would be especially important in high-frequency, low-latency applications like trading. Do people actually use BigDecimal in such systems, or do they use doubles with some kind of workaround to handle the precision issue?
Edit: I do have experience working on trading and risk systems and I see doubles used much more often than BigDecimal so I was curious to see if this is more common in actual practice. Most of the systems I worked on only need precision to the penny so I wonder if that’s the reason?
Also, BigDecimal is a pain to use, and code written with it looks much uglier than code using plain doubles.
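For what it's worth, one common workaround in latency-sensitive systems is fixed-point arithmetic on long (storing amounts in cents or some other minor unit). This is a generic sketch of the idea, not code from any particular trading system; the rounding choice (half-up here) is an assumption you'd make explicit per use case:

```java
public class FixedPointMoney {
    public static void main(String[] args) {
        // Store money as a long count of cents: exact, fast, no boxing.
        long priceCents = 1999;                        // $19.99
        long qty = 3;
        long totalCents = priceCents * qty;            // exact: 5997
        System.out.println(format(totalCents));        // $59.97

        // Division needs an explicit rounding decision, e.g. half-up:
        long perPerson = divideHalfUp(totalCents, 2);  // 2999 cents
        System.out.println(format(perPerson));         // $29.99
    }

    // Integer division rounded half-up (assumes non-negative inputs).
    static long divideHalfUp(long cents, long divisor) {
        return (cents + divisor / 2) / divisor;
    }

    static String format(long cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}
```

The trade-off is that overflow and rounding are now your problem, which is exactly the bookkeeping BigDecimal does for you.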