Which data type should I use in Python for the most accurate calculations: float or Decimal?
floating point - Decimal Python vs. float runtime - Stack Overflow
code golf - Float vs Decimal - Code Golf Stack Exchange
Floating-point numbers
I am trying to write a scientific simulation in Python, and precision is very important. What should I use?
Based on the time difference you are seeing, you are likely using Python 2.x. In Python 2.x, the decimal module is written in Python and is rather slow. Beginning with Python 3.2, the decimal module was rewritten in C and is much faster.
Using Python 2.7 on my system, the decimal module is ~180x slower than float. Using Python 3.5, the decimal module is only ~2.5x slower.
If you care about decimal performance, Python 3 is much faster.
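One way to see the gap on your own machine is a quick timeit micro-benchmark (a rough sketch; the exact ratio varies by CPU, Python version, and workload):

```python
import timeit

# Time a million float additions vs. a million Decimal additions.
t_float = timeit.timeit("a + b", setup="a = 1.1; b = 2.2", number=1_000_000)
t_decimal = timeit.timeit(
    "a + b",
    setup="from decimal import Decimal; a = Decimal('1.1'); b = Decimal('2.2')",
    number=1_000_000,
)
print(f"float:   {t_float:.3f}s")
print(f"Decimal: {t_decimal:.3f}s")
print(f"Decimal is ~{t_decimal / t_float:.1f}x slower")
```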
You get better speed with float because Python's float uses the hardware floating-point unit when available (and it is available on all modern computers), whereas Decimal is implemented entirely in software.
However, you get better control with Decimal when you run into the classic floating-point precision problems of the float type. See the classic Stack Overflow Q&A Is floating point math broken? for instance.
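To make the contrast concrete, here is the standard 0.1 + 0.2 example: binary float has to round, while Decimal constructed from strings keeps the decimal digits exactly:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so small errors appear:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal built from strings stores the decimal digits exactly:
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Note that you must construct from strings: Decimal(0.1) would inherit the float's rounding error.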
Java, 80 79 59 57 bytes
s->s.compareTo(new java.math.BigDecimal(new Float(s))+"")
Outputs a negative integer if the internal floating point value is larger; 0 if they're the same; and a positive integer if the floating point is smaller.
Try it online.
Explanation:
s-> // Method with String parameter and integer return
s.compareTo( // Compare the input to (resulting in neg/0/pos):
new java.math.BigDecimal( // Create a BigDecimal, with value:
new Float(s)) // The (32-bit) floating point number of the input
+"") // Convert that BigDecimal to a String
Minor note: I've used Float (which is 32 bits), which therefore holds slightly different values than in the challenge description. If I changed it to Double (which is 64 bits), the values would match the challenge description. For certain inputs this difference can also produce different outputs (e.g. "0.09" is 0.0900000035762786865234375 as a float, resulting in -23, but 0.0899999999999999966693309261245303787291049957275390625 as a double, resulting in 1). The overall functionality would still be the same, though:
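You can check both expansions from Python: Decimal(some_float) shows the exact binary value, and struct can round-trip a number through 32-bit precision to mimic Java's Float (a sketch, not part of the Java answer):

```python
import struct
from decimal import Decimal

# Round-trip 0.09 through a 32-bit float to get the value Java's Float holds:
as_float32 = struct.unpack("f", struct.pack("f", 0.09))[0]
print(Decimal(as_float32))  # 0.0900000035762786865234375

# The 64-bit double value (what Java's Double holds):
print(Decimal(0.09))  # 0.0899999999999999966693309261245303787291049957275390625
```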
Try it online.
Vyxal 3, 1 byte
1
Vyxal It Online!
Outputs 1 if equivalent, 0 if bigger, -1 if smaller. This probably polyglots a lot of languages lol. Leave a comment if you have a polyglot language where the answer is always 1.
Notably, this always outputs 1. That's because all numbers are stored exactly by default. That's an intentional language design decision we went out of our way to accommodate by using a third party library under the hood. Technically, the number type is called VNum which extends spire.Complex[Real], so not a "decimal" type but a type that happens to also be able to store decimals exactly by consequence of also storing things like surds exactly.
There's a slight chance this could be considered invalid by the "don't use decimal type" restriction, but that's up to the challenge asker as to whether the default (and only) generic number type being exact is allowed.
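As a side note for Python readers (not part of the golf answer): the standard library's fractions.Fraction gives similar exact-by-default behaviour for rational numbers, so decimal inputs passed as strings never round:

```python
from fractions import Fraction

# Fractions store exact rationals; no rounding ever occurs.
total = Fraction("0.1") + Fraction("0.2")
print(total)                     # 3/10
print(total == Fraction("0.3"))  # True
```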
I'm confused about the difference between these two types of numbers. I'm a newbie.
Can anyone please explain it?