What is the difference between int double and float? | Sololearn: Learn to code for FREE!
int; a whole number can both be positive or negative e.g 5, -10, 20 float; a real number (single precision) of 4 bytes. double; also a real number but has double precision (8 bytes).
Top answer (1 of 5)

The basic types in C are just there to standardize a single unit of memory (a block of bits) that can be used to store a value. We all know that a computer uses bits as its basic unit of information, but a bit is too small to be useful in and of itself. By defining different types we define different blocks of bits and how to interpret them. When we say "int a", the compiler knows that we are dealing with a basic, signed binary number, and knows the amount of memory it needs to set aside to store that number.

The biggest issue is that the exact size definitions are system dependent and not defined in the specifications.

  1. A float and a double are both implementations of floating point values in C. Floating point numbers are a way to implement decimal and fractional values of any magnitude in binary, and also streamline their arithmetic. It is essentially a binary version of scientific notation. Imagine writing the number 3.14159. In decimal, the mantissa here would be 314159 and the exponent would be -5. You could define a simple function that takes the two numbers (stored in binary) and prints out their combined decimal value; your program could therefore store all decimal numbers as two different numbers in binary, and use its own functions to add them together, print them, etc. However, C streamlines this process by giving you a simple container that does all that work for you behind the scenes. A floating point standard describes how you would encode the mantissa and exponent into a single binary number. In C, the only difference between a float and a double is the amount of memory set aside to store your number, and thus the greater range and precision of the numbers that you can store. The benefit of this is that you can also describe large integer numbers: you would need 32 bits to write the number "4 billion" explicitly in binary, but far fewer in floating point (4 × 10^9), since only a mantissa and an exponent need to be stored. However, you would not be able to write the number 4,123,456,789 exactly as a 32 bit floating point number (part of those 32 bits must encode the exponent, leaving too few mantissa bits), but it will fit perfectly well in a 32 bit int.

  2. A char and an int are just basic binary numbers, their only difference being in length (potentially). A char is defined as the smallest unit of data necessary to hold a single text character for that architecture. This is a bit abstractly defined, but for most computers today running x86 platforms, a char is defined as 8 bits (wider character types exist for Unicode text). An int is another basic binary number, but it is required to be at least 16 bits long (typically it is either 32 or 64 on modern systems). So an 8-bit char is capable of encoding 256 unique characters, or a number between 0 and 255 (or a number between -128 and 127); the difference is only in how you interpret the collection of bits. A 32-bit int can store a number between 0 and slightly over 4 billion, or you could halve that range and use it to represent a number between (roughly) -2 billion and 2 billion.

On some systems, the size of an int may be the same as the size of a float; from the memory point of view they are just collections of bits, but the processor will interpret the values differently, and will actually use different circuits to add two numbers together if they are floats than if they are ints.

Answer 2 of 5

I don't know about examples, but they are simply different primitive types in C. Both double and float are for floating point numbers (e.g., numbers with fractional parts) and int and char are for whole numbers.

The reason there is more than one type for each class of number is that they take up different amounts of memory, and can therefore be bigger / more precise. There are actually quite a few more besides the four you've listed.

Discussions

c++ - What is the difference between float and double? - Stack Overflow
I've read about the difference between double precision and single precision. However, in most cases, float and double seem to be interchangeable, i.e. using one or the other does not seem to ...
[C] How to know when to use a double vs float or a long vs int?
you should use double over float whenever possible IMO - it makes numbers orders of magnitude more accurate, there's basically no downside.
r/learnprogramming
November 28, 2017
floating point - Difference between decimal, float and double in .NET? - Stack Overflow
I'm surprised it hasn't been said ... and double are floating binary point types. ... @BKSpurgeon: Well, only in the same way that you can say that everything is a binary type, at which point it becomes a fairly useless definition. Decimal is a decimal type in that it's a number represented as an integer significand ...
java - What is the difference between the float and integer data type when the size is the same? - Stack Overflow
What is the difference between the float and integer data types when the size is the same?
What is the difference between a 'Float', 'Double', and an 'Int' in C? - Quora
Answer (1 of 4): Float and double are representations of real numbers, int is representation of integers. In a nutshell: separate concept from implementation and representation. Concepts are fixed, implementations and representations are changeable, ...
W3Schools.com
Which type you should use depends on the numeric value. Floating point types represent numbers with a fractional part, containing one or more decimals. Valid types are float and double.
What is the difference between int, char, float and double data types? - Quora
Use an int when you don't need fractional numbers and you've no reason to use anything else; on most processors/OS configurations, this is the size of number that the machine can deal with most efficiently; Use a double when you need fractional ...
Floating-point numeric types - C# reference | Microsoft Learn
January 20, 2026 - They're interchangeable. For example, the following declarations declare variables of the same type: ... The default value of each floating-point type is zero, 0. Each of the floating-point types has the MinValue and MaxValue constants that provide the minimum and maximum finite value of that type. The float and double types also provide constants that represent not-a-number and infinity values.
Hackr
Float vs Double Data Types: What's The Difference [Updated]
January 30, 2025 - To illustrate the practical differences between float and double, I've put together some code examples in various programming languages. The idea here is to demonstrate how each type is declared and used and the impact of their precision in calculations.

#include <stdio.h>

int main() {
    float floatValue = 3.14159265358979323846;   // Single-precision
    double doubleValue = 3.14159265358979323846; // Double-precision
    printf("Float value: %.7f\n", floatValue);
    printf("Double value: %.15f\n", doubleValue);
    return 0;
}
Arm Learning
Learn about integer and floating-point conversions: An introduction to integer and floating-point data types
Types may change in the future as big integer arithmetic is particularly useful in cryptography. Wikipedia provides an excellent overview of the 32-bit floating point number representation. A short summary is provided below. Every real number is represented in approximation by the closest 32-bit floating point number: ... A similar expression exists for the 64-bit floating-point number (double ...
Top answer (1 of 14)

Huge difference.

As the name implies, a double has 2x the precision of float[1]. In general a double has 15 decimal digits of precision, while float has 7.

Here's how the number of digits is calculated:

double has 52 mantissa bits + 1 hidden bit: log(2^53) ÷ log(10) = 15.95 digits

float has 23 mantissa bits + 1 hidden bit: log(2^24) ÷ log(10) = 7.22 digits

This precision loss could lead to greater truncation errors being accumulated when repeated calculations are done, e.g.

float a = 1.f / 81;
float b = 0;
for (int i = 0; i < 729; ++ i)
    b += a;
printf("%.7g\n", b); // prints 9.000023

while

double a = 1.0 / 81;
double b = 0;
for (int i = 0; i < 729; ++ i)
    b += a;
printf("%.15g\n", b); // prints 8.99999999999996

Also, the maximum value of float is about 3e38, but double is about 1.7e308, so using float can hit "infinity" (i.e. a special floating-point number) much more easily than double for something simple, e.g. computing the factorial of 60.

During testing, maybe a few test cases contain these huge numbers, which may cause your programs to fail if you use floats.


Of course, sometimes, even double isn't accurate enough, hence we sometimes have long double[1] (the above example gives 9.000000000000000066 on Mac), but all floating point types suffer from round-off errors, so if precision is very important (e.g. money processing) you should use int or a fraction class.


Furthermore, don't use += to sum lots of floating point numbers, as the errors accumulate quickly. If you're using Python, use fsum. Otherwise, try to implement the Kahan summation algorithm.


[1]: The C and C++ standards do not specify the representation of float, double and long double. It is possible that all three are implemented as IEEE double-precision. Nevertheless, for most toolchains and architectures (gcc, MSVC; x86, x64, ARM) float is indeed an IEEE single-precision floating point number (binary32), and double is an IEEE double-precision floating point number (binary64).

Answer 2 of 14

Here is what the standard C99 (ISO-IEC 9899 6.2.5 §10) or C++2003 (ISO-IEC 14882-2003 3.1.9 §8) standards say:

There are three floating point types: float, double, and long double. The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double.

The C++ standard adds:

The value representation of floating-point types is implementation-defined.

I would suggest having a look at the excellent What Every Computer Scientist Should Know About Floating-Point Arithmetic that covers the IEEE floating-point standard in depth. You'll learn about the representation details and you'll realize there is a tradeoff between magnitude and precision. The precision of the floating point representation increases as the magnitude decreases, hence floating point numbers between -1 and 1 are those with the most precision.

freeCodeCamp
Double VS Float in C++ – The Difference Between Floats and Doubles
May 19, 2022 - float and double both have varying capacities when it comes to the number of decimal digits they can hold. float can hold up to 7 decimal digits accurately while double can hold up to 15.
Understanding Float, Double, and Long Double Data Types in C Programming | Video Summary by LunaNotes
November 21, 2025 - Float retains up to 7 digits accurately. Double maintains approximately 16 digits. Long double can represent about 19 digits accurately. Dividing two integers truncates fractional parts, e.g., 4/9 equals 0.
What is the difference between double, long, float, and int variables in Java? Which one should be used for storing large numbers (more than two digits)? Why? - Quora
Answer: In Java, int and long are primitive integer datatypes. They store integers as two's complement numbers. Ints use 32 bits and have a range from -2^31 = -2,147,483,648 to 2^31 - 1 = 2,147,483,647; longs use 64 bits and have a range from -2^63 = -9,223,372,036,854,775,808 to 2^63 - 1 = 9,223,372,036,854,775,807.
r/learnprogramming on Reddit: [C] How to know when to use a double vs float or a long vs int?
November 28, 2017 -

Sorry if this seems like a basic question, but I've been learning C over the past several weeks. In some of the tutorials I've been using, I've noticed some of the instructors just use int/float while others tend to default to double when declaring floating point variables.

Is there any sort of best practice in terms of deciding what size variable to use?

Top answer (1 of 16)

float (the C# alias for System.Single) and double (the C# alias for System.Double) are floating binary point types. float is 32-bit; double is 64-bit. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal (the C# alias for System.Decimal) is a floating decimal point type. In other words, they represent a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.

Answer 2 of 16

Precision is the main difference.

Float - 7 digits (32 bit)

Double - 15-16 digits (64 bit)

Decimal - 28-29 significant digits (128 bit)

Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20× in some tests) than a double/float.

Decimals and Floats/Doubles cannot be compared without a cast whereas Floats and Doubles can. Decimals also allow the encoding of trailing zeros.

float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);

Result :

float: 0.3333333  
double: 0.333333333333333  
decimal: 0.3333333333333333333333333333
Top answer (1 of 3)
  • float stores floating-point values, that is, values that have potential decimal places
  • int only stores integral values, that is, whole numbers

So while both are 32 bits wide, their use (and representation) is quite different. You cannot store 3.141 in an integer, but you can in a float.

Dissecting them both a little further:

In an integer, all bits except the leftmost one are used to store the number value. This is (in Java and on many computers too) done in the so-called two's complement, which supports negative values. Two's complement uses the leftmost bit to store the sign: positive (0) or negative (1). This basically means that you can represent the values -2^31 to 2^31 - 1.

In a float, those 32 bits are divided between three distinct parts: The sign bit, the exponent and the mantissa. They are laid out as follows:

S EEEEEEEE MMMMMMMMMMMMMMMMMMMMMMM

There is a single bit that determines whether the number is negative or non-negative (zero is neither positive nor negative, but has the sign bit set to zero). Then there are eight bits of an exponent and 23 bits of mantissa. To get a useful number from that, (roughly) the following calculation is performed:

M × 2^E

(There is more to it, but this should suffice for the purpose of this discussion)

The mantissa is in essence not much more than a 24-bit integer number. This gets multiplied by 2 to the power of the exponent part, which, roughly, is a number between −128 and 127.

Therefore you can accurately represent all numbers that would fit in a 24-bit integer, but the numeric range is also much greater as larger exponents allow for larger values. For example, the maximum value for a float is around 3.4 × 10^38 whereas int only allows values up to 2.1 × 10^9.

But that also means, since 32 bits only have 4.2 × 10^9 different states (which are all used to represent the values int can store), that at the larger end of float's numeric range the numbers are spaced wider apart (since there cannot be more unique float numbers than there are unique int numbers). You cannot represent some numbers exactly, then. For example, the number 2 × 10^12 has a representation in float of 1,999,999,991,808. That might be close to 2,000,000,000,000 but it's not exact. Likewise, adding 1 to that number does not change it because 1 is too small to make a difference in the larger scales float is using there.

Similarly, you can also represent very small numbers (between 0 and 1) in a float, but regardless of whether the numbers are very large or very small, float only has a precision of around 6 or 7 decimal digits. If you have large numbers, those digits are at the start of the number (e.g. 4.51534 × 10^35, which is nothing more than 451534 followed by 30 zeroes – and float cannot tell anything useful about whether those 30 digits are actually zeroes or something else); for very small numbers (e.g. 3.14159 × 10^-27) they are at the far end of the number, way beyond the starting digits of 0.0000...

Answer 2 of 3

Floats are used to store a wider range of numbers than can fit in an integer. These include decimal numbers and scientific-notation-style numbers that can be bigger values than can fit in 32 bits. Here's the deep dive into them: http://en.wikipedia.org/wiki/Floating_point

r/C_Programming on Reddit: What is the difference between int and double and how do you know when to use them?
February 6, 2015 - Well they can't really be interchanged. An int is used to represent integer numbers (+- whole numbers). While a double is used to represent a double precision floating point number.
Top answer (1 of 8)

The default choice for a floating-point type should be double. This is also the type that you get with floating-point literals without a suffix or (in C) standard functions that operate on floating point numbers (e.g. exp, sin, etc.).

float should only be used if you need to operate on a lot of floating-point numbers (think in the order of thousands or more) and analysis of the algorithm has shown that the reduced range and accuracy don't pose a problem.

long double can be used if you need more range or accuracy than double, and if it provides this on your target platform.

In summary, float and long double should be reserved for use by the specialists, with double for "every-day" use.

Answer 2 of 8

There is rarely cause to use float instead of double in code targeting modern computers. The extra precision reduces (but does not eliminate) the chance of rounding errors or other imprecision causing problems.

The main reasons I can think of to use float are:

  1. You are storing large arrays of numbers and need to reduce your program's memory consumption.
  2. You are targeting a system that doesn't natively support double-precision floating point. Until recently, many graphics cards only supported single precision floating points. I'm sure there are plenty of low-power and embedded processors that have limited floating point support too.
  3. You are targeting hardware where single-precision is faster than double-precision, and your application makes heavy use of floating point arithmetic. On modern Intel CPUs I believe all floating point calculations are done in double precision, so you don't gain anything here.
  4. You are doing low-level optimization, for example using special CPU instructions that operate on multiple numbers at a time.

So, basically, double is the way to go unless you have hardware limitations or unless analysis has shown that storing double precision numbers is contributing significantly to memory usage.

int vs float ? - Unity Engine - Unity Discussions
October 19, 2010 - I have very little programming/scripting knowledge but this has always confused me. If an int value stores whole numbers and a float stores numbers including any decimal places, why does the int type exist? Doesn’t havi…
Oracle
Primitive Data Types (The Java™ Tutorials > Learning the Java Language > Language Basics)
Its range of values is beyond the scope of this discussion, but is specified in the Floating-Point Types, Formats, and Values section of the Java Language Specification. As with the recommendations for byte and short, use a float (instead of double) if you need to save memory in large arrays of floating point numbers.
r/learnpython on Reddit: Why do we need to differentiate between float numbers and integers?
August 13, 2022 -

Hi all. I'm a complete beginner learning the basics. I'm curious as to why Python has two different types of numbers (three including complex): floats and integers. Why can't we just use any number, and if we do want to use a decimal point, just use a decimal point without having to indicate it as a float? What is the significance of differentiating the two? Thanks!

Top answer (1 of 7)
There are many reasons:

  • It is faster for the computer to do math with integers than with floats.
  • Floats accumulate errors, so if you don't need them, integers are more robust. For example 0.1 + 0.2 != 0.3.
  • There are situations in which floats don't make sense but integers do. For example when accessing an item in a list, my_list[3] makes sense but my_list[3.57] does not.
  • Every programming language (except JavaScript) also separates them. Many also have more than one type of int and float (for different sizes).
Answer 2 of 7
"we just use a decimal point without having to indicate it as a float"

We can:

print(type(1.5))

There are at least 2 very good reasons to have separate types, though.

The first is logical. Floating point math is inexact. For example:

>>> (1 - 0.1) - 0.9
0.0
>>> (1 - 0.05 - 0.05) - 0.9
-1.1102230246251565e-16

As you can see, the more calculations we make with floating point arithmetic, the more likely it is we'll accumulate inaccuracies. This does not happen with integers, and we would certainly like to have the option of avoiding this problem whenever possible. It's not possible to eliminate this problem in rational numbers without giving a fractional number unlimited storage space, which is obviously undesirable in a variety of use cases.

The second is practical. The algorithms for integer arithmetic are dramatically different. Integer arithmetic uses two's complement, and floating point arithmetic uses the IEEE 754 standard for representing fractional numbers and performing computations on them. These different representations are what allow us to preserve precision in integers. Also, integer arithmetic is much faster than floating point arithmetic. When only integer arithmetic is needed over a large amount of data, computation will be significantly faster.