As stated in Python's docs:
Floating point numbers are usually implemented using double in C.
A double in C is normally a 64-bit number (double precision, as opposed to the single-precision 32-bit float type), also known as float64. So what we call float in plain Python is usually a 64-bit float, i.e. torch.float64.
PyTorch itself, however, uses float32 by default. AFAIK, this is done to save memory: you'll likely be training models with quite a lot of parameters, and a 32-bit float takes half the memory of Python's 64-bit float.
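A quick way to see these defaults side by side (a minimal sketch; it only assumes numpy and torch are installed):

```python
import sys

import numpy as np
import torch

# Plain Python floats are C doubles: 64-bit, with a 53-bit mantissa.
print(sys.float_info.mant_dig)        # 53

# NumPy follows Python and defaults to 64-bit floats.
print(np.array([0.01]).dtype)         # float64

# PyTorch defaults to 32-bit floats.
print(torch.tensor([0.01]).dtype)     # torch.float32
print(torch.get_default_dtype())      # torch.float32

# float64 elements take 8 bytes each, float32 elements 4 bytes each,
# which is where the memory saving comes from.
print(torch.tensor([0.01], dtype=torch.float64).element_size())  # 8
print(torch.tensor([0.01]).element_size())                       # 4
```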
Inconsistency in interpreting python float in pytorch - why? - Stack Overflow
Why PyTorch tensors gives preference to float32 element datatype
python - I load a float32 Hugging Face model, cast it to float16, and save it. How can I load it as float16? - Stack Overflow
torch.tensor([0.01], dtype=torch.float16) * torch.tensor(65536, dtype=torch.float32) returns INF
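That last example follows from PyTorch's type-promotion rules: a zero-dimensional tensor of the same dtype category (here, floating point) does not promote the result dtype, so the multiplication is carried out in float16, where 65536 exceeds the largest finite value (65504) and overflows to inf. A short sketch of that behaviour:

```python
import torch

a = torch.tensor([0.01], dtype=torch.float16)  # 1-d tensor, float16
b = torch.tensor(65536, dtype=torch.float32)   # 0-d (scalar) tensor, float32

# Both operands are floating point, so the 0-d tensor does not promote
# the result dtype: the op runs in float16.
print((a * b).dtype)                    # torch.float16

# 65536 is above float16's largest finite value, so it becomes inf
# when converted to float16, and the product is inf as well.
print(torch.finfo(torch.float16).max)   # 65504.0
print(a * b)                            # tensor([inf], dtype=torch.float16)
```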
Why does it appear that PyTorch tensors give preference to using default element datatype of float32 instead of float64?
For example, the default element datatype for torch.tensor() is float32. This is the opposite of NumPy arrays, where the default element datatype for numpy.array() is float64. Why doesn't PyTorch make it consistent with NumPy arrays and make the default element datatype float64?
(P.S. I know I can change the element datatype of a tensor, but it would be more convenient if the default were float64.)
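For completeness, both ways of getting float64 tensors, per tensor and globally (a minimal sketch):

```python
import torch

# Per tensor: pass the dtype explicitly.
x = torch.tensor([0.01], dtype=torch.float64)
print(x.dtype)                 # torch.float64

# Globally: change the default dtype for new floating-point tensors.
torch.set_default_dtype(torch.float64)
y = torch.tensor([0.01])
print(y.dtype)                 # torch.float64
```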