As stated in Python's docs:

Floating point numbers are usually implemented using double in C

double in C is normally a 64-bit number (double precision, as opposed to the single-precision 32-bit float type), which is why it is also known as float64. So what plain Python calls float is usually a 64-bit float, corresponding to torch.float64.

PyTorch itself, however, uses float32 by default. AFAIK, this is done to save memory: you'll likely be training models with quite a lot of parameters, and 32-bit floats take half the memory of Python's 64-bit floats.

Answer from ForceBru on Stack Overflow
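The point above can be checked directly in a Python session. A minimal sketch (assuming a standard PyTorch install), including the quirk that passing Python's builtin float as an explicit dtype yields a 64-bit tensor:

```python
import torch

# Plain Python floats are C doubles (64-bit), but PyTorch's default
# floating-point dtype is 32-bit:
print(torch.get_default_dtype())              # torch.float32
print(torch.tensor(1.0).dtype)                # torch.float32

# Passing Python's builtin float as an explicit dtype follows
# Python's convention instead and gives a 64-bit tensor:
print(torch.tensor(1.0, dtype=float).dtype)   # torch.float64
```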
GitHub
github.com › huggingface › transformers › issues › 30305
load and train with bf16,saved torch_dtype is float32 · Issue #30305 · huggingface/transformers
April 18, 2024 - load a bf16 model then train with --bf16 and the log shows [Using auto half precision backend]; the final saved model parameter torch_dtype in config.json is float32. 1. Why is it saved as float32? 2. How do I load the trained model? If loaded with float32, can I get a better output?
Author   gyangfan
Hugging Face
huggingface.co › venkycs › phi-2-instruct › discussions › 1
venkycs/phi-2-instruct · Why change the torch_dtype of phi-2 from float16 to float32?
December 18, 2023 - Thx for your great job! I've noticed that you changed the torch_dtype of phi-2 from float16 (the default value in the official version config.json) to float32. Why do that? Is that a common practic...
Discussions

Inconsistency in interpreting python float in pytorch - why? - Stack Overflow
I couldn't find this explicitly stated in the documentation, though - it specifies types using torch.float32 or torch.float64. ... Oh I got your point; when float is given explicitly as dtype then pytorch follows python's type casting rule, otherwise naive float ... More on stackoverflow.com
Why PyTorch tensors gives preference to float32 element datatype
GPUs are optimized for fp32. More on reddit.com
r/pytorch · August 13, 2020
python - I load a float32 Hugging Face model, cast it to float16, and save it. How can I load it as float16? - Stack Overflow
In this example, model2 loads as a float32 model (as shown by print_model_layer_dtype(model2)), even though model2 was saved as float16 (as shown in config.json). What is the proper way to load it as float16? Tested with transformers==4.36.2 and Python 3.11.7 on Windows 10. ... Use torch_dtype... More on stackoverflow.com
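The snippet above refers to transformers' torch_dtype argument; the same round-trip can be sketched in plain PyTorch with a hypothetical stand-in module, showing that a float16 state_dict does preserve its dtype as long as the module it is loaded into matches:

```python
import os
import tempfile
import torch

# Hypothetical stand-in for the Hugging Face model in the question:
# cast to float16, save the state_dict, and reload it.
model = torch.nn.Linear(4, 4).half()                 # cast params to float16
path = os.path.join(tempfile.mkdtemp(), "model_fp16.pt")
torch.save(model.state_dict(), path)

# The reloading module must itself be float16 to match the saved weights
reloaded = torch.nn.Linear(4, 4).half()
reloaded.load_state_dict(torch.load(path))
print(next(reloaded.parameters()).dtype)             # torch.float16
```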
torch.tensor([0.01], dtype=torch.float16) * torch.tensor(65536, dtype=torch.float32) returns INF
Hi there, I ran the following code on CPU or GPU, and observed that torch.tensor([0.01], dtype=torch.float16) * torch.tensor(65536, dtype=torch.float32) returns INF. More on github.com
April 9, 2023
APXML
apxml.com › courses › getting-started-with-pytorch › chapter-2-advanced-tensor-manipulations › tensor-data-types
PyTorch Tensor Data Types | Float, Int
Original tensor: tensor([1.1000, 2.2000, 3.3000]), dtype: torch.float32
Casted to int64: tensor([1, 2, 3]), dtype: torch.int64
Casted to float16: tensor([1., 2., 3.], dtype=torch.float16), dtype: torch.float16
Pytorchcourse
pytorchcourse.com › 01-tensors › 03_data_types_and_devices
DTypes & Devices: Choose Your Weapons - PyTorch Course
October 26, 2025 - --- Details about floats ---
dtype          | Bits | Epsilon    | Tiny        | Min          | Max
---------------|------|------------|-------------|--------------|------------
torch.float64  | 64   | 2.2204e-16 | 2.2251e-308 | -1.7977e+308 | 1.7977e+308
torch.float32  | 32   | 1.1921e-07 | 1.1755e-38  | -3.4028e+38  | 3.4028e+38
torch.float16  | 16   | 9.7656e-04 | 6.1035e-05  | -6.5504e+04  | 6.5504e+04
torch.bfloat16 | 16   | 7.8125e-03 | 1.1755e-38  | -3.3895e+38  | 3.3895e+38
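A table like this can be regenerated with torch.finfo, which reports each floating dtype's bit width, machine epsilon, smallest positive normal value, and maximum — a quick sketch:

```python
import torch

# Print the key characteristics of each floating-point dtype
for dt in (torch.float64, torch.float32, torch.float16, torch.bfloat16):
    fi = torch.finfo(dt)
    print(f"{str(dt):14} | {fi.bits:2} | eps={fi.eps:.4e} | "
          f"tiny={fi.tiny:.4e} | max={fi.max:.4e}")
```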
Codecademy
codecademy.com › docs › pytorch › tensor operations › specifying data types
PyTorch | Tensor Operations | Specifying Data Types | Codecademy
January 13, 2025 - Common data types include: torch.float32 (default): 32-bit floating-point · torch.float64: 64-bit floating-point · torch.int32: 32-bit integer · torch.int64: 64-bit integer · torch.bool: Boolean · tensor.to(torch.<data_type>) In the example ...
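The defaults and the tensor.to(...) conversion mentioned above can be demonstrated in a few lines:

```python
import torch

t = torch.tensor([1, 2, 3])       # Python ints default to torch.int64
print(t.dtype)                    # torch.int64

f = t.to(torch.float32)           # cast to 32-bit float with .to()
print(f.dtype)                    # torch.float32

d = torch.tensor([1.0, 2.0])      # Python floats default to torch.float32
print(d.dtype)                    # torch.float32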
Modular
docs.modular.com › max › develop › dtypes
Data types (dtype) - Modular docs
March 19, 2026 - NumPy dtype: float32 · MAX tensor dtype: DType.float32 · MAX tensor shape: [Dim(2), Dim(2)]
NumPy compatibility · Not all MAX dtypes have NumPy equivalents. For example, bfloat16 and float8 types are not natively supported by NumPy. When working with these types, you may need to cast to a compatible type first. torch_dtype_to_max() converts a PyTorch dtype to a MAX dtype:
import torch
from max.experimental.torch import torch_dtype_to_max
# PyTorch tensor
pt_tensor = torch.randn(10, 10, dtype=torch.float16)
# Convert PyTorch dtype to MAX dtype
max_dtype = torch_dtype_to_max(pt_tensor.dtype)
print(f"PyTorch {pt_tensor.dtype} → MAX {max_dtype}")  # float16 → DType.float16
GitHub
github.com › pytorch › pytorch › issues › 98691
torch.tensor([0.01], dtype=torch.float16) * torch.tensor(65536, dtype=torch.float32) returns INF · Issue #98691 · pytorch/pytorch
April 9, 2023 - torch.tensor([0.01], dtype=torch.float16) * torch.tensor(65536, dtype=torch.float32) returns INF#98691
Author   wkcn
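The INF in this issue follows from half precision's limited range: float16's largest finite value is 65504, so 65536 overflows to infinity once it lands in a float16 result. A short check:

```python
import torch

# float16 tops out at 65504; 65536 cannot be represented
# and becomes inf when cast down to half precision.
print(torch.finfo(torch.float16).max)            # 65504.0
print(torch.tensor(65536.0).to(torch.float16))   # tensor(inf, dtype=torch.float16)
```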
GitHub
github.com › hiyouga › LLaMA-Factory › issues › 7401
torch_dtype of vision_config became float32 after full tuning · Issue #7401 · hiyouga/LLaMA-Factory
March 21, 2025 - { "hidden_size": 1280, "in_chans": 3, "model_type": "qwen2_5_vl", "out_hidden_size": 2048, "spatial_patch_size": 14, "tokens_per_second": 2, "torch_dtype": "float32" }
Author   Ego-J
PyTorch
docs.pytorch.org › reference api › torch.get_default_dtype
torch.get_default_dtype — PyTorch 2.11 documentation
January 1, 2023 -
>>> torch.get_default_dtype()  # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.get_default_dtype()  # default is now changed to torch.float64
torch.float64
GitHub
github.com › pytorch › pytorch › issues › 20755
Type conversion from float64 to float32 (cpu) sometimes crashes · Issue #20755 · pytorch/pytorch
May 21, 2019 - import torch a = torch.rand(3, 3, dtype = torch.float64) print(a.dtype, a.device) # torch.float64 cpu c = a.to(torch.float32) #works b = torch.load('bug.pt') print(b.dtype, b.device) # torc...
Author   vadimkantorov
Sprintchase
sprintchase.com › home › torch.tensor.to: converting data types and/or changing device
torch.Tensor.to: Converting Data Types and/or Changing Device
May 12, 2025 -
import torch
# Create a tensor
tensor = torch.tensor([1, 2, 3], dtype=torch.int32)
# Move to GPU and convert to float32
tensor_float_gpu = tensor.to(device="cuda", dtype=torch.float32)
print(tensor_float_gpu)        # Output: tensor([1., 2., 3.], device='cuda:0')
print(tensor_float_gpu.dtype)  # Output: torch.float32
Glaringlee
glaringlee.github.io › tensors.html
torch.Tensor — PyTorch master documentation
Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. ... dtype (torch.dtype, optional) – the desired type of returned tensor.
GeeksforGeeks
geeksforgeeks.org › how-to-get-the-data-type-of-a-pytorch-tensor
How to Get the Data Type of a Pytorch Tensor? - GeeksforGeeks
July 21, 2021 -
import torch
# create a tensor with float type
a = torch.tensor([100, 200, 2, 3, 4], dtype=torch.float)
print(a)        # display tensor
print(a.dtype)  # display data type
# create a tensor with double type
a = torch.tensor([1, 2, -6, -8, 0], dtype=torch.double)
print(a)        # display tensor
print(a.dtype)  # display data type
Output:
tensor([100., 200., 2., 3., 4.])
torch.float32
tensor([ 1., 2., -6., -8., 0.], dtype=torch.float64)
torch.float64
GitHub
github.com › pytorch › pytorch › issues › 121414
Performance improvement of tensor.to(device, dtype) · Issue #121414 · pytorch/pytorch
March 7, 2024 - To document the memory usage-performance tradeoff between image.to(device="cuda", dtype=torch.float32) and image.to(device="cuda").to(dtype=torch.float32) in the docs of the to() method.
Author   Redjest
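The issue above asks for documentation of the tradeoff between the two call patterns; both are sketched below (guarded so the example also runs without a GPU), with no claim here about which is faster — that depends on tensor size and dtype, which is exactly what the issue wants documented:

```python
import torch

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Fused call: device move and dtype conversion in a single .to()
a = img.to(device=device, dtype=torch.float32)

# Chained calls: move the small uint8 tensor first, convert afterwards
b = img.to(device=device).to(dtype=torch.float32)

# Both paths produce the same values and dtype
assert torch.equal(a.cpu(), b.cpu()) and a.dtype == torch.float32
```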
DEV Community
dev.to › hyperkai › type-conversion-with-type-to-and-a-tensor-in-pytorch-2a0g
Type conversion with type(), to() and a tensor in PyTorch - DEV Community
November 5, 2024 -
import torch
tensor1 = torch.tensor([7.+0.j, 0.+0.j, 4.+0.j])
tensor1, tensor1.dtype, tensor1.dtype
# (tensor([7.+0.j, 0.+0.j, 4.+0.j]), torch.complex64, torch.complex64)
tensor2 = tensor1.type(dtype=torch.int64)
tensor2 = tensor1.type(dtype=torch.long)
tensor2 = tensor1.type(dtype='torch.LongTensor')
tensor2 = tensor1.to(dtype=torch.int64)
tensor2 = tensor1.to(dtype=torch.long)
tensor2 = tensor1.to(dtype=int)
tensor2 = tensor1.long()
tensor2, tensor2.dtype, tensor2.type()
# (tensor([7, 0, 4]), torch.int64, 'torch.LongTensor')
tensor2 = tensor1.type(dtype=torch.float32)
tensor2 = tensor1.type(d