Running

sys.getsizeof(float)

does not return the size of any individual float; it returns the size of the float class itself. That class contains far more data than any single float, so the returned size is also much bigger.

If you just want to know the size of a single float, the easiest way is to simply instantiate some arbitrary float. For example:

sys.getsizeof(float())

Note that

float()

simply returns 0.0, so this is actually equivalent to:

sys.getsizeof(0.0)

This returns 24 bytes in your case (and probably for most other people as well). In CPython (the most common Python implementation), every float object contains a reference count and a pointer to its type (a pointer to the float class), each of which takes 8 bytes on 64-bit CPython or 4 bytes on 32-bit CPython. The remaining bytes (24 - 8 - 8 = 8 in your case, which very likely means 64-bit CPython) hold the actual float value itself.
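You can check this breakdown yourself at runtime; the sketch below assumes CPython, where struct.calcsize("P") gives the platform pointer size:

```python
import struct
import sys

size = sys.getsizeof(0.0)       # 24 on 64-bit CPython
ptr = struct.calcsize("P")      # pointer size: 8 on 64-bit, 4 on 32-bit builds
overhead = 2 * ptr              # reference count + type pointer
payload = size - overhead       # bytes left for the value itself
print(size, overhead, payload)  # 24 16 8 on a 64-bit build
```

On a 64-bit build, the payload comes out to 8 bytes, matching the size of a C double.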

This is not guaranteed to work out the same way for other Python implementations though. The language reference says:

These represent machine-level double precision floating point numbers. You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating point numbers.

and I'm not aware of any runtime method that accurately tells you the number of bytes used. However, note that the quote above from the language reference says that Python only supports double-precision floats, so in most cases (depending on how critical it is for you to always be 100% right) it should be comparable to a double in C.
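One runtime sanity check is available, though: sys.float_info exposes the parameters of the underlying C double, so you can at least confirm the 53-bit mantissa that is the signature of IEEE 754 double precision (this checks precision, not byte size):

```python
import sys

# 53 mantissa bits is the signature of IEEE 754 double precision
print(sys.float_info.mant_dig)  # 53
print(sys.float_info.max)       # 1.7976931348623157e+308
```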

Answer from Dennis Soemers on Stack Overflow
Top answer
1 of 5
15


2 of 5
8
import ctypes
ctypes.sizeof(ctypes.c_double)  # size in bytes of a C double, the type CPython floats wrap
Top answer
1 of 2
10

Yes, an int instance takes up 12 bytes on your system. Integers (like any object) have attributes, i.e. pointers to other objects, which take up additional memory space beyond that used by the object's own value. So 4 bytes for the integer's value, 4 bytes for a pointer to __class__ (otherwise, Python wouldn't know what type the object belonged to and how to start resolving attribute names that are inherited from the int class and its parents), and another 4 for the object's reference count, which is used by the garbage collector.

The type int occupies 436 bytes on your system, which will be pointers to the various methods and other attributes of the int class and whatever other housekeeping information Python requires for the class. The int class is written in C in the standard Python implementation; you could go look at the source code and see what's in there.
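The gap between the class and an instance is easy to see (exact numbers vary by Python version and platform, so none are hard-coded here):

```python
import sys

# The type object carries method tables and class bookkeeping,
# so it dwarfs any single instance.
print(sys.getsizeof(int))  # hundreds of bytes: the class itself
print(sys.getsizeof(1))    # tens of bytes: one instance
```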

2 of 2
2

From the documentation for sys.getsizeof:

getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

That might be why sys.getsizeof(1) is giving you 12 bytes. As for your first line, keep in mind what the int object is:

>>> int
<type 'int'>

int is the integer type itself, and not an integer. An integer in Python actually takes up as many bytes as it needs (which is why you don't need to worry about overflow), while the type is where all that functionality is handled. I believe this distinction is only valid for built-in types; for user-defined objects, the type itself is probably of a similar size as an instance of that type.

Top answer
1 of 16
964

Just use the sys.getsizeof function defined in the sys module.

sys.getsizeof(object[, default]):

Return the size of an object in bytes. The object can be any type of object. All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.

Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.

The default argument allows to define a value which will be returned if the object type does not provide means to retrieve the size and would cause a TypeError.

getsizeof calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

See recursive sizeof recipe for an example of using getsizeof() recursively to find the size of containers and all their contents.

Usage example, in Python 3.0:

>>> import sys
>>> x = 2
>>> sys.getsizeof(x)
24
>>> sys.getsizeof(sys.getsizeof)
32
>>> sys.getsizeof('this')
38
>>> sys.getsizeof('this also')
48

If you are on Python < 2.6 and don't have sys.getsizeof, you can use this extensive module instead. Never used it though.

2 of 16
660

How do I determine the size of an object in Python?

The answer, "Just use sys.getsizeof", is not a complete answer.

That answer works for builtin objects directly, but it does not account for what those objects may contain. Custom objects, tuples, lists, dicts, and sets can contain instances of each other, as well as numbers, strings, and other objects.

A More Complete Answer

Using 64-bit Python 3.6 from the Anaconda distribution, with sys.getsizeof, I have determined the minimum size of the following objects. Note that sets and dicts preallocate space, so they don't grow again until a certain number of items has been added (the thresholds may vary by implementation of the language):

Python 3:

Empty
Bytes  type        scaling notes
28     int         +4 bytes about every 30 powers of 2
37     bytes       +1 byte per additional byte
49     str         +1-4 per additional character (depending on max width)
48     tuple       +8 per additional item
64     list        +8 for each additional
224    set         5th increases to 736; 21st, 2272; 85th, 8416; 341st, 32992
240    dict        6th increases to 368; 22nd, 1184; 43rd, 2280; 86th, 4704; 171st, 9320
136    func def    does not include default args and other attrs
1056   class def   no slots 
56     class inst  has a __dict__ attr, same scaling as dict above
888    class def   with slots
16     __slots__   seems to store in mutable tuple-like structure
                   first slot grows to 48, and so on.

How do you interpret this? Say you have a set with 10 items in it. If each item is 100 bytes, how big is the whole data structure? The set itself is 736 bytes, because it has sized up once to 736. Then you add the size of the items (10 × 100 = 1000 bytes), so that's 1736 bytes in total.
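You can watch this container-only accounting directly: sys.getsizeof reports the set's own allocation, while the elements must be summed separately (a sketch; exact sizes depend on your build, so none are hard-coded):

```python
import sys

# ten distinct strings, each with a ~100-character payload
items = ["x" * 99 + str(i) for i in range(10)]
s = set(items)

container = sys.getsizeof(s)  # the set structure alone, elements excluded
total = container + sum(sys.getsizeof(i) for i in items)
print(container, total)
```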

Some caveats for function and class definitions:

Note each class definition has a proxy __dict__ (48 bytes) structure for class attrs. Each slot has a descriptor (like a property) in the class definition.

Slotted instances start out with 48 bytes on their first element, and increase by 8 each additional. Only empty slotted objects have 16 bytes, and an instance with no data makes very little sense.

Also, each function definition has code objects, maybe docstrings, and other possible attributes, even a __dict__.

Also note that we use sys.getsizeof() because we care about the marginal space usage, which includes the garbage collection overhead for the object, from the docs:

getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

Also note that resizing lists (e.g. repetitively appending to them) causes them to preallocate space, similarly to sets and dicts. From the listobj.c source code:

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     * Note: new_allocated won't overflow because the largest possible value
     *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
     */
    new_allocated = (size_t)newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);
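This growth pattern is observable from Python itself (a sketch; the exact plateau sizes depend on the CPython version):

```python
import sys

lst, sizes = [], []
for i in range(20):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# The reported size stays flat while appends fit in the over-allocated
# slack, then jumps when the list reallocates.
print(sizes)
```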

Historical data

Python 2.7 analysis, confirmed with guppy.hpy and sys.getsizeof:

Bytes  type        empty + scaling notes
24     int         NA
28     long        NA
37     str         + 1 byte per additional character
52     unicode     + 4 bytes per additional character
56     tuple       + 8 bytes per additional item
72     list        + 32 for first, 8 for each additional
232    set         sixth item increases to 744; 22nd, 2280; 86th, 8424
280    dict        sixth item increases to 1048; 22nd, 3352; 86th, 12568 *
120    func def    does not include default args and other attrs
64     class inst  has a __dict__ attr, same scaling as dict above
16     __slots__   class with slots has no dict, seems to store in 
                    mutable tuple-like structure.
904    class def   has a proxy __dict__ structure for class attrs
104    old class   makes sense, less stuff, has real dict though.

Note that dictionaries (but not sets) got a more compact representation in Python 3.6.

I think 8 bytes per additional item to reference makes a lot of sense on a 64-bit machine: those 8 bytes point to the place in memory where the contained item lives. The 4 bytes per character are a fixed width for unicode in Python 2, if I recall correctly, but in Python 3, str stores characters at a fixed width equal to the widest character in the string.
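The per-item pointer cost is easy to confirm (a sketch assuming a 64-bit build, where a pointer is 8 bytes):

```python
import sys

# Each tuple slot holds one pointer to the contained object.
empty = sys.getsizeof(())
one = sys.getsizeof((None,))
two = sys.getsizeof((None, None))
print(one - empty, two - one)  # 8 8 on a 64-bit build
```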

And for more on slots, see this answer.

A More Complete Function

We want a function that searches the elements in lists, tuples, sets, dicts, obj.__dict__'s, and obj.__slots__, as well as other things we may not have yet thought of.

We want to rely on gc.get_referents to do this search because it works at the C level (making it very fast). The downside is that get_referents can return redundant members, so we need to ensure we don't double count.

Classes, modules, and functions are singletons - they exist one time in memory. We're not so interested in their size, as there's not much we can do about them - they're a part of the program. So we'll avoid counting them if they happen to be referenced.

We're going to use a blacklist of types so we don't include the entire program in our size count.

import sys
from types import ModuleType, FunctionType
from gc import get_referents

# Custom objects know their class.
# Function objects seem to know way too much, including modules.
# Exclude modules as well.
BLACKLIST = type, ModuleType, FunctionType


def getsize(obj):
    """sum size of object & members."""
    if isinstance(obj, BLACKLIST):
        raise TypeError('getsize() does not take argument of type: '+ str(type(obj)))
    seen_ids = set()
    size = 0
    objects = [obj]
    while objects:
        need_referents = []
        for obj in objects:
            if not isinstance(obj, BLACKLIST) and id(obj) not in seen_ids:
                seen_ids.add(id(obj))
                size += sys.getsizeof(obj)
                need_referents.append(obj)
        objects = get_referents(*need_referents)
    return size

To contrast this with the following whitelisted function: most objects know how to traverse themselves for the purposes of garbage collection, which is approximately what we're looking for when we want to know how expensive certain objects are in memory. This functionality is used by gc.get_referents. However, this measure is going to be much more expansive in scope than we intended if we are not careful.

For example, functions know quite a lot about the modules they are created in.

Another point of contrast is that strings that are keys in dictionaries are usually interned so they are not duplicated. Checking for id(key) will also allow us to avoid counting duplicates, which we do in the next section. The blacklist solution skips counting keys that are strings altogether.
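The interning effect is visible directly; CPython interns identifier-like string literals automatically, and sys.intern forces it for any string:

```python
import sys

a = sys.intern("some_dict_key")
b = sys.intern("some_dict_key")
print(a is b)  # True: both names point at one shared object
```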

Whitelisted Types, Recursive visitor

To cover most of these types myself, instead of relying on the gc module, I wrote this recursive function to try to estimate the size of most Python objects, including most builtins, types in the collections module, and custom types (slotted and otherwise).

This sort of function gives much more fine-grained control over the types we're going to count for memory usage, but has the danger of leaving important types out:

import sys
from numbers import Number
from collections import deque
from collections.abc import Set, Mapping


ZERO_DEPTH_BASES = (str, bytes, Number, range, bytearray)


def getsize(obj_0):
    """Recursively iterate to sum size of object & members."""
    _seen_ids = set()
    def inner(obj):
        obj_id = id(obj)
        if obj_id in _seen_ids:
            return 0
        _seen_ids.add(obj_id)
        size = sys.getsizeof(obj)
        if isinstance(obj, ZERO_DEPTH_BASES):
            pass # bypass remaining control flow and return
        elif isinstance(obj, (tuple, list, Set, deque)):
            size += sum(inner(i) for i in obj)
        elif isinstance(obj, Mapping) or hasattr(obj, 'items'):
            size += sum(inner(k) + inner(v) for k, v in getattr(obj, 'items')())
        # Check for custom object instances - may subclass above too
        if hasattr(obj, '__dict__'):
            size += inner(vars(obj))
        if hasattr(obj, '__slots__'): # can have __slots__ with __dict__
            size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
        return size
    return inner(obj_0)

And I tested it rather casually (I should unittest it):

>>> getsize(['a', tuple('bcd'), Foo()])
344
>>> getsize(Foo())
16
>>> getsize(tuple('bcd'))
194
>>> getsize(['a', tuple('bcd'), Foo(), {'foo': 'bar', 'baz': 'bar'}])
752
>>> getsize({'foo': 'bar', 'baz': 'bar'})
400
>>> getsize({})
280
>>> getsize({'foo':'bar'})
360
>>> getsize('foo')
40
>>> class Bar():
...     def baz():
...         pass
>>> getsize(Bar())
352
>>> getsize(Bar().__dict__)
280
>>> sys.getsizeof(Bar())
72
>>> getsize(Bar.__dict__)
872
>>> sys.getsizeof(Bar.__dict__)
280

This implementation breaks down on class definitions and function definitions because we don't go after all of their attributes, but since they should only exist once in memory for the process, their size really doesn't matter too much.

Top answer
1 of 1
162

The short answer

You're getting the size of the class, not of an instance of the class. Call int to get the size of an instance:

>>> sys.getsizeof(int())
28

If that size still seems a little bit large, remember that a Python int is very different from an int in (for example) C. In Python, an int is a fully-fledged object. This means there's extra overhead.

Every Python object contains at least a refcount and a reference to the object's type in addition to other storage; on a 64-bit machine, just those two things alone take up 16 bytes! The int internals (as determined by the standard CPython implementation) have also changed over time, so that the amount of additional storage taken depends on your version.
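The bare minimum is visible with object itself, whose instances carry nothing but that refcount and type pointer (a sketch assuming 64-bit CPython):

```python
import struct
import sys

# A bare object is just the two-pointer header: refcount + type pointer.
print(sys.getsizeof(object()))  # 16 on a 64-bit build
print(2 * struct.calcsize("P")) # 16: two pointer-sized fields
```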

int objects in CPython 3.11

Integer objects are internally PyLongObject C types representing blocks of memory. The code that defines this type is spread across multiple files. Here are the relevant parts:

typedef struct _longobject PyLongObject;

struct _longobject {
    PyObject_VAR_HEAD
    digit ob_digit[1];
};

#define PyObject_VAR_HEAD      PyVarObject ob_base;

typedef struct {
    PyObject ob_base;
    Py_ssize_t ob_size; /* Number of items in variable part */
} PyVarObject;

typedef struct _object PyObject;

struct _object {
    _PyObject_HEAD_EXTRA
    union {
       Py_ssize_t ob_refcnt;
#if SIZEOF_VOID_P > 4
       PY_UINT32_T ob_refcnt_split[2];
#endif
    };
    PyTypeObject *ob_type;
};

/* _PyObject_HEAD_EXTRA is nothing on non-debug builds */
#  define _PyObject_HEAD_EXTRA

typedef uint32_t digit;

If we expand all the macros and replace all the typedef statements, this is the struct we end up with:

struct PyLongObject {
    Py_ssize_t ob_refcnt;
    PyTypeObject *ob_type;
    Py_ssize_t ob_size; /* Number of items in variable part */
    uint32_t ob_digit[1];
};

uint32_t means "unsigned 32-bit integer" and uint32_t ob_digit[1]; means an array of 32-bit integers is used to hold the (absolute) value of the integer. The "1" in "ob_digit[1]" means the array should be initialized with space for 1 element.

So we have the following bytes to store an integer object in Python (on a 64-bit system):

  • 8 bytes (64 bits, Py_ssize_t, signed) for ob_refcnt - the reference count
  • 8 bytes (64 bits, PyTypeObject*) for ob_type - the pointer to the int class itself
  • 8 bytes (64 bits, Py_ssize_t, signed) for ob_size - which stores how many 32-bit integers are used to store the integer

and finally a variable-length array (with at least 1 element) of

  • 4 bytes (32 bits) to store each part of the integer

The comment that accompanies this definition summarizes Python 3.11's representation of integers: zero is represented by an object with size (ob_size) zero, although the allocated digit array always holds at least one element. Negative numbers are represented by objects with a negative size attribute! The comment further explains that only 30 bits of each uint32_t digit are used for storing the value.

>>> sys.getsizeof(0)
28
>>> sys.getsizeof(1)
28
>>> sys.getsizeof(2 ** 30 - 1)
28
>>> sys.getsizeof(2 ** 30)
32
>>> sys.getsizeof(2 ** 60 - 1)
32
>>> sys.getsizeof(2 ** 60)
36
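Because the sign lives in ob_size rather than in the digit array, a number and its negation always report the same size (a quick check, assuming CPython):

```python
import sys

# The sign is stored in ob_size, not in the digits, so n and -n
# occupy the same number of bytes.
print(sys.getsizeof(5) == sys.getsizeof(-5))          # True
print(sys.getsizeof(2**40) == sys.getsizeof(-2**40))  # True
```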

On CPython 3.10 and older, sys.getsizeof(0) incorrectly returned 24 instead of 28; this was a bug that has since been fixed. Python 2 had a second, separate integer type which worked a bit differently, but was generally similar.

You will get slightly different results on a 32-bit system.
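Putting the pieces together, the observed sizes can be reproduced from the struct layout. This is a sketch assuming 64-bit CPython 3.11 or later; expected_int_size is a hypothetical helper written for illustration, not a CPython API:

```python
import sys

def expected_int_size(n, header=24, digit_bytes=4, digit_bits=30):
    # Hypothetical helper: 24-byte header (refcount + type pointer + ob_size)
    # plus one 4-byte digit per 30 bits of magnitude (minimum one digit).
    ndigits = max(1, -(-n.bit_length() // digit_bits))  # ceiling division
    return header + digit_bytes * ndigits

for n in (1, 2**30 - 1, 2**30, 2**60):
    print(n, expected_int_size(n), sys.getsizeof(n))
```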
