Use time.time() to measure the elapsed wall-clock time between two points:

import time

start = time.time()
print("hello")
end = time.time()
print(end - start)

This gives the execution time in seconds.


Another option, available since Python 3.3, is to use perf_counter or process_time, depending on your requirements. Before 3.3 it was recommended to use time.clock (thanks Amber); however, it is now deprecated:

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

Deprecated since version 3.3: The behaviour of this function depends on the platform: use perf_counter() or process_time() instead, depending on your requirements, to have a well defined behaviour.
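A minimal sketch contrasting the two replacements (the workload here is arbitrary):

```python
import time

start_wall = time.perf_counter()   # wall-clock: also counts sleep and I/O waits
start_cpu = time.process_time()    # CPU time of the current process only

total = sum(i * i for i in range(100_000))  # arbitrary workload

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"wall: {wall:.6f}s  cpu: {cpu:.6f}s")
```

For pure computation the two numbers are close; once the code sleeps or waits on I/O, perf_counter() keeps counting while process_time() does not.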

Answer from NPE on Stack Overflow
Python docs — timeit: Measure execution time of small code snippets
"The default timer, which is always time.perf_counter(), returns float seconds."
2 of 16 · 1222

Use timeit.default_timer instead of timeit.timeit. The former provides the best clock available on your platform and version of Python automatically:

from timeit import default_timer as timer

start = timer()
# ...
end = timer()
print(end - start) # Time in seconds, e.g. 5.38091952400282

On Python 2, timeit.default_timer is assigned to time.time() or time.clock() depending on the OS. On Python 3.3+, default_timer is time.perf_counter() on all platforms. See Python - time.clock() vs. time.time() - accuracy?

See also:

  • Optimizing code
  • How to optimize for speed
Discussions

time complexity - Is there any simple way to benchmark Python script? - Stack Overflow
Use command python -m line_profiler .lprof to print benchmark results. For example:

Timer unit: 1e-06 s
Total time: 0.0021632 s
File: test.py
Function: function at line 1
Line #  Hits  Time  Per Hit  % Time  Line Contents
==============================================================
...

(stackoverflow.com)
What are the best methods to test the time it takes Python to finish executing a program?
Timeit is fine; for more detail you can look into profiling:

import cProfile

def test_func():
    feet = float(input("Enter the measurement feet: "))
    return lambda feet: feet * 12

cProfile.run('test_func()')

(r/learnpython, May 8, 2021)
How to Benchmark Python Code
  • Benchmarking should never be based on "wall clock" time. As the article observes, other processes will affect your measurements. For 'python -m timeit', use the '-p'/'--process' option to instead measure the CPU time consumed by the code. Other processes can't affect that, and it will be more repeatable.
  • Benchmark measurements should never include the first run of the code. There are lots of "cold start" effects that don't affect 2nd-nth runs. You should also, separately, measure the 1st run, to understand the additional impact of cold starts.
  • Benchmarks should include I/O measurements. They should include actual counts of input and output primitives called (e.g., read(), write()).
  • Benchmarks should come as close as possible to measuring the environment where the code will actually run. For example, unless your environment messes with the Python garbage collector, don't turn the GC off just "because it's not part of my code".

(r/Python, November 25, 2022)
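The CPU-time measurement that the '-p'/'--process' option selects can also be reproduced from Python code by passing time.process_time as the timer; a small sketch (the timed statement is arbitrary):

```python
import time
import timeit

stmt = "sum(range(1000))"

# Default timer: wall-clock (time.perf_counter), what `python -m timeit` uses
wall = timeit.timeit(stmt, number=10_000)

# CPU-time timer: what `python -m timeit -p` (--process) selects
cpu = timeit.timeit(stmt, timer=time.process_time, number=10_000)

print(f"wall: {wall:.4f}s  cpu: {cpu:.4f}s")
```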
Benchmark timer & timeout decorator?

You definitely could do a timeout decorator, but you'll need to spawn a separate thread to either run the function or check the timeout. That might cause issues for programs that are not written to handle multi-threading, e.g. ones missing an if __name__ == '__main__' guard.

Anyway, here are a bunch: https://pythonexample.com/code/python-timeout-decorator-windows/

(r/learnpython, November 15, 2018)
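To make the idea concrete, here is a hedged sketch of such a decorator using a worker thread. All names are illustrative, and note the caveat in the comments: Python cannot kill the worker thread, so a timed-out function keeps running in the background.

```python
import threading
from functools import wraps

def timeout(seconds):
    """Hypothetical decorator: run func in a daemon thread, give up after `seconds`."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = {}

            def target():
                try:
                    result["value"] = func(*args, **kwargs)
                except BaseException as exc:  # surface errors raised in the worker
                    result["error"] = exc

            t = threading.Thread(target=target, daemon=True)
            t.start()
            t.join(seconds)
            if t.is_alive():
                # The thread itself cannot be stopped; it is merely abandoned here.
                raise TimeoutError(f"{func.__name__} exceeded {seconds}s")
            if "error" in result:
                raise result["error"]
            return result["value"]
        return wrapper
    return decorator

@timeout(1.0)
def quick():
    return 42

print(quick())  # prints 42
```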
Benchmarking Python code with timeit | Towards Data Science (March 5, 2025)
  • timer is the timer used; the default one is perf_counter(), and since it's currently considered the best built-in timer, in most situations it's best not to touch it.
  • repeat is the number of sessions to be run, each session consisting of number calls of stmt; you will get the results of all these sessions. Provided as an integer, it defaults to 5.
  • globals is a dictionary of globals to be provided; you can use it instead of setup or along with it.
For small and quick snippets, there is no need to change number and repeat, unless you want your benchmarks to provide very stable results.
Top answer · 1 of 15 · 164

Have a look at timeit, the Python profiler and pycallgraph. Also see the comment below by nikicc mentioning "SnakeViz", which gives you yet another visualisation of profiling data and can be helpful.

timeit

def test():
    """Stupid test function"""
    lst = []
    for i in range(100):
        lst.append(i)

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test"))

    # For Python>=3.5 one can also write:
    print(timeit.timeit("test()", globals=locals()))

Essentially, you can pass it Python code as a string parameter, and it will run it the specified number of times and print the execution time. The important bits from the docs:

timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000, globals=None) Create a Timer instance with the given statement, setup code and timer function and run its timeit method with number executions. The optional globals argument specifies a namespace in which to execute the code.

... and:

Timer.timeit(number=1000000) Time number executions of the main statement. This executes the setup statement once, and then returns the time it takes to execute the main statement a number of times, measured in seconds as a float. The argument is the number of times through the loop, defaulting to one million. The main statement, the setup statement and the timer function to be used are passed to the constructor.

Note: By default, timeit temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. The disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example:

timeit.Timer('for i in range(10): oct(i)', 'gc.enable()').timeit()

Profiling

Profiling will give you a much more detailed idea about what's going on. Here's the "instant example" from the official docs:

import cProfile
import re
cProfile.run('re.compile("foo|bar")')

Which will give you:

      197 function calls (192 primitive calls) in 0.002 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000    0.001    0.001 <string>:1(<module>)
     1    0.000    0.000    0.001    0.001 re.py:212(compile)
     1    0.000    0.000    0.001    0.001 re.py:268(_compile)
     1    0.000    0.000    0.000    0.000 sre_compile.py:172(_compile_charset)
     1    0.000    0.000    0.000    0.000 sre_compile.py:201(_optimize_charset)
     4    0.000    0.000    0.000    0.000 sre_compile.py:25(_identityfunction)
   3/1    0.000    0.000    0.000    0.000 sre_compile.py:33(_compile)

Both of these modules should give you an idea about where to look for bottlenecks.

Also, to get to grips with the output of profile, have a look at this post
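The profiler's output can also be sorted and trimmed programmatically with the standard-library pstats module; a small sketch (the profiled workload is arbitrary):

```python
import cProfile
import io
import pstats

pr = cProfile.Profile()
pr.enable()
total = sum(i * i for i in range(100_000))  # arbitrary workload to profile
pr.disable()

# Sort by cumulative time and print only the top 5 entries
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```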

pycallgraph

NOTE pycallgraph has been officially abandoned since Feb. 2018. As of Dec. 2020 it was still working on Python 3.6 though. As long as there are no core changes in how Python exposes the profiling API, it should remain a helpful tool.

This module uses Graphviz to create call graphs of your program, in which you can easily see which paths used up the most time by colour. You can either create them using the pycallgraph API, or using a packaged script:

pycallgraph graphviz -- ./mypythonscript.py

The overhead is quite considerable, though, so for already long-running processes, creating the graph can take some time.

2 of 15 · 47

I use a simple decorator to time a function:

import time
from functools import wraps

def st_time(func):
    """st decorator to calculate the total time of a func"""

    @wraps(func)  # preserve the wrapped function's __name__ and docstring
    def st_func(*args, **kwargs):
        t1 = time.time()
        r = func(*args, **kwargs)
        t2 = time.time()
        print("Function=%s, Time=%s" % (func.__name__, t2 - t1))
        return r

    return st_func
Python Timer Functions: Three Ways to Monitor Your Code – Real Python
December 8, 2024 - In this example, you made two calls to perf_counter() almost 4 seconds apart. You can confirm this by calculating the difference between the two outputs: 32315.26 - 32311.49 = 3.77. You can now add a Python timer to the example code:
Benchmarking & Profiling — TutorialsPoint
This has the function_timer() function inside it. Now, the nested function will grab the time before calling the passed in function. Then it waits for the function to return and grabs the end time.
How to Benchmark (Python) Code - Sebastian Witowski
Run Python alpine Docker container (a small, barebones image with Python). Mount the current folder inside the Docker container (so we can access the files we want to benchmark). Run the same timeit command as before. And the results seemed more consistent than without using Docker. Rerunning benchmarks multiple times, I was getting results with smaller deviations. I still had a deviation - some runs were slightly slower, and some were slightly faster. However, that was the case for short code examples (running under 1 second).
pythonBenchmark/timer.py at master · kamilgregorczyk/pythonBenchmark
class Timer():
    def __init__(self, name):
        print()
        print("Starting '{}' benchmark...".format(name))
        self.name = name
...
Author   kamilgregorczyk
Benchmark Python with time.time() - Super Fast Python
October 2, 2023 - We will then surround this statement with benchmarking code. Firstly, we will record the start time using the time.time() function. Afterward, we will record the end time, calculate the overall execution duration, and report the result. Tying this together, the complete example is listed below. Running the example first records the start time, the number of seconds since the epoch. Next, the Python ...
How To Benchmark Python Execution Time | Udacity
August 10, 2021 - Read on to learn how to identify where slowdowns are most likely to occur, measure Python execution time using timeit and write faster Python code.
Benchmark Python with time.process_time() - Super Fast Python
September 29, 2023 - We will then call this function, and surround the function call with benchmarking code. Firstly, we will record the start time using the time.process_time() function. Afterward, we will record the end time, calculate the overall execution duration, and report the result. Tying this together, the complete example is listed below. Running the example first records the start time, a number from an internal clock for the process. Next, the Python function is called, in this case creating a list of 100 million squared integers.
3 Simple ways to time Python code. Benchmark your Python code and figure… | by Stefan Delic | Aug, 2020 | Medium | Better Programming
August 4, 2020 - “This module provides a simple way to time small bits of Python code. It has both a Command-Line Interface as well as a callable one. It avoids a number of common traps for measuring execution times.” · I will provide some examples for all basic use cases.
Benchmark Utils - torch.utils.benchmark — PyTorch 2.11 documentation
class torch.utils.benchmark.Timer(stmt='pass', setup='pass', global_setup='', timer=<built-in function perf_counter>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=Language.PYTHON)[source]#
Python 101: An Intro to Benchmarking your code - Mouse Vs Python
January 31, 2020 - In this example, we use the class’s __init__ method to start our timer. The __enter__ method doesn’t need to do anything other then return itself. Lastly, the __exit__ method has all the juicy bits.
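A minimal sketch of the pattern that article describes; the class and attribute names here are illustrative, not the article's exact code:

```python
import time

class Timer:
    """Context-manager timer: __enter__ records the start, __exit__ the elapsed time."""
    def __enter__(self):
        self._start = time.perf_counter()
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.elapsed = time.perf_counter() - self._start
        return False  # don't suppress exceptions

with Timer() as t:
    sum(range(1_000_000))  # arbitrary workload

print(f"Elapsed: {t.elapsed:.6f}s")
```

Note one difference: the article starts its timer in __init__; starting it in __enter__ instead means the clock only runs inside the with block.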
Benchmark Python with timeit - Super Fast Python
October 11, 2023 - The timeit.timeit() benchmarks Python code and reports the duration in seconds. Create a Timer instance with the given statement, setup code and timer function and run its timeit() method with number executions.
Benchmark Python with timeit.timeit() - Super Fast Python
October 5, 2023 - The timeit.timeit() benchmarks Python code and reports the duration in seconds. Create a Timer instance with the given statement, setup code and timer function and run its timeit() method with number executions.
4 Ways to Benchmark Python Code - Super Fast Python
October 4, 2023 - It does take a little bit of reading of the API to understand, especially for some use cases, which can put off newer Python developers. Nevertheless, we can perform a simple benchmark of a custom function by calling the timeit.timeit() function and specifying the code or function to benchmark and the setup code required. For example, if we have a custom function defined in the current file, we must import this function as setup code for the timeit.timeit() function to be able to “see” it.
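A sketch of that visibility point, with my_func as a hypothetical custom function; passing globals=globals() is an alternative to the setup import described above:

```python
import timeit

def my_func():
    # hypothetical custom function to benchmark
    return [i ** 2 for i in range(1_000)]

# globals=globals() lets the timed statement "see" my_func;
# setup="from __main__ import my_func" achieves the same when run as a script.
elapsed = timeit.timeit("my_func()", globals=globals(), number=1_000)
print(f"{elapsed:.4f} s for 1000 calls")
```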
GitHub - belm0/perf-timer: An indispensable performance timer for Python
The smaller the timer's overhead, the less it interferes with the normal timing of your program, and the tighter the code loop it can be applied to. The values below represent the typical overhead of one observation, as measured on ye old laptop (2014 MacBook Air 11 1.7GHz i7).

$ pip install -r test-requirements.txt
$ python benchmarks/overhead.py
compare observers:
  PerfTimer(observer=AverageObserver): 1.5 µs
  PerfTimer(observer=StdDevObserver): 1.8 µs (default)
  PerfTimer(observer=HistogramObserver): 6.0 µs
compare types:
  PerfTimer(observer=StdDevObserver): 1.8 µs
  ThreadPerfTimer(observer=StdDevObserver): 9.8 µs
  TrioPerfTimer(observer=StdDevObserver): 4.8 µs
GitHub - Erotemic/timerit: Time and benchmark blocks of Python code. A powerful multi-line in-code alternative to timeit.
A common pattern is to create a single Timerit instance, then to repeatedly "reset" it with different labels to test a number of different algorithms. The labels assigned in this way will be incorporated into the report strings that the Timerit instance produces. The "Benchmark Recipe" below shows an example of this pattern.
Author   Erotemic
Measure time taken by program to execute in Python - GeeksforGeeks
July 15, 2025 —

import timeit

s = "from math import sqrt"
t = '''
def example():
    mylist = []
    for x in range(100):
        mylist.append(sqrt(x))
example()
'''
print(timeit.timeit(setup=s, stmt=t, number=10000))

... default_timer() function from the timeit module gives ...