Have a look at timeit, the Python profiler (cProfile) and pycallgraph. Also make sure to have a look at the comment below by nikicc mentioning "SnakeViz". It gives you yet another visualisation of profiling data which can be helpful.

timeit

def test():
    """Stupid test function"""
    lst = []
    for i in range(100):
        lst.append(i)

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test"))

    # For Python>=3.5 one can also write:
    print(timeit.timeit("test()", globals=locals()))

Essentially, you can pass it Python code as a string parameter, and it will run it the specified number of times and print the total execution time. The important bits from the docs:

timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000, globals=None)

Create a Timer instance with the given statement, setup code and timer function and run its timeit method with number executions. The optional globals argument specifies a namespace in which to execute the code.

... and:

Timer.timeit(number=1000000)

Time number executions of the main statement. This executes the setup statement once, and then returns the time it takes to execute the main statement a number of times, measured in seconds as a float. The argument is the number of times through the loop, defaulting to one million. The main statement, the setup statement and the timer function to be used are passed to the constructor.

Note: By default, timeit temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. The disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example:

timeit.Timer('for i in range(10): oct(i)', 'gc.enable()').timeit()
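
For quicker experiments you can lower number, and timeit.repeat() gives you several independent measurements. A small sketch, reusing the same stand-in test function as above:

```python
import timeit

def test():
    """Stand-in workload; any function you want to measure works the same way."""
    return [i for i in range(100)]

# timeit returns the TOTAL time for `number` executions, not a per-call time.
total = timeit.timeit("test()", globals=globals(), number=10_000)
print(f"{total / 10_000:.2e} s per call")

# repeat() runs the whole measurement several times; the minimum is usually
# the run least disturbed by other processes on the machine.
times = timeit.repeat("test()", globals=globals(), number=10_000, repeat=5)
print(f"best of 5: {min(times) / 10_000:.2e} s per call")
```

Taking the minimum of several repeats is the usual convention for micro-benchmarks, since external load can only make a run slower, never faster.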

Profiling

Profiling will give you a much more detailed idea about what's going on. Here's the "instant example" from the official docs:

import cProfile
import re
cProfile.run('re.compile("foo|bar")')

Which will give you:

      197 function calls (192 primitive calls) in 0.002 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000    0.001    0.001 <string>:1(<module>)
     1    0.000    0.000    0.001    0.001 re.py:212(compile)
     1    0.000    0.000    0.001    0.001 re.py:268(_compile)
     1    0.000    0.000    0.000    0.000 sre_compile.py:172(_compile_charset)
     1    0.000    0.000    0.000    0.000 sre_compile.py:201(_optimize_charset)
     4    0.000    0.000    0.000    0.000 sre_compile.py:25(_identityfunction)
   3/1    0.000    0.000    0.000    0.000 sre_compile.py:33(_compile)

Both of these modules should give you an idea about where to look for bottlenecks.

Also, to get to grips with the output of profile, have a look at this post.
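
To make the profiler output easier to digest, you can post-process it with pstats from the standard library, e.g. sorting by cumulative time and showing only the top entries. A minimal sketch, profiling the same re.compile call as above:

```python
import cProfile
import io
import pstats
import re

pr = cProfile.Profile()
pr.enable()
re.compile("foo|bar")   # the code under measurement
pr.disable()

# Sort by cumulative time and print only the 5 most expensive entries.
s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```

The same stats object can be re-sorted by "tottime", "ncalls" and other keys without re-running the profile.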

pycallgraph

NOTE: pycallgraph has been officially abandoned since Feb. 2018. As of Dec. 2020 it was still working on Python 3.6, and as long as there are no core changes in how Python exposes the profiling API, it should remain a helpful tool.

This module uses Graphviz to create call graphs like the following (rendered as an image in the original answer):

You can easily see from the colours which paths used up the most time. You can create the graphs either through the pycallgraph API, or using a bundled script:

pycallgraph graphviz -- ./mypythonscript.py

The overhead is quite considerable though, so for processes that already run for a long time, creating the graph can take a while.

Answer from exhuma on Stack Overflow
๐ŸŒ
Perfpy
perfpy.com
perfpy: Benchmark Python Snippets Online
We cannot provide a description for this page right now
๐ŸŒ
Python
speed.python.org
Python Speed Center
A performance analysis tool for software projects. It shows performance regresions and allows comparing different applications or implementations
Discussions

time complexity - Is there any simple way to benchmark Python script? - Stack Overflow
Usually I use shell command time. My purpose is to test if data is small, medium, large or very large set, how much time and memory usage will be. Any tools for Linux or just Python to do this? More on stackoverflow.com
๐ŸŒ stackoverflow.com
I made a free online tool to benchmark python code, including libraries. Feedback is welcome!
The tool is available at perfpy.com (or if from mobile, have a look at a prepared example such as https://perfpy.com/14 ). I was a bit annoyed that I had to always prove during merge request why I used some code to my colleagues. So I wrote a tool to test python snippets (including using libraries such as numpy and OpenCV) and share the benchmark results with a link. I've used this on some merge requests at work so that we base decisions on fact and document the micro benchmarks. It seems to work well so I wanted to share :-). What do you think? Can it be a useful little online tool? More on reddit.com
๐ŸŒ r/SideProject
11
43
April 22, 2021
How to Benchmark your Python Code
how does this tool handle asynchronous code in benchmarks? More on reddit.com
๐ŸŒ r/Python
3
32
November 17, 2025
web applications - Scriptable HTTP benchmark (preferable in Python) - Stack Overflow
I'm searching for a good way to stress test a web application. I'm searching for something like ab with a scriptable interface. Ideally, I want to define some tasks, that simulate different actions... More on stackoverflow.com
๐ŸŒ stackoverflow.com
๐ŸŒ
GitHub
github.com โ€บ python โ€บ pyperformance
GitHub - python/pyperformance: Python Performance Benchmark Suite ยท GitHub
The pyperformance project is intended to be an authoritative source of benchmarks for all Python implementations.
Starred by 1K users
Forked by 202 users
Languages ย  Python 84.0% | HTML 14.4% | Shell 1.6%
๐ŸŒ
Readthedocs
pyperformance.readthedocs.io
The Python Performance Benchmark Suite โ€” Python Performance Benchmark Suite 1.14.0 documentation
The pyperformance project is intended to be an authoritative source of benchmarks for all Python implementations.
Top answer
1 of 15
164

Have a look at timeit, the python profiler and pycallgraph. Also make sure to have a look at the comment below by nikicc mentioning "SnakeViz". It gives you yet another visualisation of profiling data which can be helpful.

timeit

def test():
    """Stupid test function"""
    lst = []
    for i in range(100):
        lst.append(i)

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test"))

    # For Python>=3.5 one can also write:
    print(timeit.timeit("test()", globals=locals()))

Essentially, you can pass it python code as a string parameter, and it will run in the specified amount of times and prints the execution time. The important bits from the docs:

timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000, globals=None) Create a Timer instance with the given statement, setup code and timer function and run its timeit method with number executions. The optional globals argument specifies a namespace in which to execute the code.

... and:

Timer.timeit(number=1000000) Time number executions of the main statement. This executes the setup statement once, and then returns the time it takes to execute the main statement a number of times, measured in seconds as a float. The argument is the number of times through the loop, defaulting to one million. The main statement, the setup statement and the timer function to be used are passed to the constructor.

Note: By default, timeit temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. This disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example:

timeit.Timer('for i in xrange(10): oct(i)', 'gc.enable()').timeit()

Profiling

Profiling will give you a much more detailed idea about what's going on. Here's the "instant example" from the official docs:

import cProfile
import re
cProfile.run('re.compile("foo|bar")')

Which will give you:

      197 function calls (192 primitive calls) in 0.002 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000    0.001    0.001 <string>:1(<module>)
     1    0.000    0.000    0.001    0.001 re.py:212(compile)
     1    0.000    0.000    0.001    0.001 re.py:268(_compile)
     1    0.000    0.000    0.000    0.000 sre_compile.py:172(_compile_charset)
     1    0.000    0.000    0.000    0.000 sre_compile.py:201(_optimize_charset)
     4    0.000    0.000    0.000    0.000 sre_compile.py:25(_identityfunction)
   3/1    0.000    0.000    0.000    0.000 sre_compile.py:33(_compile)

Both of these modules should give you an idea about where to look for bottlenecks.

Also, to get to grips with the output of profile, have a look at this post

pycallgraph

NOTE pycallgraph has been officially abandoned since Feb. 2018. As of Dec. 2020 it was still working on Python 3.6 though. As long as there are no core changes in how python exposes the profiling API it should remain a helpful tool though.

This module uses graphviz to create callgraphs like the following:

You can easily see which paths used up the most time by colour. You can either create them using the pycallgraph API, or using a packaged script:

pycallgraph graphviz -- ./mypythonscript.py

The overhead is quite considerable though. So for already long-running processes, creating the graph can take some time.

2 of 15 (score 47)

I use a simple decorator to time the function:

import time
from functools import wraps

def st_time(func):
    """Decorator that prints the wall-clock time taken by each call."""

    @wraps(func)  # preserve the wrapped function's name and docstring
    def st_func(*args, **kwargs):
        t1 = time.perf_counter()  # monotonic clock, better for intervals than time.time()
        r = func(*args, **kwargs)
        t2 = time.perf_counter()
        print("Function=%s, Time=%s" % (func.__name__, t2 - t1))
        return r

    return st_func
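
Applied to a toy workload, such a timing decorator works like this (the decorator is repeated so the sketch is self-contained; build_list is a made-up example function):

```python
import time
from functools import wraps

def st_time(func):
    """Print the wall-clock time of each call to the decorated function."""
    @wraps(func)  # keep the original function name in the printed report
    def st_func(*args, **kwargs):
        t1 = time.perf_counter()
        r = func(*args, **kwargs)
        t2 = time.perf_counter()
        print("Function=%s, Time=%s" % (func.__name__, t2 - t1))
        return r
    return st_func

@st_time
def build_list(n):
    """Illustrative workload for the example."""
    return [i * i for i in range(n)]

result = build_list(100_000)
```

Because of functools.wraps, introspection on the decorated function (its __name__, docstring, help()) still sees the original, which matters once you apply the decorator across a codebase.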
๐ŸŒ
Switowski
switowski.com โ€บ blog โ€บ how-to-benchmark-python-code
How to Benchmark (Python) Code - Sebastian Witowski
The easiest way to measure how long it takes to run some code is to use the timeit module. You can write python -m timeit your_code(), and Python will print out how long it took to run whatever your_code() does.
๐ŸŒ
GitHub
gist.github.com โ€บ apalala โ€บ 3fbbeb5305584d2abe05
A simple Python benchmark ยท GitHub
A simple Python benchmark. GitHub Gist: instantly share code, notes, and snippets.
Find elsewhere
๐ŸŒ
PyPI
pypi.org โ€บ project โ€บ pytest-benchmark
pytest-benchmark ยท PyPI
A ``pytest`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.
      ยป pip install pytest-benchmark
    
Published ย  Nov 09, 2025
Version ย  5.2.3
๐ŸŒ
TutorialsPoint
tutorialspoint.com โ€บ concurrency_in_python โ€บ concurrency_in_python_benchmarking_and_profiling.htm
Benchmarking & Profiling
In other words, we can understand it as breaking the big and hard problem into series of smaller and a bit easier problems for optimizing them. In Python, we have a by default module for benchmarking which is called timeit.
๐ŸŒ
Reddit
reddit.com โ€บ r/sideproject โ€บ i made a free online tool to benchmark python code, including libraries. feedback is welcome!
r/SideProject on Reddit: I made a free online tool to benchmark python code, including libraries. Feedback is welcome!
April 22, 2021 - So I wrote a tool to test python snippets (including using libraries such as numpy and OpenCV) and share the benchmark results with a link. I've used this on some merge requests at work so that we base decisions on fact and document the micro benchmarks. It seems to work well so I wanted to share :-). What do you think? Can it be a useful little online tool?
๐ŸŒ
Super Fast Python
superfastpython.com โ€บ home โ€บ tutorials โ€บ 4 ways to benchmark python code
4 Ways to Benchmark Python Code - Super Fast Python
October 4, 2023 - You can benchmark Python code using the Python standard library. Code can be benchmarked manually using the time module. The timeit module provides functions for automatically benchmarking code.
๐ŸŒ
Readthedocs
pyperformance.readthedocs.io โ€บ benchmarks.html
Benchmarks โ€” Python Performance Benchmark Suite 1.14.0 documentation
Measure the performance of the python path/to/hg help command using pyperf.Runner.bench_command(), where python is sys.executable and path/to/hg is the Mercurial program installed in a virtual environmnent. The bench_command() redirects stdout and stderr into /dev/null. See the Mercurial project. Parse the pyperformance/benchmarks/data/w3_tr_html5.html HTML file (132 KB) using html5lib.
๐ŸŒ
Bencher
bencher.dev โ€บ learn โ€บ benchmarking โ€บ python โ€บ pytest-benchmark
How to benchmark Python code with pytest-benchmark | Bencher - Continuous Benchmarking
November 3, 2024 - Bencher has a built-in adapters, so itโ€™s easy to integrate into CI. After following the Quick Start guide, Iโ€™m able to run my benchmarks and track them with Bencher. $ bencher run --adapter python_pytest --file results.json "pytest --benchmark-json results.json game.py"
๐ŸŒ
Reddit
reddit.com โ€บ r/python โ€บ how to benchmark your python code
r/Python on Reddit: How to Benchmark your Python Code
November 17, 2025 -

Hi!

https://codspeed.io/docs/guides/how-to-benchmark-python-code

I just wrote a guide on how to test the performance of your Python code with benchmarks. It 's a good place to start if you never did it!

Happy to answer any question!

๐ŸŒ
GitHub
github.com โ€บ tonybaloney โ€บ rich-bench
GitHub - tonybaloney/rich-bench: A little benchmarking tool for Python ยท GitHub
A little benchmarking tool for Python. Contribute to tonybaloney/rich-bench development by creating an account on GitHub.
Starred by 207 users
Forked by 7 users
Languages ย  Python

Top answer (1 of 4, score 11) to "Scriptable HTTP benchmark (preferable in Python)" on Stack Overflow:

If you're familiar with the Python requests package, Locust makes it very easy to write load tests.

http://locust.io/

I've used it to write all of our perf tests.

2 of 4
3

You can also look into these tools:

  1. palb (Python Apache-Like Benchmark Tool) - HTTP benchmark tool whose command-line interface resembles ab's.
    It lacks ab's advanced features, but it supports multiple URLs (from arguments, files, stdin, and Python code).

  2. Multi-Mechanize - Performance Test Framework in Python
    Multi-Mechanize is an open source framework for performance and load testing.

    • Runs concurrent Python scripts to generate load (synthetic transactions) against a remote site or service.
    • Can be used to generate workload against any remote API accessible from Python.
    • Test output reports are saved as HTML or JMeter-compatible XML.

  3. Pylot (Python Load Tester) - Web Performance Tool
    Pylot is a free open source tool for testing performance and scalability of web services.
    It runs HTTP load tests, which are useful for capacity planning, benchmarking, analysis, and system tuning.
    Pylot generates concurrent load (HTTP Requests), verifies server responses, and produces reports with metrics.
    Test suites are executed and monitored from a GUI or shell/console.
    ( Pylot on GoogleCode )

  4. The Grinder
    Default script language is Jython.
    Pretty compact how-to guide.

  5. Tsung
    Maybe a bit unusual on first use, but really good for stress testing.
    Step-by-step guide.

+1 for locust.io in answer above.
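
If you only need something quick and scriptable without an extra dependency, the core idea behind these tools can be sketched with the standard library alone. The URL in the commented usage is a placeholder, and summarize/fetch are names made up for this example:

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> float:
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()  # drain the body so the full response time is measured
    return time.perf_counter() - start

def summarize(latencies: list) -> dict:
    """Aggregate raw latencies into the usual load-test metrics."""
    ordered = sorted(latencies)
    return {
        "requests": len(ordered),
        "mean_s": statistics.mean(ordered),
        "p95_s": ordered[int(0.95 * (len(ordered) - 1))],
    }

# Example run against a server you control (placeholder URL):
#   with ThreadPoolExecutor(max_workers=10) as pool:
#       latencies = list(pool.map(fetch, ["http://localhost:8000/"] * 100))
#   print(summarize(latencies))
```

This lacks ramp-up, think time, and scenario scripting, which is exactly what Locust, Multi-Mechanize and the other tools above add on top.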

๐ŸŒ
Readthedocs
pyperf.readthedocs.io
Python pyperf module โ€” pyperf 2.10.0 documentation
The Python pyperf module is a toolkit to write, run and analyze benchmarks.
๐ŸŒ
GitHub
github.com โ€บ numfocus โ€บ python-benchmarks
GitHub - numfocus/python-benchmarks: A set of benchmark problems and implementations for Python ยท GitHub
A set of benchmark problems and implementations for Python - numfocus/python-benchmarks
Starred by 67 users
Forked by 24 users
Languages ย  CSS 58.5% | JavaScript 24.1% | Python 17.4%
๐ŸŒ
Pybenchmarks
pybenchmarks.org
Python Interpreters Benchmarks
Benchmarks of Python interpreters and compilers.