You can use the pytest_generate_tests hook for parametrizing with dynamic data.

First, create a fixture that the test function can use to receive the test data.

# in conftest.py
import pytest

@pytest.fixture()
def test_data(request):
    return request.param

Now you can parametrize it so that it is called for each test data set:

# in conftest.py
def pytest_generate_tests(metafunc):
    # Only parametrize tests that actually request the test_data fixture
    if 'test_data' in metafunc.fixturenames:
        testdata = get_test_data('test_data.json')
        metafunc.parametrize('test_data', testdata, indirect=True)

The testdata passed to parametrize has to be a list, so the input data needs a little reshaping before being passed in. I modified the get_test_data function a bit to do that.
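The reshaping step can be checked in isolation. Here is a minimal sketch with inline data standing in for the file (the field names are made up for illustration):

```python
import json

# Stand-in for json.load(file): a tiny document with the two expected keys
data = json.loads('{"valid_data": [{"id": "1234"}], "invalid_data": [{"id": "9999"}]}')

# Tag each argument dict with 1 (valid) or 0 (invalid), then concatenate
valid_data = [(item, 1) for item in data['valid_data']]
invalid_data = [(item, 0) for item in data['invalid_data']]
testdata = valid_data + invalid_data

print(testdata)  # [({'id': '1234'}, 1), ({'id': '9999'}, 0)]
```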

# in conftest.py
import json
import os

def get_test_data(filename):
    folder_path = os.path.abspath(os.path.dirname(__file__))
    folder = os.path.join(folder_path, 'TestData')
    jsonfile = os.path.join(folder, filename)
    with open(jsonfile) as file:
        data = json.load(file)

    valid_data = [(item, 1) for item in data['valid_data']]
    invalid_data = [(item, 0) for item in data['invalid_data']]

    # data is a list of tuples: the first element holds the arguments for the
    # API call, and the second flags whether the item came from the valid_data
    # set (1) or the invalid_data set (0). The flag can drive the appropriate
    # assert statements on the API response.
    data = valid_data + invalid_data

    return data
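For reference, get_test_data above assumes a JSON file shaped roughly like this. The valid_data and invalid_data keys come from the code; the fields inside each entry are hypothetical:

```json
{
    "valid_data": [
        {"id": "1234", "name": "John"}
    ],
    "invalid_data": [
        {"id": "1234", "name": "Mary"}
    ]
}
```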

And now your test function could look like:

# in test_API.py
def test_user_info(test_data):
    args, is_valid = test_data
    response = database_api.get_user_info(args)

    # is_valid == 1 means a 200 response is expected; == 0 means a 400 is expected
    expected = 200 if is_valid == 1 else 400
    assert response.status_code == expected

Answer from yeniv on Stack Overflow (top answer, 1 of 2, score 5)
Answer 2 of 2 (score 1)

I just wrote a package called parametrize_from_file to solve exactly this problem. Here's how it would work for this example:

import parametrize_from_file

# If the JSON file has the same base name as the test module (e.g. "test_api.py"
# and "test_api.json"), this parameter isn't needed.
path_to_json_file = ...

@parametrize_from_file(path_to_json_file, 'valid_data')
def test_valid_data(id, name):
    request = dict(id=id, name=name)
    response = database_api.get_user_info(request)
    assert response.status_code == 200

@parametrize_from_file(path_to_json_file, 'invalid_data')
def test_invalid_data(id, name):
    request = dict(id=id, name=name)
    response = database_api.get_user_info(request)
    assert response.status_code == 400
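If you want to try it, the package can be installed from PyPI (assuming the distribution name matches the import name):

```shell
pip install parametrize_from_file
```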

You could simplify this code a bit by reorganizing the JSON file slightly:

# test_api.json
{
    "test_id_name_requests": [
       {
           "request": {
               "id": "1234",
               "name": "John"
           },
           "status_code": 200
       },
       {
           "request": {
               "id": "2234",
               "name": "Mary"
           },
           "status_code": 200
       },
       {
           "request": {
               "id": "3234",
               "name": "Kenny"
           },
           "status_code": 200
       },
       {
           "request": {
               "id": "1234",
               "name": "Mary"
           },
           "status_code": 400
       },
       {
           "request": {
               "id": "2234",
               "name": "Kenny"
           },
           "status_code": 400
       },
       {
           "request": {
               "id": "3234",
               "name": "John"
           },
           "status_code": 400
       }
    ]
}

With this file, only one test function is needed and no arguments need to be given to the @parametrize_from_file decorator:

# test_api.py
import parametrize_from_file

@parametrize_from_file
def test_id_name_requests(request, status_code):
    response = database_api.get_user_info(request)
    assert response.status_code == status_code

One caveat: pytest reserves the argument name request for its built-in fixture, so parametrizing a parameter called request may be rejected with a "reserved word" error; renaming the JSON key and the test parameter (e.g. to payload) sidesteps the clash.
Top answer (1 of 1, score 2)

It would appear that the pytest-json project is defunct. The developer/owner of pytest-json-report says as much (under Related Tools at this link):

pytest-json has some great features but appears to be unmaintained. I borrowed some ideas and test cases from there.

The pytest-json-report project handles exactly the case that I'm requiring: capturing stdout from a subprocess and putting it into the JSON report. A crude example of doing so follows:

import re
import subprocess as sp
import sys

def specialAssertHandler(output, assertMessage):
    # Because pytest automatically captures stdout/stderr, printing is all that's
    # needed: when the report is generated, this text appears in a field named
    # "stdout"
    print(output)
    return assertMessage

def test_subProcessStdoutCapture():
    # NOTE: if your version of Python 3 is sufficiently mature, add text=True here
    # and the decode() below becomes unnecessary
    proc = sp.Popen(['find', '.', '-name', '*.json'], stdout=sp.PIPE)

    if sys.version_info[0] == 3:
        # On the Ubuntu I was using, Python 3's proc.stdout.read() returns a
        # binary object, not a string, so it has to be decoded
        output = proc.stdout.read().decode()
    else:
        # The other version of Python I'm using is 2.7.15; it's exceedingly
        # frustrating that the language changed so much between 2 and 3. In 2,
        # the output is already a string object
        output = proc.stdout.read()

    m = re.search('some string', output)
    assert m is not None, specialAssertHandler(output, "did not find 'some string' in output")

With the above, using pytest-json-report, the full output of the subprocess is captured by the infrastructure and placed into the aforementioned report. An excerpt showing this is below:

        {
            "nodeid": "expirment_test.py::test_stdout",
            "lineno": 25,
            "outcome": "failed",
            "keywords": [
                "PyTest",
                "test_stdout",
                "expirment_test.py"
            ],
            "setup": {
                "duration": 0.0002694129943847656,
                "outcome": "passed"
            },
            "call": {
                "duration": 0.02718186378479004,
                "outcome": "failed",
                "crash": {
                    "path": "/home/afalanga/devel/PyTest/expirment_test.py",
                    "lineno": 32,
                    "message": "AssertionError: Expected to find always\nassert None is not None"
                },
                "traceback": [
                    {
                        "path": "expirment_test.py",
                        "lineno": 32,
                        "message": "AssertionError"
                    }
                ],
                "stdout": "./.report.json\n./report.json\n./report1.json\n./report2.json\n./simple_test.json\n./testing_addition.json\n\n",
                "longrepr": "..."
            },
            "teardown": {
                "duration": 0.0004875659942626953,
                "outcome": "passed"
            }
        }

The longrepr field holds the full text of the test failure, but in the interest of brevity it is reduced to an ellipsis here. The crash field carries the value of assertMessage from my example. This shows that it is possible to place such messages into the report at the point of occurrence instead of in post-processing.

I think it may be possible to handle this "cleverly" using the hook I referenced in my original question, pytest_exception_interact. If I find that it is, I'll update this answer with a demonstration.
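For reference, the report excerpted above is produced by installing the plugin and running pytest with its flags (per the pytest-json-report documentation):

```shell
pip install pytest-json-report
pytest --json-report --json-report-file=report.json
```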
