🌐
Pytest with Eric
pytest-with-eric.com › pytest-best-practices › pytest-read-json
5 Easy Ways To Read JSON Input Data In Pytest | Pytest with Eric
May 29, 2023 - We use the built-in parser to parse the --input-json flag passed to pytest, specify a default value, and lastly use that as a fixture.
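A sketch of that pattern (the --input-json option name follows the article; the fixture body, default path, and helper name are assumptions):

```python
# conftest.py -- sketch: pass a JSON file to pytest via a custom CLI flag
import json

import pytest


def load_input(path):
    # Read and parse the JSON input file
    with open(path) as f:
        return json.load(f)


def pytest_addoption(parser):
    parser.addoption("--input-json", action="store", default="input.json",
                     help="Path to the JSON input data file")


@pytest.fixture
def input_data(request):
    # Tests that declare `input_data` receive the parsed file contents
    return load_input(request.config.getoption("--input-json"))
```

A test then just takes input_data as an argument and indexes into the parsed dict.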
🌐
GitHub
github.com › ericsalesdeandrade › pytest-read-json-example
GitHub - ericsalesdeandrade/pytest-read-json-example: 5 Easy Ways To Read JSON Input Data In Pytest
This project explains 5 simple ways to read JSON data in Pytest.
Forked by 3 users
Languages   Python 100.0%
🌐
PyPI
pypi.org › project › pytest-json-report
pytest-json-report · PyPI
A pytest plugin to report test results as JSON files
      » pip install pytest-json-report
    
Published   Mar 15, 2022
Version   1.5.0
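After running `pytest --json-report --json-report-file=report.json`, the resulting file can be post-processed; a sketch (the summary/tests keys follow the plugin's documented schema, but treat the layout as an assumption and check your own report file):

```python
import json


def summarize(path):
    # Pull totals and failed test ids out of a pytest-json-report file
    with open(path) as f:
        report = json.load(f)
    failed = [t["nodeid"] for t in report.get("tests", [])
              if t["outcome"] == "failed"]
    return {"total": report["summary"]["total"], "failed": failed}
```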
🌐
Sonar Community
community.sonarsource.com › sonarqube cloud
Ability to parse JSON pytest coverage to obtain context information - SonarQube Cloud - Sonar Community
September 30, 2024 - ALM used: GitHub. CI system used: GitHub. Scanner command used when applicable (private details masked): default. Languages of the repository: Python. I am looking to find out if there is a way to have SonarCloud parse either the python coverage.py JSON or HTML output. We are looking to pull in the "context" information, which tells us which tests actually hit which lines of the source code.
🌐
Stack Overflow
stackoverflow.com › questions › 77909025 › passing-json-string-as-argument-to-pytest
python - Passing JSON string as argument to Pytest - Stack Overflow
def pytest_addoption(parser):
    parser.addoption(
        '--vm-name',
        required=True,
        metavar='vm-name',
        help='Name of the virtual machine the tests execute on',
        type=str,
        dest='vm-name'
    )
    parser.addoption(
        '--vm-ip',
        required=True,
        metavar='vm-ip',
        help='IP address of the virtual machine the tests execute on',
        type=str,
        dest='vm-ip'
    )
    parser.addoption(
        '--credentials',
        required=True,
        metavar='file',
        help='Path of the JSON file containing credentials',
        type=load_credentials,
        dest='credentials'
    )

load_credentials is a function that accepts a JSON string and initializes a JSON data class with the given string.
🌐
automation hacks
automationhacks.io › home › 2020 › 12 › 25
Python API test automation framework (Part 5) Working with JSON and JsonPath - automation hacks
December 25, 2020 - Also, we use json.load() and give it a file to read from directly and return a python object that we could … Alright, so this helps us get a python object. Here is how we can use this in our test. Below is the complete test file. I know it looks huge 😏 Let’s unpack the changes.

@pytest.fixture
def create_data():
    payload = read_file('create_person.json')
    random_no = random.randint(0, 1000)
    last_name = f'Olabini{random_no}'
    payload['lname'] = last_name
    yield payload

def test_person_can_be_added_with_a_json_template(create_data):
    create_person_with_unique_last_name(create_data)
    response = r
🌐
Qabash
qabash.com › practical-json-patterns-api-to-assertions-in-pytest
Practical JSON Patterns: API to Assertions in PyTest - QAbash.com
August 13, 2025 - ❌ No retry mechanism for flaky APIs. Fix: Use PyTest retries with exponential backoff for unstable endpoints. Want to start using JSON patterns effectively? Follow this: ... Use json.loads() for parsing, then use assert statements or helper ...
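The json.loads-then-assert pattern the article describes, as a minimal helper (the helper name and messages are illustrative):

```python
import json


def assert_json_field(payload: str, key: str, expected):
    # Parse the raw JSON string, then assert on the resulting dict
    data = json.loads(payload)
    assert data[key] == expected, f"{key}={data[key]!r}, expected {expected!r}"


assert_json_field('{"status": "ok", "count": 3}', "status", "ok")
```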
🌐
GitHub
github.com › pytest-dev › pytest › issues › 2306
How to parse py.test output? · Issue #2306 · pytest-dev/pytest
February 11, 2017 - What is the recommended way to parse py.test output? At first I thought --junitxml was the way to go but it seems like that is incompatible with the xdist plugin and also not documented much. Then I thought pytest-json looked great, but ...
Author   emin63
🌐
Stack Overflow
stackoverflow.com › questions › 73428927 › pytest-execution-result-in-json-format
python - Pytest execution result in json format? - Stack Overflow
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
  <testsuite name="pytest" errors="0" failures="1" skipped="0" tests="4" time="0.397" timestamp="2022-08-20T23:07:01.073263" hostname="HP">
    <testcase classname="test_example" name="test_example[456-456]" time="0.005" />
    <testcase classname="test_example" name="test_example[-999--999]" time="0.006" />
    <testcase classname="test_example" name="test_example[0-]" time="0.005" />
    <testcase classname="test_example" name="test_example[0.9-0]" time="0.007">
      <failure message="ValueError: invalid literal for int() with base 10: '0.9'">monkeypatch = &lt;_p
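JUnit XML like the snippet above can be parsed with the standard library; a sketch that extracts the failed cases (attribute names follow the output shown):

```python
import xml.etree.ElementTree as ET


def failed_cases(junit_xml: str):
    # Each <testcase> holding a <failure> child is a failed test
    root = ET.fromstring(junit_xml)
    return [(tc.get("classname"), tc.get("name"), tc.find("failure").get("message"))
            for tc in root.iter("testcase")
            if tc.find("failure") is not None]
```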
🌐
PyPI
pypi.org › project › pytest-json
pytest-json · PyPI
A formatted example of the jsonapi output can be found in example_jsonapi.json. Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request. Distributed under the terms of the MIT license, “pytest-json” is free and open source software.
      » pip install pytest-json
    
Published   Jan 18, 2016
Version   0.4.0
Top answer
1 of 1
6

You can access the path of the currently executed module via request.node.fspath and build the path to the config.json relative to it. request is a fixture provided by pytest. Here's an example based on the directory structure you provided.

# main/conftest.py
import json
import pathlib
import pytest


@pytest.fixture(autouse=True)
def read_config(request):
    file = pathlib.Path(request.node.fspath)
    print('current file:', file)
    config = file.with_name('config.json')
    print('current config:', config)
    with config.open() as fp:
        contents = json.load(fp)
    print('config contents:', contents)

If you copy the code above to your conftest.py and run the tests with -s, you should get an output similar to this:

$ pytest -sv
=============================== test session starts ===============================
platform linux -- Python 3.6.5, pytest-3.4.1, py-1.5.3, pluggy-0.6.0 -- /data/gentoo64/usr/bin/python3.6
cachedir: .pytest_cache
rootdir: /data/gentoo64/tmp/so-50329629, inifile:
collected 2 items

main/project1/test_one.py::test_spam
current file: /data/gentoo64/tmp/so-50329629/main/project1/test_one.py
current config: /data/gentoo64/tmp/so-50329629/main/project1/config.json
config contents: {'name': 'spam'}
PASSED
main/project2/test_two.py::test_eggs
current file: /data/gentoo64/tmp/so-50329629/main/project2/test_two.py
current config: /data/gentoo64/tmp/so-50329629/main/project2/config.json
config contents: {'name': 'eggs'}
PASSED

============================= 2 passed in 0.08 seconds ============================

Use parsed config values

You can access the parsed JSON data by returning it in the fixture and using the fixture as one of the test arguments. I slightly modified the fixture from above so it returns the parsed data and removed the autouse=True:

@pytest.fixture
def json_config(request):
    file = pathlib.Path(request.node.fspath.strpath)
    config = file.with_name('config.json')
    with config.open() as fp:
        return json.load(fp)

Now simply use the fixture name in the test arguments; the value will be what the fixture returns. For example:

def test_config_has_foo_set_to_bar(json_config):
    assert json_config['foo'] == 'bar'
Top answer
1 of 2
5

You can use pytest_generate_tests hook for parametrizing with dynamic data.

First create a fixture that can be called by the test function to get the test data.

# in conftest.py
@pytest.fixture()
def test_data(request):
    return request.param

Now you can parametrize it so that it is called for each test data set:

#in conftest.py
def pytest_generate_tests(metafunc):
    testdata = get_test_data('test_data.json')
    metafunc.parametrize('test_data', testdata, indirect=True)

The testdata passed to the parametrize function has to be a list, so the input data needs some reshaping before being passed in. I modified the get_test_data function accordingly.

#in conftest.py
def get_test_data(filename):
    folder_path = os.path.abspath(os.path.dirname(__file__))
    folder = os.path.join(folder_path, 'TestData')
    jsonfile = os.path.join(folder, filename)
    with open(jsonfile) as file:
        data = json.load(file)

    valid_data = [(item, 1) for item in data['valid_data']]
    invalid_data = [(item, 0) for item in data['invalid_data']]

    # data below is a list of tuples, with first element in the tuple being the 
    # arguments for the API call and second element specifies if this is a test 
    # from valid_data set or invalid_data. This can be used for putting in the
    # appropriate assert statements for the API response.
    data = valid_data + invalid_data

    return data

And now your test function could look like:

#in test_API.py
def test_(test_data):
    response = database_api.get_user_info(test_data[0])

    # Add appropriate asserts. 
    # test_data[1] == 1 means 200 response should have been received
    # test_data[1] == 0 means 400 response should have been received

2 of 2
1

I just wrote a package called parametrize_from_file to solve exactly this problem. Here's how it would work for this example:

import parametrize_from_file

# If the JSON file has the same base name as the test module (e.g. "test_api.py"
# and "test_api.json"), this parameter isn't needed.
path_to_json_file = ...

@parametrize_from_file(path_to_json_file, 'valid_data')
def test_valid_data(id, name):
    request = dict(id=id, name=name)
    response = database_api.get_user_info(request)
    assert response.status_code == 200

@parametrize_from_file(path_to_json_file, 'invalid_data')
def test_invalid_data(id, name):
    request = dict(id=id, name=name)
    response = database_api.get_user_info(request)
    assert response.status_code == 400

You could simplify this code a bit by reorganizing the JSON file slightly:

# test_api.json
{
    "test_id_name_requests": [
       {
           "request": {
               "id": "1234",
               "name": "John"
           },
           "status_code": 200
       },
       {
           "request": {
               "id": "2234",
               "name": "Mary"
           },
           "status_code": 200
       },
       {
           "request": {
               "id": "3234",
               "name": "Kenny"
           },
           "status_code": 200
       },
       {
           "request": {
               "id": "1234",
               "name": "Mary"
           },
           "status_code": 400
       },
       {
           "request": {
               "id": "2234",
               "name": "Kenny"
           },
           "status_code": 400
       },
       {
           "request": {
               "id": "3234",
               "name": "John"
           },
           "status_code": 400
       }
    ]
}

With this file, only one test function is needed and no arguments need to be given to the @parametrize_from_file decorator:

# test_api.py
import parametrize_from_file

@parametrize_from_file
def test_id_name_requests(request, status_code):
    response = database_api.get_user_info(request)
    assert response.status_code == status_code
🌐
Blogger
hanxue-it.blogspot.com › 2017 › 10 › pytest-testing-and-comparing-json-response-using-pytest-flask.html
Hanxue and IT: Pytest: Testing and Comparing JSON Response Using Pytest-Flask
October 18, 2017 - Let's say we have a Flask API endpoint that returns this JSON { "message": "Staff name and password pair not match", "errors": { ...
🌐
Medium
medium.com › grammofy › testing-your-python-api-app-with-json-schema-52677fe73351
Testing Your Python API App with JSON Schema | by Paul Götze | Grammofy | Medium
August 21, 2020 - Let’s suppose we have a simple JSON response for a user endpoint GET /users/:id: Here is an example how you would naively test this response by checking the presence of the properties (client would be a pytest fixture, e.g.
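The "naive" presence-check the article starts from might look like this before moving to a JSON Schema validator (the payload and keys are illustrative stand-ins for the client fixture's response):

```python
def check_user_shape(response_json: dict):
    # Naive property checks: key presence plus basic types
    for key in ("id", "name"):
        assert key in response_json, f"missing {key}"
    assert isinstance(response_json["id"], int)
    assert isinstance(response_json["name"], str)


check_user_shape({"id": 1, "name": "Ada"})
```

The article's point is that a declarative schema passed to jsonschema.validate() replaces this per-field boilerplate.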
Top answer
1 of 1
1

I think the most fitting approach for you would be parametrizing the fixture:

import json
import pathlib
import pytest


lines = pathlib.Path('data.json').read_text().split('\n')

@pytest.fixture(params=lines)
def tweet(request):
    line = request.param
    return json.loads(line)


def hashtags(t):
    return ' '.join([h['text'] for h in t['entities']['hashtags']])


def test_hashtag(tweet):
    assert hashtags(tweet) == 'StandWithLouisiana'

This will invoke test_hashtag once with each returned value of tweet:

$ pytest -v
...
test_spam.py::test_hashtag[{"contributors":null,"coordinates":null,"created_at":"Sat Aug 20 01:00:12 +0000 2016","entities":{"hashtags":[{"indices":[97,116],"text":"StandWithLouisiana"}]}}]
test_spam.py::test_hashtag[{"contributors":null,"coordinates":null,"created_at":"Sat Aug 20 01:01:35 +0000 2016","entities":{"hashtags":[]}}]
...

Edit: extending the fixture to provide the expected value

You can include the expected value into tweet fixture parameters, which are then passed through to the test unchanged. In the below example, the expected tags are zipped with the file lines to build pairs of the form (line, tag). The tweet fixture loads the line into a dictionary, passing the tag through, so the tweet argument in the test becomes a pair of values.

import json
import pathlib
import pytest


lines = pathlib.Path('data.json').read_text().split('\n')
expected_tags = ['StandWithLouisiana', '']

@pytest.fixture(params=zip(lines, expected_tags),
                ids=tuple(repr(tag) for tag in expected_tags))
def tweet(request):
    line, tag = request.param
    return (json.loads(line), tag)


def hashtags(t):
    return ' '.join([h['text'] for h in t['entities']['hashtags']])


def test_hashtag(tweet):
    data, tag = tweet
    assert hashtags(data) == tag

The test run yields two tests as before:

test_spam.py::test_hashtag['StandWithLouisiana'] PASSED
test_spam.py::test_hashtag[''] PASSED

Edit 2: using indirect parametrization

Another and probably cleaner approach would be to let the tweet fixture handle only parsing the tweet from the raw string, moving the parametrization to the test itself. I'm using indirect parametrization to pass the raw line to the tweet fixture here:

import json
import pathlib
import pytest


lines = pathlib.Path('data.json').read_text().split('\n')
expected_tags = ['StandWithLouisiana', '']

@pytest.fixture
def tweet(request):
    line = request.param
    return json.loads(line)


def hashtags(t):
    return ' '.join([h['text'] for h in t['entities']['hashtags']])


@pytest.mark.parametrize('tweet, tag', 
                         zip(lines, expected_tags),
                         ids=tuple(repr(tag) for tag in expected_tags),
                         indirect=('tweet',))
def test_hashtag(tweet, tag):
    assert hashtags(tweet) == tag

The test run now also yields two tests:

test_spam.py::test_hashtag['StandWithLouisiana'] PASSED
test_spam.py::test_hashtag[''] PASSED
🌐
Serge-m
serge-m.github.io › posts › testing json responses in flask rest apps with pytest
Testing json responses in Flask REST apps with pytest | sergem's personal public notebook
November 27, 2016 - Testing is an essential part of the software development process. Unfortunately, best practices for Python are not as well established as, for example, in the Java world. Here I try to explain how to test Flask-based web applications. We want to test endpoint behaviour including status codes and parameters ...
Top answer
1 of 2
1

@hoefling answered my question with quite an easy solution of using the following hooks in conftest:

def pytest_assertrepr_compare(op, left, right):...
def pytest_assertion_pass(item, lineno, orig, expl):...

According to the pytest documentation, I have to add enable_assertion_pass_hook=true to pytest.ini and delete stale .pyc files, but other than that it works like a charm. Now I get the left and right comparisons with the operator used (pytest_assertrepr_compare) and whether the test passed (pytest_assertion_pass).
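A minimal sketch of what those conftest.py hooks might look like (the record format and output filename are assumptions; pytest_assertion_pass only fires with enable_assertion_pass_hook = true in pytest.ini):

```python
# conftest.py -- sketch: log assertion comparisons to a JSON file
import json

RECORDS = []


def pytest_assertrepr_compare(op, left, right):
    # Called for failing comparisons; returning None keeps pytest's default output
    RECORDS.append({"op": op, "left": repr(left), "right": repr(right),
                    "passed": False})


def pytest_assertion_pass(item, lineno, orig, expl):
    # Called for each passing assert (opt-in via enable_assertion_pass_hook)
    RECORDS.append({"test": item.nodeid, "line": lineno, "assert": orig,
                    "passed": True})


def pytest_sessionfinish(session):
    # Dump everything collected during the run
    with open("assertions.json", "w") as f:
        json.dump(RECORDS, f, indent=2)
```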

2 of 2
1

You could try this:

import inspect
import json
from typing import Callable

import pytest


@pytest.fixture(scope="function")
def send_to_test() -> Callable:
    return lambda x: (x ** 2, x ** 3)

# Helper function to run and record a test check; results accumulate in data.json
def run_test(test_func, variable, expected_value, actual_value):
    # Load previously recorded results, if any
    try:
        with open("./data.json", "r") as f:
            data = json.load(f)
    except FileNotFoundError:
        data = []

    data.append(
        {
            "func": test_func,
            "variable": variable,
            "expected_value": expected_value,
            "actual_value": actual_value,
        }
    )
    # Write the record out *before* asserting, so failures are captured too
    with open("./data.json", "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=4)

    assert actual_value == expected_value


# Reformatted tests
def test_me_1(send_to_test):
    # setup
    x, y = send_to_test(3)

    # tests
    run_test(
        test_func=inspect.stack()[0][3],
        variable="x",
        expected_value=3 ** 2,
        actual_value=x,
    )
    run_test(
        test_func=inspect.stack()[0][3],
        variable="y",
        expected_value=3 ** 3,
        actual_value=y,
    )
    run_test(
        test_func=inspect.stack()[0][3],
        variable="y",
        expected_value=3 ** 2,
        actual_value=y,
    )

And so, when you run pytest, a new data.json file is created with the expected content:

[
    {
        "func": "test_me_1",
        "variable": "x",
        "expected_value": 9,
        "actual_value": 9
    },
    {
        "func": "test_me_1",
        "variable": "y",
        "expected_value": 27,
        "actual_value": 27
    },
    {
        "func": "test_me_1",
        "variable": "y",
        "expected_value": 9,
        "actual_value": 27
    }
]