
Python for test automation: pytest

Pytest is a popular testing framework for Python that simplifies writing and executing tests. It supports unit, functional, and integration testing, and its simple, intuitive syntax lets us focus on the tests themselves rather than on boilerplate code. Pytest provides plain assert-based tests, fixtures, parametrized tests, and automatic test discovery, and it integrates seamlessly with other testing tools and libraries, making it adaptable to different testing needs. Overall, Pytest is widely regarded for its ease of use, flexibility, and powerful capabilities, making it a common choice among Python developers for writing automated tests.

Pytest also supports creating (or using) plugins: modular extensions that enhance the functionality of the framework. Plugins can introduce custom fixtures, markers, hooks, command-line options, and other functionality, streamlining the testing process and catering to specific testing needs. Because reusable functionality is encapsulated in modular units, plugins promote code reuse and can easily be shared across projects. With a vibrant ecosystem of community-contributed plugins, developers can leverage existing plugins or create their own to address various testing challenges and to integrate Pytest with other tools and libraries in the Python ecosystem.

For example, the pytest-playwright plugin is an extension that integrates Playwright, a browser automation library, with Pytest. It provides a set of fixtures and utilities that make it easy for us to write and execute browser-based tests using Pytest.

Test functions

In Pytest, defining test functions is straightforward and follows a simple convention. Test functions are ordinary Python functions whose names begin with the prefix "test_"; by default, Pytest looks for them in files named test_*.py or *_test.py. These functions contain assertions that validate the behavior of the code being tested. Pytest automatically identifies and runs them when we execute test suites, by simply running the command pytest in the command line. We can organize test functions within Python modules or packages, and Pytest will discover and execute them based on its search patterns. Test functions can also accept parameters using fixtures, which are special functions provided by Pytest to set up test resources and dependencies. We will dive deeper into fixtures in a later section.

As stated above, for Pytest to find the test cases in our Python modules or packages, we add the "test_" prefix to our functions:

def test_request_should_return_200_ok_when_provided_with_valid_data():
    ...

Structure (AAA)

The AAA (Arrange, Act, Assert) structure is a widely adopted pattern for organizing and writing tests. In this structure, each test case is divided into three distinct sections:

  1. Arrange: In the "Arrange" phase, tests set up preconditions or context necessary for the test to run. This includes initializing objects, setting up dependencies, and environment preparation for the test case.
  2. Act: The "Act" phase involves executing the specific action or behavior that is being tested. This typically involves calling a method or function under test with the arranged setup or input data.
  3. Assert: In the "Assert" phase, the test verifies the expected outcome or behavior resulting from the action taken in the previous phase. This involves making assertions to validate that the actual outcome matches the expected outcome.

By following the AAA structure, tests become more organized, readable, and maintainable. It helps in clearly separating the setup, execution, and verification aspects of a test case, making it easier to understand the purpose of each test and diagnose failures when they occur.

Here is a simple example of structuring a test:

import requests  # requests is a Python package for handling http requests


def test_get_post_by_id_should_return_200_ok_with_post_data_when_provided_with_valid_id():
    # Arrange
    data_id = 1
    url = f"https://jsonplaceholder.typicode.com/posts/{data_id}"

    # Act
    response = requests.get(url)

    # Assert
    assert response.status_code == 200
    data = response.json()
    assert data["id"] == data_id

In a real project, we could use another Python package, pydantic, for more thorough validation of the response data.
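As a minimal sketch of that idea (assuming pydantic is installed, and assuming the field names match what jsonplaceholder returns for a post), the assertion phase could validate the whole response body at once:

```python
import requests  # third-party: pip install requests
from pydantic import BaseModel  # third-party: pip install pydantic


class Post(BaseModel):
    # Field names assumed to match jsonplaceholder's post schema
    userId: int
    id: int
    title: str
    body: str


def test_get_post_by_id_should_return_valid_post_when_provided_with_valid_id():
    # Arrange
    data_id = 1
    url = f"https://jsonplaceholder.typicode.com/posts/{data_id}"

    # Act
    response = requests.get(url)

    # Assert
    assert response.status_code == 200
    # Post(...) raises a ValidationError if the response shape does not match
    post = Post(**response.json())
    assert post.id == data_id
```

This way a single line checks the types and presence of every field, instead of asserting them one by one.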

Naming

The name of a test function should give clear and concise information about the specific scenario or behavior being tested. It is recommended to use descriptive names that indicate the input data, the expected outcome, or the behavior under test. This makes it easy to understand the purpose of each test case and to quickly identify which aspects of the codebase are being tested. As a result, test names can become very long, in fact so long that they can cause linting errors (line-too-long). In my opinion, line-length linting errors should be ignored for test names: descriptiveness matters more than brevity here.

Assertion

Assertions in Pytest are done using the built-in assert statement. The keyword is followed by an expression that is evaluated for truthiness, optionally followed by a comma and a message; the message is shown if the assertion fails, i.e. if the expression evaluates to False.

def add(a: int, b: int) -> int:
    return a + b


elements = [0, 1, 2]


assert add(6, 2) == 8  # PASS: 6 + 2 = 8
assert 4 % 2 == 0, "Number was not even"  # PASS: 4 is an even number
assert 3 in elements   # FAIL: the number 3 is not in elements

Test classes

In Pytest, we can also define tests as class methods instead of standalone functions. The class name must begin with the prefix "Test" (and the class must not define an __init__ method), and the test methods follow the same naming convention as function-based tests, i.e. a "test_" prefix. Additionally, each test method receives self as its first argument, which is a reference to the instance the method is called on. More about classes in a later section.

class TestMathOperations:
    def test_addition(self):
        assert 2 + 2 == 4

    def test_subtraction(self):
        assert 5 - 3 == 2

    def test_multiplication(self):
        assert 3 * 4 == 12

    def test_division(self):
        assert 10 / 2 == 5

Tags

Pytest tags, also known as markers, are a feature of the Pytest testing framework that allows us to categorize and organize test functions or test classes. With Pytest tags, we can assign one or more descriptive labels or tags to individual test functions or classes, indicating specific characteristics or attributes of the tests. These tags can represent various aspects such as test priority, test type, test environment, or any custom categorization that is relevant to the testing scenario. Pytest tags provide a flexible mechanism for filtering and selecting subsets of tests to run based on specific criteria, enabling us to execute tests selectively and efficiently.

To use Pytest tags, developers annotate test functions or test classes with markers using the @pytest.mark decorator (more about decorators in a later section). For example, @pytest.mark.smoke can be used to mark a test as a smoke test, while @pytest.mark.slow can mark a test as slow-running. Developers can define custom markers simply by using them with @pytest.mark; to avoid warnings about unknown markers, they should be registered in a configuration file called pytest.ini at the base of the project. Once the test functions or classes are tagged, Pytest's command-line options or configuration settings can be used to run tests selectively based on the specified tags. For example, the -m command-line option allows developers to choose which tests to run based on the assigned markers.
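As a sketch of how this could look (the marker names smoke and slow come from the text above, but the test names and bodies are illustrative):

```python
import pytest

# To silence "unknown marker" warnings, register the markers in pytest.ini
# at the project root:
#
# [pytest]
# markers =
#     smoke: quick sanity checks
#     slow: slow-running tests


@pytest.mark.smoke
def test_homepage_should_load():  # hypothetical example test
    assert True


@pytest.mark.slow
def test_full_report_should_generate():  # hypothetical example test
    assert True
```

Running pytest -m smoke would then execute only the first test, and pytest -m "not slow" would skip the second.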

Fixtures (test setup and teardown)

Pytest fixtures are functions that provide a reliable and reusable way to set up test resources and dependencies before executing test functions. They are commonly used to initialize objects, establish connections, or prepare the environment needed for testing. Pytest fixtures are defined using the @pytest.fixture decorator, and they can accept parameters to customize their behavior based on test requirements. Fixtures can be scoped to different levels, such as function scope, module scope, class scope, or session scope, allowing us to control the lifetime and visibility of the fixture across different tests. By using fixtures, we can encapsulate common setup and teardown logic, promote code reuse, and enhance the maintainability and readability of test code.

There are two ways to take a fixture into use: we can place the fixture function's name as an argument to our test function, as in the example below; or we can pass autouse=True to the fixture decorator to tell Pytest to run the fixture automatically based on its scope. By default the scope of a fixture is "function", i.e. it will run for each test separately.

import pytest


@pytest.fixture
def my_fixture():
    # runs before your test (setup)
    yield  # keyword to 'yield' back execution to our test
    # runs after your test (teardown)


@pytest.fixture(autouse=True)
def my_autouse_fixture():
    ...


def test_something_should_happen_when_this_is_done(my_fixture):
    pass

As a concrete example of the execution order, consider the following:

import pytest


@pytest.fixture
def my_fixture():
    print("before test")
    yield
    print("after test")


def test_something_should_happen_when_this_is_done(my_fixture):
    print("during test")
    assert False

The previous fixture example prints out

test_something.py::test_something_should_happen_when_this_is_done

before test
during test
FAILED
after test

This shows us in which order things happen when executing our test. First, we see "before test" which is the part of my_fixture that is run before the fixture yields control back to the test, then we see "during test" which is printed during actual test execution, and finally we see "after test" which is run after the test function is complete, and control returns back to the fixture.
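To illustrate the scopes mentioned above, here is a sketch of a module-scoped fixture (the db_connection name and the dictionary standing in for a real connection object are illustrative):

```python
import pytest


@pytest.fixture(scope="module")
def db_connection():
    # hypothetical resource; a real fixture might open a database connection
    connection = {"open": True}
    yield connection  # every test in this module shares the same object
    connection["open"] = False  # teardown runs once, after the module's last test


def test_first_query_should_succeed_when_connection_is_open(db_connection):
    assert db_connection["open"]


def test_second_query_should_succeed_when_connection_is_open(db_connection):
    # same connection object as in the previous test, thanks to scope="module"
    assert db_connection["open"]
```

With scope="function" (the default), the setup and teardown would instead run once per test.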

Parametrization

Test parametrization in Pytest is a feature that allows us to run the same test function with multiple sets of input data or parameters. This enables efficient testing of a function or code snippet against various input values or scenarios, reducing the need for redundant test functions. Parametrization is achieved using the @pytest.mark.parametrize decorator, where we specify the names of input parameters and the corresponding values or data sets to be tested. Pytest then automatically generates and executes separate test cases for each combination of input parameters, providing detailed feedback for each test case.

Here is an example of how to use parametrization. The first argument, argnames, is a list or tuple of argument names that will be available to the test function (here number); the second argument, argvalues, is a list of tuples with the values we want to pass to the argnames per test run. Here we first pass 1 to number, and on another run we pass -10 to number.

import pytest


@pytest.mark.parametrize(
    ("number",),  # tuple of argument names
    [             # list of argument value tuples; it can contain as many entries as needed
        (1,),     # argument 'number' will be 1 for the first test run
        (-10,),   # argument 'number' will be -10 for the second test run
    ],
)
def test_number_should_be_less_than_10(number: int):
    assert number < 10
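Parametrization becomes especially useful with multiple argument names. As a sketch (the addition scenario is illustrative), each tuple in argvalues then supplies one value per name:

```python
import pytest


@pytest.mark.parametrize(
    ("a", "b", "expected"),  # three argument names
    [
        (1, 2, 3),   # first run: 1 + 2 should be 3
        (2, 3, 5),   # second run: 2 + 3 should be 5
        (-1, 1, 0),  # third run: -1 + 1 should be 0
    ],
)
def test_addition_should_return_sum_when_provided_with_two_integers(a: int, b: int, expected: int):
    assert a + b == expected
```

Pytest generates three separate test cases from this single function, one per tuple, and reports each result individually.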

Usage

Pytest provides a command-line interface that allows us to execute tests conveniently from the terminal. To use Pytest in the command line, we navigate to the directory containing test files and simply type pytest followed by optional arguments or options to customize the test execution. Pytest automatically discovers and runs all test functions and classes within the directory and its subdirectories, reporting the results to the terminal. We can use various command-line options to customize the test run, such as specifying specific test files or directories to run, filtering tests based on markers or keywords, enabling verbose output for detailed test information, and configuring test coverage reporting.

Here are the most important optional arguments we can give to Pytest:

-m <tag filter> is a way to filter tests based on tags, i.e. markers placed as decorators on test functions or methods. For example: pytest -m web runs all tests marked with @pytest.mark.web, and pytest -m "not web" runs the tests that are not. We can also combine multiple tags with and or or, for example pytest -m "web and api". Be careful about the boolean logic when combining tags with and, or, and not.

-k <filter> is a way to filter tests based on their name; the filter argument can contain a full test name or a part of one, and all matching tests will be run. As with the marker filter, we can combine multiple filters with and, or, and not.

Additionally, a list of space-separated test file paths can be given as the final argument to run only the tests in specific files. For example: pytest tests/my_api_tests.py tests/my_browser_tests.py