This article is part of our Content Hub. For more in-depth resources, check out our content hub on pytest tutorials.

In Python development, creating automated tests is only one part of ensuring high-quality software. Equally important is understanding how much of your code is actually covered by tests.

pytest coverage provides insights into untested code, helping developers ensure that critical paths are thoroughly checked.

In this guide, we’ll dive into pytest code coverage, and show how to generate detailed pytest coverage reports.

Overview

pytest coverage refers to the percentage of your Python code that is executed while running automated tests.

Why are pytest coverage reports important?

Pytest coverage reports are important because they highlight which parts of your Python code are tested, uncover gaps in your test suite, and help ensure reliable, high-quality software.

How do pytest-cov and coverage.py tools help in pytest coverage reports?

pytest-cov is a pytest plugin for measuring test coverage, while coverage.py is the underlying Python tool that tracks code execution and generates detailed coverage reports.

How to Generate pytest Code Coverage Report?

You can generate a pytest code coverage report by installing pytest-cov and running:
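A minimal sketch, with my_app as a placeholder for your package or source directory:

```shell
# Install the pytest-cov plugin, then run the test suite with coverage enabled.
pip install pytest-cov
pytest --cov=my_app
```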

What is pytest Coverage?

pytest coverage refers to the practice of measuring how much of your Python code is exercised by automated tests written with pytest.

pytest code coverage is a key metric in software testing because it helps identify untested parts of your codebase, reduces bugs, and ensures robust software.

High coverage doesn’t guarantee bug-free software, but it highlights areas that need more attention, reducing the risk of hidden issues.

When you integrate coverage analysis with pytest using pytest-cov, you can:

  • Determine which lines of code are executed during tests.
  • Identify gaps in your test suite.
  • Generate detailed pytest coverage reports in multiple formats (terminal, HTML, XML, etc.).

Example:
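A minimal, hypothetical example: a small function with a branch, and a pytest-style test that exercises only one path. Running pytest with coverage against it would report the untested branch.

```python
# calculator.py (hypothetical module whose coverage we want to measure)
def divide(a, b):
    if b == 0:
        # This branch stays uncovered unless a test passes b == 0.
        raise ValueError("cannot divide by zero")
    return a / b

# test_calculator.py (pytest discovers plain assert-based test functions)
def test_divide():
    # Covers the happy path only; the b == 0 branch remains untested.
    assert divide(10, 2) == 5
```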

What is the pytest Coverage Report?

A pytest coverage report shows which parts of your Python code are executed during tests. It highlights tested and untested lines, measures overall coverage percentages, and tracks branch coverage.

These reports help developers identify gaps in their test suites and improve test effectiveness.

You can generate coverage reports in multiple formats, such as terminal, HTML, or XML, making it easy to review coverage locally or integrate it into CI/CD pipelines.

Why pytest for Code Coverage Reports?

pytest has plugins and supporting modules for evaluating code coverage. Here are some reasons to use pytest for code coverage report generation:

  • It provides a straightforward approach for calculating coverage with a few lines of code.
  • It gives comprehensive statistics of your code coverage score.
  • It features plugins that can help you prettify pytest code coverage reports.
  • It features a command-line utility for executing code coverage.
  • It supports distributed and localized testing.

Using tools like pytest-cov and coverage.py, teams can generate comprehensive pytest coverage reports, analyze which parts of the codebase need additional testing, and track coverage trends over time.

Here are some of the most used pytest code coverage tools.

pytest-cov

pytest-cov is a code coverage plugin and command line utility for pytest. It also provides extended support for coverage.py.

Like coverage.py, you can use pytest-cov to generate HTML or XML reports in pytest and view a pretty code coverage analysis via the browser. Although using pytest-cov involves running a simple command via the terminal, the terminal command becomes longer and more complex as you add more coverage options.
For instance, generating a command-line-only report is as simple as running the following command:
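One plausible invocation, with my_app as a placeholder for your package or source directory:

```shell
# Run the tests and print a coverage summary in the terminal.
pytest --cov=my_app
```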

The result of the pytest --cov command is shown below:

But generating an HTML report requires an additional command:
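One plausible form, using the html:DIR report syntax so the output lands in the coverage_re directory mentioned below (my_app is a placeholder):

```shell
# Run the tests and write an HTML coverage report into coverage_re/.
pytest --cov=my_app --cov-report=html:coverage_re
```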

Where coverage_re is the coverage report directory. Below is the report when viewed via the browser:

Here is a list of widely used command-line options for --cov:

--cov option            Description
--cov=PATH              Measure coverage for a filesystem path (multi-allowed).
--cov-report=TYPE       Specify the type of report to generate: html, xml, annotate, term, term-missing, or lcov.
--cov-config=PATH       Config file for coverage. Default: .coveragerc
--no-cov-on-fail        Do not report coverage if the test run fails. Default: False
--no-cov                Disable coverage reporting completely (useful for debuggers). Default: False
--cov-reset             Reset cov sources accumulated in options so far. Mostly useful for scripts and configuration files.
--cov-fail-under=MIN    Fail if the total coverage is less than MIN.
--cov-append            Do not delete coverage data, but append to the current data. Default: False
--cov-branch            Enable branch coverage.
--cov-context           Choose the method for setting the dynamic context.

coverage.py

The coverage.py library is one of the most-used pytest code coverage reporting tools. It’s a simple Python tool for producing comprehensive pytest code coverage reports in table format. You can use it as a command-line utility or plug it into your test script as an API to generate coverage analysis.

The API option is recommended if you need to prevent repeating a bunch of terminal commands each time you want to run the coverage analysis.

While its command-line utility might require a few tweaks to keep reports from getting garbled, the API option provides clean, pre-styled HTML reports you can view via a web browser.

Below is the command for executing code coverage with pytest using coverage.py:
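A common invocation (coverage.py then relies on pytest's default test discovery):

```shell
# Run the pytest suite under coverage measurement, then print the summary.
coverage run -m pytest
coverage report
```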

The above command runs all pytest test suites with names starting with “test.”
All you need to do to generate reports while using its API is to specify a destination folder in your test code. It then overwrites the folder’s HTML report in subsequent tests.
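A minimal sketch of the coverage.py API workflow, assuming the third-party coverage package is installed (pip install coverage); the run_tests() function here is a stand-in for your real test code:

```python
import coverage

cov = coverage.Coverage()
cov.start()  # begin tracing executed lines

def run_tests():
    # Stand-in for importing and running your pytest suite.
    assert sum([1, 2, 3]) == 6

run_tests()

cov.stop()   # stop tracing
cov.save()   # write the .coverage data file
# Writes the HTML report into the given folder, overwriting it on
# subsequent runs.
cov.html_report(directory="coverage_reports")
```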

Explore the pytest-cov GitHub repository to access the official plugin for creating detailed pytest code coverage reports, complete with setup guides, examples, and configuration tips to boost your Python testing workflow.


Demo: How to Generate pytest Code Coverage Report?

Generating a coverage report provides both a summary and detailed insights, helping improve overall code quality and test effectiveness.

The demonstration for generating pytest code coverage includes a test for the following:

  • A plain name tweaker class example to show why you may not achieve 100% code coverage and how you can use its result to extend your test reach.
  • Code coverage demonstration for registration steps using the LambdaTest eCommerce Playground, executed on the cloud grid.

We’ll use Python’s coverage module, coverage.py, to demonstrate the code coverage for all the tests in this tutorial on the pytest code coverage report. So you need to install the coverage.py module since it’s third-party. You’ll also need to install the Selenium WebDriver (to access WebElements) and python-dotenv (to mask your secret keys).

If you are new to Selenium WebDriver, check out our guide on what is Selenium WebDriver.

Create a requirements.txt file in your project root directory and insert the following packages:

Filename: requirements.txt
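Based on the packages named above (version pins omitted), the file might look like this:

```
pytest
coverage
selenium
python-dotenv
```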

Next, install the packages using pip:
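For example:

```shell
# Install everything listed in requirements.txt.
pip install -r requirements.txt
```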

As mentioned, coverage.py lets you generate and write coverage reports inside an HTML file and view it in the browser. You’ll see how to do this later.

Name Tweaker Class Test

We’ll start by considering an example test for the name-tweaking class to demonstrate why you may not achieve 100% code coverage. You’ll also see how to extend your coverage.

The name tweaking class contains two methods. One is for concatenating a new and an old name, while the other is for changing an existing name.

Filename: plain_tests/plain_tests.py
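A hypothetical reconstruction of the class described above; only the method name test_should_changeName and the "LambdaTest" check come from the walkthrough, the rest is an assumption:

```python
class NameTweaker:
    def concat_name(self, old_name, new_name):
        # Concatenate when the expected name is provided...
        if old_name == "LambdaTest":
            return old_name + " " + new_name
        else:
            # ...otherwise fall back to the new name alone. This is the
            # branch an incomplete test suite would leave uncovered.
            return new_name

    def test_should_changeName(self, name, new_name):
        # Replace an existing name outright.
        return new_name
```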

To execute the test and get code coverage below 100%, we’ll start by omitting a test case for the else statement in the first method and ignoring the second method (test_should_changeName) entirely.

Filename: run_coverage/name_tweak_coverage.py

Run the test by running the following command:

Go into the coverage_reports folder and open index.html in your browser. The test yields 69% coverage (as shown below) since it omits those two cases.

Let’s extend the code coverage.

Although we deliberately ignored the second method in that class, it’s easy to forget to include a case for the else statement, since we focused only on validating the true condition. Including a test case for the negative path (where the condition returns false) extends the code coverage.

So what if we add a test case for the second method and another one that assumes that the provided name in the first method isn’t LambdaTest?

The code coverage yields 100% since we’re considering all possible scenarios for the class under test.

So, a more inclusive test looks like this:

Filename: run_coverage/name_tweak_coverage.py

Adding the will_not_tweak_names variable covers the else condition in the test. Additionally, calling test_should_changeName from the class instance captures the second method in that class.

Extending the coverage this way generates 100% code coverage, as seen below:

Code Coverage on the Cloud Grid

We’ll use the previous code structure to implement the code coverage on the cloud grid. Here, we’ll write test cases for the registration actions on the LambdaTest eCommerce Playground. Then, we’ll perform pytest testing on cloud-based testing platforms like LambdaTest.

Tests will be run based on the registration actions without providing some parameters. This might involve failure to fill in some form fields or submitting an invalid email address.

LambdaTest is an AI-native test execution platform that enables you to conduct Python web automation on a dependable and scalable online Selenium Grid infrastructure, spanning over 3000 real web browsers and operating systems.

Test Scenario 1:

Submit the registration form with an invalid email address and missing fields.

Test Scenario 2:

Submit the form with all fields filled appropriately (successful registration).

We’ll also see how adding the missing parameters can extend the code coverage. Here is our Selenium automation script:

Filename: setup/setup.py

Code Walkthrough:

First, import the Selenium WebDriver to configure the test driver. Get your grid username and access key (passed as LT_GRID_USERNAME and LT_GRID_ACCESS_KEY, respectively) from Settings > Account Settings > Password & Security.

desired_caps is a dictionary of the desired capabilities for the test suite. It specifies your username, access key, browser name, browser version, build name, and the platform that runs your driver.

Next is the gridURL. We access this using the access key and username declared earlier. We then pass this URL and the desired capability into the driver attribute inside the __init__ function.

Write a testSetup() method that initiates the test suite. It uses the implicitly_wait() function to pause for the DOM to load elements. It then uses the maximize_window() method to expand the chosen browser frame.

The tearDown() method stops the test instance and closes the browser using the quit() method.
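The capability and grid-URL setup described above can be sketched as follows. The capability keys and values here are assumptions for illustration; only the environment-variable names (LT_GRID_USERNAME, LT_GRID_ACCESS_KEY) and the LambdaTest hub address come from the walkthrough:

```python
import os

# Credentials are read from environment variables, as described above.
LT_USERNAME = os.getenv("LT_GRID_USERNAME", "your_username")
LT_ACCESS_KEY = os.getenv("LT_GRID_ACCESS_KEY", "your_access_key")

# Hypothetical desired capabilities for the test suite.
desired_caps = {
    "browserName": "Chrome",
    "version": "latest",
    "platform": "Windows 10",
    "build": "pytest-coverage-demo",
    "name": "registration-test",
}

# The grid URL embeds the username and access key.
grid_url = f"https://{LT_USERNAME}:{LT_ACCESS_KEY}@hub.lambdatest.com/wd/hub"

# A Remote driver would then be created along these lines:
# driver = webdriver.Remote(command_executor=grid_url, options=chrome_options)
```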

Filename: locators/locators.py

Start by importing the Selenium By object into the file to declare the locator pattern for the DOM. We’ll use the NoSuchElementException to check for an error message in the DOM (in case of invalid inputs).

Next, declare a class to hold the WebElements. Then, create another class to handle the web actions for the registration form.

The element_selector class contains the WebElement locations. Each uses the XPath locator.

The registerUser class accepts the driver attribute to initiate web actions. You’ll get the driver attribute from the setup class while instantiating the registerUser class.

The error_message inside the registerUser class does two things. First, it checks for invalid field error messages in the DOM when the test tries to submit the registration form with unacceptable inputs. The check runs every time inside a try block. So, the test covers it regardless.

Secondly, it runs the code in the try block if it finds an input error message element in the DOM. This prevents the except block from running, flagging it as non-covered code.

Otherwise, Selenium raises a NoSuchElementException. This forces the test to log the print in the except block and mark it as covered code. This feels like a reverse strategy. But it helps code coverage capture more scenarios.

So, besides capturing omitted fields (web action methods not included in the test execution), it ensures that the test accounts for an invalid email address or empty string input.

Thus, if the error message is displayed in the DOM, the method returns the error message element. Otherwise, Selenium raises a NoSuchElementException, forcing the test to log the printed message.

The rest of the class methods are web action declarations for the locators in the element_locator class. Excluding the fields that require a click action, the other class methods accept a data parameter, a string that goes into the input fields.

But first, create a test runner file for the code coverage scenarios. You’ll execute this file to run the test and calculate the code coverage.

Filename: run_coverage/run_coverage.py

The above code starts by importing the coverage module. Next, declare an instance of the coverage class and call the start() method at the top of the code. Once code coverage begins, import the test_registration class from scenarioRun.py. Instantiate the class as registration.

The class method, it_should_register_user, is a test method that executes the test case (you’ll see this class in the next section). Use cov.stop() to close the code coverage process. Then, use cov.save() to capture the coverage report.

The cov.html_report() method writes the coverage results into an HTML file inside the declared directory (coverage_reports).

Running this file executes the test and generates the coverage report.

Now, let’s tweak the web action methods inside scenarioRun.py to see the difference in code coverage for each scenario.

Test Scenario 1: Submit the registration form with an invalid email address and some missing fields.

Filename: testscenario/scenarioRun.py

Pay attention to the imported built-in and third-party modules. We start by importing the registerUser and testSettings classes we wrote earlier. The testSettings class contains the testSetup() and tearDown() methods for setting up and closing the test, respectively. We instantiate this class as setup.

The registerUser class is instantiated as test_register using the setup.driver attribute. The dotenv package lets you get the test website’s URL from an environment variable.

The testSetup() method initiates the test case (the it_should_registerUser method) and prepares the test environment. Next, we launch the website using the test_getWeb() method, which accepts the website URL declared earlier. The assertIn assertion, inherited from unittest, checks whether the declared string is in the page title. Finally, the setup.tearDown() method closes the browser and cleans up the test environment.

As earlier stated, the rest of the test case omits some methods from the registerUser class to see its effect on code coverage.

Test Execution:

To execute the test and code coverage, go into the test_run_coverage folder and run the run_coverage.py file using pytest:
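Based on the runner file named earlier, one plausible command is:

```shell
# Execute the coverage runner with pytest.
pytest run_coverage.py
```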

Once the code runs successfully, open the coverage_reports folder and view the index.html file in a browser. The code coverage reads 94%, as shown below.

Although the other test files read 100%, locator.py covers 89% of its code, reducing the overall score to 94%. We omitted some web actions and entered an invalid email address while running the test.

Opening locator.py gives more insights into the missing steps (highlighted in red), as shown below.

Although you might expect the coverage to flag the test_fillEmail() method, it doesn’t because the test provided an email address; it was only invalid. The except block is the invalid parameter indicator. And it only runs if the input error message element isn’t in the DOM.

As seen, the test flags the except block this time since the input error message appears in the DOM due to invalid entries.

The test suite runs on the cloud grid with some red flags in the test video, as shown below.

Test Scenario 2: Submit the form with all fields filled appropriately (successful registration).

Test Scenario 2 has a code structure and naming convention similar to Test Scenario 1. However, we’ve expanded the test reach to cover all test steps in Test Scenario 2. Import the needed modules as in the previous scenario. Then instantiate the testSettings and registerUser classes as setup and test_register, respectively.

To get an inclusive test suite, ensure that you execute all the test steps from the registerUser class. We expect this to generate 100% code coverage.

Test Execution:

Go into the run_coverage folder and run the pytest command to execute the test_run_coverage.py file:
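Based on the file name given above, one plausible command is:

```shell
# Execute the second scenario's coverage runner with pytest.
pytest test_run_coverage.py
```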

Open the index.html file inside the coverage_reports folder via a browser to see your pytest code coverage report. It’s now 100%, as shown below, meaning the test doesn’t omit any web action.

Below is the test suite execution on the cloud grid:

Conclusion

Manually auditing your test suite can be an uphill battle, especially if your application code base is large. While performing Selenium Python testing, leveraging a dedicated code coverage tool boosts your productivity, as it helps you flag untested code and detect potential bugs easily. Although you’ll still need human input to decide your test requirements and coverage goals, performing a code coverage analysis gives you clear direction.

Frequently Asked Questions (FAQs)

What is code coverage in pytest?

Code coverage in pytest measures the amount of Python code executed while the tests run. It helps identify which parts of your codebase have not been tested and thus might contain hidden bugs.

How to get Python code coverage?

To get code coverage in Python, you can use the pytest-cov plugin. Install it via pip (pip install pytest-cov), and then run your tests with pytest --cov=your_package_name to generate a coverage report.

How to increase test coverage in pytest?

To increase test coverage in pytest:

  • Identify untested code parts using coverage reports.
  • Write additional tests for those parts, focusing on edge cases and error handling.
  • Continuously review and refactor tests to improve coverage metrics and test suite quality.

How to run pytest with coverage?

Install the coverage plugin and run your tests with coverage enabled to measure how much of your code is tested.

How to run pytest coverage?

Use the coverage option in pytest to execute tests while tracking which lines of code are executed.

How to get coverage report in pytest?

Generate a coverage report in formats like terminal summary, HTML, or XML to see which parts of your code were tested.

How to check pytest coverage?

Run tests with coverage tracking and review the report to check which lines and files were covered.

How to use pytest coverage?

Enable the coverage plugin during test runs and view reports to understand test effectiveness and gaps.

How to read pytest coverage report?

Check the summary for overall coverage percentage and review detailed reports to identify untested areas.

