Analyzing Test Coverage with `coverage.py`
This example demonstrates how to use `coverage.py` to measure and analyze test coverage in Python. We'll cover installing the package, running tests with coverage tracking, and interpreting the coverage report to identify areas of code that are not being tested.
Installing `coverage.py`
Before you can use `coverage.py`, you need to install it. This command uses `pip`, the Python package installer, to download and install the `coverage` package. Ensure you have `pip` installed and configured correctly for your Python environment.
pip install coverage
Example Code
This section provides example code. `my_module.py` contains three simple functions: `add`, `subtract`, and `multiply`. `test_my_module.py` uses the `unittest` framework to test these functions. Notice that the test suite doesn't cover the `multiply` function's zero-handling case.
# my_module.py
def add(x, y):
    return x + y

def subtract(x, y):
    return x - y

def multiply(x, y):
    if x == 0 or y == 0:
        return 0
    return x * y
# test_my_module.py
import unittest
from my_module import add, subtract, multiply

class TestMyModule(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

    def test_subtract(self):
        self.assertEqual(subtract(5, 2), 3)

    def test_multiply_positive(self):
        self.assertEqual(multiply(2, 3), 6)

if __name__ == '__main__':
    unittest.main()
Running Tests with Coverage Tracking
This command tells `coverage.py` to run the `test_my_module.py` file and track which lines of code are executed during the test run. The coverage data is stored in a `.coverage` file.
coverage run test_my_module.py
Generating the Coverage Report
After running the tests, this command generates a textual report showing the coverage results. The `-m` flag includes missing line numbers in the report, making it easier to identify untested code.
coverage report -m
Analyzing the Coverage Report
The report shows the coverage for each module. In this example, `my_module.py` has 8 statements, one of which was not executed during the test run, resulting in 88% coverage (7 of 8 statements). The missing line (reported as line 6 here) is the `return 0` branch of `multiply`, which only executes when an input is zero. `test_my_module.py` has 100% coverage because every line in it was executed during the tests. The TOTAL line provides overall coverage statistics across all measured files.
# Example Coverage Report Output:
# Name                 Stmts   Miss  Cover   Missing
# --------------------------------------------------
# my_module.py             8      1    88%   6
# test_my_module.py       13      0   100%
# --------------------------------------------------
# TOTAL                   21      1    95%
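The Cover column is simply executed statements divided by total statements. A quick sketch of the arithmetic behind the numbers above (coverage.py's display-layer rounding may differ slightly at edge cases, but matches here):

```python
def percent_covered(statements, missed):
    """Return the covered percentage, rounded to the nearest whole number."""
    return round(100 * (statements - missed) / statements)

print(percent_covered(8, 1))   # my_module.py: 88
print(percent_covered(13, 0))  # test_my_module.py: 100
print(percent_covered(21, 1))  # TOTAL: 95
```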
Generating an HTML Report
For a more detailed and interactive report, you can generate an HTML report. This creates a directory named `htmlcov` containing HTML files that you can open in your browser. The HTML report allows you to click through your source code and see exactly which lines were executed and which were not.
coverage html
Addressing the Coverage Gap
To address the uncovered line in the `multiply` function, we add a new test case, `test_multiply_zero`, to specifically test the case where one of the inputs is zero. After adding this test and re-running the coverage analysis, the coverage for `my_module.py` should increase to 100%.
# test_my_module.py (modified)
import unittest
from my_module import add, subtract, multiply

class TestMyModule(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

    def test_subtract(self):
        self.assertEqual(subtract(5, 2), 3)

    def test_multiply_positive(self):
        self.assertEqual(multiply(2, 3), 6)

    def test_multiply_zero(self):
        self.assertEqual(multiply(2, 0), 0)

if __name__ == '__main__':
    unittest.main()
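Before re-running the full coverage analysis, you can confirm the new test actually exercises the zero branch by running it in isolation. The sketch below inlines `multiply` instead of importing `my_module`, so it is self-contained; the class name is chosen just for this illustration:

```python
import unittest

def multiply(x, y):
    if x == 0 or y == 0:
        return 0
    return x * y

class TestMultiplyZero(unittest.TestCase):
    def test_multiply_zero(self):
        # Exercises the `return 0` branch the report flagged as missing.
        self.assertEqual(multiply(2, 0), 0)
        self.assertEqual(multiply(0, 5), 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMultiplyZero)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```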
Concepts Behind the Snippet
Test coverage is a metric that indicates the proportion of source code that has been executed by a test suite. High test coverage can give confidence that the code is working as expected. However, 100% coverage does not guarantee bug-free code, as the tests may not cover all possible input values or edge cases. `coverage.py` is a popular tool for measuring test coverage in Python projects.
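Under the hood, tools like `coverage.py` hook into the interpreter's trace machinery to record which lines run. A minimal illustration of the idea using the standard library's `sys.settrace` (real tools do far more bookkeeping; the `multiply` function here mirrors the example above):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record the line number of every line executed inside multiply().
    if frame.f_code.co_name == "multiply" and event == "line":
        executed.add(frame.f_lineno)
    return tracer

def multiply(x, y):
    if x == 0 or y == 0:
        return 0
    return x * y

sys.settrace(tracer)
multiply(2, 3)        # never reaches the `return 0` line
sys.settrace(None)

print(len(executed))  # 2: only the `if` line and `return x * y` ran
```

Comparing the recorded line numbers against all statement lines in the function is, conceptually, how the "Missing" column gets populated.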
Real-Life Use Case
In a large software project, test coverage reports can help identify modules or functions that lack sufficient testing. This allows developers to prioritize writing tests for the most critical and vulnerable parts of the codebase. It's especially valuable when refactoring code, as you can ensure that your changes don't introduce regressions by maintaining good test coverage.
Best Practices
Aim for high test coverage, but don't blindly chase 100% coverage. Focus on testing the most important and complex parts of your code, including edge cases and boundary conditions. Write meaningful tests that verify the behavior of your code, rather than just executing every line. Regularly review and update your tests to ensure they remain relevant and effective.
Interview Tip
Be prepared to discuss your experience with test coverage tools and strategies. Explain how you use coverage reports to identify testing gaps and improve the quality of your code. Mention that while high coverage is desirable, it's not a substitute for well-designed and thorough tests.
When to Use Them
Use test coverage analysis as part of your continuous integration and continuous delivery (CI/CD) pipeline. This allows you to automatically track test coverage over time and identify any regressions or improvements. Also, use coverage analysis when refactoring or modifying existing code to ensure that your changes don't break anything.
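In a CI pipeline this is typically enforced with a coverage threshold, so the build fails when coverage regresses. A sketch of the two pipeline commands (the 90% figure is an arbitrary choice for illustration):

```shell
# Run the whole test suite under coverage tracking.
coverage run -m unittest discover

# Exit with a non-zero status if total coverage is below 90%.
coverage report --fail-under=90
```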
Alternatives
While `coverage.py` is the most common tool for test coverage in Python, other tools and techniques exist. You can use profilers to measure code execution and identify performance bottlenecks. Static analysis tools can also help find potential bugs and vulnerabilities without running any code.
FAQ
How do I exclude certain files or directories from coverage analysis?
You can create a `.coveragerc` file in the root of your project and list files or directories to skip under the `omit` option in the `[run]` section. For example:

[run]
omit =
    */migrations/*
    */__init__.py
How do I combine coverage data from multiple test runs?
Use the `coverage combine` command to merge multiple `.coverage` data files into a single file, then generate the report as usual. This is useful when running tests in parallel or across multiple environments.

coverage combine
coverage report -m