Benchmarking Pytest with CI/CD Using GitHub Actions

Making Pytest benchmark automated, actionable, and intuitive

Kay Jan Wong
Towards Data Science
Photo by Lucas Santos on Unsplash

“Your code is slow” is easily said, but it takes a lot of trial and error to find out which part of the code is slow, and how slow is slow. Once the slow part of the code is found, does it still perform well with an input that is 100 times or 1,000 times larger, with results averaged across 10 iterations?

This is where pytest-benchmark comes in handy.

Complementing the idea of unit testing, which is to test a single unit or part of the codebase, we can expand on this and easily measure code performance with pytest-benchmark.

This article will touch on how to set up and run pytest-benchmark, and how to interpret its benchmark results. To properly enforce benchmarking in a CI/CD pipeline, the advanced sections also touch on how to compare benchmark timing results across runs and reject commits if they fail certain thresholds, and how to store and view historical benchmark timing results in a histogram!

Installing pytest-benchmark can simply be done with pip install pytest-benchmark on the Terminal.

To enable additional features, such as visualizing the benchmark results in a histogram, we can run pip install 'pytest-benchmark[histogram]' to install the additional packages required.
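Once installed, the comparison and histogram features mentioned above are driven by pytest-benchmark's command-line flags. As a minimal sketch (the saved-run identifier 0001 and the 5% threshold are illustrative):

pytest --benchmark-autosave
# saves this run's timings under the .benchmarks/ directory

pytest --benchmark-compare=0001
# compares the current run against the previously saved run 0001

pytest --benchmark-compare=0001 --benchmark-compare-fail=mean:5%
# additionally fails the run if the mean timing regresses by more than 5%

pytest --benchmark-histogram
# writes the timing distribution to an SVG histogram (requires the [histogram] extra)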

Writing a benchmark test is similar to writing a regular pytest test, with the addition of the benchmark fixture.
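As a minimal sketch (the function under test here is illustrative), the code to time is passed to the benchmark fixture along with its arguments:

# test_benchmark.py
def slow_function(n):
    # Hypothetical function under test: simulates work with a simple loop
    total = 0
    for i in range(n):
        total += i
    return total

def test_slow_function(benchmark):
    # benchmark(...) calls slow_function repeatedly and records the timings
    result = benchmark(slow_function, 100)
    # ordinary pytest assertions still apply to the returned value
    assert result == sum(range(100))

Running pytest as usual then prints a results table with the min, max, mean, and standard deviation timings for each benchmarked test.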
