I've implemented HTML reports with pie charts and navigable outcomes, and despite all of that pretty, flashy stuff, I was the only audience for the reports. They didn't provide real value or help with root cause analysis when investigating a test failure.
Generally, a stack trace, a screenshot if it's a UI test, and a deep diff on assertions for API tests are all you really need for fast root cause analysis.
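For the deep-diff part, here's a rough sketch using the third-party `deepdiff` package (my pick for illustration, not a requirement; any structural diff works). On failure, the assertion surfaces exactly which fields diverged instead of a bare message:

```python
# Sketch: deep diff on an API assertion using the `deepdiff` package.
from deepdiff import DeepDiff

expected = {"id": 42, "status": "active", "tags": ["a", "b"]}
actual = {"id": 42, "status": "inactive", "tags": ["a", "b"]}

# ignore_order=True treats lists as unordered, which usually matches
# how API payloads should be compared.
diff = DeepDiff(expected, actual, ignore_order=True)

# An empty diff is falsy, so a plain assert works; the pretty-printed
# diff lands in the failure output for fast root cause analysis.
assert not diff, f"API response mismatch:\n{diff.pretty()}"
```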
Aside from designing your project so that tests are simple, abstractions don't demand piles of unit tests, and the code is easy to read and debug, you can also put test runs in a database. That doesn't require multiple tables or a complex schema; just a few simple fields you can usually get from your CI/CD pipeline API.
In our case, the metrics of value were `test_name`, `start_time`, `runtime`, and `status`. There might be other metrics of value; that just depends on the organization.
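To make that concrete, here's a minimal sketch of that single table, assuming SQLite. The column names mirror the metrics above, and the example row is made up; in practice the values would come from your CI/CD pipeline API or a test runner hook:

```python
# Sketch: one table, four columns, no complex schema.
import sqlite3

conn = sqlite3.connect("test_runs.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS test_runs (
        test_name  TEXT NOT NULL,
        start_time TEXT NOT NULL,   -- ISO 8601 timestamp
        runtime    REAL NOT NULL,   -- seconds
        status     TEXT NOT NULL    -- e.g. 'passed', 'failed', 'skipped'
    )
    """
)

def record_run(test_name: str, start_time: str, runtime: float, status: str) -> None:
    """Insert one test result; call from a runner hook or CI step."""
    conn.execute(
        "INSERT INTO test_runs (test_name, start_time, runtime, status) "
        "VALUES (?, ?, ?, ?)",
        (test_name, start_time, runtime, status),
    )
    conn.commit()

# Example values only, for illustration.
record_run("test_login", "2024-01-15T09:30:00Z", 4.2, "failed")
```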
Of course, in the grand scheme of things, this really only provides value for end-to-end tests, whether they're API-level or full end-to-end through the UI.
There may still be some value in reporting these quality metrics for tests with mocked network traffic as well.
Ultimately, these quality metrics are important for maintaining the overall quality of your test strategy. They provide real data to base decisions on.
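As one example of a decision you could drive from the table above (a rough sketch, assuming the `test_runs` schema from earlier): rank tests by failure rate and average runtime to spot flaky or slow tests worth attention.

```python
# Sketch: rank tests by failure rate, then by average runtime.
import sqlite3

conn = sqlite3.connect("test_runs.db")
rows = conn.execute(
    """
    SELECT test_name,
           AVG(status = 'failed') AS failure_rate,  -- SQLite: comparisons yield 0/1
           AVG(runtime)           AS avg_runtime
    FROM test_runs
    GROUP BY test_name
    ORDER BY failure_rate DESC, avg_runtime DESC
    LIMIT 10
    """
).fetchall()

for name, failure_rate, avg_runtime in rows:
    print(f"{name}: {failure_rate:.0%} failures, {avg_runtime:.1f}s avg")
```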
I'll get into quality metrics tracked by tagging pull requests in another blog post. In conclusion, maintaining these specific quality metrics is vital to upholding the integrity of your testing efforts and preventing issues that could devalue your tests.