Firefox 26, A Retrospective in Quality (Part II)


A few days ago I wrote a post detailing a quantitative analysis of Firefox 26 using statistics from Bugzilla. In it I talked about regressions, and about the volume of regressions speaking to a “potential failure, something that was missed, not accounted for, or unable to be tested either by the engineer or by QA”. I’d like to refine that a little by incorporating post-release regressions.

Certainly one would expect the volume of pre-release regressions to increase as QA, engineers, and volunteers find and report more of them. I realize now that simply measuring the overall volume of regressions might not be a clear indication of quality or of a breakdown in process. Perhaps I painted this a bit too broadly.

I’ve retooled this metric to compare pre-release and post-release regression volume. I think looking at regressions this way is a bit more telling. After all, a pre-release regression is something we technically knew about before release, whereas a post-release regression is something that only became known after we released. A high volume of post-release regressions would therefore point to a lower-quality release and an opportunity for us to improve.
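
If you want to play with something similar yourself, here is a rough sketch in Python. It is only an illustration, not the queries behind my charts: it assumes the public Bugzilla REST endpoint at https://bugzilla.mozilla.org/rest/bug, identifies regressions by the regression keyword, scopes them to a Firefox version string, and uses the release date as the pre/post cut-off. All of those choices are assumptions on my part, so adjust them to taste.

```python
# Sketch: split regressions for a release into pre-release vs. post-release
# by when the bug was filed. Assumes the public Bugzilla (BMO) REST API,
# the "regression" keyword, and the release date as the cut-off; the real
# analysis behind the charts may use different criteria.
from datetime import datetime, timezone
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def regression_split(version: str, release_date: str) -> tuple[int, int]:
    """Return (pre_release, post_release) regression counts for one version."""
    release = datetime.fromisoformat(release_date).replace(tzinfo=timezone.utc)
    params = {
        "product": "Firefox",
        "version": version,                  # e.g. "26 Branch" (assumed field value)
        "keywords": "regression",
        "keywords_type": "allwords",
        "include_fields": "id,creation_time",
        "limit": 0,                          # intended as "no limit"; the server may cap results
    }
    bugs = requests.get(BUGZILLA, params=params, timeout=60).json().get("bugs", [])

    pre = post = 0
    for bug in bugs:
        # creation_time looks like "2013-11-20T12:34:56Z"; normalise for fromisoformat()
        filed = datetime.fromisoformat(bug["creation_time"].replace("Z", "+00:00"))
        if filed < release:
            pre += 1
        else:
            post += 1
    return pre, post

if __name__ == "__main__":
    # Firefox 26 shipped on 2013-12-10
    pre, post = regression_split("26 Branch", "2013-12-10")
    print(f"pre-release: {pre}, post-release: {post}")
```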

Just as a reminder, here is the chart comparing all regressions from a few days ago:

[Chart: all regressions by release, from the previous post]

Here is the new chart incorporating regressions found post-release:

[Chart: pre-release vs. post-release regressions by release, 2014-02-03]

As you can see, the volume of post-release regressions is fairly significant. Perhaps unsurprisingly, chemspills seem to correlate with periods of more post-release regressions. As for Firefox 26 specifically, it continues the downward trend, marking the fourth release with fewer post-release regressions and fewer regressions overall.

Anyway, that’s all I wanted to call out for this release. I will be back in six weeks to talk about how things shaped up for Firefox 27. I’m hoping we can continue to see improvements by iterating on our process and working more closely with other teams.