Raising the question:

nhirata

For each version of the product, per test run, could you graph how many bugs each test case produces?

The reason I ask: how can we tell how effective a test case is, and which test cases are the best ones to run?
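As a concrete illustration of the kind of graph being asked for, here is a minimal sketch that tallies bugs filed per test case, per product version, and plots the totals. The record format, test case IDs, and bug counts are hypothetical stand-ins for whatever your test case management system can actually export.

```python
from collections import defaultdict

import matplotlib.pyplot as plt

# Each record: (test_case_id, product_version, bugs_filed_this_run).
# Sample data invented for illustration; replace with a real export.
records = [
    ("TC-101", "3.6", 2), ("TC-101", "3.6", 0), ("TC-102", "3.6", 0),
    ("TC-101", "4.0", 1), ("TC-102", "4.0", 3), ("TC-103", "4.0", 0),
]

# Sum bugs per (version, test case) across all runs.
totals = defaultdict(int)
for case, version, filed in records:
    totals[(version, case)] += filed

versions = sorted({v for (v, _) in totals})
cases = sorted({c for (_, c) in totals})

# Grouped bar chart: one group per test case, one bar per version.
width = 0.8 / len(versions)
for i, version in enumerate(versions):
    xs = [j + i * width for j in range(len(cases))]
    ys = [totals.get((version, c), 0) for c in cases]
    plt.bar(xs, ys, width=width, label=f"v{version}")

plt.xticks([j + 0.4 - width / 2 for j in range(len(cases))], cases)
plt.xlabel("Test case")
plt.ylabel("Bugs filed")
plt.legend(title="Product version")
plt.title("Bugs produced per test case, per version")
plt.show()
```

A test case whose bars shrink toward zero across versions is a candidate for the "is it still worth running?" discussion below.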

Personally, I would keep all the positive test cases on a need-to-run basis for each stage of a project and, ideally, automate them. Negative test cases? Probably just the high-risk ones, i.e. the ones where a failure would mean bad news from consumers. Ideally those are automated too.

The rest? I'm not so sure. In a release cycle you have limited time, and you should use it effectively if you want to make the deadline.

IMO, that means you want to test as effectively as possible so you uncover bugs as quickly as possible. Risk analysis based on how things are coded, how they work, how they are designed, how long they take to implement, and so on should point you at the high-risk areas to hit first.
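One simple way to make that prioritization concrete: score each test case on likelihood of breakage and impact if it breaks, then run the highest-scoring cases first as time allows. The test cases, the 1-5 scales, and the multiply-the-two-scores formula below are illustrative assumptions, not anything prescribed in the original post.

```python
# Rough sketch of risk-based test ordering: risk = likelihood * impact.
# The names and scores are made up; in practice they would come from the
# risk analysis described above (code complexity, design, implementation
# time, etc.).
test_cases = [
    # (name, likelihood_of_breakage 1-5, impact_if_broken 1-5)
    ("payment flow", 4, 5),
    ("profile page layout", 2, 2),
    ("new sync backend", 5, 4),
    ("about dialog", 1, 1),
]

# Sort so the highest-risk cases run first.
for name, likelihood, impact in sorted(
    test_cases, key=lambda tc: tc[1] * tc[2], reverse=True
):
    print(f"risk={likelihood * impact:2d}  {name}")
```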

When you run test cases and they aren't effective at finding bugs, I would hope they are at least effective at establishing something like a positive result (confirming that a feature works as expected); otherwise you may just be wasting time running them.

Filed under: QA, QMO