Case Conductor pytest plugin proposal

Dave Hunt

Case Conductor is the new test case management tool being developed by Mozilla to replace Litmus. I’ve recently been thinking about how we can improve the relationship between our automated tests and our test case management, and want to share my thoughts on how a plugin could help our WebQA team do just that.

Annotating tests

Currently our automated tests include a docstring referencing the Litmus ID. This is inconsistent (some even include a full URL to the test case) and hard to do anything with. It’s important to reference the test case, but I see this as the bare minimum.

Current method

def test_empty_search(self, mozwebqa):
    """Litmus 13847"""
    feedback_pg = FeedbackPage(mozwebqa)
    feedback_pg.go_to_feedback_page()
    feedback_pg.search_for('')
    Assert.greater(len(feedback_pg.messages), 0)

I would prefer to use a custom pytest mark, which would accept a single ID or a list. By doing this we can cleanly use the IDs without having to write a regex or conform to a strict docstring format.

Proposed method

@pytest.mark.cc(12345)
def test_empty_search(self, mozwebqa):
    feedback_pg = FeedbackPage(mozwebqa)
    feedback_pg.go_to_feedback_page()
    feedback_pg.search_for('')
    Assert.greater(len(feedback_pg.messages), 0)
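
The mark could just as easily take a list of IDs, and the plugin could pick them up without any docstring parsing. Here’s a rough sketch using the marker API in current pytest; the hook body and the attribute used to stash the IDs are purely illustrative:

import pytest

# The mark could take either a single ID or a list of IDs.
@pytest.mark.cc([12345, 12346])
def test_search_covering_two_cases(self, mozwebqa):
    ...

# In the plugin, the IDs could be read from each collected test, for example:
def pytest_collection_modifyitems(items):
    for item in items:
        marker = item.get_closest_marker('cc')
        if marker is not None:
            ids = marker.args[0]
            case_ids = list(ids) if isinstance(ids, (list, tuple)) else [ids]
            item.cc_case_ids = case_ids  # stash for result submission later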

Submitting results

There’s already an API in development for Case Conductor, so it would be great to interface directly with it during automated test runs. We could, for example, prompt the user for the product, test cycle, and either a test run or a collection of test suites. With these details it should be possible for every automated run to create a new test run in Case Conductor and mark the linked test cases as passed/failed depending on the result. In addition to the existing reports, we could then also offer a link to the Case Conductor report for the relevant test run.
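
To make that concrete, here’s a speculative sketch of what the submission step could look like; the endpoint paths, payload fields, and the use of the requests library are all assumptions on my part, since the real API is still in development:

import requests

def submit_results(base_url, auth, product, cycle, results):
    """Create a test run in Case Conductor and post a result per case ID.

    `results` maps Case Conductor case IDs to True (passed) / False (failed).
    The endpoints below are placeholders until the real API is published.
    """
    run = requests.post(
        '%s/api/testruns' % base_url.rstrip('/'),
        json={'product': product, 'cycle': cycle},
        auth=auth).json()
    for case_id, passed in results.items():
        requests.post(
            '%s/api/testruns/%s/results' % (base_url.rstrip('/'), run['id']),
            json={'case': case_id, 'status': 'passed' if passed else 'failed'},
            auth=auth)
    return run['id']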

Result reports

We could also use the Case Conductor plugin to enhance the existing HTML report generated by the plugin already in use by WebQA. For example, we could link to the Case Conductor report for the test run, and provide a link for each test case. In the following mockup the new details are highlighted.
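
Behind the scenes, the per-test links would be easy to derive from the stashed case IDs. A minimal sketch, assuming cases are viewable at a /cases/<id> URL on the Case Conductor instance (the real URL scheme may well differ):

def case_urls(item, cc_base_url):
    """Return Case Conductor URLs for the cases linked to a test item.

    Assumes the case IDs were stashed on the item from its cc mark and that
    cases are viewable at /cases/<id>; both are assumptions, not the real API.
    """
    return ['%s/cases/%s' % (cc_base_url.rstrip('/'), case_id)
            for case_id in getattr(item, 'cc_case_ids', [])]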

Coverage reports

By knowing all test cases in the specified product/cycle/run/suites we can report on the automated coverage. This could be used to set goals such as ‘automate 75% of product A’s tests’, which suddenly become a lot easier to measure. Here’s another mockup of how this command line report may look.

========================================= CASE CONDUCTOR =========================================
------------------------------------- test cases covered (2) -------------------------------------
Entering future date in start date field (test_feedback_custom_date_filter_with_future_start_date)
Entering future date in end date field (test_feedback_custom_date_filter_with_future_end_date)
----------------------------------- test cases not covered (2) -----------------------------------
Filtering by mobile versions
Filtering by desktop versions
----------------------------------------- coverage (50%) -----------------------------------------

We could also use tags to indicate test cases that aren’t worth automating so the coverage is more realistic.
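
As a sketch, the coverage figure itself would just be a comparison of the case IDs returned by Case Conductor with the IDs collected from the cc marks; the tag handling below assumes each case carries a list of tag names, and the ‘wontautomate’ tag is only an example:

def coverage(cc_cases, automated_ids, excluded_tags=('wontautomate',)):
    """Return covered IDs, uncovered IDs and a coverage percentage.

    `cc_cases` maps case IDs to case data from the API and `automated_ids`
    is the set of IDs found in cc marks; cases tagged as not worth
    automating are excluded from the calculation.
    """
    relevant = set(case_id for case_id, case in cc_cases.items()
                   if not set(case.get('tags', [])) & set(excluded_tags))
    covered = relevant & set(automated_ids)
    not_covered = relevant - covered
    percent = 100.0 * len(covered) / len(relevant) if relevant else 0.0
    return covered, not_covered, percent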

Command options

I would propose several command line options to cover the functionality described above. In the form of output from --help, here are my suggestions:

Options:
  case conductor:
    --cc-url=str       url of the case conductor instance
    --cc-username=str  case conductor username
    --cc-password=str  case conductor password
    --cc-product=str   product identifier
    --cc-cycle=str     test cycle identifier
    --cc-run=str       test run identifier
    --cc-suite=str     test suite identifiers (comma separated)
    --cc-coverage      show the coverage report

Some of these would be mandatory, but the plugin could fail with a useful message when they are omitted. For example, if the product was not provided, a list of available products could be returned. The same could be done for test cycles and test runs.
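
In plugin code this would come down to a standard pytest_addoption hook plus some early validation. Here’s a rough sketch; the exact messages, and the idea of fetching the list of available products from the API, are illustrative:

import pytest

def pytest_addoption(parser):
    group = parser.getgroup('case conductor')
    group.addoption('--cc-url', dest='cc_url',
                    help='url of the case conductor instance')
    group.addoption('--cc-username', dest='cc_username',
                    help='case conductor username')
    group.addoption('--cc-password', dest='cc_password',
                    help='case conductor password')
    group.addoption('--cc-product', dest='cc_product',
                    help='product identifier')
    group.addoption('--cc-cycle', dest='cc_cycle',
                    help='test cycle identifier')
    group.addoption('--cc-run', dest='cc_run',
                    help='test run identifier')
    group.addoption('--cc-suite', dest='cc_suite',
                    help='test suite identifiers (comma separated)')
    group.addoption('--cc-coverage', action='store_true',
                    help='show the coverage report')

def pytest_configure(config):
    # Fail early with a helpful message rather than midway through a run;
    # fetching and listing the available products is left as an assumption.
    if config.getoption('cc_url') and not config.getoption('cc_product'):
        raise pytest.UsageError(
            '--cc-product is required (available products could be listed here)')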