Automated testing: Analyzing results


Step 6 of Running WebQA Automated Tests leads here: analyzing results! The last article was all about getting set up for automation and getting the tests running. This post is all about what to do with the information you get from the results.

Next Steps

Step 1: What are your results?

If all of the tests passed, then you can move on to running more tests. Congratulations!

If you have failures or other test results, read on.

Step 2: xFail or xPass

Did you get test results marked “xFail” or “xPass”? You may be unfamiliar with those terms. A test marked xFail has been manually set as an expected failure; when an xFailed test passes anyway, it is reported as xPass. *

We mark a test xFail when we expect it to fail, for example because of a known bug which hasn’t been fixed yet. If the test failed every day, we would spend time looking into the reason why each time. Marking it ‘xFail’ means that we know the test will keep failing, and will not pass until the bug has been marked RESOLVED FIXED.
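In pytest, which drives these suites, marking a test as an expected failure is a one-line decorator. Here is a minimal sketch; the test name and bug number are invented for illustration:

```python
import pytest

# Hypothetical example: the test name and bug number are made up.
# The reason string points reviewers at the bug that explains the failure.
@pytest.mark.xfail(reason="Bug 123456 - login form is missing its submit button")
def test_login_form_has_submit_button():
    submit_buttons = []  # stand-in for the real page-element lookup
    assert len(submit_buttons) == 1
```

When pytest runs this, the failure is reported as xfail rather than a hard failure; if the bug gets fixed and the assertion starts passing, the report shows xpass instead, which is a signal that the marker can come out.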

[* To read more, search on ‘xFail’, or read articles like this:]

You should look at our xFail Dashboard tool to see the status of test cases:

There you can see a list, by project, of tests that are failing and which bugs are assigned to the failure. If bugs are still open, the xFail is still valid. If you have questions regarding your specific test results, please contact us!

BONUS: One of the best ways to get involved with testing is to submit a pull request on an xFail. For each xFailed test, check the corresponding bug. Has the bug been fixed? If it has, verify the fix first; at that point the xFail can be removed. Take out the code relating to the xFail and submit that change as your pull request! Once your code has been merged, you will be eligible for one of our badges.
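Concretely, such a pull request usually just deletes the marker once the bug is RESOLVED FIXED and verified. A hypothetical before/after sketch, with made-up test names and bug number:

```python
import pytest

# Before: the test carries an xFail marker tied to a (made-up) bug.
@pytest.mark.xfail(reason="Bug 123456 - search returns no results")
def test_search_returns_results_old():
    assert search("firefox") != []

# After: the bug is RESOLVED FIXED and verified, so the marker comes out
# and the test is expected to pass on every run again.
def test_search_returns_results():
    assert search("firefox") != []

def search(term):
    # Stand-in for the real application call under test.
    return [term]
```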

Step 3: Test Failure Analysis

What should you do if a test fails? There are a few steps to help you determine what to do with a test failure.

a) Bug search: Search through Bugzilla to see if the bug was already entered. Enter keywords related to the failure to help your search.

b) Manual testing: When you get a failure, it’s always a good idea to see if the test passes manually following the same steps as the automated test.

c) Determine if it is a problem with the test. Are the locators up to date? Did something change in the code that needs to be updated in the test? If the failure is due to the test needing an update, you can either submit a pull request fixing the test or file a GitHub Issue under the GitHub project.

d) I’m just not sure! If you test it and check it in Bugzilla, and you’re still not sure if it’s a real bug, then click this link to hop on IRC and ask us!
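The steps above can be sketched as a small triage helper. This is only an illustration of the decision flow; the function and its inputs are invented for the example, not part of any WebQA tool:

```python
def triage_failure(bug_already_filed, passes_manually, test_needs_update):
    """Suggest a next step for a failing automated test (steps a-d above)."""
    if bug_already_filed:
        # a) the failure is already tracked in Bugzilla
        return "link the failure to the existing bug (candidate for xFail)"
    if passes_manually and test_needs_update:
        # b) + c) the product is fine; the test itself is stale
        return "fix the test: submit a pull request or file a GitHub issue"
    if not passes_manually:
        # b) the failure reproduces by hand, so the product is misbehaving
        return "file a bug"
    # d) everything checks out but the failure is still unexplained
    return "not sure: ask on IRC"
```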

Step 4: It’s a bug

If you have determined that the test is failing because of a bug, the next step is to file a bug!

Here are a couple of documents explaining bug writing:

That’s it! Thanks for helping us run automation, and for troubleshooting when problems arise. This process is complicated at first, but these are the steps each of our team contributors goes through every time a test starts failing. Any information you can provide about the when/where/how of a failure saves lots of time.

As always, when in doubt you can ask in channel. Click the IRC link to join us and tell us what you are seeing. Thanks for your help!