Performance Testing


Before I dive into performance, here’s a small update for context:

Currently we (Mobile QA) are working on a lot of things at the same time: maintaining the XUL project while also helping QA the native application. We have done some outreach with our newsletter and have at least one active Mozillian (hurray for Gaby2300!) on the native version of our application. Props to Gaby2300 for leading South American testers in testing native Fennec!

Anyways, onto the topic of performance. One of the shipping criteria is that we make Fennec faster, matching if not surpassing the other browsers currently on the market. During the mobile work week, the mobile QA team worked on a rough draft of how to go about getting this performance testing done. Doing this by hand was tedious, and this is where automation could play a huge part. We first needed to make sure that we had the right requirements and criteria for the performance test to be run, and any questions we had were accumulated there. The consensus from mobile dev, management, and the QA team was to measure a blank page onload and a page doing something, like a static Twitter page, run on a low-end, mid-range, and high-end device, both over the network and locally from file.
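If I read that consensus right, it amounts to a small test matrix. A quick sketch of its size (the labels here are mine, not official names):

```python
from itertools import product

# The three axes from the consensus: device tier, page type, page source.
devices = ["low-end", "mid-range", "high-end"]
pages = ["blank onload", "static twitter-like page"]
sources = ["network", "local file"]

matrix = list(product(devices, pages, sources))
print(len(matrix))  # -> 12 combinations to cover
```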

This rough draft can be seen here:

The performance testing is based on the first draft of the ts performance test that vlad blogged about:

There were problems with the first iteration of the instructions. The main issue was that it took the time off the computer through the command line, pushed that time through the URL, and computed the end time on the device itself. This meant that the clocks of the two devices needed to be synced, and Android devices using the date function would only go down to seconds. Blassey came out with a small Java app that reports the time in milliseconds. This helped lead the evolution to:
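The measurement itself is simple arithmetic once both clocks report milliseconds. A minimal sketch of the idea (the helper names are mine, not from the actual harness):

```python
import time

def start_url(base_url):
    """Embed the host's current time (in ms) in the URL the browser will load."""
    start_ms = int(time.time() * 1000)
    return "%s?start=%d" % (base_url, start_ms), start_ms

def startup_time(start_ms, onload_ms):
    """Startup time is the device's onload timestamp minus the embedded start
    timestamp -- which only works if the two clocks are synced."""
    return onload_ms - start_ms

url, t0 = start_url("http://example.com/blank.html")
# Pretend the page's onload fired 1234 ms later on a synced clock:
print(startup_time(t0, t0 + 1234))  # -> 1234
```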

From there we pushed for more of this to be automated. We ended up getting help from ctalbert [A-team] (for some reason I keep thinking of ctalbert as Face and bmoss as Hannibal… is that just me? I’m waiting for bmoss to say “I love it when a plan comes together”). ctalbert has changed up the automation a bit and incorporated the SUTAgent. There are a bunch of great changes in his automation: warm starts and cold starts are possible with a varying number of repetitions, and most of all, it’s automated.
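The warm/cold distinction roughly comes down to whether the device is rebooted between runs. A toy sketch of the loop, with a stand-in launch function since I’m not reproducing the real harness:

```python
def run_startup_test(launch, iterations, cold=False, reboot=None):
    """Collect one startup time per iteration.

    launch -- callable returning a startup time in ms
    cold   -- if True, reboot the device before each run so nothing is
              cached (the SUTAgent can do a synchronous reboot)
    """
    times = []
    for _ in range(iterations):
        if cold and reboot is not None:
            reboot()  # device is back up before we continue
        times.append(launch())
    return times

# Fake launcher for illustration: always reports 1500 ms.
results = run_startup_test(lambda: 1500, iterations=9)
print(len(results), sum(results) // len(results))  # -> 9 1500
```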

The main concern I have with this approach is that it requires the devices to be rooted, and certain ways of rooting a device bypass permissions (which could mean an unrooted device would take more time) as well as change the OS (Odin). However, the priority is to get this automated. The only missing link right now is getting the data logged to a server so we can see the results week to week. That should be coming later this week.

While this is getting automated, I have been working on crunching the numbers semi-manually, working with waverly to get this done. xti and AndreaaPod, along with Ioana Chiorean’s help, have been making this more and more possible.


For the automation, ctalbert can probably correct me if I get some of this wrong. Here’s a rough draft of the instructions:

* get the code from GitHub
– install git
– git clone

* pull down:

* Root the phone

* install the SUTAgent:
code:
see specifically the gainroot:

* run the SUTAgent… it should sit and wait, showing an IP address

* Change the config.ini for the xbrowserstartup
** no spaces in names: it is a Python script
** control how many times it goes through the loop
** config file: iterations 9
** warm, cold (cold will reboot; the agent can do a synchronous reboot)
** an SD card is needed in the phone
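To make the config step concrete: I don’t have the real key names in front of me, so the section and key names below are my guesses, not the actual xbrowserstartup schema, but an ini file along these lines could be read like so:

```python
import configparser

# Hypothetical config -- the real xbrowserstartup key names may differ.
SAMPLE = """
[test]
iterations = 9
starttype = warm
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
iterations = cfg.getint("test", "iterations")
cold = cfg.get("test", "starttype") == "cold"
print(iterations, cold)  # -> 9 False
```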

The output will go to the web server.


Filed under: mobifx, mobile