I’ve been asked by a number of people why the submission pages were formulated the way they were. Instead of tackling the entire process in one blog post, I’ve broken it up into two parts. The first covers how the requirements were gathered, framed around the submission methods as they’re currently being constructed. Here goes!
Figure out the history of the problem at hand.
You see, Mozilla’s primary general feedback mechanism has been Hendrix since 2005. This was a simple webform that allowed users to post their thoughts to a product-related newsgroup without any restrictions or direction on how to formulate those thoughts into something usable for Mozilla. It was initially designed to funnel users away from filing non-actionable bugs in Bugzilla. It fulfilled its intention at the time, but proved less useful as the years went on because most comments were just rants. Rants aren’t actionable; rants take too long to read; there’s too much emotion tied to them. So, I got my first set of requirements: feedback submission should be directed, concise and something that alluded to the past, but felt different.
Ok, so what about the various needs within Mozilla right now?
Everyone knew or felt there was a need for a feedback application; there just wasn’t a shared understanding of what that meant to each group within Mozilla. So, after some requirements gathering, this is what I got:
- Fennec: help users and make our products better (feature requests, bug reports, undiscoverable features)
- Release Management: for betas, we want different types of feedback than for normal releases. Generally, we care whether a problem appeared in the previously released update. That is, regressions are most important.
- Firefox: a better way (than bugzilla, hendrix and even reporter) for all users to express a problem; a way for us to communicate to beta users what specific things we’re looking for feedback on; and better tools for them to provide it; a way for us to pivot and search through that feedback in a more structured way based on OS, arch, etc; a way for us to get performance metrics and system configuration data from the field
- Find issues, find broken websites
- behavior, demographics, feature feedback, grow user base
- ex. “What addons do you use most often?”, “What are your most used features in Firefox?”, “Want to follow us more? Join one of our mailing lists”
- user usage data to analyze and offer to requesting groups (usually Labs and UX)
- learn identifiable issues, collect problems and triage them into bug reports on Bugzilla, educate the community into becoming more useful testers
- Learn identifiable issues, link people to already defined issues, grow *involved* user base
Across those lists were a lot of overlapping needs between the groups, such as issue identification and some form of user information. So, there’s my second set of requirements. How about the people who are going to give Mozilla feedback? I needed to find out what they were about too.
Who’s the Audience?
From a marketing perspective, our beta testers are thought of as “early adopters”. These are folks who are the first in line to get a new product. They want to set trends. They’re more likely to adopt new services on the web, such as Twitter, and are more likely to voice their opinions about a product. Ok, so there’s another requirement: we knew they’re likely to give feedback, so the system just needs to be available and highly visible.
Now that I had a set of requirements around the context of the problem at hand, I needed inspiration on how to implement the solution. So, I took notes from other Desktop App feedback systems.
What sources of inspiration can I rely on?
The most useful were:
- Microsoft Office Live 2007 with happy/sad smilies on the system task bar.
- Amarok implemented LikeBack, which uses the same happy/sad smilies in the top right of its main window.
- Uservoice collects ideas/suggestions from users in a very humane way.
- GetSatisfaction has gained a lot of popularity for its ability to quickly scale focus-group-level feedback on websites.
There were some good ideas in each system. Namely, there was consistency across feedback systems in using happy/sad/idea as the primary channels. There were also some issues: after the feedback form first appeared, a number of visual obstacles stood in the way of a user actually giving feedback. All of these systems went from high-touch (i.e. the smilies) to low-touch (i.e. tons of information on the forms). I mean, there were large text boxes for feedback, an e-mail address field, checkboxes for screenshots and profile information, paragraphs of text, and submit/cancel buttons all crammed together in a thin, gray pop-up dialog. On top of that, none of them felt like they were alluding to a one-on-one relationship with the application, as if it were a person. When asking someone for something subjective like an opinion, you can’t disorient them mid-submission with a robotic submission form. It’s not humane. Ok, so my fourth set of requirements was to use happy/sad/idea and keep the submission process high-touch. I still felt like something was missing, though. What about the process of directing a user towards giving useful and actionable feedback? How was that possible?
How can I direct users to give an ‘actionable’ piece of user feedback?
I started reading about effective and clear communication. The results always related to being concise, showing empathy, asking questions in a probing manner, using the first person in speech and being respectful. So, what were “concise” and “actionable” in the context of written text? I broke it down as such:
- Sentences have three units of measure: words, syllables and characters.
- For words, the recommended number per sentence is about 15-20, as proposed by Martin Cutts in the Oxford Guide to Plain English. In his words, “More people fear snakes than full stops, so they recoil when a long sentence comes hissing across the page.”
- For syllables, the general rule of thumb is about 1.7 per word.
- For characters, the general rule of thumb in Plain English has been 5 per word. A number of analyses found with a quick Google search suggest that’s a good starting point.
So, I figured the right approach was to offer a sentence’s worth of characters (i.e. about 20 words × 5 characters ≈ 100 characters) plus a little slack to account for technical words that end up being very long. Therefore, 140 characters felt right. Also, our already defined “early adopter” beta-testing crowd has experience dealing with this restriction on Twitter as well as in SMS text messaging.
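The back-of-the-envelope math can be sketched in a few lines of Python. This is purely illustrative: the constant names and the `fits` helper are my own inventions, not anything from Mozilla’s actual code.

```python
# Rules of thumb cited above; names are illustrative, not from any real codebase.
WORDS_PER_SENTENCE = 20   # upper end of Cutts's 15-20 words-per-sentence guideline
CHARS_PER_WORD = 5        # Plain English rule of thumb

# One sentence's worth of characters: 20 words x 5 characters = 100.
sentence_budget = WORDS_PER_SENTENCE * CHARS_PER_WORD

# Add slack for long technical words, landing on the familiar Twitter/SMS limit.
MAX_FEEDBACK_LEN = 140

def fits(feedback: str) -> bool:
    """True if a piece of feedback fits within the 140-character limit."""
    return len(feedback) <= MAX_FEEDBACK_LEN

print(sentence_budget)  # 100
print(fits("Firefox feels faster on my old laptop since the last beta."))  # True
```

The slack of 40 characters is what absorbs the occasional “localization” or “about:config” without forcing the user to trim an otherwise concise sentence.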
So, the full set of requirements came down to this:
- Feedback submission should be directed and allude to the past, but feel different.
- Feedback needs to relate to issue identification and carry some form of user information.
- It needs to be available and highly visible.
- Happy/sad/idea channels are good ideas; keep the submission process high-touch.
- Be concise, show empathy, ask questions in a probing manner, use the first person in speech and make sure the feedback is respectful.
- Use a 140-character maximum limit.
The next blog post will detail a good part of the translation of requirements to concept. Stay tuned!