Many people believe in UX testing, but in real design projects little testing actually takes place. Why the discrepancy between belief and action? Mainly, it is the difficulty of firing off a quick, small test when faced with a design decision.
Traditional user testing involves recruiting individual participants to sit at a computer and be observed carrying out set tasks. Watching and listening to real people carry out these tasks provides great insight into what works, what doesn't and, critically, why.
Traditional usability testing is very effective but it is time-consuming. That makes it difficult to test small design decisions, so testing tends to be limited to the later stages of the design process.
Guerrilla testing can be the perfect way to capture feedback from users throughout the design process to refine, improve and test assumptions.
What is guerrilla testing?
Guerrilla user testing is a low-cost method of user testing. The term 'guerrilla' refers to its 'out in the wild' style: sessions can be conducted anywhere with significant footfall, e.g. a cafe, library or train station.
How does guerrilla testing work?
Guerrilla testing works well to quickly validate how effective a design is with its intended audience, whether certain functionality works in the way it is supposed to, or even whether a brand or proposition is clear.
This approach is quick and relatively easy to set up. Participants are not recruited in advance but are approached by the people conducting the sessions. The sessions themselves are short, typically 15 to 30 minutes, and are loosely structured around specific key research objectives. The output is qualitative, so insight is often rich and detailed.
Anyone can conduct guerrilla testing on their site or service, but often the best scenario is for the digital lead within your organisation to run the sessions alongside the designer or developer. The digital lead can help define the tasks, moderate the sessions and provide a level of objectivity by not being the person who designed or built what is being evaluated. Involving the designer or developer enables them to see first-hand how real people interact with their product, where there are areas for improvement and how they might go about resolving any issues.
This approach also does away with any lengthy reporting back. Insights can be observed, taken away and fed back into the design process almost immediately. However, a brief summary with key findings and recommendations can be written up as a more formal record. It is a method that suits the ‘agile framework’ well.
Where and how might you use guerrilla testing?
Guerrilla testing can be used throughout the service lifecycle. As it is cheap to set up, run and report back on, it is a method that can be used frequently.
There are a few logistics to consider before conducting any guerrilla testing:
- always ask permission first to speak with people
- outline briefly the purpose of the research
- reassure them about confidentiality
- keep it simple and quick
- consider the location and set-up carefully, e.g. a busy train station may have the footfall but people might be in too much of a hurry to spare the time
- incentives for participation are not necessary (although, depending on where you are running your sessions, chocolates are often a welcome 'thank you' for people's time)
Remember, whenever recording sessions you must seek permission from the participant first. Provide them with a written consent form for them to sign.
Why use guerrilla testing?
Where traditional usability testing is time-consuming, guerrilla testing allows you to set up and test quickly.
If you wait until the end of a project to do usability testing, you are almost certain to find serious problems in the website. The speed with which you can run guerrilla UX tests enables you to test design ideas as you work.
Weaknesses and when not to use guerrilla testing
The key weakness of guerrilla testing as a research method is that it is not statistically robust, and participants may not always match your target audience in terms of skills, expertise or knowledge.
Number and types of participants
Numbers typically range from 6 to 12 participants in any given round of guerrilla testing, depending very much on where and when the sessions are conducted.
Employing the technique
Guerrilla usability testing is very much about adapting to the situation. That said, here are some helpful hints that I find consistently work in different international contexts:
- Beware of implicit bias. While coffee shops are a great place to find test participants, focusing on people who frequent them introduces bias into our work. Simply acknowledging this implicit bias can help designers neutralise subjective experiences and account for individual differences. Remember to approach a diverse mix of people and be fair in who you approach.
- Explain what’s going on. Designers should be honest about who we are, why we’re testing, and what sort of feedback we’re looking to receive. Oftentimes, it’s best to do this with a release form, so that people are fully aware of the implications of their participation, such as whether the feedback will just be used internally or shared globally at conferences. These sorts of release forms, while tedious to carry around, help establish trust.
- Be ethical. Of course, being honest doesn’t mean we need to be fully transparent. Sometimes it’s useful to skip certain information, like if we worked on the product they’re testing. Alternatively, we might tell white lies about the purpose of a study. Just make sure to always tell the truth at the end of each session: trust is essential to successful collaboration.
- Make it casual. Lighten up tests by offering cups of coffee and/or meals in exchange for people’s time. Standing in line or ordering with a test subject is a great opportunity to ask questions about their lifestyle and get a better feel for how a test might go.
- Be participatory. Break down barriers by getting people involved: ask them to draw – on a napkin or piece of notebook paper, for example – what they might expect to see on the third or fourth screen of a UI flow. This doesn’t have to be a full-blown user interface necessarily, just a rough concept of what’s in their head. You never know what you’ll learn by fostering imagination.
- Don’t lead participants. When you sense confusion, ask people what’s going through their head. Open them up by prodding, saying “I don’t know. What do you think?”. People in testing situations often can feel as though they are being tested (as opposed to the product itself), and therefore can start to apologise or shut down.
- Keep your eyes peeled. It’s important to capture passing thoughts for later analysis. Ethnographic observation is one good way to record what you noticed during tests. Don’t get too hung up on formalised notes, though; most of the time your scribbles will work just fine. It’s about triggering memories, not presenting them at an academic conference.
- Capture the feedback. A key part of any testing process is capturing what we’ve learned. While the way in which we do this is definitely a personal choice, there are a few preferred tools available: apps like Silverback or UX Recorder collect screen activity along with a test subject’s facial reaction. Other researchers build their own mobile rigs. The important part to remember here is to use tools that fit your future sharing needs.
- Be a timecop. Remember, this isn’t a usability lab with paid users. Be mindful of how much time you spend with test subjects and always remind them that they can leave at any point during the test. The last thing you’d want is a grumpy user skewing your feedback.
At the end of the day, guerrilla usability testing comes in many forms. There’s no perfection to the art. It is unashamedly and unapologetically impromptu. Consider making up your own approach as you go: learn by doing.
Further reading
- “Guerrilla Usability Testing” by Andy Budd
- “10 tips for ambush guerrilla user testing” by Martin Belam
- “Recording Mobile Usability Tests” by Jenn Downs
- “Failing Fast: Getting Projects Out of the Lab” by Tony Hillerson, Alan Lewis, Scott Green, Ryan Stewart, Randy Rieland
- “Rocket Surgery Made Easy” by Steve Krug
- “Talking Aloud is Not the Same as Thinking Aloud” by Mike Hughes