As more software teams move to Agile, it's more important than ever that UX design methodology fit into the Agile process. User research is one of those tools that is often hard to fit into the Agile workflow, but it can be done, and done well.
This year my research department started a new process of recurring testing on our Agile projects. I want to share our (evolving) process with you so that we can start a conversation about what works best.
How often should you test?
Our tests run roughly monthly. The cadence is based on the project's sprint length, but also on the needs of the project team and stakeholders. The goal is not to set a rigid schedule but to stay flexible enough to address research needs as they come up.
The key to scheduling is constant communication with the UX team and the stakeholders on the project team. I scheduled a weekly 30-minute meeting as a touchpoint so they could let me know what their testing needs were. As a strategic partner, I would also suggest lines of testing that might otherwise be overlooked or share insights from related research.
I found that after the first round of testing, we gained substantial efficiencies that allowed us to be very responsive to the team's needs. After that, I could run a testing session with only two weeks' notice, and most of that was recruiting time. I also found that the prototypes and testing guides could be modified based on the results of the previous round and reused.
How many participants should you test in each round?
Our testing sessions are set up with three participants. The number of participants per round is one of the keys to success in an iterative research methodology: it yields enough information for the team to move forward without being cost-prohibitive for a recurring effort. Over multiple rounds of testing, you also accumulate some quantitative input alongside the targeted qualitative findings; for example, six monthly rounds of three participants add up to eighteen people weighing in on the recurring questions.
What should you test?
It’s been my experience that development teams usually front-load the first sprints with technical issues that don’t impact the user interface or the user interactions. This is the perfect opportunity for the UX team and the researcher to ramp up on some global questions like high-level interest and labeling.
You would be surprised how much energy and team churn go into reaching consensus on what to label a thing! It's right up there with deciding where to put it in the navigation structure. I have found this to be true at every company I've worked with.
The research material organically became a qualitative/quantitative hybrid. There were certain questions the team wanted more input on, like labeling and feature-priority ranking, so those items became a permanent part of the testing guide. The usability questions (and prototypes), on the other hand, changed as the UX team's focus shifted with each sprint.
What happens after each round of testing?
In an effort to keep this process agile, we try to reduce documentation. Our iterative research process calls for the team and stakeholders to be present for the research sessions and to attend a debrief immediately afterwards to discuss the findings. This keeps everyone on the same page and usually results in impromptu planning of the next round of testing. The final report is usually a brief synopsis of the debrief plus a few extra notes from the researcher (a.k.a. me!).
I have found this to be the most challenging part of the iterative process. Getting the stakeholders and the UX team to commit to and participate in the final four hours of each round of testing has been an exercise in herding cats! There is no comparison between directly observing the testing and reading the report later. Over the course of my career, I have found direct observation of the testing sessions to be the most effective tool for building empathy for end users. Period. Full stop.
Pros and Cons of Iterative Research
- Pro: Constant feedback on the design
- Pro: Low recurring cost for participant recruiting
- Pro: Aggregate quantitative results over the course of the project
- Pro: Timely qualitative results on design questions
- Con: Need for a semi-dedicated (internal or external) researcher to support the ongoing research effort (e.g., recruiting participants, preparing testing materials, moderating sessions, writing reports)
- Con: Buy-in from stakeholders on a new research process
- Con: Difficulty in getting the necessary parties to observe the testing and participate in the debrief
Iterative research is an exciting solution! I believe we've just scratched the surface of what we can do with this testing methodology. Now I'd love to hear from other teams about how you've handled iterative research.
It's great to see this process documented with pros and cons. I too have found recruiting to be the main challenge and the most time-consuming part of agile testing: the research itself is short and sweet, but it still takes the same amount of time to find the right participants.
Nate,
Thanks for reminding me about some of the recruiting aspects I forgot to mention!
We were able to gain some efficiencies by reusing the same recruiting screener for each round of testing. Not needing to get a new screener approved by the legal department each month was a huge win. It also meant the recruiting agency could keep a "waiting list" of potential participants ready to fill the time slots once we supplied the testing dates.
A ‘waiting list’ of participants is a great concept and sounds like a best practice.