I hear a lot about the need to find a UX champion high in your company’s org, but no one ever talks about how to get one. I have found a process that works and is repeatable at every company where I’ve worked. It involves using research as bait!
Can Workplace Digital Assistants Be More Than Shiny Objects?
Upon further analysis, it turns out that perceived productivity gain or loss was the core issue for workplace acceptance. I found that this perception differed among casual users, frequent users, and power users.
Frequency of use matters
Casual users of our applications appreciated that the digital assistant could help them perform infrequent tasks. These users felt like they had a productivity gain because they didn’t have to take the time to re-learn processes that they perform only a few times a year.
On the flip side, frequent users felt like the digital assistant would only slow them down. They have their tasks in the application practically embedded in their muscle memory. They get in, get it done, and get out, and they don’t want to be distracted by shiny objects. For now, frequent users perceive digital assistants as a potential productivity loss.
Power users, like the captain of any Star Trek series, want to sit back and brainstorm with the digital assistant. They want to ask it to perform complex tasks that involve forecasting trends so that they can make decisions. They also want it to seek out all the relevant data and present it to them so they don’t have to waste time manually searching archives. Not losing hours searching for data is a huge productivity gain for power users.
In my studies, I found that the power users and casual users together were vastly outnumbered by the frequent users. So it seems that digital assistants have little perceived value for most of the user base to date. I think this perception can be changed going forward.
Future State
In creating user experiences, it’s the UX team’s job to make sure each user type has the best experience we can provide. So before adding a digital assistant feature to business applications, take the time to look at the needs of each user type. Figure out how it can add value for each of them, or risk it being dismissed as another shiny object.
The Curious Case of User Zero
The discovery of user zero
Iterative Research: My Journey into Agile User Testing
This year my research department started a new process of recurrent testing on our agile projects. I want to share with you our (evolving) process so that we can start a conversation on what works best.
How often should you test?
Our tests are monthly, more or less. The cadence is based on the sprint length of the project, but also on the needs of the project team and stakeholders. The goal here is not to set a rigid schedule but to be flexible enough to address research needs as they come up.
The key to scheduling is constant communication with the UX team and stakeholders on the project team. I scheduled a weekly 30-minute meeting just as a touchpoint so they could let me know what their testing needs were. As a strategic partner, I would also suggest lines of testing that might otherwise be overlooked, or provide insights from related research.
I found that after the first round of testing, we gained substantial efficiencies in the process that allowed us to be very responsive to the team’s needs. Subsequently, I could run a testing session with only two weeks’ notice, and most of that was recruiting time. I’ve found that the prototypes and testing guides can be modified based on the results of the last round of testing and reused.
How many participants should you test in each round?
Our testing sessions are set up with three participants. The number of participants per round is one of the keys to success in the iterative research methodology: it yields enough information for the team to move forward without being cost-prohibitive for a recurring effort. Over multiple rounds of testing, you get some quantitative input as well as targeted qualitative testing.
What should you test?
It’s been my experience that development teams usually front-load the first sprints with technical issues that don’t impact the user interface or the user interactions. This is the perfect opportunity for the UX team and the researcher to ramp up on some global questions like high-level interest and labeling.
You would be surprised how much energy and team churn goes into getting consensus on what to label a thing! It’s right up there with where to put it in the navigation structure. I have found this to be true across all companies.
The research material organically became a qualitative/quantitative hybrid. I found that we had certain questions the team wanted ongoing input on, like labeling and feature-priority ranking, so these items became a permanent part of the testing guide. The usability questions (and prototypes), on the other hand, changed as the UX team’s focus shifted with each sprint.
What happens after each round of testing?
In an effort to keep this process agile, we try to reduce the documentation. Our iterative research process calls for the team and stakeholders to be present for the research sessions and to attend a debrief immediately afterwards to discuss the findings. This allows us to make sure everyone is on the same page and usually results in an impromptu planning of the next round of testing. The final report is usually a brief synopsis of the debrief plus a few extra notes from the researcher (aka me!).
I have found this to be the most challenging part of the iterative process. Getting the stakeholders and UX team to commit to and participate in the final four hours of each round of testing has been an exercise in herding cats! There is no comparison between direct observation of the testing and reading the report later. I have found over the course of my career that direct observation of the testing sessions is the most effective tool for building empathy for the end-users. Period. Full stop.
Pros and Cons of Iterative Research