Can Workplace Digital Assistants Be More Than Shiny Objects?

After completing a couple of research studies for business applications that incorporated a digital assistant, I’ve come to realize that this user base is quite different from the consumers that personal digital assistants like Alexa and Siri are marketed to. Consumers appreciate both the utility of these devices and the delight they try to bring into their lives. However, when I interviewed people about using digital assistants in their business applications, I got a completely different response—even though these same people use and enjoy digital assistants personally.

Upon further analysis, it turns out that perceived productivity gain or loss was the core issue for workplace acceptance. I found that this perception differed between casual users, frequent users, and power users.

Frequency of use matters

Casual users of our applications appreciated that the digital assistant could help them perform infrequent tasks. These users felt like they had a productivity gain because they didn’t have to take the time to re-learn processes that they only do a few times a year.

On the flip side, frequent users felt like the digital assistant would only slow them down. They have their tasks in the application practically embedded in their muscle memory. They get in, get it done, get out, and don’t want to be distracted by shiny objects. Frequent users perceive digital assistants as a potential productivity loss at this time.

Power users, like the captain of any Star Trek series, want to sit back and brainstorm with the digital assistant. They want to ask it to perform complex tasks that involve some forecasting of trends so that they can make decisions. They also want it to search out all the relevant data and present it to them so they don’t have to waste their time manually searching archives. Not losing hours searching for data is a huge productivity gain for power users.

In my studies, I found that the power users and casual users together were vastly outnumbered by the frequent users. So it seems that digital assistants have little perceived value to most of the user base to date. I think that this perception can be changed going forward.

Future State

In creating user experiences, it’s the UX team’s job to make sure each user type has the best experience we can provide. So before adding on the digital assistant feature to business applications, take the time to look at the needs of each user type. Figure out how it can add value for each of them or risk being dismissed as another shiny object.

The Curious Case of User Zero

I’ve often said that every research study reveals some hidden gem of understanding about the users — and, because I’m a total research geek, I call them Christmas presents. Earlier this year, I found a hidden gem of understanding that was comparable to a 6-year-old getting a horse for Christmas!

The discovery of user zero…

Iterative Research: My Journey into Agile User Testing

As more software project teams are moving into Agile, it’s more important than ever that UX design methodology fit into the Agile process. User research is one of those tools that is often hard to fit into the Agile workflow but it can be done—and done well.

This year my research department started a new process of recurrent testing on our agile projects. I want to share with you our (evolving) process so that we can start a conversation on what works best.

How often should you test?

Our tests run roughly monthly. The cadence is based on the sprint length of the project, but also on the needs of the project team and stakeholders. The goal here is not to set a rigid schedule but to be flexible enough to address research needs as they come up.

The key to scheduling is constant communication with the UX team and stakeholders on the project team. I scheduled a weekly 30-minute meeting as a touchpoint so they could let me know what their testing needs were. As a strategic partner, I would also suggest certain lines of testing that might be overlooked, or provide insights from related research.

I found that after the first round of testing, we gained substantial efficiencies in the process that allowed us to be very responsive to the team’s needs. Subsequently, I could run a testing session with only two weeks’ notice—and most of that was recruiting time. I’ve found that the prototypes and testing guides can be modified based on the results of the last round of testing and reused.

How many participants should you test in each round?

Our testing sessions are set up with three participants. The number of participants per testing round is one of the keys to success in the iterative research methodology. Three per round provides enough information for the team to move forward without being cost-prohibitive for a recurring effort. Over multiple rounds of testing, you get some quantitative input as well as targeted qualitative testing.

What should you test?

It’s been my experience that development teams usually front-load the first sprints with technical issues that don’t impact the user interface or the user interactions. This is the perfect opportunity for the UX team and the researcher to ramp up on some global questions like high-level interest and labeling.

You would be surprised how much energy and team churn goes into getting consensus on what to label a thing! It’s right up there with where to put it in the navigation structure. I have found this to be true across all companies.

The research material organically became a qualitative/quantitative hybrid. I found that we had certain questions the team wanted more input on, like labeling and feature priority ranking, so these items became a permanent part of the testing guide, whereas the usability questions (and prototypes) changed as the focus of each sprint shifted for the UX team.

What happens after each round of testing?

In an effort to be agile in this process, we try to reduce the documentation. Our iterative research process calls for the team and stakeholders to be present for the research sessions and attend a debrief immediately afterwards to discuss the findings. This allows us to make sure everyone is on the same page and usually results in an impromptu planning of the next round of testing. The final report is usually a brief synopsis of the debrief and a few extra notes from the researcher (aka Me!).

I have found this to be the most challenging part of the iterative process. Getting the stakeholders and UX team to commit to and participate in the final four hours of each round of testing has been an exercise in herding cats! There is no comparison between direct observation of the testing and reading the report later. I have found over the course of my career that direct observation of the testing sessions is the most effective tool in building empathy for the end-users. Period. Full stop.

Pros and Cons of Iterative Research

  • Pro: Constant feedback on the design
  • Pro: Low recurring cost for participant recruiting
  • Pro: Aggregate quantitative results over the course of the project
  • Pro: Timely qualitative results on design questions
  • Con: Need a semi-dedicated (internal or external) researcher to support the ongoing research effort (i.e. recruiting participants, preparing testing materials, moderating, writing reports, etc.)
  • Con: Buy-in from stakeholders on a new research process
  • Con: Difficulty in getting the necessary parties to observe the testing and participate in the debrief

Virtual Design Wall: UX for the Product Team

What is a Virtual Design Wall?

A virtual design wall is basically a UX intranet site where your product/project team can view all the UX deliverables. It can be as simple or complex as needed by the product owner and stakeholders.

My virtual design wall process evolved over four years. When I started working on the American Airlines self-service kiosk (as a UX team of one), I suddenly found myself overwhelmed with the sheer volume of requests from my immediate and extended product team. The only reasonable way to deal with it was to make all my deliverables available on a UX intranet. It worked so well that I’ve included it in my process ever since.

What to Include on Your Virtual Design Wall

  • Wireframes: Annotated Production Screenshots
  • Use Case Flows
  • End-to-end Screen Flows
  • Project-level Mockups (not in production yet)
  • Prototypes or links to Dev/QA environments
  • Links to business and/or development documentation
  • For an Agile team: links to the user stories

The “In a Perfect World” UX Mission Statement

It’s easy to lose sight of the big picture when you’re wading through user research. The pain points, user personas, survey data, etc. can be shouting so loudly that you forget the point of it all. So I invented the “In a Perfect World” statement to bring it all home in my user research presentations.

It’s very simple.

“In a perfect world, this (project, software, feature set) would make the user feel (productive, accomplished, confident) because it (automates redundant tasks, quickly provides relevant data).”