
All eyes on me

How we prepare non-researchers to observe user testing

Ben Wood

August 22, 2024

An image of scattered, poorly utilized scientific equipment

Carving out a budget for frequent research and testing activities can take a lot of work at the start of a new engagement. We have found that when stakeholders observe testing firsthand and see how users really interact with their products, it becomes much easier to expand that budget and to improve those products for the people who use them.

When stakeholders observe user testing, they gain a firsthand understanding of user interactions and challenges. This involvement can foster trust and transparency, making them feel more connected to the design process. Moreover, this increased engagement can lead to stronger relationships and more business. Observers feel more invested in the process, creating a sense of partnership and mutual respect; they are more likely to understand the rationale behind design decisions and to buy into proposed changes more quickly, and that collaborative spirit spills over into the rest of your design work.

As the work continues, that firsthand knowledge can help them see the value in frequent testing, making them more likely to invest in ongoing research activities and long-term collaboration. That said, because stakeholders are often not trained researchers, their involvement comes with potential pitfalls.

The challenges of direct observation by non-researchers

Premature conclusions

Non-researchers may jump to conclusions based on limited observations. Without the full context or thorough analysis, they might make decisions or form opinions that do not accurately reflect the overall findings. Picture them observing their first usability test. After seeing one user struggle with the login feature, they exclaim, “We need to redesign the whole login process!” Little do they know, this was an isolated incident, and subsequent users navigated it just fine.

Anchoring effect

They may focus on specific issues or feedback that come early in the process, potentially overlooking other vital insights. This can lead to biased interpretations and skewed decision-making. For example, a non-researcher may hear in their first session that a user doesn’t like the color palette. Despite more pressing usability issues, they may become convinced that the entire palette needs changing, even if that first bit of feedback doesn’t represent the broader user base.

Distraction and multitasking

If they are observing remotely, they might be distracted or multitasking, which can reduce their engagement and understanding of the session. They might miss critical moments or fail to grasp the nuances of user interactions.

Emotional reactions

Seeing users struggle with their product can evoke strong emotional responses from stakeholders, such as frustration or defensiveness. This can lead to reactive decision-making or resistance to necessary changes. One stakeholder saw a user get confused by a feature and immediately wanted to remove it. They had to be gently reminded that iteration is crucial and that one user’s confusion doesn’t spell doom for the feature.

Setting expectations and mitigating biases

Now that we’ve discussed the potential pitfalls, what can be done to ensure stakeholder observation enhances the usability testing process without compromising its integrity? Here are some effective strategies for setting clear expectations and mitigating potential biases.

Set the stage

Before the session, walk observers through what to expect. Explain the session’s objectives and emphasize that one user’s struggle doesn’t signal the product’s failure but is an opportunity to gain valuable insights.

Educate on bias

They need to understand how their preconceptions can influence their observations. Sharing examples of common biases, such as confirmation bias or the recency effect, can help them recognize and counteract these tendencies. This doesn’t need to be a full-day training session to be highly effective; briefly explaining potential biases up front makes it much easier to call them out later during testing or synthesis.

Provide structure

Structured observation is vital to keep non-researchers focused. Without guidance, clients might fixate on irrelevant details. Provide a dedicated space, like a FigJam board, where clients can note specific behaviors, comments, and issues, and encourage them to jot down questions rather than draw premature conclusions. You might even want to give them specific things to look for, like “Let us know when you see X behavior.”

Focus on the bigger picture

Highlighting patterns across sessions is essential for seeing the bigger picture. One user’s confusion with a feature doesn’t mean it’s universally problematic; remember, nothing you ever design will be perfect. Emphasize the importance of identifying trends and recurring issues, and measure both how widespread each issue is and how severe it is. Visual aids like charts or graphs in reports can illustrate these patterns and prevent over-emphasizing outliers.

Manage expectations

Finally, managing expectations is paramount to a successful observation process. Usability testing is an iterative process, not a silver bullet. At the outset, remind them that user testing often reveals problems requiring multiple rounds of refinement. Sharing a timeline of the design process and showing how user testing fits into continuous improvement helps non-researchers appreciate the iterative nature of UX work.

Bringing it all together

As we navigate the tricky waters of stakeholder observation, we must keep our approach grounded yet engaging. By setting clear expectations, providing structured frameworks, and maintaining open communication, we can ensure that our usability studies retain the rigor that keeps bias to a minimum. And remember, even when things get tough, a little humor goes a long way in keeping the team (and stakeholders) motivated and aligned.

Happy testing, and may your user insights be ever clear and your observations ever insightful!