5 Ways To Minimize Expert User Bias - Webinar Summary

By Levi Warvel


Experts can be very helpful during initial design efforts for a product, sharing their wealth of knowledge and experience with a development team to guide its understanding of user requirements. After all, they know the domain inside and out and have mastered a wide range of skills, some of which they can perform without breaking a sweat. But what if we told you that expert influence on early design decisions can also introduce biases if not adequately checked?

It seems counterintuitive but it's true. History is full of examples, from airline crashes to reactor meltdowns. Incidents like these highlight a tendency for products informed by experts to end up being designed for experts as well. This is a problem because the majority of users in most domains have enough proficiency in a task to succeed, but don't have the unique qualities of experts. When design requirements are skewed towards expert perspectives, the resulting products often become less usable for less experienced users, making their jobs even more challenging.

So, if experts can accidentally mislead early design efforts, should you avoid them altogether? Not at all. You should absolutely still use experts. After all, you want other people to perform as well as they do through your product. You just have to be pragmatic about how you use experts. Luckily, we're here to help. Here are 5 strategies you can use to help minimize the influence of expert bias and get the most out of your formative design efforts.

  1. Start With The Instructions - A lot of teams will start out by interviewing a subject matter expert or two to identify product requirements. This makes sense but, if you don't know how to recognize biases in action, you can end up going down the wrong path early on. To avoid this, start by building a task analysis based on instructional materials such as textbooks, manuals, or training guides. Anything based on the feedback of multiple experts trying to describe the best way to perform an action will work. Using materials like this mitigates much of the initial bias because the information reflects the averaged perspective of many experts, not just a few. Plus, you will gain a much better grasp of the basics of the domain and task, which provides a framework for distinguishing good feedback from biased feedback moving forward.

  2. Observe “Experts” - Once you understand how the task should be done, you're ready to look at how it is actually done. The best way to do this is through contextual inquiry, a method that combines traditional interviews with observation. Performing contextual inquiry with experts will help establish their true behaviors rather than potentially biased self-reported behaviors. But that's only half of the equation. You'll also want to …

  3. Observe “Non-Experts” - Remember, non-experts are usually the people who will need and be using your product the most. As such, you'll definitely want to have an idea of how they behave as well. Non-expert performance is often very different from expert performance, both in the actual actions taken and in the way the process is conceptualized. Capturing both will prepare you to better understand the ways unique user groups differ from the optimal path and from each other.

  4. Compare Differences - Once you've got the contextual inquiry data, you're ready to make a task gap analysis based on observed behaviors, highlighting where user groups differ from the core model of optimal performance. To do so, map out the observed behaviors as a task analysis as well. From there, try to “fit” each subtask item into the core model. If an item fits, it gets a one. If not, it gets a zero. You can then compute the proportion of fit by dividing the number of fitted items by the total number of items in your core model, giving you both a high-level descriptor of how far from perfect performance users are and a more detailed understanding of how each group varies.

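The fit-scoring step above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed tool: the core-model subtasks and observed behaviors below are hypothetical examples, and `fit_proportion` is a helper name invented for this sketch.

```python
# Hypothetical core model: the optimal subtask sequence from the task analysis.
CORE_MODEL = ["check gauges", "open valve", "log reading", "close valve"]

# Hypothetical observed behaviors per user group from contextual inquiry.
OBSERVED = {
    "experts":     ["check gauges", "open valve", "log reading", "close valve"],
    "non_experts": ["open valve", "log reading"],
}


def fit_proportion(core_model, behaviors):
    """Score each observed subtask 1 if it fits the core model, 0 if not,
    then divide the fitted count by the total number of core-model items."""
    fitted = sum(1 for item in behaviors if item in core_model)
    return fitted / len(core_model)


for group, behaviors in OBSERVED.items():
    print(f"{group}: {fit_proportion(CORE_MODEL, behaviors):.2f}")
```

Running the sketch gives one proportion per group, so you can compare at a glance how close each user group comes to the optimal path while keeping the per-item scores available for the more detailed comparison.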
  5. Close The Loop - With your task gap analysis complete, you're ready to gain insight into why these gaps exist. Interviews work best here, ideally with the same users from your contextual inquiry if possible. Ask them what happened during each of the subtask items and why they did what they did. Try not to lead them when asking so they don't just tell you what you want to hear. Also, create a non-judgmental environment so users don't feel like they're on trial.

 

Watch the webinar: 5 Ways To Minimize Expert User Bias

Read more: Task Analysis: What It Is and Why It Matters

Books: Human Factors Methods: A Practical Guide; User and Task Analysis for Interface Design; Handbook of Critical Incident Analysis

Online Resources: History and Future of Task Analysis; TaskArchitect step-by-step HTA; UX Matters article on CI difficulty; UsabilityBoK's high-level synopsis of CIT; Video overview of CIT