Preventing Worst-Case Scenario User Experiences

 

    In a world where our interactions with services, companies, and even other people are increasingly mediated by mobile apps, software, and services, unpleasant user experiences can leave people feeling unsettled and exposed. Those negative experiences affect not only people’s opinions of the products but often the reputation of the company itself, spreading swiftly through social media and word of mouth. Some of these issues have been addressed and some are ongoing, but all seem to share the same root cause: a failure during design and implementation to consider potential bad outcomes of using the product.

The past few years have provided several examples of worst-case user experiences; Facebook’s Year in Review feature, discussed below, is one of the best known.

Why does this happen? 

     It’s easy to focus on designing for the type of user you expect to use your product, in the ideal situation, with an understanding of those users’ needs strictly in the context of product use. However, without actively considering negative user experiences, you can’t design to prevent them. Below are some of the factors that contribute to irritating and unpleasant user experiences.

  • A myopic view of the user and of product use. Evaluating only the average expected user, instead of a wider range of potential users, gives an incomplete picture of potential issues. The same is true of where, when, and how users would use the product: their use environment and the context of their use. A full understanding of the needs the product meets includes understanding the cases where users would not want to use particular features.
  • A failure to consider failure modes, edge cases, and bad outcomes. If your service depends on Wi-Fi but users can’t get a signal, what are the possible outcomes? In those less-than-ideal situations, people will be irritated at best and seriously inconvenienced at worst.
  • Designer bias and blind spots. The team at Facebook that initially developed the Year in Review feature never considered that there might be very good reasons people would not want to see reminders of their past year. Unhappy or traumatic events aren’t edge cases; they are part of the human condition.
  • “Move fast and break things.” There’s always powerful pressure to rush a product out, to be first to market, on the theory that if something doesn’t work quite right, you can fix it later. Mark Zuckerberg’s motto exemplifies a mindset in the tech world that’s produced a rising backlash as people realize their privacy and security are among the things companies decided were worth fixing only after problems came to light.
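The Wi-Fi failure mode above can be made concrete as a graceful-degradation pattern: when the live data source is unreachable, fall back to the last known good data rather than presenting a dead end. This is a minimal sketch; the names (fetch_schedule, CACHE) and data are invented for illustration and come from no particular product.

```python
# Sketch of graceful degradation when a network dependency fails.
# All names and data here are hypothetical.

CACHE = {"schedule": ["9:00 standup", "13:00 review"]}  # last known good data

def fetch_schedule(online: bool):
    """Return live data when possible; otherwise fall back to the cache."""
    if online:
        live = ["9:00 standup", "13:00 review", "16:00 demo"]
        CACHE["schedule"] = live          # refresh the fallback copy
        return live, "live"
    if CACHE["schedule"]:                 # offline: degrade, don't dead-end
        return CACHE["schedule"], "cached (offline)"
    return [], "unavailable: tell the user why, and what to try next"

data, status = fetch_schedule(online=False)
```

The design choice worth noting is the third branch: even with no cached data, the user gets an explanation and a next step instead of a blank screen.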

What can we (as UX researchers and designers) do?

      Medical devices (and software) that could potentially harm someone are required to go through a rigorous testing process and provide human factors data to the FDA demonstrating that the devices can be used safely and effectively, with known risks mitigated through design, labeling, or instruction. While unpleasant experiences with apps, software, or services rarely involve actual physical harm, it would be short-sighted to dismiss anything short of physical harm as not worth protecting people against.

      Someone needs to consider the risk and severity of potential harm from unpleasant user experiences during the product development process. People can be harmed through the failure of a product to provide accurate information needed for decision making, through loss of security or privacy, through behaviors induced by dark patterns, or through exposure to particular types of content (such as white supremacist media or disinformation). User researchers, designers, and product managers are well positioned to assess the likelihood, type, and severity of potential harm and to mitigate it before it affects people and fuels the inevitable backlash. Taking a proactive approach protects the users, the product, and the company.

      Designers and researchers should consider the following during the design process:

  • Don’t evaluate only the expected user. Include users with a wider range of technical savvy and education who might use your product.
  • Consider what circumstances or user environments would cause product features to fail or malfunction.
  • Discover negative use cases where people actively would not want (or be hurt by) a feature functioning as intended.
  • Design for failure modes, edge cases, and bad outcomes. What are the worst things you can imagine happening and how could you prevent them? 
  • Question your assumptions. Find someone outside the team who has nothing to lose by asking the hard questions.
  • Assess the risk of potential harm, prioritize, and advocate for solutions. 
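The last step in the checklist, assessing and prioritizing risk, can be sketched as a simple likelihood-times-severity triage, loosely in the spirit of the risk analysis used for medical devices. The hazards and the 1-to-5 scales below are invented for illustration, not drawn from any real assessment.

```python
# Hypothetical harm-risk triage: score each hazard by likelihood x severity,
# then address the highest-scoring items first. Scales and hazards are invented.

hazards = [
    {"hazard": "feature resurfaces a traumatic memory",  "likelihood": 3, "severity": 4},
    {"hazard": "offline state loses unsaved user input", "likelihood": 4, "severity": 3},
    {"hazard": "dark-pattern flow hides a privacy opt-out", "likelihood": 2, "severity": 5},
]

for h in hazards:
    h["risk"] = h["likelihood"] * h["severity"]  # simple risk priority number

# Highest-risk items first, so mitigation effort goes where it matters most.
for h in sorted(hazards, key=lambda h: h["risk"], reverse=True):
    print(f'{h["risk"]:>2}  {h["hazard"]}')
```

A two-factor score like this is deliberately crude; its value is in forcing the team to rank harms explicitly rather than argue about them in the abstract.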

      There’s no way to anticipate everything, but even a modicum of effort put into mitigating future harm will build better products and reduce the potential for negative PR that could affect not only sales but also the reputation of the company itself.

