
Artificial Intelligence (AI) is now the driving force behind many technological advancements, and people are concerned about how it will affect them. Fears surrounding AI include job displacement, bias, privacy, and misinformation. It’s essential to understand that AI is not replacing humans; rather, humans play an integral part in AI development and deployment. In this blog, we’ll explore why companies need user testing for AI products and the factors to consider when building trust and collaboration between technology and humans.
Unraveling the Fears About AI
Before going into how to test AI products, it is important to understand the concerns about AI and its limitations.
One of the most common concerns is that AI will displace human jobs. Because AI is not just automating tasks but also predicting behaviors, many worry about the increasing reliance on AI for decision-making. However, part of AI’s power is augmentation: enhancing human workers by making their jobs more efficient and less repetitive. One could argue that it gives people the ability to do more and to focus on the things that matter most.
AI can also analyze, consolidate, and draw conclusions from vast arrays of data at a scale humans cannot match. In healthcare, for example, AI has reviewed thousands of medical journal reports and findings, searching for relevant information in order to make predictions and recommendations.
AI’s ability to automate and predict is made possible by the data it is trained on. That data, input by humans, is naturally biased because humans are themselves biased. This leads AI to perpetuate those biases, raising concerns about fairness and discrimination in AI-driven decision-making. The documentary Coded Bias illustrates this by exploring how facial recognition can lead to false accusations against people of color. Bias can also produce misinformation, which AI tends to deliver in a confident-sounding tone.
Another concern is that because AI needs vast quantities of data, we must address how the black box handles private information. Many companies have introduced policies limiting the use of AI to some degree because of confidentiality concerns.
What is the problem AI was created to solve?
Before even thinking about testing products involving AI, it is important to assess the end-user need: what problem is AI trying to solve? AI is the new “trend,” but that alone is not a good enough reason to commit time, money, and effort to building a completely new feature or product.
Humans: The Heart of AI
When the user problem has been identified, it is important to remember who will be using the AI. Humans play a crucial role in the field of AI, and companies need to foster trust and collaboration between humans and AI systems. By working together, the two can achieve more than either can in isolation. However, to establish that level of trust, AI must prove its worth by addressing real-world issues and providing tangible value. Once foundational research has shown that AI is needed, the following experiences should be considered and evaluated to mitigate bias and misinformation.
Human oversight
Incorporating checks within the system, or involving human reviewers to assess its performance during usage, adds to AI’s reliability and assurance by identifying and rectifying errors or biases as they arise.
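As a rough sketch, oversight can be as simple as routing low-confidence outputs to a human reviewer instead of straight to the user. The Python below is illustrative only; the model call, review queue, and confidence threshold are hypothetical stand-ins rather than a prescribed implementation.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune it to the product's risk level

@dataclass
class Prediction:
    text: str
    confidence: float

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def add(self, question: str, prediction: Prediction) -> None:
        # A human reviewer later inspects and corrects these entries.
        self.items.append((question, prediction))

def fake_model(question: str) -> Prediction:
    # Stand-in for a real model call.
    return Prediction(text=f"Answer to: {question}", confidence=0.62)

def answer_with_oversight(question: str, queue: ReviewQueue) -> dict:
    prediction = fake_model(question)
    if prediction.confidence < CONFIDENCE_THRESHOLD:
        queue.add(question, prediction)  # defer to a human instead of guessing
        return {"status": "pending_review"}
    return {"status": "answered", "text": prediction.text}

queue = ReviewQueue()
print(answer_with_oversight("Is this invoice fraudulent?", queue))
# -> {'status': 'pending_review'} because confidence 0.62 is below the cutoff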
Transparency
Traceability and interpretability are important: users should clearly understand that they are using AI and should be given visibility into the sources of information. This transparency fosters trust and enables users to make informed decisions. For example, Amazon uses generative AI to consolidate reviews for a specific product, and makes it apparent when it does so.
(Image: Amazon’s AI-generated review summary. Source: https://www.theverge.com/2023/8/14/23831391/amazon-review-summaries-generative-ai)
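One lightweight way to build this kind of transparency is to attach the underlying sources and an explicit AI-generated label to every output. The Python sketch below shows one way such a structure could look; it is not a description of Amazon’s implementation, and every field name and value is illustrative.

from dataclasses import dataclass

@dataclass
class AttributedSummary:
    text: str
    sources: list[str]            # links or IDs the user can verify
    generated_by_ai: bool = True  # surfaced as a label in the UI

summary = AttributedSummary(
    text="Customers like the battery life but find the strap flimsy.",
    sources=["review/1042", "review/2217", "review/3388"],  # illustrative IDs
)

# Render the disclosure alongside the content, not buried in a footer.
print(f"[AI-generated] {summary.text}")
print("Based on:", ", ".join(summary.sources))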
Explainability
Providing information about how the system was created and what it is intended to do gives users insight into the so-called black box of AI, which can significantly enhance trust and understanding.
In 2022, IBM made this simple statement: “Explainable recommendations that provided clear evidence as to why a business partner is recommended were critical to the success of this AI system engendering trust in the system.”
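One common way teams publish this kind of information is a “model card” that travels with the system. The Python sketch below is purely illustrative; all fields and values are hypothetical and only meant to show the shape such a record might take.

# A minimal model-card-style record; every entry here is a made-up example.
model_card = {
    "name": "partner-recommender",  # hypothetical system name
    "intended_use": "Suggest business partners for account managers to review",
    "not_intended_for": ["Automated contract or hiring decisions"],
    "training_data": "Internal partnership records (illustrative description)",
    "known_limitations": ["Sparse data for new market segments"],
    "explanation_per_result": True,  # each recommendation ships with its evidence
}

for field_name, value in model_card.items():
    print(f"{field_name}: {value}")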
Feedback and Contestability
To further improve the accuracy of AI systems, implementors and end-users should be able to challenge or correct the system when it produces incorrect results. A simple thumbs-up or thumbs-down rating might not suffice; a more comprehensive feedback mechanism lets users provide specific feedback and contribute to the system’s continuous improvement.
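As an illustration, a contestable feedback record might capture what was wrong and what the correct answer should be, not just a rating. The schema below is a hypothetical Python sketch under those assumptions, not a reference design.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    response_id: str                  # which AI output is being contested
    rating: str                       # "helpful" or "not_helpful"
    issue: Optional[str] = None       # e.g. "factually_wrong", "biased", "outdated"
    correction: Optional[str] = None  # the user's suggested fix

def submit_feedback(fb: Feedback, store: list) -> None:
    # Contested results are stored for human triage and, if confirmed,
    # fed back into the system's improvement loop.
    store.append(fb)

store: list[Feedback] = []
submit_feedback(
    Feedback(
        response_id="resp-481",
        rating="not_helpful",
        issue="factually_wrong",
        correction="The warranty is 12 months, not 24.",
    ),
    store,
)
print(len(store), "feedback item(s) queued for review")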
Guardrails
Implementing system-level safeguards from the early stages of AI development is critical for combating bias. Careful selection and management of training data should be prioritized to prevent biases, such as zip codes acting as a proxy for race. Proactively addressing potential biases makes AI systems fairer and less discriminatory.
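One concrete, if simplified, guardrail is to scrub known proxy features before the data ever reaches training. The Python below is a minimal sketch; the proxy list and field names are assumptions, and removing proxies alone does not guarantee fairness.

PROXY_FEATURES = {"zip_code"}  # assumed proxy list; what counts as a proxy is domain-specific

def scrub_record(record: dict) -> dict:
    # Remove proxy features so the model never gets a chance to learn from them.
    return {k: v for k, v in record.items() if k not in PROXY_FEATURES}

applicant = {"income": 52000, "credit_history_years": 7, "zip_code": "60644"}
print(scrub_record(applicant))
# -> {'income': 52000, 'credit_history_years': 7}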
As Stanford University stated, “AI is not the ultimate authority; instead, it is a tool for quickly recognizing patterns or predicting outcomes, which are then reviewed by human experts. Keeping people in the loop can ensure that AI is working properly and fairly and also provides insights into human factors that machines don’t understand.”
Control
Once trust is established, users should have the option to relinquish control to AI systems, especially in situations where work and personal life intersect. This allows users to take responsibility for the output while relying on AI for assistance, ultimately enhancing productivity and efficiency.
Personas around AI
How prominent controls, guardrails, explainability, and feedback should be also depends on the risks or consequences of the AI-powered product (e.g., healthcare or a self-driving car). The end-users’ level of expertise can likewise sway how you apply these factors. For example, what are users’ expectations when using AI? Do they understand what it is and is not good at, and when to use it? Users don’t have to be experts on AI to use it, but understanding how they see AI’s limitations can determine how much friction you should build into your design. If they don’t know how to use it or don’t trust it, they won’t use it or, more consequentially, will use it incorrectly.
A Human-AI Synergy
Embracing the synergy between humans and AI is key to unlocking technology’s full potential and driving progress. By fostering a collaborative culture and embracing opportunities at the intersection of human intelligence and artificial intelligence, we can discover new ways to streamline processes, improve outcomes, and create a better, more sustainable world. At Key Lime Interactive, we have a proven track record of helping our diverse clients keep up with technological advances, like AI, without sacrificing user experience. To learn more about how we can do the same for your company, please visit our home page.
Sources:
https://research.ibm.com/blog/what-is-human-centered-ai
https://hai.stanford.edu/news/human-centered-approach-ai-revolution