With AI-generated content appearing in text, audio, and even visual media, professionals must consider the ethics of AI as this ever-developing technology is applied to fields previously thought impervious to automation. Written from the perspective of a UX researcher, this article considers the principles of working ethically with AI.
Ethical AI Principles
One important principle is transparency. AI algorithms should be open and accessible to scrutiny, and developers should provide clear explanations for the decisions made by their systems. This can help to identify and correct any biases or discriminatory practices that may be present in the AI.
Another important principle is fairness. AI systems should be designed to treat all individuals fairly, regardless of their race, gender, or any other characteristic. This means that developers must test their systems for bias and take steps to address any biases that are found.
Privacy is also a key ethical consideration. Data must be collected and used in ways that respect individuals' privacy and autonomy. This means that data should only be collected for specific, legitimate purposes, and individuals must be informed about how their data is being used and given the option to opt out if they choose to.
Safety is another important ethical consideration. AI systems must be designed with safety in mind, including physical safety, cybersecurity, and the prevention of misinformation and disinformation.
Finally, ethical AI requires Accountability. Developers and users of AI must take responsibility for the impact of their systems on society and must be prepared to address any unintended consequences or negative outcomes.
Further Discussions
While this article so far has taken a well-intentioned but shallow look at the ethical usage of AI, most of the content above (the section titled Ethical AI Principles) has been generated by AI, namely ChatGPT. This article was generated with the prompt, “Please write a less-than-400-word article about using AI ethically.” It was done as an exploration of the capabilities and safeguards of one specific AI product. While the AI-generated content above appears to center the discussion around developers, some of the principles also apply to AI tool users.
A typical user of AI-generated content most likely would not encounter private and personal information unless the AI product they are using chooses to reveal or utilize such information. However, users of AI should consider Privacy in the context of entering private and personal information as prompts. AI tools are products created by companies, research labs, or other organizations. For example, ChatGPT belongs to OpenAI, and Midjourney Inc. created Midjourney, a text-to-image AI. When entering a prompt into an AI tool, a user should be mindful that they are giving information to another organization and should avoid entering personal and private information, regardless of any privacy policy stated by the AI's developers.
The principles of Transparency, Safety, and Accountability apply more directly to users of AI-generated content, even if not exactly as described in the AI-generated section above. While the user provides the prompt that triggers the AI's content generation, it would be difficult for a professional (a writer, artist, or UX researcher) to claim the generated content as entirely their own. Any usage of AI should be clearly stated, along with the extent of AI-generated content in each deliverable. While an external user of an AI tool cannot be transparent about the tool's underlying algorithms, transparency about the usage of AI-generated content is essential to that user's integrity.
In the AI-generated section on Safety, the line most relevant to users of AI tools is "the prevention of misinformation and disinformation." As this article demonstrates, entering a prompt and lifting the result from the AI is not enough. AI-generated text often contains inaccuracies and algorithmic assumptions (ChatGPT assumed "using AI ethically" was written from the developer's perspective), and AI-generated images may contain errors or even proliferate harmful visual stereotypes. Users of AI-generated content should ensure their use of this new technology does not cause real harm.
While the principle of accountability in the section above is similarly centered on AI developers, AI tool users are ultimately accountable for their usage of these tools. AI products like ChatGPT can only react, within the bounds of their programming, to the behavior of their users. It is up to the user to be accountable for how they evaluate, transform, and utilize the AI-generated content.
In the interest of accountability and transparency, the following section details the process used to generate the AI content at the start of this article. To save on length, only the prompts tested and comments on the generated results are included.
Prompt 1: “write me a 600-word article about how someone should work with AI”
Prompt 2: “write me an 600-word article about how someone should work with AI ethically”
Prompt 3: “Please write a 400-word article using AI ethically”
Prompt 4: “Please write a less-than-400-word article about using AI ethically”
Prompt 5: “Please write a less-than-400-word article about using AI ethically while citing sources”
Prompt 6: “Please write a less-than-400-word article about the ethical usage of AI-generated content”
Last Words, from the AI itself
To close, this article gives the AI a chance to "speak" for itself and "convince" readers of the ethical usage of AI-generated content.
Prompt 7: “How would you convince people to be ethical with the usage of AI content?”
Prompt 8: “How would you convince people to be ethical with usage of AI content in the fewest words possible?”
Prompt 9: “How would you convince people to be ethical with usage of AI content in 100 words?”
Prompt 10: “In 100 words, why should people be ethical with the usage of AI-generated content?”
Answer from ChatGPT on Mar 31, 2023:
“People should be ethical with the usage of AI-generated content to prevent perpetuating biases, violating privacy, and perpetuating inequality. AI-generated content can have significant impacts on individuals and society, and unethical use can lead to negative consequences such as spreading false information and manipulating public opinion. Ethical usage can help ensure that AI-generated content is used to benefit society while minimizing potential harms. Moreover, establishing clear guidelines and regulations around the creation and usage of AI-generated content can promote transparency, accountability, and fairness. Overall, using AI ethically is crucial to ensuring that it is used to create positive impacts while minimizing negative consequences.”
As AI and the ethical usage of AI-generated content are far from settled matters, Key Lime Interactive researchers will continue to monitor this evolving technology and other developments in technology and UX research. Please contact us if you want to understand how new technologies and user experience may affect your next project.
Additional readings and inspirations
Can ChatGPT Make You A Faster, Better (But Ethical) UX Researcher?
How Do We Use Artificial Intelligence Ethically?
Ethical concerns mount as AI takes bigger decision-making role
AI Image Generator: The Problematic Truth Behind the Trend
We're Cry-Laughing at These "Spectacular Failures" of AI Generated Art