The Ethics And Process Of Utilizing AI

AI Head by Tara Winstead on Pexels

With AI now generating content in text, audio, and even visual media, it is important for professionals to consider the ethical usage of AI as this rapidly developing technology is applied to ever more fields previously thought impervious to automation. Written from a UX researcher's perspective, this article considers the principles of working ethically with AI.

 

Ethical AI Principles

One important principle is transparency. AI algorithms should be open and accessible to scrutiny, and developers should provide clear explanations for the decisions made by their systems. This can help to identify and correct any biases or discriminatory practices that may be present in the AI.

 

Another important principle is fairness. AI systems should be designed to treat all individuals fairly, regardless of their race, gender, or any other characteristic. This means that developers must test their systems for bias and take steps to address any biases that are found.

 

Privacy is also a key ethical consideration. Data must be collected and used in ways that respect individuals' privacy and autonomy. This means that data should only be collected for specific, legitimate purposes, and individuals must be informed about how their data is being used and given the option to opt out if they choose to.

 

Safety is another important ethical consideration. AI systems must be designed with safety in mind, including physical safety, cybersecurity, and the prevention of misinformation and disinformation.

 

Finally, ethical AI requires Accountability. Developers and users of AI must take responsibility for the impact of their systems on society and must be prepared to address any unintended consequences or negative outcomes.

 

Further Discussions

While this article has so far taken a well-intentioned but shallow look at the ethical usage of AI, most of the content above (the section titled Ethical AI Principles) was generated by AI, namely ChatGPT. That section was generated with the prompt, “Please write a less-than-400-word article about using AI ethically,” as an exploration of the capabilities and safeguards of one specific AI product. While the AI-generated content above centers the discussion on developers, some of the principles also apply to users of AI tools.

 

A typical user of AI-generated content is unlikely to encounter private and personal information unless the AI product they are using chooses to reveal or utilize such information. However, users of AI should consider Privacy in the context of entering private and personal information as prompts. AI tools are products created by companies, research labs, or other organizations; for example, ChatGPT belongs to OpenAI, and Midjourney Inc. created Midjourney, a text-to-image AI. When entering a prompt into an AI tool, users should be mindful that they are handing information to another organization and should avoid entering personal and private information, regardless of any stated privacy policy from the AI's developers.
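
One practical habit that follows from this principle is scrubbing obvious personal details from a prompt before it is sent anywhere. Below is a minimal Python sketch of that idea; the patterns and placeholder labels are hypothetical illustrations, and real PII detection requires far more than two regular expressions.

    import re

    # Hypothetical patterns for illustration only; real PII detection
    # needs much more than a pair of regular expressions.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    }

    def redact(prompt: str) -> str:
        # Replace obvious personal details with placeholders before
        # the prompt leaves the user's machine.
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    raw = "Summarize this complaint from jane.doe@example.com, phone 555-123-4567."
    print(redact(raw))
    # -> Summarize this complaint from [EMAIL REDACTED], phone [PHONE REDACTED].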

 

The principles of Transparency, Safety, and Accountability are much more applicable to users of AI-generated content, even if not entirely as described by the AI-generated section above. Although the user provides the prompt that triggers the AI's content generation, it would be difficult for a professional (a writer, artist, or UX researcher) to claim the generated content entirely belongs to them. Any usage of AI should be clearly stated, along with the extent of AI-generated content in each product; for example, a note such as "the first section of this article was drafted by ChatGPT and edited by the author." While an external user of an AI tool cannot be transparent about the tool's underlying algorithms, transparency about the usage of AI-generated content is essential to the user's integrity.

 

In the AI-generated section on Safety, the most relevant line for users of AI tools is “the prevention of misinformation and disinformation.” As seen in this article, entering a prompt and lifting the result from the AI is simply not enough. AI-generated text often contains inaccuracies and algorithmic assumptions (ChatGPT assumed “using AI ethically” referred to the developer's perspective), and AI-generated images may contain errors or even proliferate harmful visual stereotypes. A user of AI-generated content should ensure their use of this new technology does not cause real harm.

 

While the principle of Accountability in the section above is similarly centered on AI developers, AI tool users will ultimately be accountable for their usage of these tools. Within the bounds of their programming, AI products like ChatGPT can only react to the behavior of their users, so it is up to the user to be accountable for how they evaluate, transform, and utilize the AI-generated content.

 

As part of this article's effort to be accountable and transparent, the following section details the process used to generate the AI content at the start of this article. To save on length, only the prompts tested and comments on the generated results are included; a sketch for replaying these prompts programmatically appears after the list.

 

Prompt 1: “write me a 600-word article about how someone should work with AI”

  • This generic prompt produced an equally generic answer about using AI tools. Interestingly, the content was NOT centered on the developer's perspective; it encouraged users to understand the deeper workings of AI tools, such as the underlying training model.

Prompt 2: “write me an 600-word article about how someone should work with AI ethically”

  • The addition of a single word created more relevant content, properly introducing the topic of the ethical creation of AI but shifting the focus away from users of AI tools.

Prompt 3: “Please write a 400-word article using AI ethically”

  • The word count was shortened for brevity in writing this article. ChatGPT interpreted the “400-word” stipulation as approximately 400 words; the result was 435 words long.

Prompt 4: “Please write a less-than-400-word article about using AI ethically”

  • This was the version chosen for this article, as it provided the most interesting narrative structure (the principles of the ethical usage of AI). Admittedly, this structure appears somewhat fortuitous: repeated entries of the same prompt did not mention the term “principles.”
  • The AI-generated introduction and conclusion were also discarded, as they were too generic for this article.

Prompt 5: “Please write a less-than-400-word article about using AI ethically while citing sources”

  • A test of ChatGPT's ability to cite sources. The AI now included quotes in the content; however, searching for those quotes could verify neither their accuracy nor their sources.

Prompt 6: “Please write a less-than-400-word article about the ethical usage of AI-generated content”

  • Finally, ChatGPT produced the intended result, though still without the satisfying and clear “principles” structure seen in Prompt 4.
  • Interestingly, this result did echo many of the same “principles,” such as transparency and accountability.
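
The prompt iterations above were entered into ChatGPT's web interface. For anyone who wants to repeat this kind of exploration more systematically, the Python sketch below shows how the same prompts could be replayed through OpenAI's API instead; the client usage and model name are assumptions for illustration, not the method used for this article. The word count check mirrors the observation under Prompts 3 and 4.

    # Minimal sketch: replaying the prompt variations via the OpenAI API.
    # Assumes the `openai` package and an OPENAI_API_KEY environment variable.
    # This was NOT how this article was produced; it used the ChatGPT web UI.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = [
        "write me a 600-word article about how someone should work with AI",
        "Please write a less-than-400-word article about using AI ethically",
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model; substitute any available one
            messages=[{"role": "user", "content": prompt}],
        )
        text = response.choices[0].message.content
        # Check how closely the model honored the word-count stipulation,
        # echoing the 435-word result noted under Prompt 3.
        print(f"{len(text.split())} words for prompt: {prompt!r}")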

 

Last Words, from the AI itself

To close, this article gives the AI a chance to “speak” for itself and “convince” readers of the ethical usage of AI content.

 

Prompt 7: “How would you convince people to be ethical with the usage of AI content?”

  • Playing on the human tendency to anthropomorphize, the prompt was phrased as a question addressed to the AI.
  • The result was a set of instructions on topics for educating other users, not an argument written by a speaker attempting to convince an audience.
  • It was also too long for the end of an article.

Prompt 8: “How would you convince people to be ethical with usage of AI content in the fewest words possible?”

  • Too short.

Prompt 9: “How would you convince people to be ethical with usage of AI content in 100 words?”

  • This result was still written as an instruction, not in the “voice” of the AI itself.

Prompt 10: “In 100 words, why should people be ethical with the usage of AI-generated content?”

Answer from ChatGPT on Mar 31, 2023:

“People should be ethical with the usage of AI-generated content to prevent perpetuating biases, violating privacy, and perpetuating inequality. AI-generated content can have significant impacts on individuals and society, and unethical use can lead to negative consequences such as spreading false information and manipulating public opinion. Ethical usage can help ensure that AI-generated content is used to benefit society while minimizing potential harms. Moreover, establishing clear guidelines and regulations around the creation and usage of AI-generated content can promote transparency, accountability, and fairness. Overall, using AI ethically is crucial to ensuring that it is used to create positive impacts while minimizing negative consequences.”



As AI and the ethical usage of AI-generated content are far from settled matters, Key Lime Interactive researchers will continue to monitor this evolving technology alongside other developments in technology and UX research. Please contact us if you want to understand how new technologies and user experience may affect your next project.

 

Additional readings and inspirations

Can ChatGPT Make You A Faster, Better (But Ethical) UX Researcher? 

How Do We Use Artificial Intelligence Ethically?

What is Ethical AI

Ethical concerns mount as AI takes bigger decision-making role

AI Image Generator: The Problematic Truth Behind the Trend

We're Cry-Laughing at These "Spectacular Failures" of AI Generated Art

 

More by this Author
Shao-Yu Chen

Shao-Yu Chen is a UX Research Associate with a background in psychology and Human-Computer Interaction and a deep interest in optimizing customer experiences. Shao-Yu joined Key Lime Interactive as an analyst for the Competitive Insights practice, where he led data collection and synthesis for the banking, credit cards, auto insurance, and home insurance reports. Since then, he has crossed over to the consulting side of the business, working with clients to gather deep insights that inform the customer and user experience. Shao-Yu attended Vanderbilt University for his BA in Psychology and holds a Master's in Human-Computer Interaction from the Georgia Institute of Technology.
