Generative AI CU Boulder

Introduction

The extensive use of AI and its deep integration with society inevitably give rise to a range of ethical issues and concerns. These issues are rooted not only in the technology itself but also in the way it is applied in practice. Recognizing these ethical issues, comprehending their scope, and understanding the different approaches that organizations, governments, and companies have taken to address them will enable us to use AI tools ethically.

To learn more, we recommend following Casey Fiesler's work in AI Ethics as well as Harvard's AI Pedagogy Project.

Ethics, Policies & Copyright

Ethical Issues

Generative Artificial Intelligence tools are created and maintained within existing systems of inequality and oppression. Be aware that tools will often reproduce existing biases: seek out information about how specific tools are trained to understand to what extent companies are working to combat those biases, and consider seeking out additional resources to supplement AI-generated information.

Bias and Discrimination

Instances of bias and discrimination, such as racial bias and gender discrimination rooted in training data, have appeared in many applications of AI systems.

Dattner, Ben, Tomas Chamorro-Premuzic, Richard Buchband, and Lucinda Schettler. “The Legal and Ethical Implications of Using AI in Hiring.” Harvard Business Review, April 25, 2019

Environmental Impact

Training and running AI systems requires significant amounts of energy, chiefly electricity. The development of AI also increases demand for many natural resources, such as rare earth metals, and the manufacture and disposal of AI hardware contribute to pollution and waste.

OECD Report: Measuring the environmental impacts of artificial intelligence compute and applications (Published on November 15, 2022)

Google's recommendations on best practices for publishing about Machine Learning (ML)

"Black Box" Algorithm

Deep learning algorithms, the core of many AI systems, operate as a "black box": it is difficult to understand how they process data and arrive at predictions or decisions, raising issues of trust and accountability.

Bagchi, Saurabh, and The Conversation US. "Why We Need to See Inside AI's Black Box." Scientific American, May 26, 2023.

Data Privacy and Protection

AI systems require huge amounts of data to train their models, and in many cases the training data includes sensitive or confidential content. Using this content without consent or input from its creators raises concerns about Indigenous data sovereignty and the privacy of sensitive data such as medical records. Data that users enter while interacting with AI tools may also be reused to retrain and develop those tools, raising further concerns about how potentially sensitive information is stored and used.

A new way to look at data privacy (published July 14, 2023) 

Generative Artificial Intelligence and Data Privacy: A Primer (published May 23, 2023)

Indigenous knowledges informing ‘machine learning’ could prevent stolen art and other culturally unsafe AI practices (published September 8, 2023)

Workers' Rights

The Machine Learning (ML) behind AI systems relies on human labor to label training data, leading to concerns over fair compensation and safe working conditions. "Data labelers," also called "ghost workers," train AI to filter offensive content. These jobs are precarious, their work is often hidden by the companies that rely on them, and data labelers receive low compensation for repetitive and psychologically disturbing work. Additionally, artists and authors whose works have been used to train AI models have raised intellectual property concerns, arguing that this practice undermines their livelihoods by denying them credit and compensation for their work.

OpenAI Used Kenyan Workers Making $2 an Hour to Filter Traumatic Content from ChatGPT (published January 18, 2023)

ChatGPT and Generative AI Tools: Theft of Intellectual Labor? (published April 4, 2023) 

Authors Guild open letter (published July 18, 2023)

Artists and Illustrators Are Suing Three A.I. Art Generators for Scraping and ‘Collaging’ Their Work Without Consent (published January 24, 2023)

Frameworks for Ethical AI

International

U.S.