
Artificial Intelligence in Education & Research: Ethical Considerations

"We believe in being honest, true, chaste, benevolent, virtuous, and in doing good to all men..." Articles of Faith 13

Guidelines

Academic Integrity

Using AI tools for research and writing raises concerns about maintaining academic honesty. Generating content with AI and presenting it without proper attribution can constitute plagiarism. It is crucial to cite AI tools appropriately and to disclose how AI was used in research, writing, and creative works.

Intellectual Property & Copyright

Respecting intellectual property rights is important, and there is potential for copyright infringement when using generative AI for creative endeavors. AI models are often trained on datasets that include copyrighted works, raising questions about the legality and ethics of using those works without explicit permission from the copyright owners.

Safety & Data Privacy

When using AI tools that process personal data, it is essential to safeguard the privacy of individuals. This involves anonymizing or de-identifying data and being cautious about how the information is stored and shared.
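As one illustration of de-identification (a minimal sketch, not an endorsed procedure; the record structure and field names below are hypothetical), a simple approach replaces direct identifiers with one-way hashes before data is stored or shared:

```python
import hashlib

# Hypothetical survey record; field names are illustrative only.
record = {
    "name": "Jane Doe",
    "email": "jane@example.edu",
    "responses": [4, 5, 3],
}

def deidentify(rec, direct_identifiers=("name", "email")):
    """Replace direct identifiers with truncated one-way hashes so
    records can still be linked to each other without revealing
    who they belong to."""
    out = dict(rec)
    for field in direct_identifiers:
        if field in out:
            digest = hashlib.sha256(str(out[field]).encode()).hexdigest()
            out[field] = digest[:12]
    return out

clean = deidentify(record)
```

Note that hashing alone is not full anonymization: low-entropy identifiers such as names can sometimes be recovered by guessing, so real projects typically layer additional safeguards (salting, aggregation, or removing quasi-identifiers) and follow their institution's IRB and data-governance policies.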

Bias

It is well documented that AI systems can produce biased output and misinformation. Because AI systems are trained on data that may itself be biased, that bias carries through to the generated output. For example, if an AI system's training data consists mostly of news articles from one political perspective, the text the system generates will reflect that perspective. Be aware of the potential for bias and misinformation, critically evaluate AI-generated output, and cross-check AI content against reliable sources found in scholarly publications accessible at the BYU Library.

Further reading:

Zhou, Mi, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, and Kannan Srinivasan. "Bias in Generative AI." arXiv preprint arXiv:2403.02726 (2024).

Warr, M. (2024). "Beat Bias? Personalization, Bias, and Generative AI." In J. Cohen & G. Solano (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 1481-1488). Las Vegas, Nevada, United States: Association for the Advancement of Computing in Education (AACE). Retrieved December 11, 2024 from https://www.learntechlib.org/primary/p/224163/.

Warr, M., Oster, N. J., & Isaac, R. (2024). "Implicit bias in large language models: Experimental proof and implications for education." Journal of Research on Technology in Education, 1–24. https://doi.org/10.1080/15391523.2024.2395295

Environmental Impact

Generative AI models require large amounts of computational power and have a significant environmental impact, primarily due to high energy consumption, carbon emissions, and water usage. For example, according to one study, generating 1,000 images with a high-carbon image generation model emits roughly as much carbon as driving an average gasoline-powered passenger vehicle 4.1 miles. (Sasha Luccioni, Yacine Jernite, and Emma Strubell. 2024. "Power Hungry Processing: Watts Driving the Cost of AI Deployment?" In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). Association for Computing Machinery, New York, NY, USA, 85–99.)

For further reading, review this article and the titles in its bibliography: Bashir, Noman, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti. 2024. "The Climate and Sustainability Implications of Generative AI." An MIT Exploration of Generative AI, March. https://doi.org/10.21428/e4baedd9.9070dfe7

 

Harm Considerations of Large Language Models

Created by Rebecca Sweetman, Associate Director, Educational Technologies, Queen’s University, Kingston Ontario, Canada.

https://h5pstudio.ecampusontario.ca/content/51741