Using AI tools for research and writing raises concerns about academic honesty. Generating content with AI and presenting it without proper attribution can constitute plagiarism. It is crucial to cite AI tools appropriately and to disclose how AI was used in research, writing, and creative works.
Respecting intellectual property rights is equally important, and using generative AI for creative endeavors carries a potential for copyright infringement. AI models are often trained on datasets that include copyrighted works, raising questions about the legality and ethics of using those works without explicit permission from the copyright owners.
When using AI tools that process personal data, it is essential to safeguard the privacy of individuals. This involves anonymizing or de-identifying data and being cautious about how the information is stored and shared.
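As a minimal sketch of what de-identification can look like in practice, the example below drops or pseudonymizes direct identifiers before any text is shared with an AI tool. The field names ("name", "email", "notes") and the salted-hash approach are illustrative assumptions, not a complete de-identification standard, and automated steps like this can still miss identifiers embedded in free text, so manual review remains necessary.

```python
# Illustrative de-identification of a record before sharing text with an AI tool.
# Field names and the salted-hash pseudonym scheme are assumptions for this sketch.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # keep secret; discard it to make pseudonyms unlinkable later


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted hash so records stay linkable
    across a dataset without exposing the original value."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:12]


def deidentify(record: dict) -> dict:
    """Keep only the fields needed for the task; drop or pseudonymize identifiers."""
    return {
        "participant_id": pseudonymize(record["email"]),  # stable pseudonym, not the email itself
        "notes": record["notes"],  # free text: still review manually for embedded PII
    }


if __name__ == "__main__":
    raw = {
        "name": "Jane Doe",
        "email": "jane@example.edu",
        "notes": "Prefers morning interviews.",
    }
    print(deidentify(raw))
```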
AI systems have well-documented problems with bias and misinformation. Because they are trained on data that can itself be biased, that bias carries over into the information they generate. For example, if an AI system's training data consists mostly of news articles from one political perspective, the text the system generates will reflect that perspective. Be aware of the potential for bias and misinformation, critically evaluate AI-generated output, and cross-check AI content against reliable sources such as the scholarly publications accessible through the BYU Library.
Generative AI models require large amounts of computational power and have a significant environmental impact, primarily through high energy consumption, carbon emissions, and water usage. For example, one study found that 1,000 inferences with a high-carbon image generation model emit roughly as much carbon as driving an average gasoline-powered passenger vehicle 4.1 miles (Sasha Luccioni, Yacine Jernite, and Emma Strubell. 2024. "Power Hungry Processing: Watts Driving the Cost of AI Deployment?" In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). Association for Computing Machinery, New York, NY, USA, 85–99).
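To get a rough sense of scale from that figure, the back-of-the-envelope sketch below extends the study's reported equivalence of about 4.1 car-miles per 1,000 image-generation inferences to other usage levels. The linear scaling and the example inference counts are assumptions made only for illustration.

```python
# Back-of-the-envelope scaling of the Luccioni et al. (2024) figure:
# ~4.1 gasoline-car miles of emissions per 1,000 image-generation inferences.
# Linear scaling and the example usage levels below are illustrative assumptions.
MILES_PER_1000_INFERENCES = 4.1


def car_mile_equivalent(inferences: int) -> float:
    """Approximate car-miles of emissions for a given number of inferences."""
    return inferences / 1000 * MILES_PER_1000_INFERENCES


for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} inferences ~ {car_mile_equivalent(n):,.0f} car-miles of emissions")
```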
For further reading, review this article and the titles in its bibliography: Bashir, Noman, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti. 2024. "The Climate and Sustainability Implications of Generative AI." An MIT Exploration of Generative AI, March. https://doi.org/10.21428/e4baedd9.9070dfe7
Created by Rebecca Sweetman, Associate Director, Educational Technologies, Queen's University, Kingston, Ontario, Canada (https://h5pstudio.ecampusontario.ca/content/51741).