Like many innovations, Generative AI can be used ethically or unethically. But who defines those boundaries? Primarily, that responsibility falls to policymakers and rule-makers, though individual morals and intentions also play a significant role. In the academic setting, the chief concern is that students might use AI to write papers, complete projects, and take tests without any personal effort, thereby undermining their own learning.
However, this concern can be mitigated through open dialogue between instructors and students: we all need to understand where we stand and what our expectations are for this technology in the classroom. Like any tool, Generative AI can be wielded for both positive and negative purposes, so as you prepare for these discussions, consider how it can be leveraged in effective and ethical ways.
First and foremost, AI is not a replacement for humans. It cannot fully replicate human ingenuity or creativity: while AI must learn from existing information, human thinking can generate entirely new ideas from scratch.
Secondly, AI programs such as ChatGPT are not search engines, and they do not look up or verify information. They generate language based on patterns in the data they were trained on.
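To make that distinction concrete, here is a deliberately tiny sketch of the underlying idea. The toy model and its three-sentence "training text" are our own invention for illustration; real systems like ChatGPT are vastly more sophisticated, but the core principle is the same: output is predicted from patterns in training data, not retrieved from a database of facts.

```python
import random
from collections import defaultdict

# A toy "language model": it learns which word tends to follow which
# in its training text, then generates new text from those statistics.
training_text = (
    "the library offers research help "
    "the library offers writing help "
    "the tutor offers writing help"
)

# Record every word that follows each word in the training text.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# Generate a phrase by repeatedly picking a statistically plausible
# next word. Nothing here checks whether the result is *true*.
random.seed(1)
word = "the"
phrase = [word]
for _ in range(4):
    if word not in transitions:
        break
    word = random.choice(transitions[word])
    phrase.append(word)

# Prints a plausible-sounding phrase assembled from learned word patterns.
print(" ".join(phrase))
```

Notice that the program can stitch together word combinations that never appeared in its training text; plausibility, not accuracy, drives the output.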
Lastly, AI is not well regulated. Currently, there is no comprehensive legislation, regulation, or set of standards governing AI. Although some executive orders are starting to address these issues, there is still no definitive framework in place.
There are several legitimate concerns about Generative AI in academia. One major issue is the use of AI to write papers, which undermines the trust between teachers and students. How can educators trust the accuracy and authenticity of student work if it may have been generated by AI? AI relies on the information it is trained on, so if it learns incorrect or misleading data, it will reproduce that content regardless of its truthfulness.
This concern extends beyond academia. Any field or profession that relies on AI to generate content faces the risk of losing consumer trust.
Consider this article from PBS on how the magazine Sports Illustrated was found to be publishing AI-generated stories, photos, and authors: https://www.pbs.org/newshour/economy/sports-illustrated-found-publishing-ai-generated-stories-photos-and-authors
Or consider how the company Wizards of the Coast had to admit to using Generative AI in a marketing image, as reported in this article from Polygon: https://www.polygon.com/24029754/wizards-coast-magic-the-gathering-ai-art-marketing-image
In addition to concerns about trust, there are concerns about accuracy and even bias or stereotypes (see this article from NPR: "How AI could perpetuate racism, sexism and other biases in society").
We asked ChatGPT “Can Generative AI be biased or racist?” and here is its response:
“Generative AI can indeed inherit biases present in the data it learns from. If the training data contains biased information or reflects societal prejudices, the AI model may unintentionally generate biased or discriminatory outputs. Developers and researchers are actively working on methods to mitigate these biases and ensure AI systems are fair and equitable.”
Because AI learns from whatever information it is given and operates without adequate regulation, there is a real risk of it generating inaccurate or biased content that perpetuates stereotypes. This underscores the importance of reviewing AI-generated content thoroughly before sharing it.
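To see mechanically how skewed data becomes skewed output, consider this simplified, hypothetical sketch. The four "training" sentences and the counting logic are invented for illustration and are nothing like a real model, but the lesson carries over: the statistics a system learns are only as fair as the data behind them.

```python
from collections import Counter

# A deliberately skewed toy dataset: doctors are always "he",
# nurses are always "she". The bias lives in the data itself.
training_sentences = [
    "the doctor said he would help",
    "the doctor said he was busy",
    "the nurse said she would help",
    "the nurse said she was busy",
]

# Tally which pronoun follows each profession, the way a model's
# learned statistics would.
associations = {"doctor": Counter(), "nurse": Counter()}
for sentence in training_sentences:
    words = sentence.split()
    for profession in associations:
        if profession in words:
            pronoun = words[words.index(profession) + 2]  # the "he"/"she" slot
            associations[profession][pronoun] += 1

for profession, counts in associations.items():
    print(profession, dict(counts))
# doctor {'he': 2}
# nurse {'she': 2}
# A system trained only on this data would "learn" the stereotype
# and confidently reproduce it in everything it generates.
```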
All of these concerns matter when using Generative AI to create content, but one stands above the rest: the lack of privacy safeguards. Do not enter private information into AI systems. Many of these services store and learn from the data users submit, which means private information could resurface later without your knowledge. The safest practice is to keep sensitive data out of these programs altogether.
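For situations where real text must be used at all, one common precaution is to scrub identifying details before submitting anything. The sketch below is a minimal, hypothetical example using simple pattern matching; the patterns and placeholder tags are our own and are far from exhaustive.

```python
import re

# Replace sensitive-looking substrings with placeholder tags BEFORE
# pasting text into a generative AI tool. These three patterns are
# illustrative only; no filter catches everything.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-shaped numbers
]

def scrub(text: str) -> str:
    """Swap anything matching a redaction pattern for its placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Contact Jane Doe at jane.doe@example.edu or 555-123-4567."
print(scrub(draft))  # Contact Jane Doe at [EMAIL] or [PHONE].
```

Notice that the name "Jane Doe" slips straight through. Simple filters miss a great deal, which is exactly why keeping sensitive data out of these programs entirely remains the safest practice.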