Addressing the Limitations of Using Generative AI for Learning
Within the academic technology ecosystem, generative AI is increasingly ubiquitous. The Office of Academic Technology (OAT) is the central authority for evaluating learning technologies on campus, including generative AI. We support the responsible adoption of generative AI for academic use, including for teaching and learning.
Responsible AI Use in Teaching and Learning
At UT Austin, we define responsible use of AI in teaching and learning as using AI in ways that foster the achievement of learning outcomes. That means using generative AI in ways that advance what students should know, be able to do, and the attitudes they should develop as a result of a learning experience. At the same time, responsible AI use on the Forty Acres means not using generative AI in ways that would negate or inhibit the realization of those outcomes.
We encourage our faculty, students and staff to be both AI-Forward and AI-Responsible. On the forward side, we provide access to a variety of tools and resources that foster the adoption of AI to achieve learning outcomes. On the responsible side, we emphasize the critical importance of AI literacy and awareness of what we call the Big 6 limitations of using AI for learning (see below).
Engaging in AI Forward – AI Responsible Learning
Part of responsible adoption involves creating opportunities for the Longhorn community to gain literacy on the benefits and limitations of generative AI use in education. This starts with having a clear definition of what generative AI is and what it is not.
What is generative AI?
Generative AI refers to models that generate new or original output (text, code, images, videos, music, etc.) using pre-existing data the model has been trained on. The purpose of generative AI is to create brand-new content. Synthetic content can lead to wonderful, creative output, and that same output is often rife with limitations. Being AI-forward requires being aware of and taking action to mitigate the limitations below.
The Big 6: Six Key Limitations of Using AI for Learning
1.0 Privacy and Security
UT Austin supports the acceptable use of generative AI with data that is publicly available or defined as Published university information. When learning with AI, protect yourself and others by only using trusted vendors who allow you to disable chat history or model training easily. All students, faculty, and staff must always follow the University’s generative AI acceptable use policy, which notes that inputting confidential or controlled university data is not allowed unless there is a University contract in place that protects such data.
2.0 Hallucinations
Hallucinations are common and occur when generative AI produces content that is untrue or factually inaccurate. For example, bots such as ChatGPT might report that you have only 37 rows of data in a data set when there are actually 72. Always use more traditional approaches to research to confirm and verify facts.
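One practical habit is to verify a chatbot's factual claims against the source material itself. The sketch below illustrates the row-count example above: rather than trusting a reported count, compute it directly from the file. The data set here is hypothetical, invented purely for illustration.

```python
import csv
import io

# Hypothetical CSV data set with a header row plus 72 data rows,
# mirroring the hallucination example in the text above.
data = "id,value\n" + "\n".join(f"{i},{i * 2}" for i in range(1, 73))

# Count the rows yourself instead of trusting a chatbot's answer.
rows = list(csv.reader(io.StringIO(data)))
n_data_rows = len(rows) - 1  # exclude the header row

print(n_data_rows)  # 72, regardless of what a bot claims
```

The same principle applies to any verifiable claim: prefer a direct check (a count, a lookup, a citation) over the model's assertion.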
3.0 Misalignment
Misalignment occurs when a user prompts a generative AI application to produce a specific output and the AI produces an unexpected or undesired result instead. For example, when you ask a bot for the Excel formula for merging two cells, it may give you the formula for splitting two cells instead. Misalignment is particularly problematic with image generation but also occurs within synthetic or AI-generated text. Careful prompt engineering helps address misalignment in text-based AI, but you will not be able to avoid it completely. Humans should always evaluate the output of generative AI with critical thinking skills.
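A simple way to catch misalignment is to reproduce the suggested logic on a small sample before trusting it. This sketch (with illustrative values, not from the original text) contrasts what was asked for, merging two values, with what a misaligned answer might deliver, splitting them:

```python
# Sanity-check an AI-suggested operation on sample data before using it.
first, last = "Bevo", "Longhorn"  # illustrative cell values

merged = first + " " + last        # what we asked for: merge two cells
split_back = merged.split(" ")     # what a misaligned answer does: split instead

print(merged)      # Bevo Longhorn
print(split_back)  # ['Bevo', 'Longhorn']
```

If the test output does not match the result you asked for, the response was misaligned, and no amount of confidence in the bot's phrasing changes that.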
4.0 Bias
Because humans train the models, generative AI output has both explicit and implicit biases baked into it, including stereotypes. It won't be possible to entirely avoid bias in generative AI, just as it is not possible to avoid it in the real world, but a few small tips can help you engage with greater awareness. Educate yourself on the nature of implicit bias. Be aware that because humans train the models, the models will reproduce our biases. Be skeptical and avoid over-reliance on the models for any project. When you observe output from AI that is clearly biased or laced with stereotypes, report it to the vendor of the app you are using.
5.0 Ethics
AI models have several ethical considerations, including digesting intellectual property, spreading misinformation, and using generative AI to produce new works or results without attribution. Other ethical considerations relate to the labor used to build the models, the environmental impacts of generative AI, and the business ethics of releasing AI models for revenue generation without a clear understanding of the impact on society. As with bias, it is essential to be aware of the ethical concerns of using AI so that you can engage in a transparent manner.
6.0 Cognitive Offloading
Cognitive offloading is the process of humans using external tools to reduce the demand, or "cognitive load," of completing tasks (Risko and Gilbert, 2016). Technologies such as generative AI chatbots can help increase productivity through cognitive offloading; however, if you do not employ those freed-up resources for other tasks, the overuse of generative AI could potentially "diminish specific cognitive skills" (León-Domínguez, 2024). Engaging in human activities such as actively critiquing and evaluating output from generative AI can help protect problem-solving and creative and critical thinking skills. Other things you can do to avoid the downsides of cognitive offloading include transforming generative AI output to make it your own. For example, evaluate the output for hallucinations, misalignment, bias, or ethical issues, and then transform that output to make it authentically yours with your original thoughts, opinions, knowledge, or work.
Have you adopted AI Forward - AI Responsible practices? The Office of Academic Technology is interested in learning from you. Please share your examples with the OAT using the form linked below. If you have questions or comments about generative AI for learning, please email us at oat@utexas.edu.
Share Your Examples

February 21, 2024