Addressing the Limitations of Using Generative AI for Learning

Within the academic technology ecosystem, generative AI is increasingly ubiquitous. The Office of Academic Technology (OAT) is the central authority for evaluating learning technologies on campus, including generative AI. We support the responsible adoption of generative AI for academic use, including for teaching and learning. Part of responsible adoption involves creating opportunities for the Longhorn community to gain literacy in the benefits and limitations of generative AI use in education.


Engaging in AI Forward – AI Responsible Learning

What is generative AI?

Generative AI refers to models that generate new or original output (text, code, images, video, music, etc.) from the pre-existing data they were trained on. The purpose of generative AI is to create brand-new content. This synthetic content can lead to wonderful, creative output, and that same output is often rife with limitations. Being AI-forward requires being aware of and taking action to mitigate the limitations described below.

What is AI responsible learning?

Familiarize yourself with the limitations of generative AI documented below and avoid overreliance on the models. When using generative AI, approach output with skepticism and use a range of other quality assurance methods. As AI researcher Dr. Sasha Luccioni observes, AI is always confident but only sometimes competent. The best way to avoid the limitations listed below is to engage often with other humans, such as peers, teachers, and mentors, as part of the learning process. This is particularly the case when learning to write, which the UT Austin Faculty Writing Committee notes should always involve frequent human-to-human feedback.


Limitations of Using AI for Learning

1.0 Privacy and Security 

UT Austin supports the acceptable use of generative AI with data that is publicly available or classified as Published university information. When learning with AI, protect yourself and others by using only trusted vendors that allow you to easily disable chat history or model training. All students, faculty, and staff must always follow the University’s generative AI acceptable use policy, which notes that inputting confidential or controlled university data is not allowed unless a University contract is in place that protects such data.

2.0 Hallucinations 

Hallucinations are common and occur when generative AI produces content that is untrue or factually inaccurate. For example, a bot such as ChatGPT might report that you have only 37 rows of data in a data set when there are actually 72. Always use more traditional research approaches to confirm and verify facts.
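
When a claim is checkable, check it yourself rather than trusting the model. The minimal Python sketch below counts the rows in a data file directly; the file name survey_results.csv is a hypothetical stand-in for your own data set.

    import pandas as pd  # assumes the pandas library is installed

    # Load the data set and count the rows yourself instead of
    # relying on a chatbot's description of your own file.
    df = pd.read_csv("survey_results.csv")  # hypothetical file name
    print(f"Actual row count: {len(df)}")   # e.g., 72, not the 37 the bot claimed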

3.0 Misalignment

Misalignment occurs when a user prompts a generative AI application to produce a specific output and the AI produces an unexpected or undesired result instead. For example, when you ask a bot for the Excel formula for merging two cells, it may give you the formula for splitting two cells instead. Misalignment is particularly problematic with image generation but also occurs within synthetic or AI-generated text. Careful prompt engineering helps address misalignment in text-based AI, but you will not be able to avoid it completely. Humans should always evaluate the output of generative AI with critical thinking skills.
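
One practical way to catch misalignment in AI-suggested code or formulas is to test the suggestion on a small input whose correct answer you already know. The Python sketch below is a hypothetical illustration: merge_cells stands in for a snippet an AI assistant might suggest, and the assertion fails if the suggestion does something other than merge.

    # merge_cells is a hypothetical stand-in for an AI-suggested snippet.
    def merge_cells(a: str, b: str) -> str:
        return a + " " + b

    # Test against a case whose correct answer you already know.
    result = merge_cells("Austin", "Texas")
    assert result == "Austin Texas", f"Unexpected output: {result}"
    print("The suggestion merges values as intended on the known case.")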

4.0 Bias

Because humans train the models, generative AI output has both explicit and implicit biases baked into it, including stereotypes. It won’t be possible to entirely avoid bias in generative AI, just as it is not possible to avoid it in the real world, but a few small tips can help you engage with greater awareness. Educate yourself on the nature of implicit bias. Remember that because the models are trained by humans, they will reproduce our biases. Be skeptical and avoid overreliance on the models for any project. When you observe output from AI that is clearly biased or laced with stereotypes, report it to the vendor of the app you are using.

5.0 Ethics 

AI models raise several ethical considerations, including the ingestion of intellectual property, the spread of misinformation, and the use of generative AI to produce new works or results without original human contribution. Other ethical considerations relate to the labor used to train the models, the environmental impacts of generative AI, and the business ethics of releasing AI models for revenue generation without a clear understanding of the impact on society. As with bias, it is essential to be aware of the ethical concerns of using AI so that you can engage in a transparent manner.

6.0 Cognitive Offloading

Cognitive offloading is the process of humans using external tools to reduce the demand, or “cognitive load,” of completing tasks (Risko and Gilbert, 2016). Technologies such as generative AI chatbots can help increase productivity through cognitive offloading; however, if you do not employ those freed-up resources for other tasks, the overuse of generative AI could potentially “diminish specific cognitive skills” (León-Dominguez, 2024). Engaging in human activities such as actively critiquing and evaluating output from generative AI can help protect problem-solving and creative and critical thinking skills. Other things you can do to avoid the downsides of cognitive offloading include transforming generative AI output to make it your own. For example, evaluate the output for hallucinations, misalignment, bias, or ethical issues, and then transform that output to make it authentically yours with your original thoughts, opinions, knowledge, or work.

Share Your Stories 

Have you adopted AI Forward – AI Responsible practices? The Office of Academic Technology is interested in learning from you. Please share your examples with the OAT using the form linked below. If you have questions or comments about generative AI for learning, please email us at oat@utexas.edu.

Share Your Examples
Last Updated

February 21, 2024