Responsible Adoption of AI Tools for Teaching and Learning
What does responsible adoption of AI in education mean, and how can we navigate the ever-changing AI landscape? In the spring of 2025, UT's Responsible AI for Teaching and Learning Working Group set out to answer this question. Stakeholders from across campus convened to advance the University's AI-Responsible, AI-Forward framework. The working group's purpose was to respond to the need among students, faculty, and staff for clear, centrally defined guidance on how to adopt AI tools in responsible and ethical ways in teaching and learning contexts.
Working group members consulted with dozens of experts throughout the University to draft the definition and framework, including the Information Security Office, the Chief Information Officer, the Center for Teaching and Learning, University Risk and Compliance Services, Institutional Reporting, Research, Information, and Surveys, the Office of the Registrar, Cockrell School of Engineering, and Academic Affairs.
Definition of Responsible Adoption of AI for Teaching and Learning
At UT Austin, we define responsible use of AI in teaching and learning as the adoption of AI that facilitates the achievement of learning outcomes and fosters human development for all members of the campus community.
Learning outcomes are course-specific and refer to what students should know and be able to do, and the mindsets they should develop, as a result of a learning experience. The adoption of AI tools should be intentional, empowering all students and educators to take agency over their teaching and learning experiences, strengthen relationships within learning communities, uphold personal integrity, and navigate ambiguity. To use AI responsibly, students and educators must always be in the loop and demonstrate understanding of how these technologies work, their ethical assumptions and implications, and the influence they will have on careers and society.
Implementing AI in ways that are vetted, secure, safe, ethical, accessible, open and available to everyone, transparent, and honest is a requirement for responsible adoption in teaching and learning environments.
Framework of Principles for Responsible Adoption of AI for Teaching and Learning
Responsible adoption of AI tools in teaching and learning includes the following principles.
- Literacy: Use AI with fluency in its principles, applications, limitations, and impacts in higher education and beyond.
- Intention: Use AI transparently and in support of the achievement of learning in and across disciplinary contexts, including knowing when and why it should and should not be used.
- Balance: Use AI with an innovation and future-readiness mindset, and with awareness of the evolving benefits and drawbacks in the context of teaching and learning across areas of study.
- Agency: Use AI in ways that maintain and enhance the agency of humans over their intellectual output, decision-making, and the teaching and learning process.
- Ethics: Use AI in ways that recognize the array of ethical implications of technology use on the environment, labor, society, and other considerations; engage with those implications and consequences across contexts; and consistently practice evaluating when AI can versus should be used.
- Relationships: Use AI in ways that enhance and extend the connections between everyone within a learning community—students and their peers, faculty and students, staff and the people that they serve—rather than diminish or replace them.
- Academic Integrity: Use AI in alignment with our honor code and fundamental scholarly values such as honesty, respect, and authenticity.
- Stewardship: Use AI in ways that are explicit about how data is shared and that establish guardrails to uphold the privacy, safety, security, accessibility, intellectual property rights, and right to access of everyone in our teaching and learning community.
Working Group Charge
The working group responded to the following charge:
1.0. Develop a comprehensive definition of what it means from the student, instructor, teaching assistant, and academic staff point of view to adopt generative AI tools in a responsible manner at the University.
2.0. Develop a framework consisting of 5-7 informed principles to help guide the responsible adoption of generative AI tools for teaching and learning at UT Austin.
3.0. Develop a set of recommendations for communicating, training, and informing the University on the responsible adoption of generative AI in teaching and learning at UT Austin.
Working Group Members
Vanessa Ayala | Digital Accessibility Manager, Enterprise Technology
Kasey Ford | AI Designer, Office of Academic Technology
Rick Garza | Program Manager, Student Conduct and Academic Integrity
Mario Guerra | Director of Longhorn Technology Experience, Enterprise Technology
Matthew Russell | Faculty Development Specialist and Lecturer, Center for Teaching and Learning
Raj Sankaranarayanan | Postdoctoral Fellow and Lecturer, Curriculum and Assessment
Sharon Strover | Professor and Chair, School of Journalism and Media; Founding Member, Good Systems
Abhay Samant | Professor of Practice, Chandra Department of Electrical and Computer Engineering
Julie Schell | Assistant Vice Provost, Director, and Professor, Office of Academic Technology
Stephen Walls | Assistant Dean for Instructional Innovation and Professor, McCombs School of Business
Open Comment Period – July 31, 2025
The working group invites feedback from the community on the definition, framework, and principles, including questions, improvements, or endorsements. The open comment period will remain open through the next academic year; however, priority will be given to comments submitted by July 31, 2025, in anticipation of a Fall 2025 release of the framework.
Open Comment Form
Contact the Office of Academic Technology at oat@utexas.edu.