AI Use in Graduate Education   

Context

The AI Use in Graduate Education Working Group was convened in Fall 2025 by the Office of Academic Technology and the Graduate School to provide preliminary recommendations on the responsible use of generative AI in graduate coursework, milestones and research activities.  
 
The following recommendations align with the Texas Statement on Academic Integrity, UT’s Responsible Use of AI in Teaching and Learning principles and the University’s Acceptable Use of Generative AI Tools policy. These are not intended to be policies, but rather to inform data-driven policy and best practice development.  
 
A detailed description of the process used to develop these recommendations can be found in the Methods Section. 

Recommendations

UT Austin should develop a policy on using AI in graduate education that covers coursework, teaching (Assistant Instructors and Teaching Assistants), student milestones and research, and that is flexible enough for Departments and Graduate Studies Committees (GSCs) to customize as needed. Any policy should: 

  1. Be clear enough for students to interpret. 
  2. Clearly differentiate between best practice and policy. 
  3. Be vetted and reviewed by key stakeholders and be available for open comment by the UT community before publication. 
  4. Include a communication and community engagement plan. 
  5. Preserve faculty autonomy. 
  6. Ensure GSC and discipline-specific flexibility. 
  7. Specify that all work in graduate education must be evaluated for quality, competence and mastery of a given discipline. 
  8. Assert that individuals are fully accountable for their work output, including verification of accuracy, quality and originality, regardless of whether AI was used.   
  9. Avoid vague or burdensome disclosure requirements by supporting students with a vetted disclosure framework. 
  10. Explicitly state that it is never acceptable to intentionally deceive by presenting AI-generated work as one's own. 
  11. Include standards for faculty evaluation of graduate student work that address the use of generative AI to provide feedback, since expert guidance is critical to student growth and learning. 
  12. Be aligned with all other related University policies and principles, including those listed in the Context section above. 

Strict guidelines for appropriate use of AI in graduate milestones, such as comprehensive exams, capstone projects, theses and dissertations, are essential so that AI does not substitute for mastery of a subject field or the capacity to produce independent scholarship.    

Theses, dissertations, comprehensive exams and capstones should be governed by stricter requirements than coursework, consistent with peer institution precedent. GSCs should be involved in how departments communicate about AI use in these milestones. At minimum: 

  1. Graduate students should receive written guidance from their GSC and/or advisory committee before using AI for any portion of milestone work, making explicit the requirements for admission to candidacy. Separate statements are needed for the research/creative endeavor itself and for the communication of the work (written and oral). 
  2. The Graduate Handbook should define the boundary between permissible editing assistance and impermissible generation of substantive content, including which skills it is permissible for generative AI to displace or augment and what new skills/competencies students might gain from using it. 
  3. Students should remain accountable as the sole intellectual author of milestone work and should never cite AI as an author.  
  4. Faculty advisors may not intentionally misrepresent AI-generated feedback as their own expert guidance on student work.  

The University should issue guidance on how prospective students use generative AI in graduate admissions materials and incorporate the following current best practices:

  1. Admissions teams should develop internal guidance for how to account for the use of AI in application materials and the evaluation of those materials.  
  2. The Graduate School should advise faculty on how to evaluate an applicant's own significant contributions and how to weigh congruence between interviews, essays and other application materials when AI is used. 
  3. Consider including student-facing attestation statements that are transparent with applicants about how AI-assisted work is evaluated and that make clear that intentionally misrepresenting AI-generated content as original, human work in application materials may constitute grounds for revocation of admission, consistent with emerging standards at peer institutions. 

In addition to the general policy and guidance framework, the University should address AI risks specific to the research, scholarship, and creative endeavors context, including the following considerations:

  1. Use of AI to analyze manuscripts for journals or draft grant reviews for funding agencies is prohibited under federal confidentiality protocols. 
  2. Unpublished manuscripts, creative works, patent-pending ideas and grant proposals should not be entered into public AI tools, as this may constitute public disclosure and void intellectual property protections. 
  3. AI tools may be used to support research activities but should not be used to independently create proprietary research outputs without appropriate human oversight, particularly with respect to research design, execution and interpretation. Graduate students and their mentors must only use vetted, University-contracted generative AI platforms for work involving non-public research data. 
  4. Creative works involving AI-generated content (e.g. art, media, music, performance, design) raise distinct questions around authorship, originality and artistic integrity that require discipline-specific guidance in alignment with professional standards.

The guidance must address the University’s expectations for faculty supervisors, mentors, advisors and PIs (collectively, supervisors) regarding generative AI use in research, scholarship and creative endeavors (including publications) and milestone work.

  1. Supervisors should be required to document their expectations for AI use at the outset of the supervisory relationship, including transparency on how the supervisor will use and endorse (or not endorse) AI in their laboratory, studio, or other creative or scholarly practice. For example, just as it is a best practice to document the acknowledgments of all parties before preparing a paper, supervisors and student authors should document how they will use AI and be transparent about its use before preparing outputs.  
  2. Graduate programs should establish baseline alignment to prevent inequitable or contradictory standards across advisors within the same department while also acknowledging disciplinary and faculty differences.  Such guidance could be documented in the Graduate Handbook.

Effective use of AI, including choosing when and when not to use it (and why) and being honest and transparent about one’s use of AI, is a critical competency for students, faculty and staff. AI Fluency is “the ability to work with AI systems in ways that are effective, efficient, ethical, and safe. It includes practical skills, knowledge, insights, and values that help you adapt to evolving AI technologies” (Dakan, Feller, and Anthropic). AI Fluency is a higher-level skill than AI Literacy, which consists of baseline knowledge of what AI is, how it works, effective use and ethical considerations.

  1. The University and Graduate School should ensure broad access to consistent and effective AI Fluency training for faculty, staff and graduate students, including formal training modules on the responsible use of AI.  
  2. This AI Fluency training should build a culture at the University that makes it easy and safe to be diligent, honest and transparent about AI use. 

Because of the rapidly changing nature of AI, the working group recommends formalizing Communities of Practice around Responsible Adoption of AI for graduate education to help foster discussion, rapid learning and innovation across the disciplines. These communities of practice should include:

  1. Facilitated peer-to-peer learning and advancement for students and faculty alike.  
  2. Opportunities to develop new competencies for using AI to advance contributions to one’s research, scholarly and creative endeavors.

Methods

To respond to the charge, the working group identified, reviewed and analyzed AI policy precedents from 18 peer institutions (Precedent Findings). We also collected and analyzed feedback from 25 experts (five interviews and 20 surveys) on AI use in graduate education (Expert Findings). Our analytical plan included two independent human reviews of the policy precedents and expert feedback. We also used an inter-rater agreement approach, using two different generative AI tools (Gemini and Claude) to produce independent summaries of both the Precedent and Expert Findings data sets. After three independent raters reviewed both the data and the summaries, we manually developed recommendations and then used AI to conduct a gap analysis of the recommendations against the summaries. Finally, we conducted a human review of the documents in full committee. 

These recommendations are in open comment through Sunday, May 31, 2026. We welcome feedback from all members of the UT Austin community. Share your thoughts using this form.