Contact the CTL
Please feel free to contact the CTL with feedback, questions, and suggested resources around generative AI for teaching and learning.
"5 Things to Know About ChatGPT"
For a quick overview of a generative AI tool commonly used in higher education, see this CTL resource, initially created at the launch of ChatGPT and updated periodically with current resources.
Assessment
One of the best ways to mitigate the concern that students will use generative AI tools in ways that violate your policies or expectations of academic honesty is to rethink how you design your assignments and how you assess them.
Clarify your expectations about student use of generative AI tools (from basic to more complex uses)
- Define the university's relevant policies in the context of your class
- Define what “completion” of work entails
- Define what citation of generative AI work involves
- Require documentation of the process of use/engagement
- Indicate, early and often (well before work is submitted), that you are considering the use of ‘checker’ tools
Consider employing multimodal submissions with students ‘closing the loop’
- Ask students to submit work in Canvas and include a short audio/video clip that clarifies the overall scope of the submission, their process of developing their ideas, etc.
- Include this component of submission in a rubric/evaluation standard
>> Faculty Spotlight: As a recipient of a PAIR student-as-partners grant (Partnership in AI Research), Dr. John Griel (UT School of Law) is redesigning elements of his Law and Religion Clinic to take advantage of generative AI tools in crafting and revising emails to clients, creating and improving contract terms, preparing questions for an oral argument, and increasing productivity in "Bluebooking."
In addition, consider the following suggestions for mitigating concerns of the use of generative AI tools (as outlined by the AI support initiative at the University of Oslo):
- Link the task to concrete work in teaching. For example, you can work through one or more cases in class, and assignment questions can be tied to that specific work. If appropriate, in-class case work can also be included in the basis for assessment (for example, portfolio assessment).
- Link to experiences. For example, let the students explain and reflect on process execution, learning activities, problem solving, etc.
- Link assignment prompts to the syllabus. Design complex prompts that are tied to specific course readings and require critical reflection. For example, students may be asked to use a specific reading to illustrate a real or practical problem. Current issues are particularly suitable, as ChatGPT will have little or no content on them.
- Elicit students' own views. Give tasks that involve analyzing complex issues, evaluating alternative solutions, and arguing for their own positions.
- Get students to create something new. Give tasks without pre-given answers and encourage originality and creativity. For example, students can be tasked with developing and justifying new research questions, or constructing arguments with references to specific literature.
- Visualize thought processes. Include a reflection note in which the students describe how they have worked on the assignment and what they have learned in the process.
- Current issues. Give students current cases or scenarios and ask them to illustrate these using specific course readings.
- Project work. Include interim submissions or presentations during the project.
- Oral assessment. Use formats such as video reflections, presentations, or conversations.
- Iterative feedback. Forms of assessment where students receive feedback along the way and improve their work.
- Assess and/or compare different texts using the syllabus (for example fellow students' work, fictitious student answers, published articles or text produced by ChatGPT).
- Ask students to explain their workflow. If there is reason to believe that students will use ChatGPT to answer the assignment, ask them to document how they proceeded (what they asked, how they assessed and processed what the bot generated, what they added themselves, and what they learned in the process).
As you examine student work, bear in mind that there are no UT Austin-sanctioned GenAI "checker" tools that can reliably identify elements of that work as either GenAI- or human-generated. Developing assignments that allow you to see works-in-progress (topic proposals, drafts, revision decisions and logs) will give you greater familiarity with the tone and tenor of your students' writing and expressive capabilities.
That said, there are general guidelines that may help you deduce whether or not student submissions are informed by AI-generated content. The Office of Student Conduct and Academic Integrity at UT-Austin will work with instructors to determine the best course of action if they suspect that students are using AI-generated content in a way that goes against the spirit of their course policies.
The following hallmarks or "red flags" have been curated from the Office of Faculty Excellence at Montclair State University.
- Affected by factual errors and made-up sources. Generative AI models work by predicting the next word based on the previous context. They do not “know” things. Because of that, they tend to output statements that look and seem plausible but are factually incorrect. This phenomenon is known as AI hallucination. If a submission contains many such errors, or one or two very dramatic ones, it is likely to be AI-generated.
- Not consistent with assignment guidelines. A submission that is AI-generated may not be able to follow the instructions, especially if the assignment asks students to reference specific data and sources. If a submission references data and sources that are unexpected or unusual, that is a red flag.
- Atypically or unexpectedly correct in grammar, usage, and editing.
- Voiceless and impersonal. It is correct and easy to read, but without any sense of a human person behind it.
- Predictable. It follows predictable formations: strong topic sentences at the top of paragraphs; summary sentences at the end of paragraphs; even treatment of topics that reads a bit like patter: “On the one hand, many people believe X is terrible; on the other hand, many people believe X is wonderful.”
- Directionless and detached. It will shy away from expressing a strong opinion, taking a position on an issue, self-reflecting, or envisioning a future. With the right kind of prompting, it can be coaxed to do some of those things but only to an extent (GPT-4 seems to do better than others), as it will continue sounding unnaturally cautious, detached, and empty of direction/content.
References
AI Writing Detection: Red flags. (n.d.). Office of Faculty Excellence. Montclair State University. https://www.montclair.edu/faculty-excellence/teaching-resources/clear-course-design/practical-responses-to-chat-gpt/red-flags-detecting-ai-writing/