If we thought integrating AI-powered tools like ChatGPT into higher education was a steep learning curve last year, the pace of change shows no signs of slowing in 2024. As instructors, we were forced to quickly adapt our curricula and teaching methods to account for disruptive new technologies that can generate remarkably human-like text and content. This year will likely bring even more advanced AI systems with capabilities we can't yet imagine. As researchers push the boundaries of what AI can do, we'll continue to face profound questions about how to effectively educate students for a world in which they will live and work alongside increasingly intelligent algorithms.
Seemingly overnight, our students had access to technology that could generate remarkably human-like text on demand. Many of us scrambled to catch up and establish policies around proper attribution and academic integrity, and some of us were tempted to ban AI use entirely. But I've come to believe that outright bans are counterproductive: AI is here to stay, and our students will adopt its new capabilities faster than we can keep pace. Rather than wage an endless arms race, I think our focus should be guiding students to use these tools ethically and purposefully within their fields. Some AI use, when properly cited, can actually enhance learning by allowing students to generate ideas and content as a starting point for their own original analysis and writing.
The key is clear policies and transparency around expectations. Students appreciate understanding exactly what's allowed and what isn't when it comes to AI tools. And they respond positively when we treat them as partners in upholding academic integrity, not as potential cheaters to be policed. Clear policies regarding AI use in academics can promote integrity, support learning, adapt to new technologies, ensure fairness, and build trust between faculty and students. Below are some examples that you might find useful.
Example Syllabus Statements
If you want to be more restrictive of students’ AI use, here is an example syllabus statement that outlines a very strict policy:
We expect that all work students submit for this course will be their own. We specifically forbid the use of any generative artificial intelligence (AI) tools at all stages of the work process, including preliminary ones such as brainstorming ideas, drafting outlines, etc. Violations of this policy will be considered academic misconduct.
Adapted from Harvard University’s Office of Undergraduate Education.
If you want to encourage students to experiment with AI tools, here is an example statement that outlines a more permissive policy:
Students are allowed to use any advanced automated tools (artificial intelligence or machine learning tools) on all assignments in this course; no special documentation or citation is required to disclose whether or when you have used these tools.
Adapted from the Center for Teaching & Assessment of Learning at the University of Delaware.
I suspect that many of us are in a middle ground of wanting AI use to neither be a free-for-all nor absolutely forbidden. Here is an example of a more explanatory/teaching-focused syllabus statement that helps students understand what they can do with AI:
Using AI tools responsibly is an emerging skill. This course encourages awareness of AI's capabilities and limitations. When used appropriately as a drafting aid, AI can help develop ideas and refine work. However, directly copying AI-generated content or passing it off as one's own violates academic integrity. To uphold quality and transparency, please observe the following guidelines.
First, evaluate AI-generated text critically before adopting it as your own. Fact-check claims and watch for errors or omissions. You are responsible for the content you submit.
Second, disclose any use of generative AI tools by briefly explaining how you used them to assist your process. For instance, you might describe using a tool to help brainstorm ideas or check grammar. This promotes transparency.
Third, focus prompts on clarifying your own thinking rather than outsourcing it. High-quality prompts elicit outputs that aid your learning and original analysis. Make sure to save the prompt language that you use, and include it in your AI-use disclosure statement.
Adapted from Prof. Joubin’s statement here.
Clear AI policies help students learn how to use new technologies appropriately. Faculty have an important role in establishing and communicating these AI-related policies as emerging tools become more prevalent.
Need more inspiration? Here’s an excellent collection of possible syllabus statements collected by Brandeis.
Want to read more? Browse the Tips archive to read through one of the 200+ posts from the past three and a half years, or search for specific topics.
What roles do faculty need to retain exclusively for ourselves? How can we best guide students to use AI ethically and for social good? What literacies must we prioritize to equip the next generation? The learning curve for higher ed faculty will remain steep. But if we actively shape the development and application of AI, we can chart a course that upholds the deepest values of humanistic education.
How are AI tools transforming, and disrupting, how you teach?
Thanks Breana, appreciated this post. Really resonate with your emphasis on transparency and policies that enable instructors to build trust with students. Reminds me of this post from Marc Watkins a while back (https://marcwatkins.substack.com/p/have-an-ai-policy-so-does-everyone).
The key is that, whatever we decide, we take an active role in how AI (or any technology, for that matter) is integrated into our teaching.