Teaching with Generative AI
Upcoming Generative AI Events
January 19, 2024 • #LTT2024 Learning to Teach Creative Technologies with Generative AI
The Integrated Design and Media (IDM) program at NYU Tandon will host #LTT2024 Learning to Teach Creative Technologies with Generative AI: A Virtual UnSymposium. Learn more, register, or submit a proposal at https://wp.nyu.edu/ltt2024.
February 8, 2024 • NYU Teaching & Learning with Generative AI Virtual Symposium
NYU’s Office of the Provost will be hosting a Teaching and Learning with Generative AI Virtual Symposium. Learn more, register, or submit a proposal at https://wp.nyu.edu/2024aisymposium.
Spring 2024 • Generative AI TeachTalks
TeachTalks is a series of faculty-led conversations offered by the Office of the Provost. Participants discuss issues currently impacting student learning and share pedagogical innovations across disciplines. Two in the spring will focus on generative AI. Register at https://teachingsupport.hosting.nyu.edu/teachtalks once registration is available.
Past Generative AI Events
- October 2023 NYU Teaching and Learning with Generative AI Conference & Pre-conference workshops
- September 2023 FRN Generative AI Workshops
- August 2023 NYU Generative AI Workshops
- Fall 2023 Generative AI TeachTalks
To be in conversation with other NYU faculty about developments in generative AI for teaching and learning, email email@example.com to join the NYU Teaching and Learning with Generative AI Google Space, NYU T&L: AI Tools.
Use of generative AI tools, and especially ChatGPT, continues to expand. AI tools are being added to email programs, word processors, spreadsheets, programming tools, image editors, and so on. By the end of 2024, and possibly by the end of this semester, a majority of NYU students will integrate AI into regular practice for one or more of their courses. Thousands are already there, and students who report using AI at all typically report using it in more than one class.
Faculty have authority over their classrooms, including authority over whether and how they ask students to use AI. However, students have wide access to these tools, whether we want them to or not; if you want students not to use AI for an assignment or a whole course, you can only achieve that with their assent.
The expandable sections below address frequently asked questions faculty have about AI use in the classroom. Your school may also have guidance about AI use, which takes precedence over these guidelines; please consult it as well.
Frequently Asked Questions
What is generative AI? How can I try it?
What is NYU’s policy about student AI use in coursework?
How should I adapt to broad student access to AI tools?
How should I communicate preferences for AI use to my students?
What policies govern faculty use of AI in classes?
What recommendations apply to faculty use of AI?
What issues in student learning does generative AI present?
What uses of AI constitute cheating?
How should faculty respond to suspected cases of cheating with AI?
Why doesn’t NYU license an AI detector?
What other resources does NYU offer in adapting to AI use?
Generative AI is software that can create novel content in response to user prompting. It is trained on very large collections of text (or images, code, or whatever the output is to be).
There are many kinds of generative tools for producing text, audio, images, video, and code. Increasingly, tools can produce combined output, such as text and images or web pages and code, expanding the complexity of what is possible.
Text-generating tools are the most widely used at NYU, and their most notable feature is producing surprisingly competent written responses to relatively complex questions. If you would like to try one, below are three sample prompts you can input to ChatGPT, Microsoft Bing, or Google Bard. Each prompt has two parts: the role you are asking the model to play (which generally improves the answers) and the question you want a response to.
- You are a professor of biology, teaching an introductory course. Please write a description of meiosis vs. mitosis, highlighting similarities and differences in the two processes.
- You are an art historian of Chinese ceramics. Please describe the technical and aesthetic changes in ceramic production from the Shang Dynasty to the Warring States period.
- You are a screenwriter pitching a film. Please write three loglines for science fiction movies where one of the protagonists is not human.
Trying one or more of these prompts, modifying them, or creating your own will demonstrate the basic capabilities of text-generating tools. You can also try image generation, voice generation, code generation, and so on.
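For readers who script these tools rather than use the chat interface, the two-part structure above maps directly onto the "system" and "user" messages of most chat APIs. The sketch below is a minimal illustration; the helper function name and the commented-out OpenAI client call are assumptions for demonstration, not NYU-endorsed tooling.

```python
# Illustrative sketch only: the helper name, model name, and the OpenAI
# client usage shown in the comments are assumptions, not recommendations.

def build_two_part_prompt(role, question):
    """Assemble the two-part prompt described above: a system message
    setting the role the model should play, then the user's question."""
    return [
        {"role": "system", "content": role},
        {"role": "user", "content": question},
    ]

messages = build_two_part_prompt(
    "You are a professor of biology, teaching an introductory course.",
    "Please write a description of meiosis vs. mitosis, highlighting "
    "similarities and differences in the two processes.",
)

# Sending the prompt to a chat model would look roughly like this
# (requires the openai package and an API key):
#
#     from openai import OpenAI
#     client = OpenAI()
#     reply = client.chat.completions.create(model="gpt-4o", messages=messages)
#     print(reply.choices[0].message.content)
```

Varying the role while keeping the question fixed is a quick way to see how much the first part of the prompt shapes the answer.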
Students are allowed to use AI in coursework only when two conditions hold: the instructor approves, and the use does not violate NYU’s Academic Integrity Policy, which forbids “submitting work (papers, homework assignments, computer programs, experimental results, artwork, etc.) that was created by another, substantially or in whole, as one's own.”
However, because generative AI is a new arrival in the academic environment, students do not have standard approaches to its use. Clarifying your expectations for AI use includes explaining whether and how students are permitted to use these tools, and how they should acknowledge use of AI in their work.
Note that while you can forbid AI use, you cannot prevent it. Even setting aside academic integrity violations, students can collect ideas from anywhere, and use of AI to generate ideas is essentially undetectable. To be effective, a request that students not use AI tools, or only use them in particular ways, requires them to accept the restriction, which they are likelier to do if you explain how the work you are asking them to do relates to what you hope they learn from the class.
There is no standard way to adapt your courses to generative AI. Not only are there many different tools, but potential uses of these tools in physics, philosophy, and film classes are quite different. There are, however, two common elements: whatever you teach, you should have an opinion about AI use in your class, and you should communicate it to students.
There are a variety of strategies you can adopt, per assignment or for a whole course:
- Advising Against: Tell students they should not use generative AI
- Avoiding: Redesign assignments so generative AI is not effective
- Integrating: Spell out how students are allowed to use generative AI
These strategies are elaborated under Adapting Writing Assignments to Generative AI.
For students, these tools tend to be best when used as inputs, early and interactively — suggesting or reacting to initial ideas, or suggesting alternative approaches to a problem. The highest risk comes when they are used to create outputs, late in the process and definitively — asking for lists of facts, statistics, or citations, or producing near-final drafts.
All of this is in line with research on teaching strategies that focus more on process and less on product. These strategies include offering multiple low-stakes assignments across the semester; breaking larger projects into smaller, graded steps; lowering the stakes of final assignments; and providing opportunities for formative feedback on drafts. All of these can reduce the temptation to use AI tools inappropriately, as well as improving the effectiveness of your class.
Communicating your expectations is vital because students do not have default assumptions about AI. A course is an intersection of faculty expectations and student effort; explaining whether and how AI tools can be part of students’ work in your course can help direct them towards appropriate uses.
Any communication from you should also tell students how they should acknowledge AI use in their assignments. Your expectations should be included in your syllabus and ideally mentioned early in class.
The examples below cover a range of syllabus language to explain student use of AI:
Not Allowed
- You can only learn from work you actually do. Unless otherwise stated, you should not use generative AI tools to create any part of an assignment in this course; every submission should be entirely your own work. (example from an NYU course)
- This course assumes that work submitted by students – all process work, drafts, brainstorming artifacts, final works – will be generated by the students themselves, working individually or in groups as directed by class assignment instructions. As with any other class work generated by someone other than the student (another student, a company, or a generative AI tool), such use is a violation of the Academic Integrity policy. (adapted example from University of Texas, Austin)
With Prior Permission
- Students are only allowed to use AI tools, such as ChatGPT or Dall-E 2, on assignments in this course with advance permission. Students must submit a written request with an explanation of how they will use a particular tool in their assignment, and use is not permitted without written approval. If you are unclear if something is an AI tool, please check with your instructor. (adapted example from University of Chicago)
Welcome with Attribution
- You are welcome/expected to use generative AI tools (e.g., ChatGPT, DALL-E, etc.) in this class, as doing so aligns with the course learning goal [insert course learning goal]. You are responsible for the information you submit based on an AI query (for instance, that it does not violate intellectual property laws, or contain misinformation or unethical content). Your use of AI tools must be properly documented and cited. For example, [insert citation style for your discipline; NYU Libraries guidance can be found here]. (adapted example from Temple University)
Welcome on Specific Projects
- Where noted, you are allowed to use generative AI tools for assignments or activities. However, assignments created with AI should not exceed 25% of the overall work, and you must identify the portions where you used AI tools, and describe how you used them. Note that you are responsible for all parts of an assignment; if an AI tool provides incorrect information, it is your responsibility to find and fix the error before submitting. Note too that overreliance on AI can hinder independent thinking and creativity. (example from an NYU course)
- Use of ChatGPT (or other similar tools that generate text) is allowed in this class for specific assignments only. When use of the tool is allowed, it will be explicitly noted in the assignment directions. If you utilize ChatGPT for any part of the assignment (from idea generation to text creation to text editing), you must properly cite ChatGPT. Violations can result in failure of the assignment or failure of the course and a notation on your transcript. (example adapted from University of Vermont)
We have compiled a list of questions that you may use to begin a conversation in class with your students about generative AI, which can be found here. We have also created an outline of guidelines for faculty and students that you may find useful.
Most decisions about classroom conduct are left to individual instructors, and use of AI is no exception. Faculty at NYU are already experimenting with AI tools to make suggestions or provide feedback on syllabi, assignment prompts, and test questions. However, there are both restrictions and recommendations about such use:
Privacy and FERPA
Do not share work containing student identifiers with any third-party services—no student names or other unique identifiers like NetID or N-numbers. Sharing such identifiers with services like ChatGPT or Google Bard violates the Family Educational Rights and Privacy Act (FERPA), which mandates careful handling of student records and restricts their disclosure to third parties. Sharing data that identifies students with tools that NYU has not licensed will never be FERPA-compliant, as FERPA requires the institution to have a specific sort of business relationship. (NYU’s FERPA guidance.)
We are working with the Office of General Counsel and various technology firms to provide a FERPA-compliant tool licensed by NYU. When we have FERPA-compliant tools, we will alert you.
If you are running a disconnected AI model on NYU-owned hardware, FERPA does not restrict you from using student data for ordinary classroom purposes. If you don’t know whether your hardware and software clears this hurdle, it doesn’t. If you would like to set up a locally hosted instance, please fill out this form.
Your use of student data and student work is also limited by IP policy. NYU does not make IP claims over student work (Section XI.G), so you should not use student work for anything other than assessing it in your class.
Student Data and Faculty Research
Research involving student work is also subject to the usual restrictions on research, including informed consent. This is not particular to AI, but you may not use student work for research projects unless the project has been approved by the Institutional Review Board (IRB).
- The principle of “humans in the loop,” meaning that an AI’s operations and output should be subject to human review before use, is a good one for both faculty and student work.
- If you use these tools to help develop class materials, you should disclose and describe your use to students, just as you should expect them to disclose and describe to you.
- Never send a student AI feedback about their work without checking it first.
- Generative AI tools such as ChatGPT produce answers that sound plausible, but are not always factually accurate, an effect sometimes called ‘hallucination’. You should check any output you want to use before sharing it. You should also warn students about this weakness if you are allowing AI use in your classes.
- Generative AI tools have typically been trained on data that tends towards stereotypical answers—“Doctors are men whereas nurses are women” kinds of biases that extend to many circumstances. All users of these tools should be aware that these biases exist, and should use their judgment in using or editing the output.
The most obvious risk from student use of AI is learning loss. If AI replaces efforts the student is expected to learn from, that student will learn less. This does not mean AI should not be allowed in the classroom, but it does mean that if one set of activities is automated away, other activities should be added. For example, if students are asked to use generative AI for an assignment, they could be asked to provide a critical reflection on the quality or accuracy of the responses they receive, or if they use a style checking tool, they could be asked to compile a personal study guide from the stylistic changes the tool has recommended.
A second significant risk is students placing undue trust in the output of generative AI tools. These tools have been trained to produce written output that sounds confident and plausible, leading many people to believe that the output is also accurate, when in fact it produces incorrect answers more often than search engines or online reference works do. Interestingly, failure rates seem to be higher for answers in highly structured formats, like journal citations or legal case identifiers; in those cases, large language models can produce references in perfect MLA style to papers that do not exist.
AI tools are not authoritative sources. While their output should be acknowledged, they should never be cited as evidence. No matter what tools are used, the student is ultimately responsible for the correctness of the output.
AI output is opaque. There is no way for a student (or anyone) to know how an answer was constructed or how a change to a prompt will change the answer. There is also no way to produce the same answer in the future, even with an identical prompt. This opacity can limit students’ comprehension of the output.
The ease of producing the material can lead to automation bias, where the simplicity of generating unlimited amounts of output weakens students' critical faculties, making it harder to notice that any given example of output is low quality.
Academic integrity violations can only be defined in a particular context. Using AI to create written work in a course that forbids it violates the rules; the same use in a course that requires AI obviously does not.
NYU’s Academic Integrity Policy prohibits three things: plagiarism (taking credit for work the student did not do); cheating (deceiving faculty members about mastery of the material, for example by using unauthorized materials during tests); and violating policies set by the student’s school, department or division, or instructor.
Various AI tools can be used to violate all three prohibitions, including AI-assisted tools that have been around for years, such as grammar checkers and translators. For example, Quillbot can take informal student writing and re-write it in Academic, Simple, or Formal style. Gmail can edit emails in three modes, Formalize, Elaborate, and Shorten. As AI becomes part of substantially all software, these capabilities will become ubiquitous.
Because many students are making use of AI tools when communicating outside of academic work, it is important to be clear about which uses of these tools do and do not constitute a violation of academic integrity in your class.
For all the anxiety about academic integrity, generative AI presents no new philosophical issues. Students are not to take credit for work they did not do, whether that work is copied from elsewhere or created for them from scratch. Because it is automated, AI use looks like cut-and-paste, but it works like existing ghostwriting services, often called ‘contract cheating’.
Where you suspect a student may have used generative AI in an unapproved way, treat it as contract cheating: there is no stable source material from which the work has been copied, because these tools generate different output even in response to an identical question.
This makes these cases different from, and unfortunately more time-consuming than, cases handled with simple detectors of cut-and-paste copying, such as Turnitin. As with contract cheating, faculty should gather evidence that the student may not be responsible for the work—a dramatic change in writing style, references to material not included in class readings, incorrect footnotes or references—and speak with the student about it, asking them directly if they used AI in ways they should not have.
If the student admits to AI use, or if the student's command of the material in conversation is so poor that you are confident they did not write it, you typically have a range of responses of varying severity available, as the situation calls for. If the student does not admit to AI use, the case is obviously more complex, but presents no difficulties different than when you suspect a student has had another person write the paper for them.
Note that whatever response you decide on, academic integrity issues are handled in the schools. The Office of the Provost can offer advice but does not oversee these cases; questions about a particular case should be addressed to your school’s Dean of Academic Affairs or equivalent.
Many companies offer tools that purport to detect passages of text that have been created using AI. Though some of these tools work well on simpler cases—straightforward cutting and pasting from a moderately competent large language model like GPT-3.5 (2022)—their ability to catch AI use falls off when tested against more sophisticated models, such as GPT-4 (2023). False negatives—uncaught instances of AI use—also increase with even moderate amounts of editing or paraphrasing.
False positives are a far greater worry than false negatives. Missing a case of student cheating would be an issue, but wrongly accusing a student of cheating on work they in fact did themselves would be a calamity.
After experimenting with several tools last year, the Office of the Provost and IT’s Academic Technology group do not believe any current AI detectors work well enough to recommend their use or license them on behalf of the university.
A recent study of 14 detection tools includes this blunt summation: “The researchers conclude that the available detection tools are neither accurate nor reliable”. Given that the goal of the AI firms is to produce human-seeming text, our current view is that an effective AI detector is unlikely to appear. However, if in the future a tool appears that addresses the risk of false positives, we will examine it as a candidate for university adoption.
The Google Space, NYU T&L with AI, is dedicated to engaging NYU Faculty and Staff in meaningful discussions and exchanging resources related to teaching and learning with generative AI.
The Teaching and Generative Tools (TGT) Working Group is made up of self-identified faculty from across NYU. Members convene monthly via Zoom to discuss relevant and developing topics related to teaching and learning with generative AI.
To be added to NYU T&L with AI or to get invitations to TGT meetings, please contact firstname.lastname@example.org.
View General Guidelines
Educate Yourself First!
- Start here with the NYU Generative AI Primer (Google Slides) which contains definitions, getting started with Generative AI, basic prompt design, possibilities and pitfalls, and use cases. See additional NYU Generative AI Guides and Resources below.
- No one is an expert. We’re all learning this together as the technology rapidly advances.
- Learn more about public generative AI tools and NYU’s private generative AI pilot at https://www.nyu.edu/ai.
- Be mindful of FERPA and student work IP.
- Uploading identifiable student papers into any third-party tool not governed by an NYU contract, including generative AI tools other than the ones NYU hosts, is a FERPA violation. In addition, students own the work they produce in your classes; instructors cannot upload student work to a generative AI engine for any purpose other than providing feedback to the student without their consent.
- There are cases working their way through court that may expand the definition of IP infringement for uploaded work. If that happens, we will issue additional policy guidance. In addition, where you want students to upload their own work to a generative AI tool, you should ideally make this optional for tools other than the ones NYU hosts.
- Stay in the know by
- joining the NYU Teaching and Learning with Generative AI Google Space, NYU T&L: AI Tools (email email@example.com to join),
- attending the generative AI track of TeachTalks, or
- referencing previous slides and videos from NYU’s generative AI workshops (September and August), and October’s pre-conference workshops and conference.
Set Clear Generative AI Expectations and Guidelines For Your Course In Your Syllabus
First and foremost, make sure that the learning outcomes for your course and assignments are clear. Here are some resources:
- Office of the Provost: Adapting Writing Assignments to Generative AI
- “Classroom Policies for AI Generative Tools,” a crowdsourced Google doc, where academics are sharing syllabus language curated by Lance Eaton, a doctoral student in higher education at the University of Massachusetts.
- Honorlock's Guidelines for Writing an AI Classroom Policy
Have Conversations With Your Students About Your Generative AI Position
- First, create a safe space.
- Establish Trust.
- Take an inquiry-based approach.
- Conversations should be just that: two-way and interactive! Have an open dialogue and ongoing discussion with your students.
- Here’s a course discussion guide from the Open Inquiry Toolkit, intended primarily for courses in the Humanities and Social Sciences. The guide addresses multiple pathways through which intellectual virtues can be explored and cognitive biases can be identified and mitigated.
- Concepts to discuss:
- Originality (original thought, writing, coding, etc.).
- Developing your own voice and ideas.
- Learning by doing.
- Questions to discuss:
- What does teaching and learning look like in the age of AI?
- What AI skills will be needed for their careers?
- What can collaboration between humans and AI look like?
- What will students lose intellectually if they over-rely on generative AI?
Ensure Students Understand the Limitations of Generative AI
- Randomness is built in. You'll get a different answer every time even when you use the same prompt.
- Generative AI tools are not neutral. Human biases exist on the internet, and because generative AI tools are trained on the internet, some of the content they generate may also be biased, which could reinforce stereotypes. Bias can also take the form of a lack of coverage of particular disciplines, languages, regions, etc.
- Generative AI makes things up, including references and citations, which can lead to the spread of misinformation.
- Protect Your Privacy / Data
- Public generative AI tools offer free use with a self-created account but offer no privacy or data protection, so you should be careful about sharing sensitive data. In particular, do not input information covered by FERPA, personally identifiable information, or academic and administrative data we would not put on a public website into generative AI tools.
- Intellectual Property and Copyright
- Some tools are trained on copyrighted materials.
- Digital Divide
- Access is not equitable; there may be a gap between who has access and who doesn’t.
- Many of your students may be unaware of the ramifications of AI usage. Ask them what they think or know about this topic. There are lots of areas to explore.
- For example, are students aware of the large carbon footprint and the environmental concerns of using many AI tools?
- Do they understand the human labor that goes into helping train an AI tool?
- Do they know how AI models are trained, and the problems of bias and inaccuracy that can come along with these training models?
- If students have an understanding of the shortcomings and concerns associated with AI, it can help them be more responsible users of these tools.
Discuss The Benefits and Opportunities that AI May Provide
- Editing and proofreading
- AI as feedback generator
- AI as personal tutor
- AI as team coach
- AI as learner
- Debugging Code
Explore Generative AI With Your Students As Partners
- Students often relish the opportunity to have agency in their own learning. You don’t have to take all of their suggestions, of course, but they may have new and creative thoughts on how to use AI in their assignments. Educate them where you see pitfalls or factors they haven’t considered.
- One way you can enlist your students as partners is by creating activities that allow teaching and learning with generative AI. Here are a couple of resources that could help you do so:
- ChatGPT and Artificial Intelligence in Higher Education Quick Start Guide by UNESCO, 2023
- “Student Use Cases for AI” by Ethan Mollick and Lilach Mollick suggests four ways to let your students incorporate AI into their studies.
- “Assignment Makeovers in the AI Age: Essay Edition” by Derek Bruff.
- Course Design Guide and Assignment Design Guide from the Open Inquiry Toolkit.
Emphasize And Nurture Academic Integrity
- Emphasize and nurture academic integrity to reduce motivations to breach it. Some teaching points could include the importance of claiming authorship only over original work, citing and acknowledging others’ work, transparency in how student work is produced, and having students adopt honor codes.
Potential Questions To Engage Your Students In Generative AI Conversations
- What is generative AI and how does it work?
- Have you used generative AI tools before?
- If so, which tools and for which purposes, personal or academic?
- Where have you found it useful?
- Where does it fail?
- What are the pros and cons of using generative AI for learning?
- How do you think generative AI could be harmful?
- Is it cheating to use generative AI tools to help complete your schoolwork? Why or why not?
- What are the data privacy issues of using generative AI?
- What are the ethical considerations of using generative AI?
- What role might generative AI tools play during your career? How is the workforce using it?
- What questions do you have about generative AI?
Adapting Writing Assignments to Generative AI
All learning is learning by doing. When an assignment requires writing, students learn from doing the writing, developing skills from domain knowledge to organizational acumen to editing for tone. This is so obvious academics have rarely needed to spell it out, but ChatGPT and related tools can now generate readable, relevant text at high volume, instantly, at low cost. Everyone can now produce topical writing without creating it.
To the degree students produce unedited writing with generative AI, they will be learning how to use generative AI, not how to write. Some faculty will consider this shift acceptable or even desirable, others will not, but in any case, the design of assignments that rely on writing will have to change.
This memo details possible instructor responses to potential student use of generative AI, including advising against using the tools, designing assignments to avoid use of the tools, and designing assignments to embrace use of the tools. It also covers some of the changes in dealing with academic integrity violations.
Even if an instructor does not intend to change individual assignments, the availability and generality of these tools increases the need for clarity around the instructor’s expectations. Whatever strategies an instructor might adopt for individual assignments or a whole course, instructors should explain to students what is expected around AI use.
Before revising a syllabus or assignment, faculty should try ChatGPT or a similar tool at least once. (Instructions follow at the end of this document, in Appendix A.) Instructors should also familiarize themselves with any recommendations made by their schools regarding the use of generative AI.
When setting out course policy, the following principles for students are generally applicable:
- When students use these tools, they should acknowledge that use
- Students should understand that taking credit for writing they did not create violates both NYU’s Academic Integrity policy and the norms of the academic community
- The student is responsible for any errors in the writing they submit, even where it was automatically generated.
In order to help students understand these things, we recommend that instructors:
- Explain to students the expectations and reasons for your AI policy
- Explain what you expect students to learn from the assignments—both the goals you have for the work they do, and what they should learn from that work
- Be specific about Dos and Don’ts—“Do acknowledge and describe any AI use”, or “Don’t use any AI for this assignment”—in the syllabus or assignments
- Remember that students generally want to learn, and describe what students will learn from doing the work, not just the potential punishments for cheating
Strategies for Assignments
While there are many individual strategies faculty can adopt for assignments, they can be broadly grouped into three categories:
- Advising Against: Students are told they should not use generative AI
- Avoiding: Assignments are (re)designed so that generative AI is not relevant
- Integrating: Students are allowed or required to use generative AI, so long as that use stays within guidelines and is acknowledged
These strategies may cover a whole course, or be provided assignment by assignment, but whatever preferences an instructor may have about student use of AI, those preferences should be communicated directly.
1. Advising Against Use of Generative AI
Persuading students not to use these tools for some or all assignments will require explaining that the things you want them to learn from the assignment require that they do the work themselves.
The advantage of asking students not to use these tools is that this strategy can preserve some of the design of individual assignments or a whole course. The disadvantage is that while you can recommend against use of these tools, you cannot prevent it. Because use of these tools is relatively difficult to detect, academic integrity cases can be harder to adjudicate: most evidence is circumstantial.
Sample statement for syllabus:
Because writing is a form of thinking, you should not use ChatGPT or other AI tools as a shortcut or substitute for drafting and editing written work in this class. Taking credit for writing you did not create is a violation of NYU’s Academic Integrity policy.
Advising against use of these tools asks students to self-police. If your school has an honor code, you should refer to it in your syllabus. If not, you may want to consider adding one for your class. (A list of school honor codes is at the end of this document.)
Advising against use of generative AI is compatible with designing assignments to avoid use of generative AI. Faculty may want to consider using elements of both strategies, instructing students not to use these tools and designing assignments that cannot easily be completed by these tools.
2. Avoiding Use of Generative AI
Making generative AI less relevant means designing an assignment to require the kind of work where humans still significantly outperform machines.
The advantage of avoiding use of these tools is that assignments will be designed to require student effort. The disadvantage is that these assignments will be a moving target, as things the tools cannot do well this semester may become possible next semester, requiring regular review of their effectiveness.
Sample statement for syllabus:
Though you are welcome to use generative AI tools to brainstorm in the early phases of an assignment, you are expected to produce the assignments themselves on your own. (Taking credit for work you did not create is a violation of NYU’s Academic Integrity policy.) The assignments have been designed around tasks or outputs the tools do not perform well, and your work will be graded down, perhaps substantially, if it fails to meet those expectations, regardless of how it was created.
Where an instructor decides to design assignments that make use of generative AI less relevant, they should consider one or more of the following strategies:
- Collect early student thoughts about an assignment in class, to get a sense of how they write unaided
- Design assignments with greater emphasis on process — iterative work, submission of rough drafts, preserving edit history
- Ask for specific references or quotes from material studied in class
- Design assignments that require integration of discussions in class
- Design assignments tightly tied to specific course readings or concepts
- Design assignments that require oral presentation or in-class discussion
3. Integrating Use of Generative AI
Integrating ChatGPT means giving students explicit permission to use the tool in a course or on an assignment, but only in approved ways. The list of possible ways these tools can be integrated into coursework is large and growing: a list of strategies collected by UNESCO (page 9 of its guidance) runs to nearly a dozen items.
The advantage of integrating these tools is that it will encourage students to discuss their use in the context of the class. The disadvantage is that understanding student use will require new effort by the instructor. Involving students in this way will also make them more like co-designers of the assignments, which has both advantages (more engagement) and disadvantages (less predictability).
Sample statement for syllabus:
Use of ChatGPT and related tools is allowed in this class, but only in ways noted in the assignments. (Taking credit for writing you did not create is a violation of NYU’s Academic Integrity policy.) As with all assignments, learning from the work is your responsibility. You must use the tools in a way that involves effort you learn from.
For every assignment, you should also turn in a description of:
• Which tools and techniques you used (Include your prompts, any plugins you used, etc.)
• Which parts of the assignment you used them for
• What you think you learned from the work you did, and why you think that matches the goals of the assignment
Be prepared to discuss your answers in class, or in conversation with me.
Where an instructor decides to design assignments that integrate generative AI, they should consider one or more of the following strategies:
- Share examples of effective uses of the tool for brainstorming and iterating the output, rather than just copying and pasting the results of a single query
- Highlight the student’s responsibility for the accuracy of any writing they submit, and the need to verify any references or claims in the text
- Design multi-step assignments that invite student deliberation, analysis, critique, and decision during the creation process
Academic Integrity Violations
Academic integrity policy is overseen by the schools, but we expect generative AI will make detecting and responding to academic integrity violations more difficult. Unlike straightforward plagiarism, where copies of student writing can be found elsewhere, identifying writing created with ChatGPT and similar tools but claimed as the student’s own is a judgment call. Such accusations rely far more on the instructor’s judgment about the student’s capabilities and the writing produced than when the source material exists online or in a database.
If an instructor suspects a student of an academic integrity violation:
- Document reasons for believing the writing is not the student’s own. Possible evidence includes:
- Internal Patterns: Grammatical perfection, consistent but bland style, sudden changes in style or tone, vague and often unsubstantiated claims, spurious or incorrect references, and list structures masquerading as development of an idea
- External Patterns: Writing does not match a student's previous work (particularly work produced in class), lack of rough drafts or evidence of editing, footnotes or references not related to the body of the text, footnotes or references pointing to work that does not exist.
- Ask the student if they used generative AI on the assignment in inappropriate or unacknowledged ways, given the evidence. One possible response, if it is in line with your school’s policies, is to require them to redo the work, providing evidence of editing.
- If they deny using these tools but you continue to suspect that they used them, involve your school administration.
While there are a number of products that purport to positively identify AI-generated writing, they have high error rates. If you plan to use such detectors on student work, you should inform students at the outset of the class. Instructors should never accuse a student of a violation based solely or mainly on the output of these detectors.
When these tools have been tested by third parties, they have frequently proved inaccurate and are easily defeated by simple editing strategies. Their false positives also disproportionately affect students for whom English is an additional language. In the longer term, there is good reason to believe these detectors will become progressively less effective as AI tools improve.
Appendix A: Experimenting with ChatGPT
Go to chat.openai.com. (You will need to sign up for a free account if you haven't already.) You can ask it a question or make a request on that page.
Some example prompts you can feed ChatGPT:
- Can you write a memo outlining the pros and cons of student use of AI in a biology class?
- Please describe the different schools that make up NYU, and what their strengths are.
- Can you write a memo listing some ways urban universities are working to improve community relations?
You can use any of these, or, better, make up your own prompt. (If you would like more examples, there are many listed here.) Once you see ChatGPT's response, you can ask followup questions. You can also ask the same question a second time (the Regenerate button) and see how subsequent answers differ.
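For instructors who prefer to experiment programmatically rather than through the website, the same models are available through OpenAI's API. The sketch below is a minimal example, assuming the official `openai` Python package is installed and an `OPENAI_API_KEY` environment variable is set; the model name is an assumption and can be substituted. Building the message list is separated from sending the request, so you can inspect the prompt structure without a live call.

```python
import os

def build_messages(prompt, system="You are a helpful assistant."):
    """Build the chat-style message list the API expects: a system
    instruction followed by the user's prompt."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]

def ask_chatgpt(prompt):
    """Send one prompt to the chat API and return the reply text.
    Requires `pip install openai` and OPENAI_API_KEY in the environment."""
    # Imported here so build_messages can be used without the package installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption; substitute as needed
        messages=build_messages(prompt),
    )
    return response.choices[0].message.content

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask_chatgpt("Can you write a memo outlining the pros and cons "
                      "of student use of AI in a biology class?"))
```

As on the website, re-sending the same prompt will generally produce a different answer each time, which is worth demonstrating to students when discussing how these tools work.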
- Office of the Provost:
- NYU Schools
- Abu Dhabi's Hilary Ballon Center for Teaching and Learning: Teaching with Generative AI (NYU Stream)
- Arts and Science Getting Started with Generative AI for Instructors (Brightspace course open to all NYU Instructors.)
- Stern (website)
- Game Center AI Policy (web page)
- NYU Libraries
- “Practical AI for Instructors and Students,” a five-part YouTube video series by Ethan and Lilach Mollick of Wharton Interactive @ UPenn
- “Assignment Makeovers in the AI Age: Essay Edition” by Derek Bruff
- “Classroom Policies for AI Generative Tools,” a crowdsourced Google Doc, where academics are sharing syllabus language curated by Lance Eaton, a doctoral student in higher education at the University of Massachusetts.
View Past Announcements
September 6, 2023: Disabling the AI Tool in Turnitin NYU is in the process of disabling the AI detection tool in Turnitin. The AI tool will no longer be available after Friday, September 8. All the other features in Turnitin remain in place and the ordinary Turnitin service of identifying student copying of existing writing will remain unchanged.
August 29, 2023: This is an advisory update on adapting to generative AI (ChatGPT and similar tools) for faculty who use written assignments in class.
April 26, 2023: This is an advisory update on use of ChatGPT and related tools in end-of-term assignments. Please share it with faculty who may be facing these issues. As the Spring 2023 semester comes to a close, we expect increased student use of generative AI tools, especially text-generating tools like ChatGPT, in their course work. The Teaching with Generative Tools Working Group (TGT) has been exploring the issues related to these tools this semester, and while we do not yet have survey-level data, the majority of faculty we have talked with report student experimentation with these tools in their classes.
March 27, 2023: This is an advisory update on use of ChatGPT in the classroom. Please share it with faculty who may be facing these issues. This is the first full semester where students have access to ChatGPT. The Provost’s office is hearing from faculty about students generating essays, test answers, and even written class discussion they submit as their own. As adoption grows, the question for faculty is how we respond to violations of academic integrity, while adapting to the genuine utility of these tools.
The Teaching and Generative Tools (TGT) Working Group is made up of self-identified faculty from across New York University (including global sites). Members convene monthly as a whole and in sub-working groups on relevant and developing topics related to the subject of large language model tools such as ChatGPT, GPT-4, and others.
Learn more about the TGT working group
The Teaching with Generative Tools (TGT) working group will address three questions:
- What has changed and might change in higher ed with generative tools?
- How can faculty avoid potential negative effects of these tools on student learning?
- How can faculty take advantage of potential positive effects on student learning?
The Teaching and Generative Tools Working Group will provide advice, best practices, resources, and coordination between departments and schools interested in addressing opportunities and challenges posed by the academic use of these tools.
The areas of interest broadly identified by the group currently include:
- Assignment and Assessment Design
- Acknowledgment and Citations
The Teaching and Generative Tools Working Group aims to identify new and emerging opportunities and concerns around these tools, to collect and distribute resources and recommendations created and collected by the NYU community, and to share these regularly over the coming year, in keeping with the subject’s rapidly changing landscape.
To learn more, or to become a part of the TGT Working Group, contact Scott Henkle, Assistant Director of Learning Experience Design.