Clay

Clay Shirky is Vice Provost for AI and Technology in Education.

In this role, he designs, develops, and enhances all academic aspects of technology-based teaching and learning, University-wide; oversees the development of NYU’s instructional technology strategy, working with deans, faculty, NYU IT, and other university offices; maintains an inventory of online education offerings; and helps schools use educational technology to recruit new students, help existing students learn and progress, and generate cost savings or new revenues.

Q: We're curious to hear about your reaction to ChatGPT when it launched on November 30, 2022. Did you immediately understand that, as the Vice Provost responsible for educational technologies, you and your team would be responsible for spearheading NYU's response?

A: We certainly knew in early December that there was going to have to be some sustained response, and that the Provost's Office was going to have to convene a cross-school group to develop that response. The Teaching with Generative Tools working group formed at the end of the fall semester, and met all spring.

And while the initial public conversation in academic circles understandably focused on the immediate term -- what could we do to deter students from forging their writing assignments? -- NYU hosts so many different threads of the technical and ethical developments around AI that we knew we also needed a longer-term response, one that assumed generative AI was going to have a place in teaching and learning, and that we would have to develop strategies for its integration and use in the classroom.

Q: What advice are you offering faculty members who are concerned about ChatGPT and other AI? In the context of student use of AI, what should professors be doing to prepare?

A: For all that's been written about ChatGPT in academic life, there are really only three broad strategies: forbid its use in assignments, design around its use, or integrate its use. And while faculty have a high degree of autonomy in how they run their courses, my advice has consistently been that forbidding its use is at best a short-term strategy -- NYU can't win an arms race with the AI companies.

Over time, faculty will get good at designing assignments that either require student involvement -- such as process-oriented assessments, where how a student writes a paper is as important as the finished product -- or allow use of the tools with acknowledgement -- "Have ChatGPT write a pro and con argument for your thesis, then critique the output."

Over the summer we took materials already developed in many of the schools and libraries -- materials that offer strategies for these kinds of assignments and assessments -- and turned them into a curriculum for new faculty, and for faculty who want to retool their syllabi.

Q: When did you first start thinking about AI making its way into university life? Did you consider the release of a tool like ChatGPT inevitable?

A: I'm embarrassed to say that I did not see ChatGPT coming, even though I had been using Sudowrite, one of the closest things to it, for over a year when ChatGPT launched.

When ChatGPT appeared, its underlying Large Language Model (LLM) was good but not dramatically better than other models at the time. The real change was in the interface -- suddenly, anyone who'd ever texted back and forth with anyone else was given a way to think about how interacting with an AI should feel.

We saw the importance of chat as a mode of interaction -- it doesn't even feel like an interface, though that's what it is -- in the sudden explosion of use. There was zero difference in what OpenAI's LLM was capable of last Thanksgiving vs. last Christmas; the tens of millions of users who showed up that month were responding to the chat interface making existing capabilities comprehensible and exciting.

So even though I was paying attention not just to AI but to generative text tools specifically, I did not see the opportunity for the sudden, massive increase in usability that ChatGPT brought us.

Q: As AI continues to evolve, how do you see it transforming (for better or for worse) traditional teaching methods and the role of instructors in higher education?

A: Mostly for the better, not even because of the tools themselves, but because the only way out of an invidious arms race with AI companies is for faculty to talk with their students about the whys and hows of learning, to make it clear what certain assignments are meant to accomplish, to invite students to see homework not just as a set of tasks but as challenges that make them better people.

We know from decades of research that when faculty explain why students are doing certain kinds of work, the resulting engagement is better for all sorts of student outcomes -- emotional and civic, not just intellectual. Much of what we'll be doing over the next few years will involve specific adaptations to the strengths and weaknesses of specific tools, but the underlying change in teaching and learning is moving faculty towards being more explicit about both goals and strategies to pursue those goals in their classrooms.

Q: Have you come across any examples of how AI can enhance student learning, and if so, can you describe what this could look like? 

A: To quote Herbert Simon, learning results from what the student does and thinks and only from what the student does and thinks. For AI to enhance student learning, it has to enhance what the student does and thinks, not just replace it. 

This is the core of the challenge. At the extremes of its capabilities, AI can be used for simple correction or wholesale replacement of student work. A spellchecking AI can correct writing without doing much in the way of teaching the user. (There are words I spell wrong about half the time, and spellcheckers sure don't seem to have improved those odds over the years.) And at the other end of the spectrum, if I write a prompt for a memo, then cut and paste the result without even reading it, I also haven't learned anything.

The opportunities for student learning lie between the extremes of trivial correction and total opt-out. If we want students to learn while using AI, our assignments have to ask them to use it in an engaged way and to understand the results, which is only to say that for any activity the tool replaces, there needs to be some higher order activity -- editing the output, critiquing the output, repeating the process with variations, and so on -- that becomes the new source of what the student thinks and does.

Over a three-year cycle, NYU offers something like twenty thousand different courses, each of which has multiple assignments, so in aggregate this redesign problem is huge. There's no way to do this all at once, so the key for us is to use every phase of adaptation to understand what we need to do next.

This fall we will be working on providing faculty with a set of strategies and resources adapted to the present capabilities of the tools. As development of AI continues, and as we get more experience, we will update what we provide faculty, and they will continue to adapt, and we will learn more, and so on for the next several years.

Q: Are there privacy and data security concerns with student use of AI? Can universities play a part in protecting student information?

A: Yes, absolutely, and Tani Raiford, our Chief Privacy and Data Officer, is very involved in these conversations. There are Federal rules for handling student data, but more importantly, we have an ethical obligation to create a space for our students to use these tools without feeling like they are sharing data with third parties.

The ways these tools work, both ingesting and producing data, are unlike most of the tools we've gotten used to in the 30 years the internet has been publicly accessible. The conversation around student privacy, and around ethical use generally, is going to be as involved and iterative as the conversation around teaching and learning.