...

Photo credit: DrAfter123/Getty Images.

Before the advent of artificial intelligence as we know it today, films ranging from Yul Brynner’s Westworld in the 1970s to Harrison Ford’s Blade Runner in the 1980s to Arnold Schwarzenegger’s Terminator 2: Judgment Day in the 1990s explored the future of human-machine interactions and tended to imagine the worst.

Now that AI tools can be used for everything from learning a language to spotting counterfeit goods, many fear that life is indeed imitating art, with these technologies eluding the control of their creators or being deployed in ways that deceive users.

Many of these same worries surfaced nearly a decade ago, notes Leif Weatherby, director of Digital Humanities at NYU, when Stephen Hawking, NYU’s Yann LeCun, and other scientists and entrepreneurs released an open letter that stressed the importance of “avoiding potential pitfalls” of AI, while also recognizing the “huge” benefits it could offer.

“While some concerns about AI are overblown, we undoubtedly live in a world where the penetration of technologies into processes historically reserved for humans—work, political order, and even thinking—demands our attention,” explains Weatherby.

Weatherby’s work centers on cybernetics, the study of communication and control in living beings and in the machines humans design and build, such as Rachael, the bioengineered humanoid played by Sean Young in Blade Runner.

“The discipline was established in the late 1940s and had an ambitious goal—to recast the full range of scientific and philosophical knowledge with the aim of studying and guiding organized systems: animals, machines, and social bodies,” explains Weatherby, an associate professor in NYU’s Department of German.

To help navigate what AI has become and to give his students a firm grounding in its origins, Weatherby teaches “Communication and Control: A Long History of Cybernetics,” a College of Arts and Science core curriculum course that is primarily an intellectual history and considers the fiction and nonfiction works of a range of thinkers as philosophical and literary lenses through which to view AI. Thomas Hobbes’ Leviathan, for example, offers commentary on control; Plato’s Phaedrus, on the relationship between the body and the soul; and Ursula Le Guin’s The Left Hand of Darkness, on the future. On the syllabus, Hannah Arendt and Thomas Malthus appear alongside Stanley Kubrick’s Dr. Strangelove.

Students in “Communication and Control: A Long History of Cybernetics” conduct an experiment asking a large language model to generate some text with the aim of getting the tool to produce a “hallucination.” Photo credit: Jonathan King.

But at the beginning of each semester, Weatherby’s students engage in a hands-on exercise to help them better understand what they are studying. Working with a large language model (LLM), such as ChatGPT, they ask the tool to generate text with the aim of producing a “hallucination,” that is, getting it to make something up. Hallucinations have plagued LLMs since their inception. Last year, Google’s chatbot Bard wiped roughly $100 billion off the market value of its parent company, Alphabet, when a promotional demo wrongly credited the James Webb Space Telescope with capturing the first image of a planet outside our solar system, an achievement that in fact belongs to the European Southern Observatory’s Very Large Telescope.
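
For readers curious to try the experiment themselves, here is a minimal sketch of how such a prompt might be sent to an LLM programmatically. It assumes the OpenAI Python SDK and an API key in the environment; the model name and the prompt are illustrative choices, not part of the course materials.

```python
# A minimal sketch of the hallucination-elicitation exercise.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# The model name and prompt below are illustrative, not from the course.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Asking for sources on a very narrow topic is a classic way to
# invite fabricated ("hallucinated") citations.
prompt = (
    "List three peer-reviewed journal articles about the history of "
    "cybernetics seminars at NYU, with authors, titles, and DOIs."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# Each returned citation should then be checked by hand: do the
# authors, the article, and the DOI actually exist?
```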

“Showing them what hallucinations are and how easily they can be created serves as a warning against using these systems carelessly, while also helping them become technologically aware in today’s society, one of the goals of the course,” Weatherby explains.

After the hallucinations are generated, students work in pairs to “audit” the content, first confirming its inaccuracy and then surfacing why the tool made the mistake.
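
As one illustration of what such an audit might look like for a fabricated citation, the sketch below checks a cited title against Crossref’s public REST API, which indexes real scholarly works. The helper name and the similarity threshold are assumptions for this example; the course does not prescribe any particular tooling.

```python
# Illustrative citation audit: does a cited title match any real work
# indexed by Crossref? (Hypothetical helper; not the course's method.)
# Assumes: `pip install requests`.
import difflib
import requests

def citation_looks_real(title: str, threshold: float = 0.8) -> bool:
    """Return True if Crossref knows a work whose title closely matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for candidate in item.get("title", []):
            ratio = difflib.SequenceMatcher(
                None, title.lower(), candidate.lower()
            ).ratio()
            if ratio >= threshold:
                return True
    return False

# A title invented by a chatbot would likely return False here.
print(citation_looks_real("Cybernetics and the Governance of Seminars"))
```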

“The goal of the exercise is to get the students to understand the risks of using these systems and to pair their knowledge of AI with close analysis of a text,” Weatherby explains.

During this semester’s exercise, students found that the tools invented citations in some instances. Photo credit: Jonathan King.

During this semester’s exercise, students found that the tools invented citations when asked for sources to explore further, but gave stilted responses to more complex questions involving moral reasoning. Many students, at Weatherby’s suggestion, probed AI’s inability to write passages without using a given letter, such as “e.”
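
The no-letter test is easy to score programmatically, which is part of what makes it a good probe: a model’s output either contains the forbidden letter or it does not. Below is a small, self-contained checker; the sample output string is invented for illustration.

```python
# Score the lipogram probe: did the model really avoid the forbidden letter?
# The sample output below is invented for illustration.

def forbidden_letter_count(text: str, letter: str = "e") -> int:
    """Count occurrences of the forbidden letter, case-insensitively."""
    return text.lower().count(letter.lower())

# Pretend this came back from a model asked to avoid the letter "e".
model_output = "A bright morning dawns; birds sing as evening fog drifts away."

count = forbidden_letter_count(model_output)
print(f"Forbidden letter appears {count} time(s).")
# Models often fail this task because they generate multi-character
# tokens rather than individual letters, so constraints at the level
# of single characters are hard for them to satisfy.
```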

“After being given extensive opportunities to practice textual interpretation and to immerse themselves in some of the literary and philosophical works that have been influential in shaping—and even foreshadowing—today’s world, students can better navigate the technological changes they are confronted with every day,” says Weatherby.