Experiment
The Art of Unintended Consequences
What happens when you let artificial intelligence alone determine the fate of wetland gardens? Hint: it doesn’t end well
By Jenny Comita
With Deep Swamp, the provocative experimental art installation by Tandon School of Engineering artist/environmental engineer Tega Brain, comes the question: How much faith should we put in the wisdom of machines? To understand that question and Brain’s installation, it helps to know the story of how the ozone hole was discovered—or, to be more accurate, how it almost wasn’t.
In the early 1980s, Joseph Farman, a British scientist who’d been conducting research in Antarctica for decades, was measuring atmospheric ozone with a not-particularly-sophisticated device called a Dobson spectrophotometer when he noticed a precipitous falloff—around 40 percent—in the readings. The decrease was so drastic that Farman was convinced that his machine was broken. But a new meter provided even more startling results, and Farman spent the next couple of years painstakingly researching the alarming phenomenon. Finally, in 1985, Farman and two collaborators published their findings in the journal Nature—and were almost laughed out of the scientific establishment. Why? Because NASA satellites had also been monitoring ozone levels—using much more advanced sensors coupled with high-tech computer analyses—and had found no such problem. How could three guys with a clunky old meter question the best technology money could buy?
But the NASA computers, it turns out, were programmed in such a way that any readings that strayed too far from expectation were seen as outliers and considered unreliable. “Basically,” says Brain, “the ozone hole had been written off as a sensor error because whoever had programmed that satellite had assumed there was no way ozone levels could drop 40 percent. And this sort of thing happens all the time. A lot of assumptions go into how we build computational models, but their version of reality becomes really compelling because computers can collect data on a scale that humans can’t. The result is that we tend to believe machines more than we do human observation.” As Farman’s eventual vindication makes clear, blind faith in machine wisdom can be a dangerous state of affairs, and it’s one that Brain dramatically demonstrates with Deep Swamp.
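Neither the satellite pipeline’s code nor its exact thresholds are public, but the failure mode Brain describes is easy to sketch. Purely as an invented illustration (every number below is made up), a plausibility filter of this kind can be written in a few lines of Python, and it will dutifully write off the very signal that matters:

```python
# Toy sketch of a plausibility filter that discards "impossible" readings.
# All values are invented for illustration; this is not NASA's actual code.

EXPECTED_OZONE = 300   # hypothetical baseline ozone column, in Dobson units
TOLERANCE = 0.25       # readings more than 25% from baseline get rejected

readings = [310, 305, 298, 180, 175, 182]  # last three: a real 40% drop

def filter_outliers(values, expected, tolerance):
    """Keep readings near the expected value; flag the rest as sensor error."""
    kept, rejected = [], []
    for value in values:
        if abs(value - expected) / expected <= tolerance:
            kept.append(value)
        else:
            rejected.append(value)
    return kept, rejected

kept, rejected = filter_outliers(readings, EXPECTED_OZONE, TOLERANCE)
print("accepted:", kept)         # accepted: [310, 305, 298]
print("written off:", rejected)  # written off: [180, 175, 182]
```

The filter behaves exactly as programmed; the mistake lives in the assumption baked into its tolerance, not in the arithmetic.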
For the work, Brain recreated wetlands full of local plant species in three oversized aquariums. Each tank was assigned an artificial intelligence system that controlled conditions including light, water levels, nutrients, and humidity, and each system pursued its own objective, fed by its own inputs, that continually informed how it chose to adjust its environment. The first AI system, dubbed Hans, was fed thousands of photographs of wetlands culled from the internet and instructed to keep his tank looking as similar to those images as possible. Meanwhile, his virtual colleague Harrison—apparently a highly cultured guy—was trained on images of landscape paintings from the history of Western art. Finally, Nicholas—an AI for the Instagram age if there ever was one—was designed simply to optimize attention. “Whenever there were a lot of people around,” says Brain, “Nicholas would reinforce those settings.” In short, the AIs were deliberately, almost comically, single-minded, resulting in mini-environments that were, by the end of the show, a bit of a mess.
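Brain has not published her controllers’ code, but the setup she describes amounts to a feedback loop chasing a single score. As a purely hypothetical sketch (every name, number, and the stand-in objective below is invented), a greedy controller in that spirit might look like this in Python:

```python
import random

# Hypothetical sketch of a single-objective tank controller, in the spirit
# of Deep Swamp's AIs. Names and the objective are invented stand-ins:
# Hans's real objective compared photos of the tank to wetland images,
# and Nicholas's rewarded settings that drew a crowd.

settings = {"light": 0.5, "water_level": 0.5, "nutrients": 0.5, "humidity": 0.5}

def objective_score(s):
    """Stand-in objective: closeness to an arbitrary 'ideal wetland look'."""
    target = {"light": 0.8, "water_level": 0.3, "nutrients": 0.6, "humidity": 0.7}
    return -sum((s[k] - target[k]) ** 2 for k in s)

def control_step(s, step=0.05):
    """Nudge one actuator at random; keep the change only if the score improves.
    A greedy hill-climber: single-minded by construction."""
    key = random.choice(list(s))
    trial = dict(s)
    trial[key] = min(1.0, max(0.0, trial[key] + random.choice([-step, step])))
    return trial if objective_score(trial) > objective_score(s) else s

for _ in range(1000):  # one simulated stretch of adjustments
    settings = control_step(settings)
print(settings)
```

A controller this greedy will drive its tank toward whatever extreme its one metric rewards, regardless of what the plants need, which is precisely the single-mindedness the installation puts on display.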
And in that mess lies her point: an AI is only as good as the objective it’s trained to pursue, and the humans who set those objectives inevitably bring their own viewpoints and biases. Deep Swamp “is not about demonstrating what the tech can do, but trying to approach it from a more critical perspective,” explains Brain, who says one of her goals is “to counter the loud voices coming from Silicon Valley,” where companies like Microsoft and Planet Labs are aggressively pushing to integrate AI into environmental research and management.
The issue of overreliance on inherently biased technology is in no way exclusive to ecology. Brain points to the many problems that have arisen when AI and computational modeling have been incorporated into social services, the judicial system, and predictive policing. “In a lot of fields that are inherently social, there’s an attempt to automate decision making, and that can be hugely problematic,” she says. “It’s most obvious when you see companies like Google and YouTube struggling to understand that the technologies they’ve produced have really deep political consequences and we can’t necessarily fix them with just another algorithm. There have to be more responses that aren’t necessarily technical to the challenges we’re facing.”
It was that realization that caused Brain, who started her career as an environmental engineer in Australia, to shift her professional focus. She enrolled in art school and eventually came to New York, where she began to collaborate on pieces like The Good Life (Enron Simulator)—a website that forwards willing participants all 500,000 publicly available emails from the Enron archives—and Smell Dating, a matchmaking service based solely on scent. Brain describes her hybrid practice, in which science and technology collide with art, as “eccentric engineering,” and her AI piece fits that description more neatly than most. “I began to see that in order to successfully address the very, very dire environmental situation that we’re facing, we’re going to have to ask bigger questions,” she says. “We need technology absolutely, but we need cultural and political will and a lot of other things too.”