Do Alexa and Siri Really Know Best?
Journalism professor—and former computer scientist—Meredith Broussard explains why you shouldn’t put your faith in the superiority of machines
By Lauren Mechling
We follow routes our phones plot for us. We read news stories generated by computers. We have Jetsons-style fantasies of getting into our own little hovercraft bubbles and being whisked off to work during a very relaxing commute. And we often do so without so much as a nanosecond of doubt or misgiving.
But our blind faith in the superiority of our gizmos, gadgets, and computers gets (rightfully) rattled in Meredith Broussard’s engaging and often alarming book, Artificial Unintelligence: How Computers Misunderstand the World (MIT Press). Diving into the troubles with things such as AI software and driverless cars, she illuminates how the machines we revere are often glitch-riddled—and even stupid. A computer scientist and journalist, Broussard has worked as a software developer at AT&T Bell Labs and the MIT Media Lab, and she is now an affiliate faculty member at the NYU Center for Data Science and an associate professor at NYU’s Arthur L. Carter Journalism Institute.
I was fascinated to read in your book that the AP uses computers to write some of its articles about sports and business.
The branch of AI being used here is called natural language generation. It’s really cool, but it doesn’t do as much as people imagine. It’s not thinking; it’s more like Mad Libs.
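The "Mad Libs" comparison can be made concrete with a toy sketch. This is not the AP's actual system; it is a minimal illustration, with invented names and data, of how template-based natural language generation fills slots in a fixed sentence from structured data rather than "thinking":

```python
# A minimal sketch of template-style natural language generation.
# Slots in a fixed sentence are filled from structured data, the way
# a Mad Libs page is filled in. All names and stats are illustrative.

def generate_recap(game):
    """Fill a fixed sentence template from a dict of game stats."""
    template = ("{winner} beat {loser} {winner_score}-{loser_score} "
                "on {day}.")
    return template.format(**game)

game = {
    "winner": "Tigers", "loser": "Royals",
    "winner_score": 5, "loser_score": 2,
    "day": "Tuesday",
}
print(generate_recap(game))
# → Tigers beat Royals 5-2 on Tuesday.
```

The output reads like a sports brief, but the program has no model of baseball; it only slots numbers and names into prose a human wrote in advance.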
Your background is in both computer science and journalism?
I studied computer science and English as an undergraduate at Harvard, and I started out as a computer scientist. I couldn’t hack the sexism in the field, so I left to become a journalist. Journalism is a much friendlier place to be female. Once data journalism became a well-known discipline, I switched over because it allows me to do journalism and computer science simultaneously.
What does “data journalism” mean?
It’s the practice of finding stories in numbers and using numbers to tell stories. The story that kicked off the field—the first time the tools of quantitative social science were applied to journalism—was a [Detroit Free Press] story by Philip Meyer in 1968. There was a race riot in Detroit, and he surveyed people who had been involved. By analyzing the results on a mainframe computer, he found that the participants came from varied social classes, which told a different story about the riots than the one people had been telling themselves. When you use computational tools, you can challenge conventional wisdom and find new insights.
One of the main themes of your book is how feelings and nuance tend to get lost in a society that over-relies on technology.
It’s important to remember that both qualitative and quantitative factors matter. For instance, you can’t write a program that measures love, but love matters greatly in the world. When we ignore what can’t be quantified, we make knuckleheaded mistakes and end up discriminating. When we attribute too much agency to a dumb machine, we make a worse world.
One of your chapters focuses on the inefficacy and danger of self-driving cars. Why has our society become so obsessed with them?
It’s an enchanting idea. People have been talking about flying cars since at least the 1950s—it’s probably keyed into our fantasies of space travel and our visions of what the future would look like. It’s really important not to get too carried away with these old-fashioned visions of the future by trying to make science fiction real. Self-driving cars don’t work as well as marketers would like people to believe. For example, GPS technology doesn’t operate on the Z axis, so self-driving cars can move around a flat plane but can’t automatically navigate a multistory parking garage.
How severe do you think our mania for technology has become?
We need more computational literacy overall, and we should make careful choices about when it is appropriate to use computers and when it isn’t. It turns out that human interaction is something we need: people in solitary confinement go crazy from the lack of it. People think they love the idea of technology that replaces a lot of human interaction, but a totally frictionless world is dangerous. With election software, for example, we’re more vulnerable to being hacked. Paper ballots might seem like a pain, but they are much harder to hack.
And what is the good side of technology?
I love technology—that’s the first line in my book. I love building things, and it’s thrilling how human ingenuity can take care of so many mundane tasks. But we need to be judicious.