Mesquita didn’t set out to be a modern-day Nostradamus. While studying South Asian politics as a grad student at the University of Michigan in 1967, he detected an erroneous equation in a book on political mathematics. The experience was momentous for the young scholar. He says: “It was the first time I realized that one could say, in regard to politics, not that ‘I disagree’ or ‘This is my opinion,’ but ‘No, this is simply wrong, and I can show you why.’ ”
By the late 1970s, Mesquita was predicting the outcome of events with unnerving accuracy: a declassified CIA audit of his work for the agency found he had a success rate of 90 percent. He formed Mesquita and Roundell, LLC and consulted with governments and private businesses (including British Aerospace Systems, J.P. Morgan, and Arthur Andersen) to analyze the likely outcome of various negotiating scenarios—from foreign policy crises to mergers and acquisitions—and to advise them on how to use this insight in their favor.
Mesquita’s new book, The Predictioneer’s Game: Using the Logic of Brazen Self-Interest to See & Shape the Future (Random House), surveys his career as a forecaster, offers some tips on improving your own fortune, and includes predictions and proposals for the major foreign policy crises of the day, including the North Korean nuclear standoff and the eventual outcome of the war in Iraq. The New York Times Magazine recently featured him in a story asking if Iran will ever develop the bomb. (His answer: No, they’ll eventually back down.)
NYU Alumni Magazine recently spoke to Mesquita, a Silver Professor in the Wilf Family Department of Politics, about the model that started it all.
How does rational choice predict the future?
It takes into account something fundamental about the way people tackle problems: People do what they believe is in their best interest. I construct a model of a game that looks at the relative clout of players seeking a settlement and the willingness of these players to use their clout to arrive at a certain outcome. Then I run the numbers.
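The clout-and-willingness calculation he describes can be sketched in a deliberately simplified form. The players, positions, weights, and the weighted-mean rule below are illustrative assumptions for the sake of the example, not Mesquita's actual (and far more elaborate) model:

```python
# A toy version of the bargaining calculation described above -- NOT
# Mesquita's actual algorithm. Each player has a position on a policy
# scale, a measure of clout (capability), and a salience (how willing
# they are to spend that clout on this issue). A simple first-pass
# forecast is the clout- and salience-weighted mean of positions.

def forecast(players):
    """Return the weighted-mean position for (position, clout, salience) tuples."""
    total_weight = sum(clout * salience for _, clout, salience in players)
    weighted_sum = sum(pos * clout * salience for pos, clout, salience in players)
    return weighted_sum / total_weight

# Hypothetical players on a 0-100 policy scale.
players = [
    (20, 0.8, 0.9),  # hard-liner: strong clout, high salience
    (60, 0.5, 0.6),  # moderate
    (90, 0.3, 0.4),  # reformer: less clout, lower salience
]

print(round(forecast(players), 1))  # → 37.9
```

Even in this stripped-down form, the point of "running the numbers" is visible: the predicted settlement lands nearest the players who combine the most clout with the greatest willingness to use it, not at the average of stated positions.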
Are you ever surprised by the outcomes?
I can never anticipate what the numbers are going to tell me. The very first time I attempted a forecast for the State Department, in the late 1970s, I was asked to predict the outcome of a contest for prime minister in India. I knew something about Indian politics and had my own opinion about what would happen. But the model disagreed with me and with all of the other experts. It turned out to be correct. It was a humbling moment, but also an informative one.
Why is the model sometimes wrong?
In my book, I discuss a prediction I made in the 1990s, about health-care reform, which was wrong. This was because of what I called an “exogenous random shock.” A person identified as key to shepherding the legislation through Congress, Congressman Dan Rostenkowski, was indicted on 17 felony counts. Since then, I’ve revised the model to take such shocks into account.
Is it really the case that everyone always acts in their best interest? Aren’t there instances of irrationality or people acting on emotion that mess with the calculations?
Sure, that accounts for some portion of the forecasts that turn out to be incorrect, but I don’t think it accounts for enough of them to be a really big deal. [People] want to think that because the model isn’t right 100 percent of the time, they can conclude that people are emotional and the model has no value. But nothing is correct 100 percent of the time. People want to believe in something like “wisdom,” though they have a hard time defining it or recognizing it. This [model] is a somewhat objective analysis. Are there methods that work even better? Not to my knowledge.