NYU psychologist Jay Van Bavel explores the latest research on how partisan allegiances can interfere with analytical thinking.
Counselor to the President Kellyanne Conway touched a nerve—to put it mildly—when she coined the phrase “alternative facts” in a January 2017 interview defending Press Secretary Sean Spicer’s false claim that President Donald Trump’s inauguration had drawn record attendance in Washington, D.C. Spicer’s boast was easily disproven by photographs showing that the crowd on the mall for Trump’s inauguration was about a third the size of the one at President Barack Obama’s inauguration in 2009.
So how could Conway—and some Trump supporters at large—cling to a different version of reality? Political opponents have long held different opinions, critics argued, but those have always been based on differing analyses of the same facts. If now the two sides couldn’t even agree on the facts, what hope was there for genuine debate?
Many journalists labeled the phrase “Orwellian” and lamented it as emblematic of a new era in which voters’ political convictions shaped which kinds of evidence they’d accept, rather than the other way around. And since then, amid mounting evidence that the deliberate creation of social media-friendly “fake news” may have shaped the outcome of the 2016 election (an article falsely claiming that Pope Francis had endorsed Donald Trump received close to a million engagements on Facebook, for example), the conviction among many that something is uniquely and newly broken in American politics has only strengthened.
But if that’s true, where exactly did we go wrong? And is there any hope of repairing the damage? What makes “fake news” so irresistible to some—and is anyone truly immune?
These are questions that have been troubling Jay Van Bavel, an associate professor of psychology and neural science who specializes in identifying how group identities and political beliefs shape the mind and brain.
“That’s the cool thing about science,” he says. “When you’re watching the news and freaking out, one thing you can do is go back to your lab, read the work that’s been done, and design your own studies to try to figure out what’s going on and maybe find a cure.”
Last year, Van Bavel and colleagues examined 560,000 tweets on contentious topics such as gun control, climate change, and same-sex marriage and found that each moral-emotional word (such as “greed”) a tweet contained increased its retweets by about 20%—but the sharing was mostly among people with similar viewpoints. And this spring, he and postdoctoral fellow Andrea Pereira co-authored “The Partisan Brain: An Identity-Based Model of Political Belief,” a review of current research suggesting that identification with political parties can actually interfere with the way that the brain processes information.
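A roughly 20% boost per moral-emotional word implies a multiplicative model of expected retweets. Here is a minimal sketch of that arithmetic; the word list, baseline rate, and tweet texts are hypothetical stand-ins for illustration, not the study's actual lexicon or data:

```python
# Toy illustration of a multiplicative effect: each moral-emotional
# word in a tweet multiplies its expected retweets by ~1.2 (a 20% boost).
# The word list below is a made-up stand-in for the study's lexicon.

MORAL_EMOTIONAL_WORDS = {"greed", "shame", "evil", "hate", "disgrace"}

def count_moral_emotional(tweet: str) -> int:
    """Count how many distinct moral-emotional words appear in a tweet."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return len(words & MORAL_EMOTIONAL_WORDS)

def expected_retweets(tweet: str, baseline: float = 100.0,
                      boost: float = 1.2) -> float:
    """Expected retweets: a baseline rate times the per-word multiplier."""
    return baseline * boost ** count_moral_emotional(tweet)

neutral = "New policy report released today"
charged = "This evil policy is pure greed and a disgrace!"
print(expected_retweets(neutral))   # no moral-emotional words: baseline
print(expected_retweets(charged))   # three such words: compounded boost
```

Under these toy numbers, three moral-emotional words compound to about a 73% increase over the baseline (1.2 × 1.2 × 1.2 ≈ 1.73), which is why emotionally charged wording can so sharply outperform neutral phrasing in sharing.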
Van Bavel explained to NYU News the downsides to our deeply human desire for belonging, and offered some possible tactics for fostering evidence-based thinking. Here are some of his thoughts on how better understanding the brain could help encourage more productive political conversations.
We tend to reject facts that threaten our sense of identity.
When a survey showed that Trump supporters were more likely than others to misidentify a photo from the 2009 inauguration as one from 2017, were they just being stubborn, or did they actually perceive the size of the crowd differently?
Van Bavel says there could have been some of both going on—knowingly giving a wrong answer to signal support for a partisan side is known as “expressive responding”—but having trouble reconciling facts that don’t support your existing view is something that happens to people from both sides of the aisle.
In one study cited in Van Bavel’s paper, Democrats were shown to be more likely to remember George W. Bush as having been on vacation during Hurricane Katrina (he wasn’t), whereas Republicans were more likely to remember seeing Barack Obama shaking hands with the President of Iran (he didn’t). In another, even people with strong math skills struggled to solve a math problem when its answer contradicted their view on whether gun control reduced crime.
What’s going on here? Van Bavel theorizes that choosing a particular political party is often an essential part of how people construct their identities—so that a threat to a particular candidate or position can sometimes be perceived (though not always consciously) as a threat to self.
“We don’t yet have all the answers at the level of the brain,” he says, “but when you have a really strong commitment to a group or belief and you get information that contradicts what you already know, you construct new ways of thinking about that information rather than updating your belief.”
Van Bavel points to a classic study by social psychologist Leon Festinger, who infiltrated a doomsday cult to see what would happen when the world didn’t end on the date the group’s leader had predicted. Rather than abandoning the cult when the prediction failed, the followers did the opposite: They “doubled down” on their beliefs and proselytized even more fervently. It’s just one extreme example of the (illogical) way that people tend to resolve what psychologists call “cognitive dissonance”—the discomfort of holding two conflicting beliefs at once—in all kinds of everyday situations.
Tribalism is old, but social media is new.
The cognitive structures that make it feel good to belong to an “in group”—and painful and scary to change allegiances when new facts come into conflict with our core beliefs—may be as old as humanity itself, Van Bavel says. It’s likely that we’ve always had a tendency to embrace and share evidence that reinforces our worldview and reject that which contradicts it. But if there’s something different about the way that process works now, it’s the speed with which news—“fake” or otherwise—can spread.
Facebook has about two billion monthly active users worldwide, with another 336 million on Twitter. “In seconds, I can click a button and retweet an article to 10,000 people,” Van Bavel says. “The average human has never had that capacity before.” Add to that the fact that—as Van Bavel’s study showed—it’s the more sensational stuff that’s likely to make a splash within social networks, and both ordinary citizens and news organizations who rely on clicks for revenue have a strong incentive to trumpet outrageous headlines.
“Ancient psychology and modern technology have created a perfect storm for fake and hyper-partisan news to be perpetuated,” Van Bavel says.
Some political differences seem “hard-wired.”
Whereas we might feel like we choose a political party or candidate based on which shares the principles we hold dear, there’s some evidence to suggest that sometimes the process works the other way around, or even that we aren’t really “choosing” at all. In one study, participants agreed or disagreed with a given welfare policy based on whether or not it was said to be endorsed by their chosen party, rather than on whether it aligned with their personal ideologies. And even more disconcerting is research suggesting there could be a genetic component to political identification: Identical twins have been shown to be much more likely to share political beliefs than non-identical twins, and one of Van Bavel’s own studies found a correlation between attitudes toward the political system and the size of one part of the brain—the amygdala.
Does all of that mean it’s impossible to convince anyone of anything they don’t already believe? Van Bavel doesn’t think so, but says it might mean thinking differently about our methods of persuasion. If the brains of liberals and conservatives really are different, then what works for you might not work for the person you’re trying to convince.
“It might mean that you have to do a better job of understanding that person’s position, and look at how to frame arguments in a way that appeals to someone with that belief,” he says. “I think in the future a lot of political research is going toward thinking about biological makeup and psychological orientation to the world, and how to find messages that appeal to different types of people based on those grounds.”
But logical thinking can be taught.
Some studies suggest that people with a high degree of scientific curiosity and those (such as judges) working in professions that require them to evaluate evidence fairly may be less susceptible to partisan blindness—and more likely to change their minds when presented with new facts. Van Bavel believes that a little of that kind of training—even for those of us working in very different fields—can go a long way toward inoculating people against the allure of fake news, and that it’s something educators should focus on.
“You can do that training in high school and in college,” he says. “You can take a philosophy class on logic, or a journalism class where you learn how to fact-check and spot well-sourced versus poorly sourced stories. I teach Introduction to Psychology, and my hope is that students go into the world afterward and even if they don’t choose to become practicing psychologists like me, they’ll have the skills to open a newspaper to an article about some finding in psychology and decide if it’s worth paying attention to or not.”
And as for pointing out holes in someone else’s logic? It’s tricky, of course, but Van Bavel points to research showing that it’s best not to go on the offensive, but rather to ask questions—such as “How do you know that?” or “Why do you think that is?”—that lead the other person to discover their own uncertainty on the topic.
“I think that most people say things socially and informally in a way that expresses greater certainty than they actually hold,” he says. “But when you go through the exercise of asking about the premises of their argument and what evidence they have in a way that doesn’t make them defensive, they might actually see the holes in their own arguments.” And in the process, you might find areas where you aren’t as sure as you thought you were, either.
Still, even experts can be fooled.
Van Bavel’s New Year’s resolution this year was to post fewer “hot takes” on Twitter—instead waiting to weigh in on anything political until he’d looked at data on the topic. But he admits that even he has accidentally posted fake news—twice. Both times it was satire that sounded like it could have been true, and both times he deleted it immediately upon learning of the mistake.
But Van Bavel also credits his online friends—fellow scientists and researchers who’ll immediately chime in with questions about evidence, whether he’s posted a political story or a research paper—with keeping him honest. Just as the need for acceptance by an ideological in-group can lead some to share spurious “news,” Van Bavel’s similar social desire to be respected by his peers is what reminds him to be careful about what he shares.
“I’m fortunate to have a community of people who are really skeptical, and so I’ve embraced that kind of uncertainty and criticism,” he says. “That’s really part of the identity of the scientist. But if we could generate that ethos with other types of identities, I think it would be a benefit for everybody.”