The video below isn’t of some kind of Vulcan mind meld. Rather, it’s an illustration of the shared brain activity between two people making eye contact—a connection that some researchers believe may fuel the back-and-forth of everyday social interactions.
Picture a badly acted movie in which each character patiently waits for the other to finish a line before speaking. The dialogue sounds stilted compared to real, free-flowing exchanges in which we tend to mimic and adjust to each other’s rhythms, explains NYU psychologist and linguist Suzanne Dikker, a postdoctoral researcher in David Poeppel’s lab studying the brain functions that make conversation possible. Behavioral psychologists have long observed that people walking in groups easily fall into step together, and it’s the same when we chat, Dikker says. “You sync up to talk at the same speed as someone, and that’s what helps you jump in at the right moment. We’re in tune with the people that we’re listening to so that we can make a smooth transition when it’s our turn to speak.”
Whereas earlier models of language held that the brain makes sense of speech from the bottom up, by processing individual sounds that are then systematically pieced together into words and sentences, Dikker is part of a group of scientists who’ve come to view the brain as a top-down “prediction machine”—always anticipating what’ll happen next. If you’ve got a spouse who finishes your sentences, a friend who somehow blurts out just what you’re thinking, or a co-worker who just loves to chime in before you’ve finished your thought, you’re probably already familiar with this phenomenon.
But what exactly is happening in the brain at that moment of interjection or interruption? In a recent study published in the Journal of Neuroscience (much of which was conducted at the Sackler Institute for Developmental Psychobiology), Dikker led a team of researchers who found that a listener’s brain activity is similar to that of a speaker when the listener is able to predict what the speaker is going to say. Dikker hypothesizes that the brain might use such a prediction to prep the auditory cortex for the anticipated sounds—like a tenant telling a doorman to watch out for a particular package.
In the experiment, the researchers recorded a speaker as she described a series of images that Dikker had drawn, and then played back those descriptions for listeners as they viewed the same images. Dikker came up with the ideas for the pictures by jumbling together 90 nouns and 45 transitive verbs—yielding whimsical results. While some of the images tended to elicit a single, predictable description, like “a penguin hugging a star,” others called to mind several possible captions. (See examples below.) Dikker compares the difference to that between hearing “The grass is...” (when it’s easy to guess that the next word is “green”) and hearing “John is...” (when any number of words could come next). The listeners’ brain activity most closely matched that of the speaker when they were able to predict her description.
“A pear choking an owl.” Many of Dikker’s pictures turned out to be violent because it’s hard to find peaceful transitive verbs in English. “There’s ‘kiss’ and ‘pet,’” she says, “but then also ‘stab’ and ‘kill.’” Photo credit: Suzanne Dikker
“A guitar boiling a tire”? “A guitar stirring tire soup”? Descriptions varied for this picture. Photo credit: Suzanne Dikker
“A penguin hugging a star.” An example of a picture with an easily predicted description. Photo credit: Suzanne Dikker
These moments of brain wave synchronicity—when two minds seem to “click”—are also at the heart of a series of imaginative projects Dikker’s been working on outside the lab. In collaboration with fellow scientists and artists such as Matthias Oostrik and Marina Abramovic, she’s created museum installations (like “Measuring the Magic of the Mutual Gaze,” above) exploring what it really means to “be on the same wavelength” as someone else. In one, a neurofeedback game, two people wearing portable EEG headsets score points every time their brainwaves are synchronized; in another, a pair sits in a cart that moves when the headsets register shared brain activity.
The question these activities invariably elicit? Whether you can use an EEG headset to find your soulmate, of course. “People always ask if this is a dating tool,” Dikker says. “We’re always careful to tell people that this is just a three-minute snapshot of two brains—it doesn’t say anything about whether you should get divorced, or whether you get along better with your mom than your sister.”
Though the impulse to find significance in a meeting of the minds is a strong one, Dikker cautions that some amount of synchronization is to be expected. “We know already from a lot of research that our brains do approximately the same things if we’re doing approximately the same things,” she says. More difficult is accounting for subtler variations in how different brains respond to the same situation.
“Is there anything to drive synchronicity that you can’t explain just from what’s going on in the world around us?” Dikker asks. “That’s actually a really hard question when you think about it.”