
Communicating brains – from one-brain neuroscience to two-brain neuroscience

Chairs: Silke Anders1 & Thomas Ethofer2

1Department of Neurology, Universität zu Lübeck, Lübeck, Germany

2Department of General Psychiatry, University of Tübingen, Tübingen, Germany

Abstract:

Traditionally, the search for the neural mechanisms underlying human communication and social interaction has focused on the brains of individuals who are engaged in either encoding or decoding social signals. This work has yielded remarkable insights into how the human brain parses and integrates social information from visual and auditory signals. Recently, imaging studies have begun to investigate the flow of information between brains from an interindividual perspective. In this work, the focus has shifted from the analysis of brain activity in individual brains to the analysis of mutually complementary processes in dyads of interacting brains. In this symposium, we will elucidate the advantages and current limitations of both approaches. The first talk (Thomas Ethofer) will give an overview of recent work on how affective information from vocal and facial communicative signals is decoded and integrated in the human brain. Using a two-brain approach, the second talk (Anna Kuhlen and Carsten Allefeld) will show how EEG and two-brain multivariate spatio-temporal analyses can be combined to study how listeners align their own brain activity with that of a speaker while listening to fairy tales. Using a similar two-brain approach, the third talk (Silke Anders) will show how information-based fMRI can be used to investigate the flow of information between senders and perceivers during facial communication of affect. The fourth talk (Ivana Konvalinka) will present data from a truly interactive EEG study showing that combining information from interacting brains can indeed reveal information about ongoing social interaction that the sum of information derived from each brain in isolation cannot. Finally, the fifth talk (Edda Bilek) will present recent advances in hyperscanning that now permit the study of brain processes in truly interacting individuals with fMRI.



Talk 1:

Encoding and integration of social information from human faces and voices

Thomas Ethofer
Department of General Psychiatry, University of Tübingen, Tübingen, Germany

Successful social interaction requires correct interpretation of dynamic facial and vocal features, such as emotional facial expressions and speech melody (prosody). It has recently been shown that the neural correlate of the integration of signals from these two sensory modalities is situated at the overlap of the right-hemispheric face- and voice-sensitive cortices in the superior temporal sulcus (STS). Building on this research, we show that the fine-scale spatial activation patterns within these modality-specific STS areas indeed carry information about perceived emotional tone. Furthermore, we combine functional magnetic resonance imaging (fMRI) results with diffusion tensor imaging (DTI) to clarify which other brain areas are recruited in concert with STS regions to extract the social meaning of facial and vocal stimuli. Using a factorial adaptation design, we demonstrate significant response habituation in the orbitofrontal cortex (OFC) which occurs similarly during perception of emotional faces, voices and face-voice combinations. These functional data are in line with DTI findings showing converging fiber projections from the three different STS modules to the OFC, which run through the external capsule for the voice area, through the dorsal superior longitudinal fasciculus (SLF) for the face area, and through the ventral SLF for the audiovisual integration area. This suggests a key role of the OFC in the processing of dynamic social signals and indicates that the OFC is part of the extended system for both face and voice perception. Our findings show that combining a number of different neuroimaging methods can successfully be used to tap processing stages in individual brains in great detail.


Talk 2:

Coordination of EEG between speakers and listeners


Anna Kuhlen1,2, Carsten Allefeld1, John-Dylan Haynes1,2
1 Bernstein Center for Computational Neuroscience, Charité – Universitätsmedizin Berlin, Germany;
2 Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany


When two people talk to each other they coordinate both their linguistic and nonlinguistic behavior. To capture the neural basis of social interaction, studies in social neuroscience have recently begun to extend their focus from the isolated individual to two or more interacting individuals. In this study we uncover a coordination between the ongoing EEG (electroencephalogram) of two individuals – a person speaking and a person listening. The EEG of 12 speakers was recorded while they narrated short stories. The EEG of another set of 12 participants was recorded while they watched video recordings of these narrations. To ascertain that any neural coordination is indeed due to the processing of communicated information, audiovisual recordings of two speakers were superimposed on each other, and listeners were instructed to attend to either one or the other speaker. Using multivariate analyses of variance, we found evidence that listeners show similar time-locked activity when attending to the same speaker. Furthermore, a canonical correlation approach revealed that listeners' EEG coordinates with speakers' EEG at a delay of about 13 seconds. This finding suggests that speakers and listeners coordinate their representations of larger semantic units. Going beyond previous studies, our design ensures that the observed neurophysiological marker of interpersonal coordination is not driven by individuals synchronously processing shared sensory input. Instead, our measure reflects an interpersonal coordination between two individuals that is based on the information one interlocutor conveys to the other.
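To make the lagged analysis concrete, the following minimal sketch shows how a speaker-to-listener delay could in principle be estimated with canonical correlation analysis computed at a range of lags. The sampling rate, channel counts, and simulated data are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: estimating a speaker->listener EEG delay with lagged
# canonical correlation. All shapes and rates are assumptions for illustration.
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 10                       # assumed sampling rate of the band-limited signal (Hz)
max_lag_s = 20                # search window for the speaker->listener delay (s)

def lagged_cca_correlation(speaker, listener, lag):
    """Correlation of the first canonical pair after delaying the listener by `lag` samples."""
    x, y = speaker[:len(speaker) - lag], listener[lag:] if lag else listener
    u, v = CCA(n_components=1).fit_transform(x, y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# speaker_eeg, listener_eeg: (n_samples, n_channels) arrays; simulated here with
# the listener lagging the speaker by 13 s.
rng = np.random.default_rng(0)
speaker_eeg = rng.standard_normal((3000, 32))
listener_eeg = np.roll(speaker_eeg, 130, axis=0) + rng.standard_normal((3000, 32))

lags = np.arange(1, max_lag_s * fs)
corrs = [lagged_cca_correlation(speaker_eeg, listener_eeg, lag) for lag in lags]
best_lag = lags[int(np.argmax(corrs))]
print(f"peak speaker->listener coupling at ~{best_lag / fs:.1f} s delay")
```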

Talk 3:

Mapping the flow of affective information between communicating brains

Silke Anders
Department of Neurology, Universität zu Lübeck, Lübeck, Germany.

When people interact, affective information is transmitted between their brains. We used information-based functional magnetic resonance imaging (fMRI) in a ‘pseudo-hyperscanning’ setting to map the flow of affective information between the brains of senders and perceivers engaged in ongoing facial communication of affect. We found that the level of neural activity within a distributed network of the perceiver's brain can be successfully predicted from the neural activity in the same network in the sender's brain, depending on the affect that is currently being communicated. Furthermore, there was a temporal succession in the flow of affective information from the sender's brain to the perceiver's brain, with information in the perceiver's brain being significantly delayed relative to information in the sender's brain. This delay decreased over time, possibly reflecting some ‘tuning in’ of the perceiver with the sender. I will show that these data support current theories of intersubjectivity by providing direct evidence that a ‘shared space’ of affect is successively built up between senders and perceivers of affective facial signals.
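As a rough illustration of the cross-brain prediction logic, the sketch below trains a classifier on a sender's (hypothetical) network patterns, labelled by the communicated affect, and tests it on the perceiver's patterns at increasing delays. Array shapes, the classifier choice, and the simulated signal are assumptions for illustration only, not the study's information-based fMRI pipeline.

```python
# Minimal sketch: 'between-brain' pattern prediction at several delays.
import numpy as np
from sklearn.svm import LinearSVC

def cross_brain_accuracy(sender, perceiver, labels, delay):
    """Train on sender volumes, test on perceiver volumes shifted by `delay` TRs."""
    clf = LinearSVC().fit(sender, labels)
    shifted = perceiver[delay:]
    return clf.score(shifted, labels[:len(shifted)])

rng = np.random.default_rng(1)
n_vol, n_vox = 200, 500
labels = rng.integers(0, 2, n_vol)                 # communicated affect per volume (toy)
pattern = np.outer(labels - 0.5, rng.standard_normal(n_vox))
sender = pattern + 0.5 * rng.standard_normal((n_vol, n_vox))
# Perceiver carries the same affect-specific pattern, delayed by 3 volumes.
perceiver = np.roll(pattern, 3, axis=0) + 0.5 * rng.standard_normal((n_vol, n_vox))

for delay in range(6):                             # delay in TRs
    acc = cross_brain_accuracy(sender, perceiver, labels, delay)
    print(f"delay {delay} TR: accuracy {acc:.2f}")
```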

Talk 4:

Dual-EEG of joint tapping: what can two interacting brains teach us about social interaction?

Ivana Konvalinka
Center of Functionally Integrative Neuroscience, University of Aarhus, Denmark; Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark.

The neural mechanisms underlying real-time social interactions remain largely unknown. Only a small number of recent studies have explored what goes on in the brains of two people during true social interaction. Here, we asked whether information gained from two truly interacting brains can reveal the neural signatures of social interaction better than investigating each brain separately. We measured dual-EEG during an interactive finger-tapping task. Pairs of participants were asked to synchronize with an auditory signal coming either from their partner (interactive, 'coupled' condition) or from a computer ('uncoupled', computer-controlled condition). Time-frequency analysis revealed stronger left-motor and right-frontal suppression at 10 Hz during the interactive condition than during the uncoupled, computer-driven condition. We then used machine-learning approaches to identify the brain signals driving social interaction. We combined data from both participants in each pair (raw power at 10 Hz during tapping at each electrode) and applied logistic regression with feature selection to classify the two tapping conditions. Seven frontal electrodes consistently emerged as good classifiers, with 85-99% accuracy. Moreover, there was a tendency for one member's frontal electrodes to drive the classifier over the other's, which predicted the leader of the interaction in 8 of 9 pairs. This study shows how analyzing two interacting brains together can improve the classification of behaviour, and hence that the whole of two brains is indeed better than the sum of its parts at disentangling the neural signatures of interaction.
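The classification step could look roughly like the following sketch: 10 Hz power features from both pair members are concatenated, a feature-selection stage keeps a small set of electrodes, and a logistic regression separates coupled from uncoupled trials. Electrode counts, trial numbers, and the toy effect are assumptions, not the study's data or exact analysis.

```python
# Minimal sketch: classifying 'coupled' vs. 'uncoupled' trials from the combined
# 10 Hz power of both brains, with feature selection feeding logistic regression.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_trials, n_electrodes = 80, 64                    # per participant (assumed)
# Concatenate 10 Hz power features of both pair members: (trials, 2 * electrodes).
power_pair = rng.gamma(2.0, 1.0, (n_trials, 2 * n_electrodes))
condition = rng.integers(0, 2, n_trials)           # 0 = uncoupled, 1 = coupled
power_pair[condition == 1, :8] *= 0.8              # toy effect: frontal suppression

pipeline = make_pipeline(
    SelectKBest(f_classif, k=7),                   # keep the 7 most informative electrodes
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(pipeline, power_pair, condition, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```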

Talk 5:

Using ‘hyperscanning’ to study social interaction

Edda Bilek1, Matthias Ruf2, Ceren Akdeniz1, Peter Kirsch3, Andreas Meyer-Lindenberg1
1Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, University of Heidelberg, Mannheim, Germany
2Department of Neuroimaging, Central Institute of Mental Health, University of Heidelberg, Mannheim, Germany
3Department of Clinical Psychology, Central Institute of Mental Health, University of Heidelberg, Mannheim, Germany.

We have developed a hyperscanning environment using two 3T MRI scanners that are functionally connected and time-synchronized, enabling online interaction (cooperation/competition) between dyads of participants while they are being scanned simultaneously. Among other paradigms, we use a joint attention task in which a target stimulus is presented to one participant only, so that cooperation between both participants (engagement in joint attention) is required to complete the task successfully. The presentation gives an overview of the technical characteristics of the hyperscanning setup and its implementation. Data analysis includes conventional statistical parametric mapping, but also considers time-lagged connections to detect maximally covarying systems in the simultaneously acquired datasets. Results from the ongoing data analysis will be presented. Finally, we will discuss the usefulness of hyperscanning fMRI in clinical social neuroscience.
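A minimal sketch of the time-lagged screening idea follows: regional time courses from the two simultaneously acquired datasets are correlated at several lags to flag maximally covarying regions. The region count, lag range, and simulated coupling are illustrative assumptions rather than the actual analysis.

```python
# Minimal sketch: lagged correlation between regional time courses of two
# simultaneously scanned participants, to flag maximally covarying regions.
import numpy as np

def lagged_corr(x, y, lag):
    """Correlation between x and y with y delayed by `lag` volumes."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

rng = np.random.default_rng(3)
n_vol, n_regions = 300, 90                          # e.g. volumes x atlas regions (assumed)
brain_a = rng.standard_normal((n_vol, n_regions))
brain_b = rng.standard_normal((n_vol, n_regions))
brain_b[:, 10] += np.roll(brain_a[:, 10], 2)        # toy coupling at a 2-volume lag

best = []
for r in range(n_regions):
    corrs = [lagged_corr(brain_a[:, r], brain_b[:, r], lag) for lag in range(6)]
    best.append((max(corrs), int(np.argmax(corrs)), r))
top_corr, top_lag, top_region = max(best)
print(f"region {top_region}: peak between-brain correlation {top_corr:.2f} at lag {top_lag}")
```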
