
Face perception: insights into the visual, emotional and social brain

Chair: Nathalie George

CNRS / UPMC / Inserm, UMR 7225 / UMR-S 975, CRICM, Paris, France



Abstract:

There is a long tradition of research in the cognitive neuroscience of face perception in humans. This research field has diversified considerably in recent years, with sustained research on the mechanisms of face identity and facial expression perception, together with a growing number of studies in the emerging field of social neuroscience. Because face perception is a multifaceted process, these studies exploit the variety of methods developed in cognitive neuroscience in order to understand the neuro-functional organization of the perception and integration of the many kinds of information conveyed by faces. This symposium will present an overview of recent studies in the fields of cognitive, affective, social and cultural neuroscience in relation to face perception. Bruno Rossion will show how steady-state visual evoked potentials provide a new tool for the study of face perception. Gladys Barragan-Jason will present scalp and intracranial EEG data on the temporal dynamics of face processing, from the superordinate to the familiarity level. Stephanie Dubal will present evidence for early emotional amplification during the perception of human faces and robotic stimuli displaying positive emotion. Atsushi Senju will present eye-tracking data on intercultural differences in face and gaze perception. Nathalie George will present an EEG study on oscillatory brain activities associated with joint attention during online face-to-face interaction.

Talk 1:

Understanding individual face perception by means of steady-state visual evoked potentials


Bruno Rossion, Adriano Boremanse, Dana Kuefner, Esther Alonso
University of Louvain, Louvain-la-Neuve, Belgium

A novel approach to understanding face perception in the human brain by means of steady-state visual evoked potentials (SSVEPs; Regan, 1966) is introduced (Rossion & Boremanse, 2011). In these experiments, participants are presented with pictures of faces appearing at a constant rate (e.g., 4 Hz, or 4 faces/second) for 90 s while high-density (128-channel) EEG is recorded. Time-frequency analysis shows large responses at the fundamental frequency (4 Hz) and its harmonics (8 Hz, ...) over posterior electrode sites. The first and second harmonic responses are much larger at right occipito-temporal channels when different faces are presented than when the same face is repeated. This reduction of signal in the identical-face condition is much smaller for inverted or contrast-reversed faces, two manipulations known to greatly affect facial identity perception. The SSVEP response at the stimulation frequency increases over roughly the first 10 seconds and then decreases when the same face is repeated. The sudden introduction of different face stimuli leads to an immediate increase in signal, indicating a fast, large, and stimulation-frequency-specific release from face identity adaptation. Overall, this sensitivity of the SSVEP to face identity provides further evidence for face individualization in the right occipito-temporal cortex, by means of an approach that is much simpler, faster, and has higher signal-to-noise than those previously used. It offers a promising tool for studying sensitivity to the visual features of individual faces in populations whose electrical brain responses show lower sensitivity (e.g., infants and children, clinical populations).
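
For readers unfamiliar with frequency tagging, the core of such an SSVEP analysis can be sketched as follows: with a long stimulation epoch, the amplitude spectrum of the EEG shows peaks exactly at the stimulation frequency and its harmonics. This is a minimal illustrative sketch, not the authors' analysis pipeline; the sampling rate, placeholder data, and the snr_at helper are assumptions introduced here.

# Minimal sketch of a frequency-tagging (SSVEP) analysis, assuming a
# 90-s single-channel EEG epoch sampled at 512 Hz with faces appearing
# at 4 Hz. Illustrative only; not the authors' actual pipeline.
import numpy as np

fs = 512.0          # sampling rate (Hz), assumed
f_stim = 4.0        # stimulation frequency (faces/second)
eeg = np.random.randn(int(90 * fs))  # placeholder for one channel's 90-s epoch

# Amplitude spectrum; a 90-s window gives a frequency resolution of 1/90 Hz,
# so the stimulation frequency falls exactly on an FFT bin.
spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

def snr_at(f, n_neighbors=10):
    """Amplitude at frequency f divided by the mean of neighboring bins."""
    idx = np.argmin(np.abs(freqs - f))
    neighbors = np.r_[spectrum[idx - n_neighbors:idx],
                      spectrum[idx + 1:idx + 1 + n_neighbors]]
    return spectrum[idx] / neighbors.mean()

for harmonic in (1, 2):  # fundamental (4 Hz) and second harmonic (8 Hz)
    print(f"{harmonic * f_stim:.0f} Hz: SNR = {snr_at(harmonic * f_stim):.2f}")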


Talk 2:

The neuronal dynamics of face processing: From detection to recognition

Barragan-Jason G.1, Cauchoix M.1, Valton L.1,2, Sol J.C.2, Serre T.3, Barbeau E.J.1
1 Centre de recherche Cerveau et Cognition, Université de Toulouse, CNRS-UMR 5549,
Toulouse, France
2 Centre Hospitalier Universitaire, Rangueil, Toulouse, France
3 Cognitive, Linguistic, and Psychological Sciences Department, Institute for Brain
Sciences, Brown University, Providence, RI, USA

Recognizing familiar faces rapidly and accurately is crucial for social interactions. However, how humans move from face detection to face recognition among hundreds of known faces remains largely unclear. In particular, the time needed to progress from face detection to face recognition has seldom been investigated. Event-related potential (ERP) studies suggest that face detection occurs around 110 ms, while familiar face recognition may rely on different components: the N170, the N250 or the N400.
Using scalp EEG in control subjects and intracranial recordings in patients with drug-refractory epilepsy during a rapid go/no-go categorization task, we compared electrophysiological responses between face detection (human vs. animal faces) and face recognition (famous vs. unknown faces, i.e., familiarity level). We required participants to respond very rapidly and used a large pool of stimuli in order to prevent top-down activation.
Using both ERPs and single-trial decoding, a delay of ~150 ms was found between the detection and recognition conditions. This 150 ms electrophysiological delay is remarkably similar to the delay observed in reaction times. Detection occurred around 100 ms after stimulus onset, while recognition occurred around 250 ms post-stimulus. Reaction times correlated with distinct ERP components and decoding results, suggesting that the N400 is not necessary to recognize a face as known.
In contrast to some previous suggestions, this study demonstrates that individualizing a face as known (familiarity level) in a bottom-up paradigm takes considerably longer than face detection (superordinate level). Why it takes so long remains to be investigated.
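
The single-trial decoding logic can be illustrated with a minimal sketch, making no assumption about the authors' actual classifier or data: train a classifier independently at each time point and take the first time point at which cross-validated accuracy exceeds a threshold as a latency estimate. All data shapes, variable names, and the 60% threshold below are placeholders.

# Minimal sketch of single-trial decoding over time: estimate when a
# condition contrast (e.g., famous vs. unknown faces) becomes decodable.
# Data and parameters are illustrative, not the authors' actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))  # single-trial EEG
y = rng.integers(0, 2, n_trials)  # condition labels (e.g., famous vs. unknown)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()  # decoding accuracy at time t
    for t in range(n_times)
])

# Latency estimate: first time point where accuracy exceeds a threshold
# (with the random placeholder data above, accuracy stays near chance).
onset = np.argmax(scores > 0.6)  # threshold is illustrative
print(f"decodable from time index {onset} (accuracy {scores[onset]:.2f})")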


Talk 3:

Early emotional modulations beyond human faces

Stephanie Dubal, Mariam Chammat, Jacqueline Nadel
Emotion Centre, CNRS USR 3246, GH Pitié-Salpêtrière, Paris, France


From the observation that faces are the main conveyors of human emotion, it is only a short step to positing an emotional facilitation bias towards human faces. To test for this potential bias, we designed event-related potential (ERP) studies using a set of prototypical emotions displayed by non-humanoid robots. These robotic heads were made of complex metallic arrangements from which emotional signals had to be extracted. We compared early ERP responses to these non-humanoid robots expressing happiness versus a neutral expression, and, in a separate study, sadness.
At the behavioral level, emotion shortened reaction times similarly for robotic and human stimuli. The early P1 wave was enhanced in response to emotional (both happy and sad) compared to neutral expressions, for robotic as well as human stimuli. Consistent with their lower "faceness" compared to human stimuli, robots elicited a later and smaller N170 component and did not produce an inversion effect when presented upside-down.
These results support the idea that early perceptual modulations in response to emotional expressions extend beyond human faces. They also raise questions about the dissociation between the affective and physical properties of the stimulus at the level of perceptual encoding. Besides examining the stimulus properties that contribute to emotionality at the level of the P1 component, our results show that positive stimuli, as well as negative ones, may trigger early emotional effects. Special focus will be placed on the idea that positive emotion conveys high-impact information.
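
The kind of component comparison underlying these findings can be sketched minimally: average single trials per condition and measure mean amplitude in a component time window (e.g., P1). Everything below (data, sampling rate, window bounds) is an illustrative assumption, not the authors' measurement procedure.

# Minimal sketch of an ERP component amplitude comparison: mean amplitude
# of the trial-averaged waveform in a P1 window, contrasted across
# conditions. All values are placeholders.
import numpy as np

sfreq, tmin = 500.0, -0.1  # sampling rate (Hz) and epoch start (s), assumed
rng = np.random.default_rng(0)
# (n_trials, n_times) single-channel epochs for two conditions
emotional = rng.standard_normal((60, 400))
neutral = rng.standard_normal((60, 400))

def mean_amplitude(epochs, t_start, t_end):
    """Mean amplitude of the trial-averaged ERP within [t_start, t_end] (s)."""
    erp = epochs.mean(axis=0)  # average across trials
    times = tmin + np.arange(epochs.shape[1]) / sfreq
    window = (times >= t_start) & (times <= t_end)
    return erp[window].mean()

# P1 window ~80-130 ms post-stimulus (illustrative bounds)
p1_effect = (mean_amplitude(emotional, 0.08, 0.13)
             - mean_amplitude(neutral, 0.08, 0.13))
print(f"P1 emotional - neutral difference: {p1_effect:.3f} (a.u.)")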

Talk 4:

The effect of cultural background on face and gaze scanning: An eye-tracking study

Atsushi Senju1, Angélina Vernetti1, Yukiko Kikuchi2, Hironori Akechi2, Toshikazu Hasegawa3, & Mark H Johnson1
1 Birkbeck, University of London, London, UK
2 Japan Society for the Promotion of Science, Tokyo, Japan
3 University of Tokyo, Tokyo, Japan

A fundamental question about the development of social cognition concerns the effect of the postnatal environment. However, this question is difficult to test empirically because, unlike in non-human animals, it is virtually impossible to control the human postnatal environment. One promising way to overcome this limitation is to study how different cultural norms, which systematically change social experience, modulate the development of social cognition. We focused on the differing cultural norms governing the use of eye contact in British and Japanese cultures, and investigated whether these norms are related to eye movements in response to perceived eye contact. British and Japanese adult participants were presented with a series of animations of computer-generated faces, which made a gaze shift either toward or away from the participant, and either smiled or opened the mouth in a non-communicative manner. Results revealed a differential pattern of face scanning between cultures, with Japanese participants fixating more "in between" the eyes and less on the mouth. It was also found that participants followed the perceived gaze (i.e., looked in the same direction as the gaze shift) and looked more at the eyes when the face made eye contact and smiled. Critically, these differential responses to facial displays did not interact with the cultural background of the participants, suggesting that responses to facial gestures are not modulated by cultural background.
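
The fixation analysis can be illustrated with a minimal area-of-interest (AOI) sketch: compute the proportion of total fixation time falling inside each AOI (eyes, mouth) and compare across groups. The fixation format and AOI bounding boxes below are assumptions, not the authors' actual definitions.

# Minimal sketch of an AOI analysis for face scanning. Coordinates,
# durations, and AOI boxes are illustrative placeholders.
import numpy as np

# Each fixation: (x, y, duration_ms) in screen coordinates.
fixations = np.array([(512, 300, 180), (500, 420, 240), (530, 310, 150)],
                     dtype=float)

# AOIs as (x_min, x_max, y_min, y_max) bounding boxes (assumed layout).
aois = {
    "eyes":  (420, 620, 260, 340),
    "mouth": (460, 580, 400, 460),
}

def fixation_proportion(fixations, box):
    """Proportion of total fixation time falling inside a bounding box."""
    x, y, dur = fixations.T
    inside = (box[0] <= x) & (x <= box[1]) & (box[2] <= y) & (y <= box[3])
    return dur[inside].sum() / dur.sum()

for name, box in aois.items():
    print(f"{name}: {fixation_proportion(fixations, box):.2f} of looking time")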


Talk 5:

Investigating online joint attention during face-to-face interaction: a hyperscanning EEG study

Nathalie George & Fanny Lachat
CNRS / UPMC / INSERM, UMR 7225 / UMR-S 975, CRICM, Paris, France

Within the face, gaze plays a particular role in social interaction. Viewing someone gaze at an object in the environment triggers attentional orienting toward that object in the observer. This joint attention process involves a dynamic interplay of mutual attentiveness and coordinated attention to the environment between the two persons (Tickle-Degnen, 2006); it is also a building block of theory of mind (Baron-Cohen, 1995). Here, we aimed to study joint attention with a setup involving online, face-to-face interaction between two agents whose brain activities were simultaneously recorded with EEG hyperscanning (64 electrodes per subject). The participants sat face-to-face with a device holding four light-emitting diodes (LEDs) between them, which could be lit in red, green, or orange. In "congruent" attention blocks, the subjects were requested to look at the same LED (joint attention condition), whereas in "incongruent" attention blocks, they had to look at opposite LEDs (no joint attention condition). Baseline trials, in which the participants could see the LEDs but not each other, were included. Time-frequency analysis showed that induced alpha/mu activity between 10 and 12 Hz was reduced in the joint relative to the no joint attention condition. These results suggest a modulation of the motor-related mu rhythm, providing support for the mirror neuron account of joint attention (Shepherd, 2009). This study emphasizes the interest – and feasibility – of moving toward a neuroscience of online face perception and real-life social interaction.
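
The time-frequency contrast can be illustrated with a minimal sketch using MNE-Python's Morlet wavelet transform; the data, sampling rate, frequency band, and trial counts below are placeholders, and this is not the authors' actual pipeline.

# Minimal sketch of a time-frequency contrast: single-trial Morlet wavelet
# power, averaged across trials, compared between joint and no-joint
# attention conditions in the 10-12 Hz (alpha/mu) band. Data are placeholders.
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 500.0
rng = np.random.default_rng(0)
# (n_trials, n_channels, n_times) single-trial data per condition
joint = rng.standard_normal((40, 64, 1000))
no_joint = rng.standard_normal((40, 64, 1000))

freqs = np.arange(8.0, 14.0, 1.0)  # frequencies around the alpha/mu band

def band_power(epochs):
    """Mean 10-12 Hz power, computed per trial then averaged."""
    power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                             n_cycles=freqs / 2.0, output="power")
    mask = (freqs >= 10) & (freqs <= 12)
    return power.mean(axis=0)[:, mask, :].mean()

print("joint - no-joint mu power:", band_power(joint) - band_power(no_joint))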
