Tatiana Conde e Magro, supported by the Fundação BIAL under project 148/18 – Voice perception in the visually deprived brain: Behavioral and electrophysiological insights, concluded that humans can evaluate the emotional authenticity of nonverbal vocalizations at early stages of emotional processing. The study also showed that emotional authenticity is evaluated faster for laughter than for crying. The article detailing these results, “The time course of emotional authenticity detection in nonverbal vocalizations”, was published in the scientific journal Cortex.
“Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated their authenticity (volitional versus spontaneous) or emotional meaning (sad versus amused). Twenty-two of the participants were blindfolded and tested in a dark room, and 21 were tested in standard visual conditions. As compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter, and increased P2 in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000–1400 msec), with earlier effects for laughs (700–1000 msec). Visual condition affected brain responses to emotional authenticity at early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter compared to crying.”