Can the brain retrieve motor information that is not visible while listening to vocalizations?

Alice Tomassini, supported by the BIAL Foundation under project 246/20 - The hidden rhythm of interpersonal (sub-)movement coordination, concluded that during speech listening the brain reconstructs articulatory information that is not available visually. These results are detailed in the paper "Speech listening entails neural encoding of invisible articulatory features", published in the scientific journal NeuroImage.


Abstract

“Speech processing entails a complex interplay between bottom-up and top-down computations. The former is reflected in the neural entrainment to the quasi-rhythmic properties of speech acoustics while the latter is supposed to guide the selection of the most relevant input subspace. Top-down signals are believed to originate mainly from motor regions, yet similar activities have been shown to tune attentional cycles also for simpler, non-speech stimuli. Here we examined whether, during speech listening, the brain reconstructs articulatory patterns associated to speech production. We measured electroencephalographic (EEG) data while participants listened to sentences during the production of which articulatory kinematics of lips, jaws and tongue were also recorded (via Electro-Magnetic Articulography, EMA). We captured the patterns of articulatory coordination through Principal Component Analysis (PCA) and used Partial Information Decomposition (PID) to identify whether the speech envelope and each of the kinematic components provided unique, synergistic and/or redundant information regarding the EEG signals. Interestingly, tongue movements contain both unique as well as synergistic information with the envelope that are encoded in the listener's brain activity. This demonstrates that during speech listening the brain retrieves highly specific and unique motor information that is never accessible through vision, thus leveraging audio-motor maps that arise most likely from the acquisition of speech production during development.”
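To make the analysis logic described in the abstract more concrete, here is a minimal, illustrative Python sketch, not the authors' actual pipeline: it applies PCA to toy articulatory-like signals and then runs a simple Partial Information Decomposition using the minimum-mutual-information redundancy on discretized variables. All signal names, the simulated data, and the binning choices are assumptions made purely for illustration.

```python
# Hypothetical sketch: PCA on simulated articulatory kinematics plus a simple
# Partial Information Decomposition (PID) with minimum-mutual-information
# redundancy. Toy data only; not the study's real EMA/EEG pipeline.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 5000                                                    # time samples

# Toy "articulatory" channels (lips, jaw, tongue) sharing a quasi-rhythmic drive
drive = np.sin(2 * np.pi * 4 * np.linspace(0, 10, n))       # ~4 Hz oscillation
kin = np.column_stack([
    drive + 0.3 * rng.standard_normal(n),                   # lips
    0.8 * drive + 0.3 * rng.standard_normal(n),             # jaw
    np.roll(drive, 30) + 0.3 * rng.standard_normal(n),      # tongue (lagged)
])

# PCA captures the dominant pattern of articulatory coordination
pca = PCA(n_components=2)
kin_pcs = pca.fit_transform(kin)                            # kinematic components

# Toy "speech envelope" and "EEG" signals; the EEG mixes envelope and tongue PC
envelope = np.abs(drive) + 0.2 * rng.standard_normal(n)
eeg = 0.6 * envelope + 0.4 * kin_pcs[:, 0] + 0.5 * rng.standard_normal(n)

def discretize(x, bins=8):
    """Equipopulated binning so histogram-based MI estimates are stable."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.searchsorted(edges, x)

def mutual_info(x, y):
    """Plug-in mutual information (bits) between two discrete signals."""
    joint = np.histogram2d(x, y, bins=(x.max() + 1, y.max() + 1))[0]
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def pid_mmi(target, s1, s2):
    """Two-source PID using the minimum-mutual-information redundancy."""
    i1, i2 = mutual_info(target, s1), mutual_info(target, s2)
    joint_src = s1 * (s2.max() + 1) + s2                     # joint source alphabet
    i12 = mutual_info(target, joint_src)
    red = min(i1, i2)
    return {"redundant": red, "unique_envelope": i1 - red,
            "unique_kinematics": i2 - red, "synergy": i12 - i1 - i2 + red}

t, e, k = discretize(eeg), discretize(envelope), discretize(kin_pcs[:, 0])
print(pid_mmi(t, e, k))   # unique / redundant / synergistic info about the "EEG"
```

In this toy setup the kinematic component should carry some unique information about the simulated "EEG" beyond the envelope, mirroring in spirit the kind of unique and synergistic contributions the study reports for tongue movements, though the real analysis rests on recorded EMA, EEG, and a more sophisticated PID estimator.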