
Does the brain adjust to speech depending on our level of attention?

Study shows that the brain adjusts its electrical activity when listening to speech, and that this synchronization depends more on attention than on volume. EEG-based systems can adapt sound to the listener's state, paving the way for more efficient auditory technologies.

Published Feb 9, 2026

When we listen to someone speaking, the brain doesn’t simply “receive” the sound: It adjusts its electrical activity to follow the rhythm and variations of the speech, almost as if it were dancing to the same beat. This phenomenon, known as speech tracking (or brain–speech synchronization), is strongly linked to speech comprehension and has sparked great scientific interest because it reveals how the brain adapts to process auditory information efficiently.

In a recent study, a team of researchers led by Alejandro Pérez analyzed whether this neural tracking could be influenced by speech volume, adjusted in real time according to the listener’s level of attention. To do this, the researchers used EEG to measure brain activity related to attention (alpha waves) and developed a system that automatically increased or decreased the volume. Participants listened to sequences of digits and had to repeat them in the same order, allowing the researchers to assess immediate memory and brain activity.
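
To make the closed-loop idea concrete, here is a minimal, hypothetical Python sketch of the kind of rule such a system could use: estimate alpha-band power from a short EEG window and, if it is above a baseline (suggesting lower attention), play the next digit louder, otherwise quieter. The sampling rate, band limits, window length, and gain step are illustrative assumptions, not the parameters of the published system.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Illustrative values only; the study's actual settings are not specified here.
FS = 250              # assumed EEG sampling rate in Hz
ALPHA_BAND = (8, 12)  # assumed alpha band in Hz

def alpha_power(eeg_window):
    """Mean alpha-band power of a 1-D EEG window."""
    sos = butter(4, ALPHA_BAND, btype="bandpass", fs=FS, output="sos")
    return np.mean(sosfiltfilt(sos, eeg_window) ** 2)

def loudness_change_db(current_alpha, baseline_alpha, step_db=6.0):
    """High alpha (low attention) -> play the next digit louder; low alpha -> quieter."""
    return step_db if current_alpha > baseline_alpha else -step_db

# Example decision for one digit, using random data as a stand-in for real EEG.
baseline = alpha_power(np.random.randn(2 * FS))
latest = alpha_power(np.random.randn(2 * FS))
print(f"Adjust next digit by {loudness_change_db(latest, baseline):+.1f} dB")
```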

The results showed that immediate memory performance remained stable regardless of volume. However, the way the brain followed the speech changed systematically. Louder sounds, presented when attention was low (indicated by high alpha power), resulted in weaker speech tracking but faster response peaks, contradicting the authors' initial hypothesis. In contrast, quieter sounds, heard during states of higher attention (lower alpha power), produced stronger and more prolonged neural tracking.

Additional analyses suggest that the brain may activate compensatory mechanisms to maintain memory performance—for example, greater synchronization in the theta band during encoding—even when the sound changes.

Taken together, the results suggest that internal attention plays a more decisive role than volume in speech processing. Furthermore, the study shows that EEG-based systems can adjust sound to the listener's state, paving the way for smarter, adaptive auditory technologies.

This study was published in the Journal of Neural Engineering, in the article "Modulating speech tracking through brain state-dependent changes in audio loudness", as part of the research project 267/22 - System for measuring and manipulating language-based social interactions using EEG hyperscanning, neurofeedback and closed-loop brain stimulation, supported by the Bial Foundation.

ABSTRACT

Objective. To determine whether the perceptual intensity of speech signals—manipulated via loudness and dynamically adjusted through a brain state-dependent stimulation (BSDS) paradigm—modulates neural speech tracking and short-term memory.

Approach. We implemented an EEG brain state-dependent design in which real-time variations in alpha power were used to modulate the loudness of pre-recorded digits during a task modelled on the digit span test. Speech tracking was quantified using lagged Gaussian copula mutual information (2–10 Hz), and behavioural performance was assessed through recall accuracy.

Main results. Contrary to our initial hypothesis that higher loudness would enhance speech tracking and memory via bottom–up attention, digit recall accuracy was stable across loudness conditions. Speech tracking revealed an unexpected pattern: louder stimuli presented during high alpha power (low attention) elicited reduced tracking magnitudes and shorter peak latencies, whereas quieter stimuli delivered during low alpha power (high attention) produced stronger and more temporally extended tracking responses.

Significance. These findings may suggest that internal attentional state, rather than external stimulus salience, plays a dominant role in shaping speech encoding. The study provides proof-of-concept evidence for BSDS in auditory paradigms, showing the importance of attentional fluctuations and stimulus loudness in determining the strength and timing of neural speech tracking, with implications for the design of adaptive speech-enhancement strategies.
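
For readers curious about the tracking metric named in the Approach, the sketch below illustrates univariate Gaussian copula mutual information between a speech envelope and a single EEG channel across a set of lags. It is a simplified, assumption-laden illustration (rank-based copula transform, no bias correction, surrogate data), not the authors' analysis pipeline.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copnorm(x):
    """Rank-transform a 1-D variable, then map the ranks to a standard normal."""
    return norm.ppf(rankdata(x) / (len(x) + 1))

def gcmi(x, y):
    """Gaussian copula mutual information (bits) between two 1-D signals, no bias correction."""
    r = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)

def lagged_tracking(envelope, eeg, lags):
    """MI between the speech envelope and an EEG channel at each lag (in samples)."""
    return [gcmi(envelope[: len(envelope) - lag], eeg[lag:]) for lag in lags]

# Toy example with surrogate data: EEG that partly follows the envelope at a 25-sample lag.
rng = np.random.default_rng(0)
env = rng.standard_normal(5000)
eeg = np.roll(env, 25) + rng.standard_normal(5000)
print(lagged_tracking(env, eeg, lags=[0, 25, 50]))
```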
