Can a simple gaze from AI create an impression of communication?
An international team of researchers conducted a groundbreaking study on human communication with artificial agents. The results suggest that interaction with technology depends not only on the signals displayed but also on human expectations.
Published Dec 9, 2025
Communication with artificial agents, such as virtual characters and social robots, is becoming increasingly prevalent in domains such as healthcare, education, and services. One of the most important cues for understanding communicative intentions is gaze, which in humans signals attention and intention and is essential for processes such as joint attention. With the growing integration of artificial intelligence into everyday life, understanding how we interpret these cues in artificial agents is crucial for improving human-technology interaction.
With this goal in mind, an international team led by Friederike Charlotte Hechler and Emmanuele Tidoni conducted an innovative study. In a semi-interactive online paradigm, 160 participants observed a virtual agent alternating its gaze between different objects. In each scenario, the researchers manipulated two factors: whether the agent established eye contact and whether it repeated its gaze toward the same object. Participants were asked to decide whether to “give” an object to the agent, interpreting whether it was requesting assistance or merely observing. Additionally, participants were informed that the agent’s behavior was either based on human data or generated by artificial intelligence (AI).
The findings confirmed that eye contact is the strongest cue of communicative intent, followed by gaze repetition. When both occurred, the likelihood of participants interpreting the gesture as a request for help was higher. When the agent repeatedly looked at the same object without eye contact, this also influenced responses, but less clearly, acting more as an indication of the object’s relevance than as a direct communicative signal. Interestingly, beliefs about the origin of the behavior (human or AI) had little overall impact but did influence responses in ambiguous situations. In such cases, participants tended to attribute greater communicative intent when they believed the agent was human-controlled.
These results suggest that interaction with technology depends not only on the signals displayed but also on human expectations. This study was published in the scientific journal Scientific Reports, in the article The influence of human agency beliefs on ascribing gaze-signalled communicative intent, as part of research project 137/24 - Decoding the motor and physiological dynamics of human-robot interactions, supported by the Bial Foundation.
ABSTRACT
Communication with artificial agents, such as virtual characters and social robots, is becoming more prevalent, making it crucial to understand how their behaviours can best support social interaction. Eye gaze is a key communicative behaviour, as it signals attention and intentions. Prior research shows that perceiving an agent as sentient affects how its gaze is interpreted. This
study examined how such beliefs affect the interpretation of gaze as a signal of communicative intent. In a semi-interactive online task, 160 participants viewed a virtual agent exhibiting dynamic gaze sequences. Each trial varied whether eye contact occurred and whether the agent looked at the same object twice. Participants judged whether the agent was requesting help or merely inspecting the object. Beliefs about the agent’s sentience (human- or AI-controlled) were also manipulated. Results showed that when gaze cues were ambiguous, participants were more likely to ascribe communicative intent if they believed the agent was human-controlled compared to when they believed the agent was AI-controlled. Subjective ratings also indicated a general preference for human-controlled agents. These findings underscore the influence of user expectations on interpreting gaze in artificial agents.