Facial expressions tested as input method for VR and AR systems: University of Glasgow
Making digital interaction more accessible & inclusive
The University of Glasgow in the United Kingdom, along with the University of St Gallen in Switzerland, has developed a method that uses facial expressions as an input mechanism for virtual and augmented reality applications, in a step towards a more inclusive and accessible digital world.
Setting out to study whether the functionality of a commercially available VR headset could be adapted for this purpose, researchers from the University of Glasgow and the University of St Gallen developed a method that uses facial expressions as an input mechanism for virtual and augmented reality applications.
In a press statement, the University of Glasgow says that the research highlights the potential for more accessible digital interaction, particularly for users with limited mobility.
The British university adds that the researchers worked with 20 volunteers without disabilities, asking them to replicate the 53 facial expressions recognised by the headset. These expressions, known as Facial Action Units (FAUs), are used by the headset to animate avatars in virtual environments.
According to the press statement, each participant performed every FAU three times, holding the expression for three seconds on each attempt, and was then asked to evaluate each one for ease and comfort of execution. The study found that seven FAUs could be consistently recognised by the headset while also being comfortable to perform.
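The press statement does not describe how the headset's raw expression tracking was turned into deliberate commands, but the three-second hold suggests a dwell-style confirmation. The sketch below illustrates that idea in Python; the read_fau_weights() stub, the activation threshold and the polling rate are assumptions for illustration, not details from the study.

```python
import time

# Illustrative dwell-based facial input: an FAU only counts as a deliberate
# command once its activation weight stays above a threshold for a sustained
# period (the study protocol used a three-second hold). The headset SDK is
# stubbed out as read_fau_weights(), assumed to return FAU names mapped to
# activation weights in the range 0.0-1.0.

ACTIVATION_THRESHOLD = 0.6   # assumed weight above which an FAU counts as "performed"
DWELL_SECONDS = 3.0          # hold duration used in the study protocol

def read_fau_weights() -> dict[str, float]:
    """Placeholder for the headset's face-tracking API (not a real SDK call)."""
    raise NotImplementedError

def wait_for_deliberate_fau(target_fau: str) -> bool:
    """Return True once the target FAU has been held above threshold for the dwell time."""
    hold_started = None
    while True:
        weight = read_fau_weights().get(target_fau, 0.0)
        now = time.monotonic()
        if weight >= ACTIVATION_THRESHOLD:
            if hold_started is None:
                hold_started = now
            if now - hold_started >= DWELL_SECONDS:
                return True
        else:
            hold_started = None  # expression relaxed; restart the dwell timer
        time.sleep(1 / 60)       # poll at roughly the headset's tracking rate
```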
As per the press statement, the seven expressions include opening the mouth, squinting the left and right eyes, puffing the left and right cheeks, and pulling the corners of the mouth sideways.
“Some have suggested that VR is an inherently ableist technology because it often requires users to perform dextrous hand motions with controllers or make full-body movements which not everyone can perform comfortably. That means that, at the moment, there are barriers preventing widespread access to VR and AR experiences. With this study, we were keen to explore whether the functions of a commercially available VR headset could be adapted to help users accurately control software using only their face. The results are encouraging, and could help point to new ways of using technology not only for people living with disabilities but more widely too,” says Graham Wilson, School of Computing Science, University of Glasgow.
The university says that to test the real-world application of the findings, the researchers developed a neural network model capable of identifying the seven facial expressions with 97 per cent accuracy. They then asked 10 additional non-disabled participants to complete two tasks using facial input alone.
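The statement does not describe how the neural network was built, so the following is only a rough sketch of what a small classifier over the headset's 53 FAU activation weights might look like; the framework (PyTorch), the layer sizes and the seven target classes as outputs are assumptions, not the authors' implementation.

```python
import torch
from torch import nn

NUM_INPUT_FAUS = 53    # expressions exposed by the headset's face tracking
NUM_TARGET_FAUS = 7    # expressions the study found reliable and comfortable

class FAUClassifier(nn.Module):
    """Assumed architecture: a small feed-forward net over one frame of FAU weights."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_INPUT_FAUS, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_TARGET_FAUS),  # logits, one per target expression
        )

    def forward(self, fau_weights: torch.Tensor) -> torch.Tensor:
        return self.net(fau_weights)

# Usage sketch: classify a single frame of FAU weights.
model = FAUClassifier()
frame = torch.rand(1, NUM_INPUT_FAUS)          # stand-in for one tracking frame
predicted_class = model(frame).argmax(dim=1)   # index of the most likely target FAU
```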
According to the press statement, the first task involved navigating and interacting with a VR gaming environment, and the second involved browsing web pages in an AR setup. Participants also performed the same tasks using standard handheld controllers so the two input methods could be compared.
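The press statement does not specify which expression triggered which control, so the bindings below are purely hypothetical; they are included only to show how a recognised FAU might be dispatched to VR navigation or AR browsing actions.

```python
# Hypothetical bindings between recognised expressions and application actions;
# the study's actual mappings are not described in the press statement.
VR_GAME_BINDINGS = {
    "mouth_open": "select_target",
    "squint_left": "turn_left",
    "squint_right": "turn_right",
    "cheek_puff_left": "move_backward",
    "cheek_puff_right": "move_forward",
    "mouth_corners_sideways": "open_menu",
}

AR_BROWSER_BINDINGS = {
    "mouth_open": "click_link",
    "squint_left": "scroll_up",
    "squint_right": "scroll_down",
    "cheek_puff_left": "go_back",
    "cheek_puff_right": "go_forward",
    "mouth_corners_sideways": "switch_tab",
}

def dispatch(fau: str, bindings: dict[str, str]) -> str:
    """Look up the action bound to a recognised expression; ignore unbound FAUs."""
    return bindings.get(fau, "no_op")
```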
The participants reported that traditional controllers provided greater precision for gaming, but that facial expressions offered an effective, lower-effort alternative. Facial input was rated as straightforward and suitable for web browsing, with users indicating they would be willing to adopt the method in the future.
Additionally, the University of Glasgow noted that the potential applications of the technology extend beyond accessibility, such as using AR glasses while carrying items, cooking, or interacting with devices in public spaces without the need for hand-based controls.
The UK university says the study, which explores whether commercially available VR headsets such as Meta’s Quest Pro can be adapted to let users navigate digital environments through facial movement alone, has been released ahead of the CHI 2025 conference in Yokohama, Japan.
“This is a relatively small study, based on data captured with the help of non-disabled people. However, it shows clearly that these seven specific facial movements are likely the most easily-recognised by off-the-shelf hardware, and translate well to two of the typical ways we might expect to use more traditional VR and AR input methods. That gives us a base to build on in future research. We plan to work with people with disabilities like motor impairments or muscular disorders in further studies to provide developers and XR platforms with new suggestions of how they can expand their palette of accessibility options, and help them break free of the constraints of hand and controller-based input,” says Mark McGill, Co-Author, School of Computing Science, University of Glasgow.