A research group led by Osaka University has developed a technology that allows androids to dynamically express their mood states, such as “excited” or “sleepy,” by composing facial movements as superimposed decomposed waves.
Even if a robot’s appearance is so realistic that it could be mistaken for a human in a photograph, watching it move in person can feel unsettling. It may smile, frown, or display other familiar expressions, but identifying a consistent emotional state behind those expressions can be difficult, leaving you unsure of what it is really feeling and creating a sense of discomfort.
Until now, androids and other robots that can move many parts of their faces have relied on a “patchwork method” to display facial expressions for long periods. This method requires preparing multiple pre-arranged action scenarios to ensure that unnatural facial movements are excluded, while switching between these scenarios as needed.
However, this approach poses practical challenges: preparing complex action scenarios in advance, minimizing unnatural movements during transitions, and fine-tuning movements to subtly control the expressions conveyed.
In this study, lead author Hisashi Ishihara and his research group developed a dynamic facial expression synthesis technique using “waveform movements,” which represent the various gestures that make up facial movements, such as “breathing,” “blinking,” and “yawning,” as individual waves. These waves are propagated to the relevant facial areas and overlaid to generate complex facial movements in real time. This method eliminates the need to prepare complex and diverse action data while also avoiding noticeable motion transitions.
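The decomposed-wave idea can be sketched in a few lines. The following Python snippet is a minimal illustration, not the authors’ implementation: each gesture is modeled as a sinusoid, the frequency and amplitude values are made-up assumptions, and the superposition is a simple sum standing in for a facial-actuator command.

```python
import math

# Illustrative wave parameters (frequency in Hz, amplitude); the gesture
# names come from the article, but the numbers are assumptions.
WAVES = {
    "breathing": {"freq": 0.25, "amp": 0.3},
    "blinking":  {"freq": 0.10, "amp": 1.0},
    "yawning":   {"freq": 0.02, "amp": 0.8},
}

def wave_value(name, t):
    """Value of one decomposed gesture wave at time t (plain sinusoid here)."""
    w = WAVES[name]
    return w["amp"] * math.sin(2 * math.pi * w["freq"] * t)

def synthesize(t):
    """Superimpose all gesture waves into one facial-actuator command."""
    return sum(wave_value(name, t) for name in WAVES)
```

In a real android, each wave would be propagated to different facial areas with its own phase and gain; here everything collapses into one scalar for clarity.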
Furthermore, by introducing “waveform modulation,” which adjusts the individual waveforms based on the robot’s internal state, changes in internal conditions, such as mood, can be instantly reflected as changes in facial movement.
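A rough sketch of how such modulation might work, again as an assumption rather than the paper’s actual method: an internal arousal level in [0, 1] rescales each wave’s hypothetical frequency and amplitude, so a sleepy robot breathes slowly and blinks faintly while an excited one does the opposite.

```python
def modulate(waves, arousal):
    """Rescale gesture-wave parameters from an internal arousal level.

    `arousal` runs from 0.0 (sleepy) to 1.0 (excited); the linear
    0.5x-1.5x mapping below is an illustrative assumption.
    """
    scale = 0.5 + arousal
    return {
        name: {"freq": w["freq"] * scale, "amp": w["amp"] * scale}
        for name, w in waves.items()
    }

# Illustrative baseline parameters (hypothetical values, not from the paper).
baseline = {"breathing": {"freq": 0.25, "amp": 0.3}}
sleepy = modulate(baseline, 0.0)    # slower, smaller breathing motion
excited = modulate(baseline, 1.0)   # faster, larger breathing motion
```

Because the modulation only touches wave parameters, the mood change feeds straight into the ongoing superposition without switching between pre-arranged scenarios.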
“Advancing this research into the synthesis of dynamic facial expressions will enable robots capable of complex facial movements to show more lively expressions and convey mood changes that respond to surrounding conditions, including interactions with humans,” says senior author Koichi Osuka. “This could greatly enrich emotional communication between humans and robots.”
“Instead of creating superficial movements, further development of a system in which inner feelings are reflected in every detail of a robot’s actions could lead to the creation of androids that are perceived as having a heart,” Ishihara adds.
By realizing the function of adaptively regulating and expressing emotions, this technology is expected to greatly enhance the value of communication robots, allowing them to exchange information with humans in a more natural and humane way.
Introduction video: https://youtu.be/QAvtAzdu_WQ
Introduction video on the automatic generation of dynamic arousal expressions on the face of an android robot
Figure 1
Proposed system
Credit: Hisashi Ishihara
Figure 2
Screenshots of a sleepy mood expression achieved on a child android robot
Credit: Hisashi Ishihara
The article, “Automatic Generation of Dynamic Arousal Expression Based on Decomposed Wave Synthesis for Robotic Faces,” was published in the Journal of Robotics and Mechatronics at DOI: https://doi.org/10.20965/jrm.2024.p1481.