
Researchers make robots more expressive



Japanese researchers have found a way to make the faces of humanoid robots more expressive, paving the way for machines to show a greater range of emotions and ultimately have deeper interactions with people.

While robots have made advances in healthcare, industry and other settings, capturing humanlike expressions in a robotic face remains a difficult challenge.

Researchers at Osaka University in Japan have found a method for identifying and quantitatively evaluating facial movements on their android robot child head.

Named Affetto, the android's first-generation model was first announced in 2011. The researchers have now developed a system to make the second-generation Affetto more expressive.

Their findings, published in the journal Frontiers in Robotics and AI, offer a way for androids to express greater ranges of emotion and ultimately have deeper interactions with people.

"Surface deformations are an important issue in controlling android faces. Movements of their soft facial skin create instability, and this is a major hardware problem we can handle," said Minoru Asada from Osaka University.

"We sought a better way to measure and control it," said Asada.

The researchers examined 116 different facial points on Affetto to measure its three-dimensional movement. The facial points were underpinned by so-called deformation units.

Each unit consists of a set of mechanisms that create a distinct facial contortion, such as lowering or raising part of a lip or eyelid.

Measurements from these were then subjected to a mathematical model to quantify their surface motion patterns.
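For illustration only, the sketch below shows one way such a quantification could look in code: a least-squares fit relating actuation levels of deformation units to measured three-dimensional displacements of facial points. The number of units, the number of trials, the synthetic data and the linear form are all assumptions for the sake of the example, not the model used in the study.

# Hypothetical sketch: relate deformation-unit actuation levels to measured
# 3D displacements of facial points via a least-squares linear map.
# Unit count, trial count, and data are assumed; only the 116 points
# come from the article.
import numpy as np

rng = np.random.default_rng(0)

n_points = 116   # facial measurement points (as reported)
n_units = 16     # deformation units (assumed for illustration)
n_trials = 200   # measured actuation patterns (assumed)

# u: actuation level of each deformation unit per trial (n_trials x n_units)
u = rng.uniform(0.0, 1.0, size=(n_trials, n_units))

# x: measured 3D displacement of every facial point per trial,
# flattened to (n_trials x 3*n_points); synthetic data here.
true_map = rng.normal(size=(n_units, 3 * n_points))
x = u @ true_map + 0.01 * rng.normal(size=(n_trials, 3 * n_points))

# Least-squares estimate of the unit-to-displacement map.
est_map, residuals, rank, _ = np.linalg.lstsq(u, x, rcond=None)

# Predict surface motion for a new actuation command, e.g. one raised unit.
command = np.zeros(n_units)
command[3] = 0.8
predicted = (command @ est_map).reshape(n_points, 3)
print(predicted.shape)  # (116, 3): predicted 3D offset of each facial point

With a fitted map of this kind, one could in principle invert the relationship to choose actuation commands that produce a desired surface motion, which is the spirit of the precise control the researchers describe.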

While the researchers encountered challenges in balancing the applied force and in adjusting the synthetic skin, they were able to use their system to adjust the deformation units for precise control of Affetto's facial surface motions.

"The Android robot still has a blackbox problem: they have been implemented but have only been sentenced in vague and public terms," ​​said Hisashi Ishihara, the first author of the study.

"Our exact results will allow us to effectively control android facial movements to introduce more nuanced expressions, such as smile and frowning," says Ishihara. PTI MHN MHN
