Experiments reveal why human-like robots elicit uncanny feelings
New insights into the uncanny valley phenomenon
Date:
September 10, 2020
Source:
Emory Health Sciences
Summary:
Experiments reveal a dynamic process that leads to the uncanny
valley, with implications for both the design of robots and for
understanding how we perceive one another as humans.
FULL STORY
==========================================================================
Androids, or robots with humanlike features, are often more appealing
to people than those that resemble machines -- but only up to a certain
point. Many people experience an uneasy feeling in response to robots
that are nearly lifelike, and yet somehow not quite "right." The feeling
of affinity can plunge into one of repulsion as a robot's human likeness increases, a zone known as "the uncanny valley."
The journal Perception has published new insights from psychologists at
Emory University into the cognitive mechanisms underlying this phenomenon.
Since the uncanny valley was first described, a common hypothesis
developed to explain it. Known as the mind-perception theory, it proposes
that when people see a robot with human-like features, they automatically
add a mind to it. A growing sense that a machine appears to have a mind
leads to the creepy feeling, according to this theory.
"We found that the opposite is true," says Wang Shensheng, first author
of the new study, who did the work as a graduate student at Emory and
recently received his PhD in psychology. "It's not the first step of attributing a mind to an android but the next step of 'dehumanizing' it
by subtracting the idea of it having a mind that leads to the uncanny
valley. Instead of just a one-shot process, it's a dynamic one."
The findings have implications for both the design of robots and for understanding how we perceive one another as humans.
"Robots are increasingly entering the social domain for everything
from education to healthcare," Wang says. "How we perceive them and
relate to them is important both from the standpoint of engineers
and psychologists."
"At the core of this research is the question of what we perceive when
we look at a face," adds Philippe Rochat, Emory professor of psychology
and senior author of the study. "It's probably one of the most important
questions in psychology. The ability to perceive the minds of others is
the foundation of human relationships."
The research may help in unraveling the mechanisms involved in
mind-blindness -- the inability to distinguish between humans and
machines -- such as in cases of extreme autism or some psychotic
disorders, Rochat says.
Co-authors of the study include Yuk Fai Cheong and Daniel Dilks, both
associate professors of psychology at Emory.
Anthropomorphizing, or projecting human qualities onto objects, is
common. "We often see faces in a cloud for instance," Wang says. "We also sometimes anthropomorphize machines that we're trying to understand,
like our cars or a computer." Naming one's car or imagining that a
cloud is an animated being, however, is not normally associated with an
uncanny feeling, Wang notes. That led him to hypothesize that something
other than just anthropomorphizing may occur when viewing an android.
To tease apart the potential roles of mind-perception and dehumanization
in the uncanny valley phenomenon, the researchers conducted experiments
focused on the temporal dynamics of the process. Participants were
shown three types of images -- human faces, mechanical-looking robot
faces and android faces that closely resembled humans -- and asked to
rate each for perceived animacy or "aliveness." The exposure times of
the images were systematically manipulated, within milliseconds, as the participants rated their animacy.
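For readers who want a concrete picture of that kind of procedure, the
fragment below is a minimal sketch of a brief-exposure rating trial,
written in Python with the PsychoPy library. The file names, exposure
durations and 1-to-9 rating scale are illustrative assumptions, not the
materials or software actually used in the study.

    # Minimal sketch of a brief-exposure animacy-rating trial, assuming PsychoPy.
    # File names, exposure times and the rating scale are illustrative only.
    import random
    from psychopy import visual, core, event

    win = visual.Window(size=(1024, 768), color="grey", units="pix")

    # Hypothetical stimuli: one file per face category.
    faces = {"human": "human_01.png",
             "robot": "robot_01.png",
             "android": "android_01.png"}
    exposures_ms = [50, 100, 250, 500, 1000]  # exposure times to manipulate

    trials = [(cat, ms) for cat in faces for ms in exposures_ms]
    random.shuffle(trials)

    results = []
    for category, ms in trials:
        stim = visual.ImageStim(win, image=faces[category])
        stim.draw()
        win.flip()                 # stimulus onset
        core.wait(ms / 1000.0)     # crude timing; real studies frame-lock this
        win.flip()                 # blank screen (stimulus offset)

        prompt = visual.TextStim(win, text="How alive did the face look? (1-9)")
        prompt.draw()
        win.flip()
        key = event.waitKeys(keyList=[str(k) for k in range(1, 10)])[0]
        results.append({"category": category, "exposure_ms": ms,
                        "animacy": int(key)})

    win.close()
    core.quit()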
The results showed that perceived animacy decreased significantly as a
function of exposure time for android faces, but not for mechanical-looking
robot or human faces. For android faces, the drop in perceived animacy
occurred between 100 and 500 milliseconds of viewing time. That timing
is consistent with previous research showing that people begin to
distinguish between human and artificial faces around 400 milliseconds
after stimulus onset.
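That pattern amounts to an interaction between face category and exposure
time. The following Python sketch, using simulated ratings and the
statsmodels library rather than the study's data or analysis code, shows
one way such an interaction could be tested.

    # Sketch of testing whether animacy ratings fall with exposure time for one
    # face category but not others, using simulated data (not the study's data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    categories = ["human", "robot", "android"]
    exposures = [50, 100, 250, 500, 1000]  # milliseconds, illustrative values

    rows = []
    for subject in range(30):
        for cat in categories:
            for ms in exposures:
                base = {"human": 8.0, "robot": 2.0, "android": 7.0}[cat]
                # Simulate an android-only decline with longer exposure.
                slope = -0.004 if cat == "android" else 0.0
                rating = base + slope * ms + rng.normal(0, 0.8)
                rows.append({"subject": subject, "category": cat,
                             "exposure_ms": ms, "animacy": rating})

    df = pd.DataFrame(rows)

    # Mixed-effects model: does the effect of exposure time differ by category?
    model = smf.mixedlm("animacy ~ exposure_ms * category", df,
                        groups=df["subject"])
    print(model.fit().summary())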
A second set of experiments manipulated both the exposure time and the
amount of detail in the images, ranging from a minimal sketch of the
features to a fully blurred image. The results showed that removing
details from the images of the android faces decreased the perceived
animacy along with the perceived uncanniness.
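One common way to reduce image detail parametrically is Gaussian blurring.
The short Python sketch below, using the Pillow library, illustrates the
general idea; the blur radii and file name are hypothetical, and the
study's actual sketch-to-blur manipulation is not described here.

    # Sketch of parametrically reducing image detail with Gaussian blur (Pillow).
    from PIL import Image, ImageFilter

    face = Image.open("android_01.png")   # hypothetical source image
    for radius in (0, 2, 4, 8, 16):       # 0 = full detail, larger = less detail
        blurred = face.filter(ImageFilter.GaussianBlur(radius=radius))
        blurred.save(f"android_01_blur{radius}.png")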
"The whole process is complicated but it happens within the blink
of an eye," Wang says. "Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations
and dehumanize it. And that drop in perceived animacy likely contributes
to the uncanny feeling."
==========================================================================
Story Source:
Materials provided by Emory Health Sciences. Original written by Carol
Clark.
Note: Content may be edited for style and length.
==========================================================================
Journal Reference:
1. Shensheng Wang, Yuk F. Cheong, Daniel D. Dilks, Philippe Rochat. The
   Uncanny Valley Phenomenon and the Temporal Dynamics of Face Animacy
   Perception. Perception, 2020; DOI: 10.1177/0301006620952611
==========================================================================
Link to news story:
https://www.sciencedaily.com/releases/2020/09/200910110857.htm