As robots become more common, and more sophisticated, new ethical questions are emerging about how they’ll impact our lives and how we’ll impact theirs.
“We think that robots are alive,” Kate Darling, a research specialist at the Massachusetts Institute of Technology Media Lab who specializes in robot ethics, told an audience at the C2 Montréal business conference.
Darling says that people tend to anthropomorphize robots; we act like they’re alive, even when we know that they’re not. There’s a tendency to ascribe intent to a robot’s actions, she says, as if they were responding to stimuli like an animal or person, not just responding the way they’re programmed to.
And she says that’s true even when the robots are only designed to dispose of bombs or vacuum floors.
While robots are becoming smarter and more lifelike, Darling says that has less to do with this change in perception than the fact that robots are moving into people’s homes, cars and offices. Their use was once limited to heavy industry and dangerous, hard-to-reach places, like outer space and the deep sea.
But there’s also a new wave of robots that are designed to elicit emotion, social robots that are built to interact with people and have at least some human-like qualities.
“We respond to the social cues that these robots give us,” Darling says.
That means it’s possible to learn about how empathetic a human being is based on how willing they are to “hurt” a robot, she says.
The most interesting question about this, she says, is “can we change empathy with robots?”
But with this anthropomorphization comes new ethical issues.
Not only does the coming rise of robots in the home, car and office raise the same privacy and data security concerns as other connected devices, robots have more power to manipulate people’s emotions.
“Is it ok if your sex robot has in-app purchases?” Darling says.
What if a robotic pet that you’ve developed a relationship with suddenly requires a mandatory update that costs thousands of dollars? Or what if a language-teaching robot for children has a vocabulary that’s heavily influenced by corporate sponsors, she wonders.
They’re questions without easy answers. And they aren’t the only ones that have emerged.
As people develop feelings about robots and those robots become more sophisticated, will robots be given rights similar to those that are now extended to animals, or even humans?
Darling says that on this question, the design of the robots might have more to do with how people feel about protecting them than what the robots actually experience.
She says that’s currently the case with animals – how people feel about rights for specific animal species tends to be determined more by their relationships with those animals than by what the animals actually experience.
“The charismatic ones will have an easier time,” says Nick Bostrom, a philosopher at the University of Oxford, who was also speaking at C2. Giving rights to a supercomputer in a grey box might be a harder sell than for a social robot that has large eyes – even if the grey box has real feelings and experiences and the social robot doesn’t.