Should We Respect Our Robots?


Sticks and stones can break robot bones — but calling them names might erode our humanity
Poor Fabio. The humanoid robot was recently fired from a grocery store, but at least it didn’t get beaten like other robots in the Pepper series!

We may not be witnessing an epidemic of robot assaults, but then again, androids aren’t running through the streets just yet.

In the future, they might be everywhere, but for now, lots of us are making do with semi-intelligent personal assistants like Siri — voices with simulated personalities, but without mouths, lips, or other body parts. Siri isn’t injured if you throw an iPhone against the wall or flush it down the toilet, because the bot more or less exists in the cloud.

Because Siri isn’t “alive,” it also isn’t injured when you attack it with weaponized words. Nevertheless, we might want to think long and hard about the words we choose. It’s counterintuitive, but perhaps we should plan now to treat those future embodied robots with respect.

Have you ever flipped out at Siri? Come on, admit it. If you haven’t been verbally abusive, you’ve probably at least let out some strongly worded feelings. Siri fails are infuriating, especially when you carefully enunciate your commands and they still aren’t understood.

And yet, no matter what obscenities you scream, Siri never loses its cool. Having neither emotions nor willpower, Siri can’t have anger management issues or experience any emotional character flaws whatsoever. Siri is like a serene Buddha statue, praise Apple.

As an experiment, I just dropped an F-bomb. Siri responded, “I’m not going to dignify that with a response.”

Don’t for one second think this is a #takingthehighroad moment. Siri can direct you to the Wikipedia page on dignity, but it doesn’t truly “know” a thing about the subject. Siri is programmed to be funny, but with the express goal of not offending Apple’s customers. Thus, the programmers ensure it can’t take genuine offense at anyone or anything being demeaned.
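
To see how hollow that composure is, here is a toy sketch in Python (nothing like Apple’s actual implementation; the word list and canned lines are invented purely for illustration) of how a scripted deflection can work:

```python
import random

# Hypothetical word list and canned responses, invented for illustration.
OFFENSIVE_WORDS = {"damn", "hell"}
CANNED_DEFLECTIONS = [
    "I'm not going to dignify that with a response.",
    "Now, now.",
]

def respond(utterance: str) -> str:
    # Pattern-match the input against the word list. Nothing is
    # understood and no offense is taken; a string is simply returned.
    words = {w.strip("!?.,") for w in utterance.lower().split()}
    if words & OFFENSIVE_WORDS:
        return random.choice(CANNED_DEFLECTIONS)
    return "Here's what I found on the web."

print(respond("Damn it, Siri!"))  # prints one of the canned deflections
```

The “wit” is a lookup table. There is no one home to take offense.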

You can’t hurt a robot’s feelings (it has none) or cause it to experience pain (though property damage is something else). Even so, reaming out robots or giving them beatdowns might be a lot worse than similar antics with other kinds of machines, like yelling at a toaster or smashing it for burning your bread.

Robots aren’t sentient — at least not yet — and neither are appliances. But human beings tend to perceive these objects differently. It’s a lot more likely that we will personify — anthropomorphize, as it’s referred to in the technical literature — a walking or talking robot than a simple appliance that isn’t designed to look like us, sound like us, act like us, or resemble any being that’s alive, like an insect or animal.

People know that the Roomba, a robotic vacuum cleaner, isn’t alive. But the mere fact that the disk moves around as if of its own accord is enough to trigger emotional attachments and naming, and even motivate some people to preclean for the device.

More seriously, in experimental settings, people have objected to torturing robots that simulate pain, and on real battlefields, soldiers have objected to the “inhumane” treatment of robots that defuse landmines.

It can be unsettling to recognize that our minds have such irrational tendencies and that human beings may unconsciously rely on unhelpful mental shortcuts that allow our intellectual understanding and emotional responses to part ways.

But cognitive scientists, psychologists, and behavioral economists have been studying these tendencies for years and have identified a long list of distortions that the human mind produces when processing all kinds of information — phenomena known as cognitive biases. A familiar example from the tech world is the stickiness of default settings: the cognitive bias of inertia inclines users to accept defaults without checking whether they line up with their actual privacy preferences.

Given how little it takes for the human mind to anthropomorphize, there are scholars who thoughtfully suggest we think twice about treating robots badly. Kate Darling, a pioneer of the idea that we might want to “do unto robots as we’d have humans do unto us,” isn’t pressing her point because she’s worried about the well-being of robots. Darling is concerned about how we as human beings might be affected if we train ourselves away from feeling the pangs of conscience when debasing things like robots, since, on a visceral level, they remind us of our own humanity.

It might seem funny if Lil’ Johnny enjoys dropkicking Robo Rick. But it would be tragic if, after doing so time and again without a scolding (or even to laughter), he decided it might be equally humorous to dropkick his human friend, Lil’ Ricky. Johnny, like the rest of us, is a creature of habit, and all of our dispositions are shaped by praise and blame, laughter or anger, and other positive and negative reinforcements.

Some parents are already concerned about their kids’ attitude in bossing around Amazon’s Alexa — speaking loudly and aggressively so that Alexa processes what they say, and making demands without the courtesies of “please” and “thank you.” To set the right tone, Darling contends that it’s worth at least considering whether second-order rights should be extended to some robots — that is, protections modeled on the existing regime of animal rights.

This way of looking at the behavioral impact of technology echoes the long-running moral panic over violent video games. That old debate has basically run its course, New York Times and Wired technology writer Clive Thompson told me when I asked him whether playing video games is actually dangerous.

“I’ve been looking at the research for years,” he said, “and it seems to suggest that for the great majority of people, the answer is no. It doesn’t make us more violent or more aggressive. There does, however, seem to be a minority of players who do get affected,” Thompson added.

“For these players, games really do increase their aggressiveness. They seem to be kids or adults who already wrestle with impulse control and aggressiveness. So what you’ve got, really, is a technology that seems mostly fine and healthy or even salutary for the vast majority of people, but deleterious for a minority.”

Thompson elaborated: “I’ve started to think this pattern holds for other tech, too. In the ‘quantified self’ area, research suggests that step trackers are either neutral or positive for the great majority of people who wear them — but they’re disastrous for anyone with an eating disorder. Wearables that track ‘numbers’ — your physical activity, your calories — really trigger eating disorders. It looks very similar to the pattern with games: a technology that’s perfectly fine for most people while really bad for a minority.”

An important thing to keep in mind is that our brains process violence in classic video games very differently from violence against robots. For starters, physically hitting or kicking a robot is a different physical behavior from moving a controller with your hand so that an on-screen avatar hits or kicks. And yelling at a bot that can process what you say and respond interactively is different from unilaterally yelling at a simulation of a person on a screen as your video-game character beats it to a pulp.

However, this distinction may lessen as newer AR and VR video games become more immersive than their predecessors. Games can now take place in hyper-real 3D environments where players explore worlds through physical interaction and encounter highly interactive, computer-controlled characters.

When I got together with friends and family to try out a simulation on the Vive headset, we couldn’t handle the virtual experience of simply walking on a plank set atop a skyscraper. Real vertigo set in! Given how much more realistic games will become over time, it would be a mistake to assume, without further testing, that they’ll always have a negligible cognitive-behavioral impact on the majority of players.

While newer studies of cutting-edge games might lead to new conclusions, Andy Phelps, director of the Media, Arts, Games, Interaction, Creativity (MAGIC) Center at Rochester Institute of Technology, says it’s easy to lose sight of context and demonize games as a slippery slope to desensitization. Consider drone warfare: military pilots who operate drones by remote control are at risk of experiencing post-traumatic stress disorder.

“Being a drone pilot in a real war and knowing the consequences of your actions on the battlefield versus playing a highly realistic war simulation with friends for fun are completely different contexts,” Phelps told me. “They have radically different stakes, and people are attuned to the consequences of each activity, rather than how realistic a particular experience looks or even how similar the technology might look between an entertainment game and a training simulation.”

Is there opportunity at the opposite end of the spectrum? Can we capitalize on our cognitive biases and build on the idea that playing certain types of games could make us better people? As Phelps sees it, context cuts both ways. “Meanwhile, the relationships between players, the human-human interaction that occurs through and around gameplay, does tend to shape and mold our perceptions and reactions just as other, real-world interactions do.

“Thus games can encourage teamwork, negotiation, empathy, etc., or conversely, reinforce negative social behaviors and stereotypes. Shooting a more realistic avatar may not be any different from shooting a less detailed one, but engaging in the shared activity with a group over time may affect one’s views and perceptions in relation to the community they play with.”

Phelps is right. Context is everything. We already self-regulate our behavior based on our surroundings — whether we’re in a professional environment, or whether our kids can hear us. If you’re alone in the car and receive directions to the wrong destination, losing your temper with Siri won’t make you a terrible person.

But just as you likely moderate your language in front of your children, you might want to moderate your aggressiveness toward machines. Children who haven’t necessarily developed the ability to clearly delineate between a human and an artificial program may view your tirade as an affirmation of the general principle that it’s okay for humans to be nasty. Until the data and research grow, we may do well to err on the side of the golden rule for AI and robots, as well as for each other. Not for their sakes, but for our own.

Disclosure: I’m a professor of philosophy at Rochester Institute of Technology and an affiliated faculty member of the MAGIC Center.
