Humans+Robots

To be trusted by humans, robots need to read the room

Depending on the context, a robot dog can be loved — or loathed.

By Eoin O’Carroll

The NYPD’s newest four-legged recruit was supposed to be a cop’s best friend. But New Yorkers weren’t in the mood for any new tricks.

Standing knee-high and looking like a cross between an Italian greyhound and a USB drive, Digidog, the department’s robotic dog, joined the force in August 2020 with loads of potential: Perhaps it could help officers get into tight spaces, or scout potentially dangerous crime scenes before human officers went in or used force. Police officials were especially impressed with its rare — for robots — mastery of stairs. “This dog,” said an NYPD inspector in a December 2020 TV interview, “is going to save lives.”

Police told the New York Times the dog had responded to a half-dozen incidents between August and April, including investigating reports of people who had barricaded themselves inside apartments in the Bronx, Brooklyn, and Manhattan and delivering food to hostages in a home invasion in Queens. But less than a year later, upbeat news stories gave way to unsettling viral videos of the robot trotting through low-income communities with a panoramic camera mounted on its back. Privacy advocates leveled charges of mass surveillance, a spokesman for New York City’s mayor called the four-legged robot “creepy,” and Black Mirror comparisons proliferated. By the end of April, Digidog had turned in its badge.

Meanwhile, another autonomous pooch, Sully, was having a much better time on the job. The 70-pound robot was adopted in 2021 by Code & Circuit, an after-school program in Amesbury, Mass., that teaches computer skills. One morning this past June, Sully entertained elementary school students with dog-like behavior — rolling over and lifting a leg near a tree. The Boston Globe described Sully as “a good dog,” and reported that the robot charmed middle and high schoolers as well. “He brought smiles,” Code & Circuit’s executive director told the paper.

What made Digidog fuel anxiety and Sully spark joy? What’s the difference between the two robots’ designs?

Nothing, actually. Digidog and Sully are the same model. Both are from the “Spot” line of robots built by the Massachusetts-based tech company Boston Dynamics, which began selling the robotic dogs in June 2020. That Spot can draw such disparate reactions speaks to the current challenges of robot design — and why social interactions are as important as function if robots and humans are going to work side by side.

Automation nation

Robots have been a source of anxiety and fear for — well, precisely 100 years. The word “robot” originated in the Czech play “R.U.R.” (“Rossum’s Universal Robots”), by Karel Čapek, which premiered in January 1921. Set in the year 2000, the play describes a world that has become completely dependent on robotic labor. Čapek’s “roboti” — Czech for “forced laborers” — are physically indistinguishable from humans. In the first act, the human characters debate whether the robots have souls and whether it’s ethical to enslave them, but the question becomes moot after the robots stage a revolt and exterminate humanity.

Though anthropomorphic robots still loom large in our cultural imagination, the physical design of robots has expanded tremendously in recent years — from tiny MTJEMS, a jet-powered drug-delivery prototype robot that can easily fit on the edge of a dime, to the nearly 28-foot-tall MONONOFU, officially the world’s largest humanoid robot, which Guinness World Records noted is “too tall to leave its warehouse.”

Robots of different shapes and sizes have edged into all aspects of the service industry: An April 2021 survey found that one in four retailers is developing in-store robotics for scanning inventory and stocking shelves, and that nearly half say they will explore the prospect in the next 18 months. Grubhub is set to deploy suitcase-sized, six-wheeled delivery rovers to college campuses in September. Its competitor, DoorDash, recently acquired the robotics startup Chowbotics, whose vending machine-like robot, Sally, assembles custom salads, grain bowls, and yogurt parfaits.

And robots are increasingly being used not just to perform physical labor, but to serve human emotional needs. In some classrooms, cute, fluffy robots are helping children learn academic skills and emotional strategies. Nursing homes around the world are introducing companion robots. These include ElliQ, which looks like a table lamp with an iPad; Ludwig, which looks like an anime action figure and is designed to help Alzheimer’s patients; and Stevie, a four-and-a-half-foot-tall Irish robot that looks like the offspring of Bender from the TV show Futurama and a plastic wine-saver pump.

All of these developments raise new — and complicated — questions about design and function. Historically, roboticists have focused almost exclusively on building machines that could sense and respond to their physical surroundings. But researchers are now realizing that this is only half the equation. As robots integrate themselves into the fabric of our lives, they need softer skills, too. A robot working in a human environment needs to be able to read a room — and the room needs to be able to read the robot. In other words, why the robot is there, and what its capabilities are, should be apparent in the design.  

“Machines don’t operate in a vacuum,” says Kristian Kloeckl, a professor at Northeastern University’s Department of Art + Design and School of Architecture. “They operate in a social context.”

A sweeter Spot

Spot wasn’t built with policing, education, or any other specific line of work in mind. The canine robot’s true specialty, honed over decades of research and precision engineering, is moving without toppling over. This it does exceedingly well, ably traversing gravel, stairs, curbs, and other obstacles that would frustrate even the most capable Roomba.

The base model starts at $74,500 (with free shipping), and, for tens of thousands of dollars more, customers can purchase add-ons like a panoramic zoom camera or an articulated claw arm. Boston Dynamics, which did not respond to an interview request for this story, said in February that it had sold about 400 Spots, mostly to construction projects, mines, and decommissioned nuclear sites.

Spot’s versatility, however, comes at the cost of some of its social skills. Because it lacks a head, a tail, and — when it’s standing still — an obvious way to distinguish front from back, it’s not always clear which way Spot is facing, or even if that matters. 

This inscrutability poses problems when robots are tasked with public-facing jobs, like law enforcement.

“Crime is not a mechanical issue. It’s a social issue,” Kloeckl says. “When a robot is in front of me in a highly charged situation, I’m going to ask, ‘What is that thing doing? Why is it doing that?’”

Angela Tinwell, a lecturer at the University of Bolton in Greater Manchester, England, says that robots often fall into a gap between our mental categories of “human” and “not human.” This gap, which the roboticist Masahiro Mori dubbed the “Uncanny Valley,” can evoke an eerie feeling, especially when we’re presented with a simulated human whose realism falls just short of convincing.

Part of what drives the uncanny effect is that some robots seem unresponsive to the people around them, even as they collect data on them.

When that responsiveness is absent — say, when we can’t read a person’s face or tell if a robot police dog is filming us — we tend to feel threatened. “We’re immediately meant to feel uneasy,” says Tinwell, author of the book The Uncanny Valley in Games and Animation. 

A workout at work

Responsiveness is essential to human-robot interactions, says Kloeckl, who directs Northeastern’s Experience Design Lab. “We’re really talking about a conversation,” he says.

A robot’s appearance can be part of this conversation, says Northeastern University engineering professor Taskin Padir. “The huge success of robots like Roomba” — which has the same round shape and circular motions as a floor scrubber or a street sweeper — “is that we associate their form almost entirely with their function,” he says.

Padir directs Northeastern’s Robotics and Intelligent Vehicles Research Laboratory and frequently collaborates with Kloeckl. In one recent research project, they observed the movements of workers on factory floors and compared them to the movements of exercise routines.

They then parlayed that research into Gymnast CoBot, a stationary “collaborative robot” designed to lift heavy boxes and hand them to workers in a way that avoids injury and instead offers fitness benefits like strength building and flexibility.

Gymnast CoBot looks like a silver articulated arm the size of a lamppost. At the end of the arm are four suction-cup grippers and a video projector. The robot monitors each worker’s height, weight, movements, and heart rate to set a height and rhythm for handing over boxes. Workers can manually set a preference depending on how much of a workout they want, and Gymnast CoBot dynamically adjusts how it hands over objects according to the workers’ posture and energy levels.

Gymnast CoBot gives the worker biometric feedback, such as heart rate and fatigue levels, projected directly onto the box. “We share information about what the robot thinks of the human,” Kloeckl says.
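To make the idea concrete, here is a rough, hypothetical sketch in Python of the kind of loop described above: monitor biometrics, pick a handover height and rhythm, and ease off as fatigue rises. Every name, unit, and threshold in it is an illustrative assumption, not the lab’s actual code.

    # Hypothetical sketch of an adaptive handover loop like the one
    # described above. All names, units, and thresholds are assumptions.

    from dataclasses import dataclass


    @dataclass
    class WorkerState:
        height_cm: float        # worker's standing height
        heart_rate_bpm: float   # current heart rate from a wearable
        workout_level: float    # worker-chosen preference, 0.0 (easy) to 1.0 (hard)


    def plan_handover(state: WorkerState, resting_hr: float = 60.0) -> dict:
        """Pick a handover height and pace from the worker's biometrics."""
        # Fatigue proxy: how elevated the heart rate is over resting, clamped to [0, 1].
        fatigue = min(max((state.heart_rate_bpm - resting_hr) / 80.0, 0.0), 1.0)

        # Baseline handover near waist height; a harder workout lowers the
        # box so the worker squats more, but rising fatigue pulls it back up.
        waist_cm = 0.6 * state.height_cm
        effort = state.workout_level * (1.0 - fatigue)
        handover_height_cm = waist_cm - 15.0 * effort

        # Pace: hand over boxes more slowly as fatigue climbs.
        pace = 6.0 - 3.0 * fatigue

        return {"height_cm": round(handover_height_cm, 1),
                "pace_boxes_per_min": round(pace, 1)}


    if __name__ == "__main__":
        worker = WorkerState(height_cm=175, heart_rate_bpm=110, workout_level=0.8)
        print(plan_handover(worker))
        # {'height_cm': 100.5, 'pace_boxes_per_min': 4.1}

The point of the sketch is the feedback direction: the worker’s state shapes the robot’s behavior on every cycle, rather than the worker adapting to a fixed machine.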

That final piece of interaction — where the robot lets the human know what it knows — is essential for overcoming creepiness. Humans are primed to look for the effects their actions have on others, says Tinwell. When a face fails to acknowledge you, it leaves an uneasy void.

“Whether it’s a [computer-generated] agent, a human, or a robot,” she says, “we intrinsically want to be recognized and be understood by others.”

Kloeckl says he hopes that this approach will help others rethink some basic assumptions about the workplace, which has historically forced workers to contort themselves to accommodate machinery, not the other way around.

“We’re still in the mindset of industrialization,” he says. “But now we have machines that can adjust and adapt to humans. The question we want to ask ourselves is, ‘What does good work look like?’”

It’s an important question as society moves closer to a world fully shared with robots. A social context doesn’t just determine whether a robot’s behaviors are creepy or reassuring, but also whether they’re morally permissible, says John Basl, a Northeastern University philosophy professor who studies the ethics of AI and other emerging technologies.

Technology has a long way to go before robots can be said to exercise moral agency, says Basl. “It’s much more likely that we will treat robots badly than the other way around,” he says, noting that humanity may not be far off from developing an artificial intelligence that has the same cognitive abilities as a mouse. This raises questions about whether machines might someday deserve ethical consideration.

But in the meantime, humans will continue to use robots to mediate their interactions with each other. And as the social media era has shown, the same technology can divide people on one day and unite them on another.

“Like any possible new technology, robots can influence us in ways that are positive or negative,” he says. “We need to think about the particular robot and the particular context in which it is deployed.”


Eoin O’Carroll is a writer and podcaster based in Amherst, Massachusetts.

 

Photo by Josh Reynolds/Associated Press
