Big Idea

Afraid your toaster is spying on you? Call on this digital gatekeeper.

A new kind of advocate AI could protect our privacy against smart devices' cameras and microphones.

By Glenn McDonald

It’s an unsettling truism of the “smart devices” era that we’ve been trading away privacy for convenience for years. And now, with the rise of smart speakers and voice assistants — Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana — those vague anxieties have taken the shape of alarmingly specific questions: Are my electronics spying on my conversations? What are they recording and transmitting? And is there anything I can do about it?

The answers, in order: not exactly, researchers are figuring that out, and potentially, yes! AI researchers are working on that, too.

The problem

Your toaster oven isn’t spying on you. Probably not. Well, not a lot, anyway. That’s the moderately comforting conclusion from the Mon(IoT)r Lab, an observational laboratory at Northeastern University in Boston. The lab’s director, computer science professor David Choffnes, and his research colleagues have been measuring just how much data our devices actually send out to the internet.

In summer 2020, Choffnes and his colleagues published an intriguing study on smart speakers and voice assistants. “The good news is that, contrary to the most paranoid beliefs, the devices are not constantly recording what’s said in the environment and sending it out,” Choffnes says.

However, they occasionally wake up and record for short periods of time, often because someone says a phrase that sounds similar to the device’s wake words — the phrases that trigger the system to start listening. (“Oh, sorry” can sound a lot like “Hey, Siri.”) Some of that audio then goes to cloud-based servers that handle complex voice recognition.

To see how often this happens, the Northeastern team played 137 hours of television-show dialogue, from comedies like The Big Bang Theory as well as political and medical dramas. Using video cameras, the team tracked accidental activations while also capturing the data the devices sent from the lab to the cloud servers. (Among the quirky results: The West Wing’s dense Aaron Sorkin dialogue triggered the Google Home Mini around once per hour, more than most shows. Dialogue that included foreign languages or accents tripped the wake function more often than neutral American voices.)

The important takeaway is this: When accidentally triggered, our smart speakers are indeed sending out audio recordings made in our homes. Most of these recordings are just a second or two in duration, but around 10% of them are longer — 10 seconds or more. One outlier clocked in at 43 seconds of recorded dialogue. Taken together, that’s a lot of potentially sensitive information unintentionally escaping our homes.

The solution

But if AI causes the problem, maybe other AI can resolve it. Choffnes and his team hope to develop technology that could, like their Mon(IoT)r Lab experiments, monitor the size and shape of outgoing data transmissions for us. The system would function as a gatekeeper, a burly digital doorman keeping an eye on the exits.

Such a system would be powered by a bare-bones artificial intelligence, Choffnes says — a machine-learning system built with consumer privacy top of mind.

“We would be using AI to infer what a device is doing, and once we know, we can identify what’s normal and what’s not,” Choffnes says. “We can also take actions depending on what the user wants. So we could block certain activities, once we recognize that they’re happening.”

That software would live in your home network — between the home router and the cable modem, say — and alert you when your smart devices are beaming out suspicious packets of audio or video.
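To make the idea concrete, here is a minimal sketch, in Python, of the kind of check such a gatekeeper might run. Every name in it (the Gatekeeper class, the Flow record, the three-sigma threshold) is a hypothetical illustration rather than anything built by Choffnes’s team: the system learns each device’s normal upload sizes from known-benign traffic, then flags transmissions that are dramatically larger.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Flow:
    device: str       # which gadget sent the traffic, e.g. "smart-speaker"
    bytes_out: int    # size of the outbound transmission
    destination: str  # cloud endpoint the device contacted

class Gatekeeper:
    """Learns each device's typical upload sizes, then flags outliers."""

    def __init__(self, threshold_sigmas: float = 3.0):
        self.threshold = threshold_sigmas
        self.history: dict[str, list[int]] = {}

    def observe(self, flow: Flow) -> None:
        """Record a known-benign flow to build the per-device baseline."""
        self.history.setdefault(flow.device, []).append(flow.bytes_out)

    def is_suspicious(self, flow: Flow) -> bool:
        """Flag flows far larger than this device's usual uploads."""
        baseline = self.history.get(flow.device, [])
        if len(baseline) < 2:
            return False  # too little data to judge yet
        mu, sigma = mean(baseline), stdev(baseline)
        return flow.bytes_out > mu + self.threshold * max(sigma, 1.0)

# Learn from a stretch of ordinary traffic, then screen new flows.
gk = Gatekeeper()
for size in [900, 1100, 950, 1050, 1000]:  # routine keep-alive pings
    gk.observe(Flow("smart-speaker", size, "cloud.example"))

burst = Flow("smart-speaker", 250_000, "cloud.example")  # audio-sized upload
if gk.is_suspicious(burst):
    print(f"ALERT: {burst.device} sent {burst.bytes_out:,} bytes upstream")
```

A real gatekeeper would look at much richer signals than raw size (destinations, timing, the shape of packet bursts) and, per Choffnes, could block the flagged traffic at the router rather than merely print an alert.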

The challenge

The technical obstacles are legion, and the concept of a personal digital gatekeeper, at this point, is mostly theoretical. Creating such a system would involve a wholesale shift in priorities for the tech industry, which has developed AI systems to indiscriminately harvest huge batches of data for big companies. The privacy interests of the consumer, in other words, are not a priority.

It wasn’t always this way. In the internet’s 1990s infancy, the idea of a consumer-advocate gatekeeper wasn’t so far-fetched. AI researchers like Vince Conitzer, a professor of computer science at Duke University, conceptualized the idea of individual AI “agents” — semi-autonomous helpers to represent us online, handle specific tasks, and keep our interests in mind.

“The personal assistant thing didn’t really develop that way,” Conitzer says. “Instead we have Siri and Alexa, these omnipresent generic AI constructs that everyone talks to. It hasn’t matched the original vision — of the AI research community, at least — where those agents really represent individual human beings.”

The big picture

What if we could nudge AI development back toward that earlier model? Conitzer says that if we shift the technology’s objectives away from their current focus on benefiting companies, we could design a new kind of advocate AI to defend our privacy against all the smart appliances and other devices bristling with cameras and microphones.

Choffnes says that would require combining technology, public policy, and economic incentives. He and his group recently received a National Science Foundation Frontier Award to study these questions. “If you do policy, but you don’t have the technology to back it up, you’re not going to make a difference,” he says. “If you have the technology but no policy, it’s the same thing.”

Glenn McDonald is a writer based in Chapel Hill, North Carolina. He has written for National Geographic, NPR, Discovery News, The History Channel, Thrillist, Goodreads, and McClatchy newspapers.

Illustration by Sol Cotti