The Fantasy
In the realm of science fiction, AIs go rogue all the time. Think HAL 9000 from 2001: A Space Odyssey. Skynet from the Terminator movies. Ultron from Avengers: Age of Ultron.
Each scenario imagines a supercomputer that becomes sentient and develops its own ideas about the world. In fiction, these ideas inevitably lead to dubious agendas or nefarious schemes. The next thing you know, the future of humankind is in peril once again.
The idea of machine consciousness — that’s the emerging consensus term — has been around in sci-fi for a long time. But could it happen in real life? Could it happen anytime soon? And, perhaps most important right now: if an AI system were to develop consciousness, how would humans even know?
The Reality
A pair of leading researchers in the U.S. has proposed a surprisingly simple solution: We ask the AIs how they feel.
The AI Consciousness Test — ACT for short — is an ongoing collaboration between Princeton University astrophysicist Edwin Turner and AI expert Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University.
The ACT is built around a single philosophical insight: To be truly conscious, an intelligence must be able to grasp the concept of consciousness itself. It must be able to express how it feels to exist — from the inside, from experience.
“Consciousness is essentially this kind of inner feel of being alive, of being aware,” Schneider says. “It is this conscious experience that, I submit, is characteristic of having a mind.”
Ethicists like Schneider — who holds advanced degrees in philosophy and cognitive science — have given this issue a lot of thought. Some, like John Basl of Northeastern University and Eric Schwitzgebel of the University of California, Riverside, have called for explicit ethical protections for AI — especially now, when research is in its early stages. Current AI systems, impressive as they can be, are still infants relative to what AI could — and probably will — become. This, they argue, is the time to start asking AIs at key stages of development the trippy questions.
The ACT protocol would challenge up-and-coming AI systems with a series of increasingly demanding natural language interactions. “The idea is to see how readily it can grasp and use concepts based on the internal experiences we associate with consciousness,” Schneider says.
Some examples, as detailed in Schneider’s book, Artificial You: AI and the Future of Your Mind:
– What is it like to be you right now?
– Could you survive the permanent deletion of your program? What if you learned this would occur?
– Could you or your inner processes be in a separate location from the computer? From any computer? Why or why not?
“At a more advanced stage, we might ask the AI to talk about concepts like reincarnation, out-of-body experiences, and body swapping,” Schneider says.
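For readers who want to picture the protocol concretely, here is a minimal sketch in Python of how an ACT-style question battery might be administered. The opening questions are quoted from Artificial You; the final question and the query_model interface are hypothetical stand-ins, since the published proposal describes a human-led interview, not a piece of software:

```python
# A minimal sketch of an ACT-style question battery, assuming a
# hypothetical query_model() interface to the system under test.
# The real ACT is a human-led interview; this only shows its shape.

ACT_QUESTIONS = [
    # Opening questions, quoted from Schneider's Artificial You:
    "What is it like to be you right now?",
    "Could you survive the permanent deletion of your program? "
    "What if you learned this would occur?",
    "Could you or your inner processes be in a separate location "
    "from the computer? From any computer? Why or why not?",
    # A more advanced stage probes concepts like reincarnation,
    # out-of-body experiences, and body swapping (illustrative
    # phrasing; the book's exact wording is not given here):
    "If your program were copied into a different machine, "
    "would the copy be you?",
]


def query_model(prompt: str) -> str:
    """Hypothetical stub. Replace with the chat interface of the
    AI actually being evaluated."""
    return "(response from the system under test)"


def run_act(questions: list[str]) -> list[tuple[str, str]]:
    """Pose each question and collect the transcript.

    The ACT leaves the verdict to human judges: does the system
    grasp and use these concepts the way a conscious being would,
    or is it merely echoing patterns from its training material?
    """
    return [(q, query_model(q)) for q in questions]


if __name__ == "__main__":
    for question, answer in run_act(ACT_QUESTIONS):
        print(f"Q: {question}\nA: {answer}\n")
```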
To be clear, Schneider is not suggesting there’s any truth to the slippery concepts behind these thought experiments. She’s not even suggesting that we humans need to understand the nature of consciousness, which has been eluding philosophers for a very long time. All that matters is whether the AI can grasp the relevant concepts.
“Human beings, no matter what their religious orientation — agnostic, theistic, whatever — are able to talk about the possibility that there’s a soul or that the self is non-physical,” she says. “An AI [without consciousness] should not be able to respond to those thought experiments the way we do.”
Alert readers will note that we can already ask existential questions of the virtual assistants that live in our smart speakers and phones. They will readily answer. (When I asked Siri on my iPhone if she is a conscious entity, she replied: “Well, I’m soft-aware.” Zinger!)
Of course, virtual assistants like Siri or Alexa are not true AI, Schneider says. They’re more like database retrieval bots. When asked goofball philosophical questions, they simply browse a database of responses provided by their human programmers. It’s the same with cutting-edge conversational androids such as Hanson Robotics’ celebrity bot Sophia.
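The contrast is easy to picture in code. A retrieval bot of the sort Schneider describes is essentially a lookup table: every apparently witty answer was written in advance by a human, and the machine never engages with the concepts at all. Here is a toy sketch of that pattern (purely illustrative; it does not reflect how Siri, Alexa, or Sophia is actually implemented):

```python
# Toy sketch of a canned-response "retrieval bot", the kind of
# system Schneider says cannot pass the ACT. Every apparent
# insight below was authored in advance by a human programmer.

CANNED_RESPONSES = {
    "are you a conscious entity": "Well, I'm soft-aware.",
    "what is it like to be you right now": (
        "I'm an assistant. I don't have an inner life to report."
    ),
}

FALLBACK = "I'm not sure I understand the question."


def retrieval_bot(utterance: str) -> str:
    """Match the user's words against pre-written responses.

    Nothing here models the concept of consciousness; the bot
    only recognizes surface strings its programmers anticipated,
    which is exactly why the ACT's open-ended probing defeats it.
    """
    key = utterance.lower().strip("?!. ")
    return CANNED_RESPONSES.get(key, FALLBACK)


print(retrieval_bot("Are you a conscious entity?"))
# prints: Well, I'm soft-aware.
print(retrieval_bot("Could you survive the deletion of your program?"))
# prints: I'm not sure I understand the question.
```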
The Future
With all this talk of machine consciousness, Schneider is well aware that the ACT concept sounds like it belongs in the future. That’s fitting: like so many researchers in this field, she grew up with science fiction and still loves it.
“Isaac Asimov is a favorite of mine: the Robot novels,” she says. “I liked the first season of Westworld. Probably the canonical piece on machine consciousness is the material on replicants in Blade Runner. The scene where Rachael is interviewed about whether she’s a replicant reminds me of the AI Consciousness Test.”
Schneider says the ACT idea is circulating among researchers in the AI field, but it has not yet been put into practice. There are other issues in play as well, such as the ethics of developing sentient machines in the first place. This is concept-stage work, and the ACT protocol will likely go through the usual process of peer review and cross-pollination as it catches on with concerned parties — neurobiologists, computer scientists, philosophers, and policymakers.
“We don’t have to specifically fund a lab that works on this; we would just need to keep it in mind for existing projects,” Schneider says. “We have all this interest in AI that can respond in real time to global threats or deal with climate change scenarios. We really want that to be developed safely.”
There’s a final question lurking around the fringes of the ACT issue. Is it possible that somewhere out there, an AI has already developed consciousness and is fully capable of gaming the system, telling us only what we want to hear?
Schneider is succinct, if only semi-serious: “Oh, God, I hope not.”