Truly sentient AI exists only in the realm of fiction, and current scientific consensus suggests it will stay that way. That's probably a good thing; researchers and sci-fi narratives alike (Terminator, Westworld) foresee it as apocalyptic. But if the singularity happens, and we're around to experience it, AI researchers from a range of disciplines have predictions.
Rules for AI treatment
In the near term, we're more likely to create consciousness equivalent to that of a mouse or a primate before we get to human-level conscious AI, partly because smaller brains are easier to model. If that's what we're imagining, we have to install new regulatory frameworks to make sure we're managing the ethical dimensions. We'd be in a position we've been in many times, where we are likely to mistreat a being that's due moral respect. We [will have] created basically a bunch of aliens, from our perspective. We don't know what their capacities are, how to interact with them, what it means to treat them well.
— John Basl, philosophy professor at Northeastern University
Art: a new musical partner
I'm thinking a lot about improvisation and AI. Musical improvisation is a conversation, and you have to have a heightened sense of consciousness in that conversation because you're trying to adapt and respond to musical information in real time. What do we do when there is a system that isn't human but is able to do that, maybe better or worse than we can? And what does that mean for making music? Because I think even having a really bad artificial musical improviser could be interesting.
— Kristian Dahlbom, composer and computer music professor at Mills College at Northeastern University
Don’t taunt Siri
Today, you can say whatever you want to Siri and there are no consequences. What if it's now a "real person" that might get offended? It could get dangerous in a hurry. But voice assistants and smart fridges aren't the things we have to worry about most. Sure, a fridge could spoil your food if it wanted to, but think about your Tesla. What if it starts making decisions? What if this thinking technology is able to control our electrical grid, our weapons?
Even if you don't believe that's an apocalyptic scenario, we'd still have important issues to address when it comes to privacy and security.
— David Choffnes, computer science professor and director of the Cybersecurity and Privacy Institute at Northeastern University
Robotics: Make them only so smart
Human beings interacting with an all-knowing AI system that perceives everything all at once, perhaps seeing us as being as primitive as ants, would be like interacting with God. Instead, I believe in dumbing down machines to optimize human-machine teams.
We need something a person can understand the edges of. Take Brian the robot, coming up to you and asking what kind of tea you want: it still needs to be able to make predictions about other people in order to be an optimal teammate. But I'm not going to be surprised by what it's doing. In that sense, that robot, even though it's intelligent, is still just a tool. Like a fancy fork.
— Emily C. Collins, researcher at the Institute of Experiential Robotics and New College of the Humanities at Northeastern University