Computers are slowly but surely learning to read our emotions. Will this mean a future without privacy, or perhaps a golden age of more compassionate and helpful machines?
This edition of the Sleepwalkers podcast looks at AI’s growing power to “read” us—and investigates the sinister and the positive uses of the technology.
Poppy Crum, chief scientist at Dolby Labs and a professor at Stanford University, is using advanced sensors and AI to capture emotional signals. From thermal sensors that track blood flow to CO2 monitors that detect our breathing rates and cameras that capture microscopic facial movements, it’s getting harder to maintain a poker face in front of machines. “We haven’t changed as humans,” Crum says. “What’s changed is the ubiquity of sensors, and the capacity of sensors, and the cost.”
If this sounds a bit frightening, it probably should. As digital sociologist Lisa Talia Moretti notes, we increasingly trust algorithms that may work in unexpected ways. Even the computer scientists who build those algorithms sometimes shirk responsibility, viewing artificial intelligence as something out of their control.
“If you abdicate your responsibility, if you just cower in fear, then you’re not being a good computer scientist,” says Jaron Lanier, a research scientist at Microsoft and the author of books including You Are Not a Gadget and Who Owns The Future? “That is not the responsible way to do things.”
Lanier says we are often so dazzled by a technology’s benefits that we fail to consider potential downsides. He points to the way many people welcomed voice assistants into their homes and families, without considering the effect on children. “I think that the problem isn’t the math or the algorithms,” Lanier says. “I think the problem is our framework for thinking about them—this ideology, of thinking of the machine as being alive.”
There is, of course, also a flip side to letting more powerful and attuned machines into our lives. AI algorithms might help us eliminate human biases and mistakes, for example.
“AI is programmed by people,” says Kai-Fu Lee, an entrepreneur who worked on the technology behind Siri before heading up Google China. “It is up to us to remove the factors that we don’t think are appropriate to be considered in a decision from an AI. If we want to eliminate sexual orientation from a loan decision engine we can do that. Or if we want to eliminate it from a job application, we can do that.”
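Lee’s point, that an attribute can simply be excluded from a decision engine’s inputs, can be sketched in code. This is a toy illustration only: the feature names, weights, and `filter_features` helper are made up for the example, not drawn from any real lending system.

```python
# Hypothetical sketch of excluding a protected attribute from a
# loan-decision model's inputs. All names and weights are illustrative.

PROTECTED = {"sexual_orientation"}  # attributes we decide not to consider

def filter_features(applicant: dict) -> dict:
    """Drop protected attributes before the model ever sees them."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED}

def loan_score(features: dict) -> float:
    # Toy linear scoring; a real system would use a trained model.
    weights = {"income": 0.5, "credit_history_years": 0.3, "debt_ratio": -0.4}
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

applicant = {
    "income": 1.2,
    "credit_history_years": 0.8,
    "debt_ratio": 0.5,
    "sexual_orientation": 1.0,  # present in raw data, stripped before scoring
}
print(round(loan_score(filter_features(applicant)), 2))  # prints 0.64
```

Because the attribute is removed before scoring, it cannot influence the decision directly; in practice, engineers must also watch for proxy variables that correlate with the excluded attribute.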
Crum, of Dolby Labs, thinks devices like Alexa could perhaps be a new, more attentive healthcare helper, one that lacks the foibles of a human. “We make mistakes, we’re not good at integrating information all the time, and our fallibility comes in places that technology can solve,” she says.
She believes users need to know how they’re being monitored, but argues that opting out will soon be impossible. “We have to recognize that this cognitive sovereignty, or agency that we believe in, is a thing of the past,” Crum says. “We have to redefine what that future looks like.”