On the last day of 2019, the Canadian artificial-intelligence firm BlueDot — which sends early-warning reports about diseases to commercial clients such as hospitals and airlines — sent out one of the first alerts about a flu-like illness emerging in China. It reported that people were getting sick after exposure at a public market in Wuhan.
“It was our AI engine that was picking it up,” says BlueDot CEO and founder Kamran Khan. “We reviewed it, it seemed bona fide and real, so we pushed it to our clients.”
BlueDot’s news-scavenging algorithm was designed to look for early signs of an influenza outbreak. It hoovered up local online news reports, then combined that information with livestock health reports, which can presage a virus jumping to people. It also looked at privately held commercial airline data that showed where people from Wuhan most often travel.
Based on its data, BlueDot predicted that the disease would likely appear in Bangkok, Seoul, Taipei, and Tokyo within a few days — which it did. The firm’s AI had predicted the early spread of COVID-19 to four of the six cities where it first took off. Now, California Gov. Gavin Newsom is using BlueDot to manage the crisis in the Golden State and forecast how many hospital beds are needed at potential hot spots.
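BlueDot’s actual models are proprietary, but the core idea the article describes — ranking destination cities by outbound travel volume from the outbreak’s origin, on the theory that travelers seed the next outbreaks — can be sketched in a few lines. The passenger counts below are invented for illustration.

```python
# Hypothetical sketch of BlueDot-style destination ranking: given an
# outbreak origin, rank cities by outbound passenger volume.
# All passenger counts are invented for illustration.

monthly_passengers_from_wuhan = {
    "Bangkok": 104000,
    "Tokyo": 63000,
    "Seoul": 41000,
    "Taipei": 57000,
    "Sydney": 18000,
    "New York": 12000,
}

def likely_next_cities(traffic, top_n=4):
    """Return the top_n destinations by passenger volume from the origin."""
    ranked = sorted(traffic.items(), key=lambda kv: kv[1], reverse=True)
    return [city for city, _ in ranked[:top_n]]

print(likely_next_cities(monthly_passengers_from_wuhan))
# → ['Bangkok', 'Tokyo', 'Taipei', 'Seoul']
```

A real system would weight this by outbreak intensity, season, and connecting flights, but even the raw ranking points at the same handful of cities.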
As governments scramble to prepare for an impending surge in COVID-19 cases, AI is emerging as a powerful tool. Machine learning is being used to triage and diagnose patients and research possible treatments, and to contribute to predictive models to help epidemiologists understand how the virus will likely spread in the coming weeks.
But AI’s greatest value might prove to be its power to predict future epidemics before they break out. Some AI experts, already thinking ahead to the next pandemic, are building virtual models of both how a virus spreads and how people respond to it — just as climate scientists build models of the Earth’s future as carbon emissions and global temperatures rise. The experts hope that AI can add greater certainty and accuracy to predictions of the next emerging disease — and perhaps even make it impossible for decision-makers to ignore the evidence before that disease spreads across the world.
As with climate change models, medical pandemic models that use AI have to account for the vagaries of human behavior. They have to factor in good behavior when people heed warnings to stay inside, which keeps the virus from spreading. They also have to include dumb behavior, like when people violate a public-health order to attend a Florida mega-church or invite 60 people to a backyard bonfire in Maryland.
Experts say the knucklehead factor can be quantified mathematically and written into AI algorithms. “One confounding factor in predictions is always people who are not behaving in their own best interests,” says Ahmer Inam, chief artificial intelligence officer at Pactera Edge, a Portland-based AI consulting firm. “That can be coded into AI.”
That includes political leaders’ bad decisions that can accelerate a pandemic. “Additional simulations can be added with a rogue leader of a population to look at the impact of these adversarial behaviors toward the outcome,” Inam says.
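One simple way behavior gets “coded into” an epidemic model is as a parameter. The toy SIR (susceptible–infected–recovered) simulation below — not Inam’s actual system, with every number invented for illustration — scales the contact rate by a compliance factor to show how defiant behavior raises the epidemic’s peak.

```python
# Toy SIR model: a "compliance" parameter scales the contact rate,
# illustrating how behavior (good or bad) can be written into an
# epidemic model. All parameters are invented for illustration.

def simulate_sir(population=1_000_000, infected0=100, beta=0.3, gamma=0.1,
                 compliance=0.0, days=365):
    """Run a discrete-time SIR simulation.

    compliance in [0, 1] reduces the effective contact rate:
    0 = no one stays home, 1 = perfect distancing.
    Returns the peak number of simultaneously infected people.
    """
    s, i, r = population - infected0, infected0, 0
    peak = i
    effective_beta = beta * (1.0 - compliance)
    for _ in range(days):
        new_infections = effective_beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Higher compliance flattens the curve: a lower peak of simultaneous cases.
peak_defiant = simulate_sir(compliance=0.1)  # most people ignore orders
peak_careful = simulate_sir(compliance=0.5)  # half of contacts avoided
print(peak_careful < peak_defiant)  # → True
```

A “rogue leader” scenario of the kind Inam describes could be modeled the same way — say, by dropping `compliance` in mid-simulation after a bad policy announcement.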
Virtual pandemic models that Inam and others are developing borrow a machine-learning concept known as “adversarial neural networks” to model the effects of people’s bad decisions. They are developed with the help of behavioral and social psychologists “so we can understand how bad things can get,” he adds.
AI experts say there is still work to do to make machine learning more effective at prognostication. “We are still a little far from being able to predict a pandemic before it hits,” says Inam.
For instance, to better predict future pandemics, AI experts say they need access to more and better data about people’s health, such as the number of people who call in sick at a big Chinese manufacturing plant, or the volume of visits to clinics in Queens, New York.
Government agencies in most countries currently protect such data for privacy reasons. To get around privacy fears, AI researchers like Inam propose using a generative adversarial network, or GAN: a go-between program that can read, digest, and synthesize data from a federal health agency or a hospital network without revealing the names and numbers of individuals in the database. Inam says he and his colleagues are working on a proposed global engine that would gather real-time public health data such as hospitalizations and diagnoses, as well as the movements of both humans and the pathogen. The goal is to use the generative adversarial network to train AI algorithms, then create AI engines that learn how pathogens spread.
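A GAN pairs a generator, which produces synthetic records, against a discriminator, which tries to tell them from real ones; once trained, only the generator’s synthetic output needs to be shared. The sketch below is a deliberately tiny one-dimensional version with hand-rolled gradients — real systems use deep networks and structured health records, and this is not Inam’s system — and every number in it is made up.

```python
# Toy generative adversarial network (GAN) for synthetic data release.
# A generator learns to mimic a "real" 1-D dataset while a discriminator
# tries to tell real samples from generated ones; after training, only
# synthetic samples -- never individuals' records -- need to be shared.
# Affine generator, logistic discriminator, hand-rolled gradients.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for sensitive data, e.g. daily hospitalization counts per clinic.
real_data = rng.normal(loc=12.0, scale=2.0, size=5000)

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * z + g_b
d_w, d_b = 0.1, 0.0   # discriminator: D(x) = sigmoid(d_w * x + d_b)
lr = 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

for _ in range(2000):
    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    real = rng.choice(real_data, size=64)
    fake = g_w * rng.normal(size=64) + g_b
    d_w += lr * np.mean((1 - sigmoid(d_w * real + d_b)) * real
                        - sigmoid(d_w * fake + d_b) * fake)
    d_b += lr * np.mean((1 - sigmoid(d_w * real + d_b))
                        - sigmoid(d_w * fake + d_b))

    # Generator ascent on log D(fake): chain rule through fake = g_w*z + g_b.
    z = rng.normal(size=64)
    grad = (1 - sigmoid(d_w * (g_w * z + g_b) + d_b)) * d_w
    g_w += lr * np.mean(grad * z)
    g_b += lr * np.mean(grad)

# Synthetic release: samples from the trained generator, no real records.
synthetic = g_w * rng.normal(size=1000) + g_b
print(synthetic.shape)
```

The privacy benefit is that the shared artifact is the generator, not the database; the open research question, which Inam’s team would have to address, is guaranteeing the generator doesn’t memorize individual records.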
Another useful input for AI prediction, experts say, is the ability to track people’s movements in near-real time. Keeping tabs on individuals through cellphone data raises data-privacy and civil-rights concerns. But using massive, anonymized databases of cellphone movements might not be as worrisome. A recent New York Times analysis of anonymous data from 15 million cell phones revealed where people have been moving about during the day in the U.S., as opposed to staying home. It suggested that many residents of Southeastern states, such as Florida, where governors waited longer to issue stay-at-home orders or recommendations, continued to travel outside their homes during March, as the virus spread. (The greater travel might have other explanations too, such as longer distances in the rural South from homes to workplaces or the grocery store.)
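The Times’ methodology is more sophisticated, but the basic aggregate — what share of anonymized devices stayed near home on a given day — is easy to sketch. The distances and threshold below are invented placeholders.

```python
# Sketch of an aggregate mobility metric: from anonymized per-device
# daily travel distances, compute the share of devices that stayed near
# home. All records and the 2 km threshold are invented for illustration.

def share_staying_home(max_daily_travel_km, threshold_km=2.0):
    """Fraction of devices whose farthest trip stayed under threshold_km."""
    stayed = sum(1 for d in max_daily_travel_km if d < threshold_km)
    return stayed / len(max_daily_travel_km)

# One invented day of per-device maximum travel distances for a region.
region_sample = [0.5, 12.0, 3.1, 0.0, 25.4, 1.9, 8.8, 0.3]
print(f"{share_staying_home(region_sample):.0%}")  # → 50%
```

Because only the per-region fraction is published, no individual trajectory leaves the pipeline — which is what makes this kind of analysis less worrisome than individual tracking.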
One epidemiologist says he would like to see a worldwide group of pandemic modelers put their work together, similar to how climate scientists from around the world assemble the Intergovernmental Panel on Climate Change (IPCC) report every few years.
“You don’t have one climate model, you have a helluva lot of climate models,” says Paul Hunter, professor of medicine at the University of East Anglia, who works with pandemic modelers in the UK. “They have different structures and inputs, then the community comes to a consensus based on running all these models.” In a pandemic, however, researchers may not have the luxury of time to reach a consensus. In that case, Hunter says, he follows an old rule of thumb.
“If you have three models that are run in different ways and telling you the same thing,” he adds, “you probably got it right.”
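Hunter’s rule of thumb is straightforward to operationalize: check whether independently built models’ headline forecasts agree within some tolerance. A minimal sketch, with invented forecast numbers and an arbitrary 15% tolerance:

```python
# Sketch of the "three models agreeing" rule of thumb: flag consensus when
# all forecasts fall close to their median. Forecast values are invented.

def models_agree(predictions, tolerance=0.15):
    """True if every prediction is within +/- tolerance of the median."""
    ordered = sorted(predictions)
    median = ordered[len(ordered) // 2]
    return all(abs(p - median) <= tolerance * median for p in predictions)

# e.g. three models' peak-hospitalization forecasts for one region
print(models_agree([9200, 10100, 10800]))   # → True: act on the consensus
print(models_agree([9200, 10100, 26000]))   # → False: investigate the outlier
```

The point is not the arithmetic but the discipline: agreement across differently structured models is the evidence, and a lone outlier is a prompt to ask which model is wrong.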