By Ker Than
Both women helped kickstart twin revolutions that are profoundly reshaping society in the 21st century – Li in the field of artificial intelligence (AI) and Doudna in the life sciences. Both revolutions can be traced back to 2012, the year that computer scientists collectively recognized the power of Li’s approach to training computer vision algorithms and that Doudna drew attention to a new gene-editing tool known as CRISPR-Cas9 (“CRISPR” for short). Both pioneering scientists are also driven by a growing urgency to raise awareness about the ethical dangers of the technologies they helped create.
“It was just incredible to hear how similar our stories were. Not just the timing of our scientific discoveries, but also our sense of responsibility for the ethics of the science are just so similar,” said Li, who is a professor of computer science at Stanford’s School of Engineering and co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
“The ethical angle to what we were doing was not something that either of us anticipated but that we found ourselves quickly drawn to,” said Doudna, who is a professor of chemistry and of molecular and cell biology at the University of California, Berkeley.
The echoes between Li and Doudna’s lives were also not lost on the dinner host that night, Stanford political science professor Rob Reich, who invited the pair to resume their conversation in public. Their talk, titled “CRISPR, AI, and the Ethics of Scientific Discovery,” will take place at Stanford on Nov. 19 and will be moderated by Stanford bioengineering professor Russ Altman (a livestream will be available).
The event is organized by the Stanford McCoy Family Center for Ethics in Society and HAI and is part of the Ethics, Society & Technology Integrative Hub that arose from the university’s Long-Range Vision.
“The subject of the lecture hits the sweet spot of what the Integrative Hub’s work is about, which is to cultivate and support the large community of faculty and students who work at the intersection of ethics, society and technology,” said Reich, who directs the Center for Ethics in Society and co-directs the Integrative Hub.
“I can’t think of two better people to engage in a conversation and to really take seriously these questions of how, as you discover the effects of what you’ve created, do you bring ethical implications and societal consequences into the discussion?” said Margaret Levi, a professor of political science at Stanford’s School of Humanities and Sciences. Levi is also the Sara Miller McCune Director of the Center for Advanced Study in the Behavioral Sciences and co-director of the Integrative Hub.
Fei-Fei Li is a professor of computer science and co-director of Stanford’s Institute for Human-Centered Artificial Intelligence. (Image credit: L.A. Cicero)
In 2006, Li wondered if computers could be taught to see the same way that children do – through early exposure to countless objects and scenes, from which they could deduce visual rules and relationships. Her idea ran counter to the approach taken by most AI researchers at the time, which was to create increasingly customized computer algorithms for identifying specific objects in images.
Li’s insight culminated in the creation of ImageNet, a massive dataset consisting of millions of training images, and an international computer vision competition of the same name. In 2012, the winner of the ImageNet contest beat competitors by a wide margin by training a type of AI known as a deep neural network on Li’s dataset.
Li immediately understood that an important milestone in her field had just been reached and, despite being on maternity leave at Stanford, flew to Florence, Italy, to attend the award ceremony in person. “I bought a last-minute ticket,” Li said. “I was literally on the ground for about 18 hours before flying back.”
Computer vision and image recognition are largely responsible for AI’s rapid ascent in recent years. They enable self-driving cars to detect objects, Facebook to tag people in photos and shopping apps to identify real-world objects using a phone’s camera.
“Within a year or so of when the ImageNet result was announced, there was an exponential growth of interest and investment into this technology from the private industry,” Li said. “We recognized that AI had gone through a phase shift, from being a niche scientific field to a potential transformative force of our industry.”
The field of biology underwent its own phase shift in the summer of 2012 when Doudna and her colleagues published a groundbreaking paper in the journal Science that described how components of an ancient antiviral defense system in microbes could be programmed to cut and splice DNA in any living organism, including humans, with surgical precision. CRISPR made genomes “as malleable as a piece of literary prose at the mercy of an editor’s red pen,” Doudna would later write.
CRISPR could one day enable scientists to cure myriad genetic diseases, eradicate mosquito-borne illnesses, create pest-resistant plants and resurrect extinct species. But it also raises the specter of customizable “designer” babies and lasting changes to the human genetic code through so-called germline editing, or edits made to reproductive cells that are transmitted to future generations.
This bioethics nightmare scenario was realized last fall when a Chinese researcher declared that he had used CRISPR to edit the genomes of twin girls in order to make them resistant to HIV. Doudna decried the act but allows that her own views on germline editing are still evolving.
“I’ve gone from thinking ‘never, ever’ to thinking that there could be circumstances that would warrant that kind of genome editing,” she said. “But it would have to be under circumstances where there was a clear medical need that was unmet by any other means and the technology would have to be safe.”
Both Li and Doudna fervently believe in the potential of their technologies to benefit society. But they also fear CRISPR and AI could be abused to fuel discrimination and exacerbate social inequalities.
“The details are different for CRISPR and AI, but I think those concerns really apply to both,” Doudna said.
Rather than just leaving such concerns to others to work out, both scientists have stepped outside of the comfort of their labs and taken actions to help ensure their worst fears don’t come to pass. “I almost feel that at this point of history I need to do this, not that it’s my natural tendency,” Li said. “It really is about our collective future due to technology.”
Both scientists have testified before Congress about the possibilities and perils of their technologies. Li also co-launched a nonprofit called AI4All to increase inclusion and diversity among computer engineers, and she co-directs Stanford HAI, which aims to develop human-centered AI technologies and applications. Doudna spends significant time talking to colleagues, students and the public about CRISPR. In 2015, she organized the first conference to discuss the safety and ethics of CRISPR genome editing.
“Because we were involved in the origins of CRISPR, I felt it was especially important for my colleagues and me to be part of that discussion and really help to lead it,” Doudna said. “I asked myself, ‘If I don’t do it, who will?’”
Altman is the Kenneth Fong Professor of Bioengineering, Genetics, Medicine, Biomedical Data Science and host of the Stanford Engineering radio show “The Future of Everything.” Levi is a member of Stanford Bio-X, the Wu Tsai Neurosciences Institute, and the Stanford Woods Institute for the Environment. Li is the Sequoia Capital Professor at Stanford and a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute.