
Dr. Fei-Fei Li: The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI

In this episode of Capital for Good we speak with Dr. Fei-Fei Li, the Sequoia Professor of Computer Science at Stanford and the Denning Co-Director of Stanford’s Human-Centered AI Institute. Dr. Li has been called the godmother of artificial intelligence and has emerged as one of the country’s leading scientists — and humanists. She is also the author of the new book, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI.
We begin by discussing how and why Li employs a “double helix” structure in her book to tell two interlacing stories: the evolution of a new field of science and her own coming of age as a scientist. Together, they form an homage to the intellectual foundations of her work, and to the teachers, mentors, and family members whose sacrifices made her work possible. We explore how the very act of writing the book serves to introduce an underrepresented voice — that of a woman, an immigrant, a person of color — into the world of artificial intelligence and science more broadly. Li believes strongly that “progress and discovery come from every corner,” and throughout her career has worked towards “lifting all walks of life.”
In explaining just what she means by “human-centered AI,” Li notes that there is “nothing artificial about artificial intelligence.” As a “tool made by and for people,” she argues, AI should be used to make people’s lives and work better. Li describes any number of extraordinary and beneficial applications of AI, including those in neuroscience, the social and political sciences, business, education, climate change, and health care, from research and drug discovery to diagnosis, treatment, and delivery. We also touch on some of the major risks of AI. While Li believes it is important to examine the longer-term and potentially existential threats of AI (the current, popular preoccupation with sentience and machine overlords), she is more concerned with the technology’s urgent (and potentially catastrophic) social risks: significant biases in data and algorithms, issues of privacy, the problems of misinformation and disinformation, and the profound and uneven economic disruptions the technology can bring about. “AI can grow the global pie of productivity,” Li says, “but there is a difference between increased productivity and shared prosperity.”
Li also warns of severe public-sector underinvestment in AI. She has worked closely with the state of California, the federal government, and the UN to encourage more of a “moonshot” mentality when it comes to resources for blue-sky innovation, and for the development of the governance and guardrails essential for public safety and trust.
Li concludes by encouraging others to follow their own North Stars. “My North Star hasn’t changed, it is still AI, but it is the science with an expanded aperture: the greater North Star of doing good that is human centered.” 
Thanks for Listening!
Subscribe to Capital for Good on Apple, Amazon, Google, Spotify, or wherever you get your podcasts. Drop us a line at socialenterprise@gsb.columbia.edu. 
Mentioned in this Episode
The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI
Stanford University Human-Centered AI Institute
AI4All
