I talk with the smartest people I can find working or researching anywhere near the intersection of emerging technologies and their ethical impacts.
From AI to social media to quantum computers and blockchain. From hallucinating chatbots to AI judges to who gets control over decentralized applications. If it’s coming down the tech pipeline (or it’s here already), we’ll pick it apart, figure out its implications, and break down what we should do about it.
Ep. 22 - Tech Forward Conservatism vs Nature Leaning Liberalism, and Everything in Between
Are you on the political left or the political right? Ben Steyn wants to ask you the same question with regard to nature and technology. Do you lean tech or do you lean nature?
For instance, what do you think about growing human babies outside of a womb (aka ectogenesis)? Are you inclined to find it an affront to nature that politicians should make illegal? Or are you inclined to find it a technological wonder that elected officials shouldn’t be allowed to ban?
Ben claims that nature vs. tech leanings don’t map neatly onto the political left vs. right distinction. We need a new axis along which to evaluate our politicians.
Really thought-provoking conversation - enjoy!
Ep. 21 - The Morality of the Israeli-Hamas War
Before I did AI ethics, I was a philosophy professor, specializing in ethics. One of my senior colleagues in the field was David Enoch, also an ethicist and philosopher of law. David is also Israeli and a long-time supporter of a two-state solution. In fact, he went to military jail for refusing to serve in Gaza for ethical reasons.
Given David’s rare, if not unique, combination of expertise and experience, I wanted to have a conversation with him about the Israeli-Hamas war. In the face of the brutal Hamas attacks of October 7, what is it ethically permissible for Israel to do?
David rejects both extremes. It’s not the case that Israel should be pacifist. That would be for Israel to default on its obligations to safeguard its citizens. Nor should Israel bomb Gaza and its people out of existence; that would be to engage in genocide.
If you’re looking for an “Israel is the best and does nothing wrong” conversation, you won’t find it here. If you’re looking for “Israel is the worst and should drop their weapons and go home,” you won’t find that here, either. It’s a complex situation, and David and I do our best to navigate it.
David Enoch studied law and philosophy at Tel Aviv University, then clerked for Justice Beinisch at the Israeli Supreme Court. He earned a PhD in philosophy from NYU in 2003 and has been a professor of law and philosophy at the Hebrew University ever since. This year he started as the Professor of the Philosophy of Law at Oxford. He works mainly in moral, political, and legal philosophy.
Ep. 20 - Creating Responsible AI in the Face of Our Ignorance
We want to create AI that makes accurate predictions. We want that not only because we want our products to work, but also because reliable products are, all else being equal, ethically safe products.
But we can’t always know whether our AI is accurate. Our ignorance leaves us with a question: which of the various AI models that we’ve developed is the right one for this particular use case?
In some circumstances, we might decide that using AI isn’t the right call. We just don’t know enough. In other instances, we may know enough, but we also have to choose our model in light of the ethical values we’re trying to achieve.
Julia and I talk about this and a lot of other (ethical) problems that beset AI practitioners on the ground, and what can and cannot be done about them.
Dr. Julia Stoyanovich is Associate Professor of Computer Science & Engineering and of Data Science, and Director of the Center for Responsible AI at NYU. Her goal is to make “responsible AI” synonymous with “AI”. Julia has co-authored over 100 academic publications, and has written for the New York Times, the Wall Street Journal and Le Monde. She engages in technology policy, has been teaching responsible AI to students, practitioners and the public, and has co-authored comic books on this topic. She received her Ph.D. in Computer Science from Columbia University.
Ep. 19 - The Turing Test is not Intelligent (and what it would take for AI to understand)
If I looked inside your head while you were talking, I’d see various neurons lighting up, probably in the prefrontal cortex, as you engage in the reasoning necessary to say whatever it is you’re saying. But if I opened your head and instead found a record playing and no brain, I’d realize I was dealing with a puppet, not a person with a brain/intellect.
In both cases you’re saying the same things (let’s suppose). But because of what’s going on in the head, or “under the hood,” it’s clear there’s intelligence in the first case and not in the second.
Does an LLM (a large language model like GPT or Bard) have intelligence? To know that, we need to look under the hood, as Lisa Titus argues. It’s not impossible for AI to be intelligent, she says, but judging by what’s going on under the hood at the moment, it isn’t.
A fascinating discussion about the nature of intelligence, why we attribute it to each other (mostly), and why we shouldn’t attribute it to AI.
Lisa Titus (née Lisa Miracchi) is a tenured Associate Professor of Philosophy at the University of Denver.
Previously, she was a tenured Associate Professor of Philosophy at the University of Pennsylvania, where she was also a General Robotics, Automation, Sensing, and Perception (GRASP) Lab affiliate and a MindCORE affiliate.
She works on issues regarding mind and intelligence. What makes intelligent systems different from other kinds of systems? What kinds of explanations of intelligent systems are possible, or most important? What are appropriate conceptions of real-world intelligent capacities like those for agency, knowledge, and rationality? How can conceptual clarity on these issues advance cognitive science and aid in the effective and ethical development and application of AI and robotic systems? Her work draws together diverse literatures in the cognitive sciences, AI, robotics, epistemology, ethics, law, and policy to systematically address these questions.
Ep. 18 - Innovation Hype and Why We Should Wait on AI Regulation
Innovation is great…but hype is bad. Not only has all this talk of innovation not increased innovation, but it has also created an environment in which it’s hard for leaders to make reasoned judgments about where to devote resources. So says Lee Vinsel in my latest podcast episode.
ALSO: We want proactive regulations before the sh!t hits the fan, right? Not so fast, says Lee. Proactive regulations presuppose that we’re good at predicting how technologies will be applied, and we have a terrible track record on that front. Perhaps reactive regulations are more appropriate (and we need to focus on making government more agile).
A super interesting conversation that will push you to think differently about innovation and what appropriate regulation looks like.
Lee Vinsel is an Associate Professor of Science, Technology, and Society at Virginia Tech and host of Peoples & Things, a podcast about human life with technology. His work examines the social dimensions of technology with particular focus on the relationship between government and technological change. He is the author of Moving Violations: Automobiles, Experts, and Regulations in the United States and, with Andrew L. Russell, The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most.
Ep. 17 - The Sexy Cyber Threats of GenAI: How to Avoid Exposing Yourself
We're all familiar with cybersecurity threats. Stories of companies being hacked and data and secrets being stolen abound. Now we have generative AI to throw fuel on the fire.
I don't know much about cybersecurity, but Matthew does. In this conversation, he provides some fun and scary stories about how hackers have operated in the past, how they can leverage genAI to get access to things they shouldn't have access to, and what cybersecurity professionals are doing to slow them down.
Matthew Rosenquist is the Chief Information Security Officer (CISO) for Eclipz, the former Cybersecurity Strategist for Intel Corp, and brings over 30 years of diverse experience in the fields of cyber, physical, and information security. Matthew specializes in security strategy, measuring value, developing best practices for cost-effective capabilities, and establishing organizations that deliver optimal levels of cybersecurity, privacy, governance, ethics, and safety. As a cybersecurity CISO and strategist, he identifies emerging risks and opportunities to help organizations balance threats, costs, and usability factors to achieve an optimal level of security. Matthew is very active in the industry. He is an experienced keynote speaker, collaborates with industry partners to tackle pressing problems, and has published acclaimed articles, white papers, blogs, and videos on a wide range of cybersecurity topics. Matthew is a member of multiple advisory boards and consults on best practices and emerging risks for academic, business, and government audiences across the globe.
Useful frameworks for complex issues
Blackman is great at taking seemingly complex issues and making them easy to understand.
Makes me feel smarter!
Blackman brings a complex issue down to earth and allows anyone to enter the conversation.
AI made easy
Every day, artificial intelligence grows in relevance as well as complexity. Reid is the first person to break it down to a level where I can understand it. Thanks!