Pondering AI

Kimberly Nevala, Strategic Advisor - SAS

How is the use of artificial intelligence (AI) shaping our human experience? Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse. All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

  1. Technical Morality with John Danaher

    September 25

    Technical Morality with John Danaher

    John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.

    John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.

    Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.

    John Danaher is a Senior Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle and How Technology Alters Morality and Why It Matters. A transcript of this episode is here.

    46 min
  2. Artificial Empathy with Ben Bland

    September 11

    Artificial Empathy with Ben Bland

    Ben Bland expressively explores emotive AI's shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions.

    Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult.

    He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs. explicit design, and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine's emotional capability. Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI's potential, he counsels proceeding with caution.

    Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems. A transcript of this episode is here.

    46 min
  3. Chief Data Concerns with Heidi Lanford

    July 3

    Chief Data Concerns with Heidi Lanford

    Heidi Lanford connects data to cocktails and campaigns while considering the nature of data disruption, getting from analytics to AI, and using data with confidence.

    Heidi studied mathematics and statistics and never looked back. Reflecting on analytics then and now, she confirms the appetite for data has never been higher. Yet adoption, momentum, and focus remain evergreen barriers. Heidi issues a cocktail party challenge while discussing the core competencies of effective data leaders. Heidi believes data and CDOs are disruptive by nature. But this only matters if your business incentives are properly aligned. She revels in agile experimentation while counseling that speed is not enough. We discuss how good old-fashioned analytics put the right pressure on the foundational data needed for AI.

    Heidi then campaigns for endemic data literacy. Along the way she pans JIT holiday training and promotes confident decision making as the metric that matters. Never saying never, Heidi celebrates human experts and the spotlight AI is shining on data.

    Heidi Lanford is a Global Chief Data & Analytics Officer who has served as Chief Data Officer (CDO) at the Fitch Group and VP of Enterprise Data & Analytics at Red Hat (IBM). In 2023, Heidi co-founded two AI startups, LiveFire AI and AIQScore. Heidi serves as a Board Member at the University of Virginia School of Data Science, is a Founding Board Member of the Data Leadership Collaborative, and is an Advisor to Domino Data Labs and Linea. A transcript of this episode is here.

    50 min
  4. Ethical Control and Trust with Marianna B. Ganapini

    June 19

    Ethical Control and Trust with Marianna B. Ganapini

    Marianna B. Ganapini contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust, and irrational beliefs.

    Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge imbues moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system's temperature (risk-wise).

    Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities, from your pet to your Roomba. Marianna also cautions against hastily judging another's beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications.

    Marianna B. Ganapini is a Professor of Philosophy and Founder of Logica.Now, a consultancy which seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the Montreal AI Ethics Institute and a Visiting Scholar at the ND-IBM Tech Ethics Lab. A transcript of this episode is here.

    59 min
