Pondering AI
Kimberly Nevala, Strategic Advisor - SAS
Technology
4.7 • 14 Ratings
31 episodes
How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
AI Stories at Work with Ganes Kesari
Ganes Kesari confronts AI hype and calls for balance, reskilling, data literacy, decision intelligence and data storytelling to adopt AI productively.
Ganes reveals the reality of AI and analytics adoption in the enterprise today. Highlighting extreme divides in understanding and expectations, Ganes provides a grounded point of view on delivering sustained business value.
Cautioning against a technocentric approach, Ganes discusses the role of data literacy and data translators in enabling AI adoption. Examining common barriers to change, Kimberly and Ganes note growing resistance from technologists, not just end users. Ganes muses about the impact of AI on creative tasks and his own experiences with generative AI. He also underscores the need to address workforce reskilling yet remains optimistic about the future of human endeavor. While discussing the need for improved decision-making, Ganes identifies decision intelligence as a critical new business competency. Finally, he strongly advocates for taking a business-first approach and using data storytelling as part of the responsible AI and analytics toolkit.
Ganes Kesari is the co-founder and Chief Decision Scientist for Gramener and Innovation Titan.
A transcript of this episode is here.
Keeping Work Human with Dr. Christina Colclough
Dr. Christina Colclough addresses tech determinism, the value of human labor, managerial fuzz, collective will, digital rights, and participatory AI deployment.
Christina traces the path of digital transformation and the self-sustaining narrative of tech determinism. She examines how the perceptions of the public, the C-Suite and workers (aka wage earners) diverge, highlighting the urgent need for robust public dialogue, education and collective action.
Championing constructive debate, Christina decries ‘for-it-or-against-it’ views on AI and embraces the Luddite label. Kimberly and Christina discuss the value of human work, we-vs.-they work cultures, the divisiveness of digital platforms, and sustainable governance. Christina questions why emerging AI regulations give workers short shrift and whether regulation is being privatized. She underscores the dangers of stupid algorithms and the quantification of humans, but notes that knowledge is key to tapping into AI’s benefits while avoiding harm. Christina ends with a persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance.
Dr. Christina Jayne Colclough is the founder of The Why Not Lab where she fiercely advocates for worker rights and dignity for all in the digital age.
A transcript of this episode is here.
Practical Ethics with Reid Blackman
Reid Blackman confronts whack-a-mole approaches to AI ethics, ethical ‘do goodery,’ squishy values, moral nuance, advocacy vs. activism and overfitting for AI.
Reid distinguishes AI for ‘not bad’ from AI ‘for good’ and corporate social responsibility. He describes how the language of risk creates a bridge between ethics and business. Debunking the notion of ethicists as moral priests, Reid provides practical steps for making ethics palatable and effective.
Reid and Kimberly discuss developing the organizational muscle to reckon with moral nuance. Reid emphasizes that disagreement and uncertainty aren’t unique to ethics, nor do squishy value statements make ethics itself squishy. Reid identifies a cocktail of motivations driving organizations to engage, or not, in AI ethics. We also discuss the tendency for self-regulation to cede to market forces and the government’s role in ensuring access to basic human goods. Cautioning against overfitting an ethics program to AI alone, Reid illustrates the benefits of distinguishing digital ethics from ethics writ large. Last but not least, Reid considers how organizations may stitch together responses to the evolving regulatory patchwork.
Reid Blackman is the author of “Ethical Machines” and the CEO of Virtue Consultants.
A transcript of this episode is here.
Generative AI: Unreal Realities with Ilke Demir
Ilke Demir depicts the state of generative AI, deepfakes for good, the emotional shelf life of synthesized media, and methods to identify AI-generated content.
Ilke provides a primer on traditional generative models and generative AI. Outlining the fast-evolving capabilities of generative AI, she also notes their current lack of controls and transparency. Ilke then clarifies the term deepfake and highlights applications of ‘deepfakes for good.’
Ilke and Kimberly discuss whether the explosion of generated imagery creates an un-reality that sets ‘perfectly imperfect’ humans up for failure. An effervescent optimist, Ilke makes a compelling case that the true value of photos and art comes from our experiences and memories. She then provides a fascinating tour of emerging techniques to detect and indelibly identify generated media. Last but not least, Ilke affirms the need for greater public literacy and accountability by design.
Ilke Demir is a Sr. Research Scientist at Intel. Her research team focuses on generative models for digitizing the real world, deepfake detection and generation techniques.
A transcript of this episode is here.
Plain Talk About Talking AI with J Mark Bishop
Professor J Mark Bishop reflects on the trickiness of language, how LLMs work, why ChatGPT can’t understand, the nature of AI and emerging theories of mind.
Mark explains what large language models (LLMs) do and provides a quasi-technical overview of how they work. He also exposes the complications inherent in comprehending language. Mark calls for more philosophical analysis of how systems such as GPT-3 and ChatGPT replicate human knowledge yet understand nothing. Noting the astonishing outputs that result from more or less auto-completing large blocks of text, Mark cautions against being taken in by an LLM’s disarming façade.
Mark then explains the basis of the Chinese Room thought experiment and its hotly debated conclusion that computation does not lead to semantic understanding. Kimberly and Mark discuss the nature of learning through the eyes of a child and whether computational systems can ever be conscious. Mark describes the phenomenal experience of understanding (aka what it feels like) and how non-computational theories of mind may influence AI development. Finally, Mark reflects on whether AI will be good for the few or the many.
Professor J Mark Bishop is Professor of Cognitive Computing (Emeritus) at Goldsmiths, University of London and Scientific Advisor to FACT360.
A transcript of this episode is here.
In AI We Trust with Chris McClean
Chris McClean reflects on ethics vs. risk, ethically positive outcomes, the nature of trust, looking beyond ourselves, privacy at work and in the metaverse.
Chris outlines the key differences between digital ethics and risk management. He emphasizes the discovery of positive outcomes as well as harms and where a data-driven approach can fall short. From there, Chris outlines a comprehensive digital ethics framework and why starting with impact is key. He then describes a pragmatic approach for making ethics accessible without sacrificing rigor.
Kimberly and Chris discuss the definition of trust, the myriad reasons we might trust someone or something, and why trust isn’t set-it-and-forget-it. From your smart doorbell to self-driving cars and social services, Chris argues persuasively for looking beyond ‘how does this affect me.’ Highlighting Eunice Kyereme’s work on digital makers and takers, Chris describes the role we each play, however unwittingly, in creating the digital ecosystem. We then discuss surveillance vs. monitoring in the workplace and the potential for great good and abuse inherent in the metaverse. Finally, Chris stresses that ethically positive outcomes go beyond ‘tech for good’ and that ethics is good business.
Chris McClean is the Global Head of Digital Ethics at Avanade and a PhD candidate in Applied Ethics at the University of Leeds.
A transcript of this episode is here.