17 episodes

The Road to Accountable AI
Kevin Werbach

    • Technology
    • 5.0 • 16 Ratings

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and in 2016 he created one of the first business school courses on the legal and ethical considerations of AI. He interviews the experts and executives building accountable AI systems in the real world today.

    Nuala O'Connor: Frontline Human-Machine Interface at Walmart

    Join Kevin and Nuala as they discuss Walmart's approach to AI governance, emphasizing the application of existing corporate principles to new technologies. Nuala explains the Walmart Responsible AI Pledge, the collaborative process behind its creation, and the importance of continuous monitoring to ensure AI tools align with corporate values. She describes her commitment to responsible AI and customer centricity at Walmart, captured in the mantra “Inform, Educate, Entertain” and illustrated by tools like "Ask Sam," which assists associates. They address the complexities of AI implementation, including bias, accuracy, and trust, as well as the challenges of standardizing AI frameworks. Kevin and Nuala conclude with reflections on the need for humility and agility in the evolving AI landscape, emphasizing the ongoing responsibility of technology providers to ensure positive impacts.
    Nuala O’Connor is the SVP and chief counsel, digital citizenship, at Walmart. Nuala leads the company’s Digital Citizenship organization, which advances the ethical use of data and responsible use of technology. Before joining Walmart, Nuala served as president and CEO of the Center for Democracy and Technology. In the private sector, Nuala has served in a variety of privacy leadership and legal counsel roles at Amazon, GE and DoubleClick. In the public sector, Nuala served as the first chief privacy officer at the U.S. Department of Homeland Security. She also served as deputy director of the Office of Policy and Strategic Planning, and later as chief counsel for technology at the U.S. Department of Commerce. Nuala holds a B.A. from Princeton University, an M.Ed. from Harvard University and a J.D. from Georgetown University Law Center. 
     
    Nuala O'Connor to Join Walmart in New Digital Citizenship Role
    Walmart launches its own voice assistant, ‘Ask Sam,’ initially for employee use
    Our Responsible AI Pledge: Setting the Bar for Ethical AI

    Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.

    • 34 min
    Suresh Venkatasubramanian: Blueprints and Redesign: Academia to the White House and Back

    Join Kevin and Suresh as they discuss the latest tools and frameworks companies can use to combat algorithmic bias while navigating the complexities of integrating AI into organizational strategies. Suresh describes his experiences at the White House Office of Science and Technology Policy and the creation of the Blueprint for an AI Bill of Rights, including its five fundamental principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. Suresh and Kevin dig into the economic and logistical challenges academics face in government roles and the importance of collaborative efforts, alongside clear rules, in fostering ethical AI. They also consider the role of education, cultural shifts, and the European Union's AI Act in shaping global regulatory frameworks. Suresh discusses his creation of Brown University's Center on Technological Responsibility, Reimagination, and Redesign, and why trust and accountability are paramount, especially with the rise of large language models.
     
    Suresh Venkatasubramanian is a Professor of Data Science and Computer Science at Brown University. His background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness and, more generally, the impact of automated decision-making systems on society. Prior to Brown University, Suresh was at the University of Utah, where he received an NSF CAREER award for his work in the geometry of probability. He received a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR’s Science Friday, NBC, and CNN. For the 2021–2022 academic year, he served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy.
     
    Blueprint for an AI Bill of Rights
    Brown University's Center on Technological Responsibility, Reimagination, and Redesign
    Brown professor Suresh Venkatasubramanian tackles societal impact of computer science at White House
     
    Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.
     

    • 35 min
    Diya Wynn: People-Centric Technology

    Kevin Werbach speaks with Diya Wynn, the responsible AI lead at Amazon Web Services (AWS). Diya shares how she pioneered a formal responsible AI practice at AWS and explains the AWS “Well-Architected” framework, which helps customers deploy AI responsibly. Kevin and Diya also discuss the significance of diversity and human bias in AI systems, underscoring the need to incorporate diverse perspectives to create more equitable AI outcomes.
    Diya Wynn leads a team at AWS that helps customers implement responsible AI practices. She has over 25 years of experience as a technologist scaling products for acquisition; driving inclusion, diversity & equity initiatives; and leading operational transformation. She serves on the AWS Health Equity Initiative Review Committee; mentors at Tulane University, Spelman College, and GMI; was a mayoral appointee in Environment Affairs for six years; and guest lectures regularly on responsible and inclusive technology.

    Responsible AI for the greater good: insights from AWS’s Diya Wynn
     Ethics In AI: A Conversation With Diya Wynn, AWS Responsible AI Lead
     
    Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.

    • 32 min
    Paula Goldman: Putting Humans at the Helm

    Kevin Werbach is joined by Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, to discuss the pioneering efforts of her team in building a culture of ethical technology use. Paula shares insights on aligning risk assessments and technical mitigations with business goals to bring stakeholders on board. She explains how AI governance functions in a large business with enterprise customers, who have distinctive needs and approaches. Finally, she highlights the shift from "human in the loop" to "human at the helm" as AI technology advances, stressing that today's investments in trustworthy AI are essential for managing tomorrow’s more advanced systems.
    Paula Goldman leads Salesforce in creating a framework to build and deploy ethical technology that optimizes social benefit. Prior to Salesforce, she served as Global Lead of the Tech and Society Solutions Lab at Omidyar Network, and she has extensive entrepreneurial experience managing frontier-market businesses.
    Creating safeguards for the ethical use of technology
    Trusted AI Needs a Human at the Helm
    Responsible Use of Technology: The Salesforce Case Study
     
    Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.

    • 31 min
    Navrina Singh: AI Alignment in Practice

    Kevin Werbach speaks with Navrina Singh of Credo AI, which automates AI oversight and regulatory compliance. Singh addresses the increasing importance of trust and governance in the AI space. She discusses the need to standardize and scale oversight mechanisms, helping companies translate governance commitments into practice, include all stakeholders, and comply with emerging global standards. Kevin and Navrina also explore sociotechnical approaches to AI governance, the case for mandated AI disclosures, the democratization of generative AI, adaptive policymaking, and the need for greater AI literacy within organizations to keep pace with evolving technologies and regulatory landscapes.
    Navrina Singh is the Founder and CEO of Credo AI, a Governance SaaS platform empowering enterprises to deliver responsible AI. Navrina previously held multiple product and business leadership roles at Microsoft and Qualcomm. She is a member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee (NAIAC), an executive board member of Mozilla Foundation, and a Young Global Leader of the World Economic Forum. 
    Credo.ai
    ISO/IEC 42001 standard for AI governance
    Navrina Singh Founded Credo AI To Align AI With Human Values
     
    Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.

    • 37 min
    Scott Zoldi: Using the Best AI Tools for the Job

    Kevin Werbach speaks with Scott Zoldi of FICO, which pioneered consumer credit scoring in the 1950s and now offers a suite of analytics and fraud detection tools. Zoldi explains the importance of transparency and interpretability in AI models, emphasizing a “simpler is better” approach to creating clear and understandable algorithms. He discusses FICO's approach to responsible AI, which includes establishing model governance standards, and enforcing these standards through the use of blockchain technology. Zoldi explains how blockchain provides an immutable record of the model development process, enhancing accountability and trust. He also highlights the challenges organizations face in implementing responsible AI practices, particularly in light of upcoming AI regulations, and stresses the need for organizations to catch up in defining governance standards to ensure trustworthy and accountable AI models.
    Dr. Scott Zoldi is Chief Analytics Officer of FICO, responsible for analytics and AI innovation across FICO's portfolio. He has authored more than 130 patents and is a long-time advocate and inventor in the space of responsible AI. He was nominated for American Banker’s 2024 Innovator Award and received Corinium’s Future Thinking Award in 2022. Zoldi is a member of the Board of Advisors for FinReg Lab and serves on the Boards of Directors of Software San Diego and the San Diego Cyber Center of Excellence. He received his Ph.D. in theoretical and computational physics from Duke University.
     
    Navigating the Wild AI with Dr. Scott Zoldi
    How to Use Blockchain to Build Responsible AI
    The State of Responsible AI in Financial Services

    • 31 min

Customer Reviews

5.0 out of 5
16 Ratings
