13 episodes

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on the legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world today.

The Road to Accountable AI
Kevin Werbach

    • Technology


    Navrina Singh: AI Alignment in Practice

    Kevin Werbach speaks with Navrina Singh of Credo AI, which automates AI oversight and regulatory compliance. Singh addresses the increasing importance of trust and governance in the AI space. She discusses the need to standardize and scale oversight mechanisms by helping companies align their AI systems with the expectations of all stakeholders and comply with emerging global standards. Kevin and Navrina also explore the importance of sociotechnical approaches to AI governance, the necessity of mandated AI disclosures, the democratization of generative AI, adaptive policymaking, and the need for enhanced AI literacy within organizations to keep pace with evolving technologies and regulatory landscapes.
    Navrina Singh is the Founder and CEO of Credo AI, a Governance SaaS platform empowering enterprises to deliver responsible AI. Navrina previously held multiple product and business leadership roles at Microsoft and Qualcomm. She is a member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee (NAIAC), an executive board member of Mozilla Foundation, and a Young Global Leader of the World Economic Forum. 
    Credo.ai
    ISO/IEC 42001 standard for AI governance
    Navrina Singh Founded Credo AI To Align AI With Human Values
     
    Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.

    • 37 min.
    Scott Zoldi: Using the Best AI Tools for the Job

    Kevin Werbach speaks with Scott Zoldi of FICO, which pioneered consumer credit scoring in the 1950s and now offers a suite of analytics and fraud detection tools. Zoldi explains the importance of transparency and interpretability in AI models, emphasizing a “simpler is better” approach to creating clear and understandable algorithms. He discusses FICO's approach to responsible AI, which includes establishing model governance standards, and enforcing these standards through the use of blockchain technology. Zoldi explains how blockchain provides an immutable record of the model development process, enhancing accountability and trust. He also highlights the challenges organizations face in implementing responsible AI practices, particularly in light of upcoming AI regulations, and stresses the need for organizations to catch up in defining governance standards to ensure trustworthy and accountable AI models.
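    The blockchain-backed governance Zoldi describes boils down to an append-only record: each step of model development (data approvals, validation results, sign-offs) is written as an entry whose hash incorporates the previous entry's hash, so the history cannot be quietly rewritten later. The sketch below is a minimal, hypothetical illustration of that idea in Python; the ModelGovernanceLedger class and its fields are invented for this example and are not FICO's actual system.

import hashlib
import json
import time

class ModelGovernanceLedger:
    """Hypothetical append-only ledger of model-development events.

    Each record stores the hash of the previous record, so altering any
    earlier entry breaks the chain -- the immutability property Zoldi
    attributes to blockchain-based model governance.
    """

    def __init__(self):
        self.records = []

    def append(self, event: dict) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        record = {**body, "hash": digest}
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash and confirm the chain is unbroken."""
        prev_hash = "0" * 64
        for record in self.records:
            body = {k: record[k] for k in ("timestamp", "event", "prev_hash")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if record["prev_hash"] != prev_hash or record["hash"] != digest:
                return False
            prev_hash = record["hash"]
        return True

# Example usage with invented events from a credit-model build:
ledger = ModelGovernanceLedger()
ledger.append({"step": "data_approval", "dataset": "applications_2023"})
ledger.append({"step": "model_validation", "auc": 0.81, "approved_by": "model_risk"})
assert ledger.verify()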
    Dr. Scott Zoldi is Chief Analytics Officer of FICO, responsible for analytics and AI innovation across FICO's portfolio. He has authored more than 130 patents, and is a long-time advocate and inventor in the space of responsible AI. He was nominated for American Banker's 2024 Innovator Award and received Corinium's Future Thinking Award in 2022. Zoldi is a member of the Board of Advisors for FinRegLab, and serves on the Boards of Directors of Software San Diego and the San Diego Cyber Center of Excellence. He received his Ph.D. in theoretical and computational physics from Duke University.
     
    Navigating the Wild AI with Dr. Scott Zoldi
     
    How to Use Blockchain to Build Responsible AI
     
    The State of Responsible AI in Financial Services

    • 31 min.
    Olivia Gambelin: Ethical AI as Good Business Practice

    Professor Kevin Werbach and AI ethicist Olivia Gambelin discuss the moral responsibilities surrounding artificial intelligence, and the practical steps companies should take to address them. Olivia explains how companies can begin their responsible AI journey, starting with taking inventory of their systems and using Olivia's Values Canvas to map the ethical terrain. Kevin and Olivia delve into the potential reasons companies avoid investing in ethical AI, the financial and compliance benefits of making the investment, and the best practices of companies that succeed in AI governance. Olivia also discusses her initiative to build a network of responsible AI practitioners and promote the development of the field.
    Olivia Gambelin is founder and CEO of Ethical Intelligence, an advisory firm specializing in Ethics-as-a-Service for businesses ranging from Fortune 500 companies to Series A startups. Her book, Responsible AI, offers a comprehensive guide to integrating ethical practices into AI deployment. She serves on the Founding Editorial Board of Springer Nature's AI and Ethics Journal, co-chairs the IEEE AI Expert Network Criteria Committee, and advises the Ethical AI Governance Group and The Data Tank. She is deeply involved in both the Silicon Valley startup ecosystem and AI policy and regulation in Europe.
    Olivia Gambelin's Website
    Responsible AI: Implement an Ethical Approach in Your Organization
    The EI (Ethical Intelligence) Network 
    The Values Canvas
     

    • 34 min.
    Beth Noveck: Effective and Accountable AI in the Public Sector

    Join Professor Kevin Werbach and Beth Noveck, New Jersey's first Chief AI Strategist, as they explore AI's transformative power in public governance. Beth reveals how AI is revolutionizing government operations, from rewriting complex unemployment insurance letters in plain English to analyzing call data for faster responses. They discuss New Jersey's innovative use of generative AI to cut response times in half, empowering public servants to better serve their communities while balancing ethical considerations and privacy concerns. Learn about New Jersey's training programs, sandboxes, and pilot projects designed to integrate AI safely into public service. Beth also shares inspiring global examples, like Taiwan's citizen-engaged decision-making processes and Iceland's Better Reykjavik initiative, which inform local projects like New Jersey's mycareernj.gov career coaching tool. 
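    The letter-rewriting use case Noveck describes is, at its core, a prompting task: give a generative model the original notice and an instruction to restate it in plain English while preserving every deadline and obligation, then have a person review the draft. The snippet below is a hypothetical sketch of such a workflow using the OpenAI Python client; the model name, reading-level target, and prompt wording are assumptions, not details of New Jersey's actual deployment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You rewrite government unemployment-insurance letters in plain English "
    "at roughly an 8th-grade reading level. Keep every deadline, amount, and "
    "legal obligation from the original; do not add or remove requirements."
)

def rewrite_in_plain_language(original_letter: str) -> str:
    # Low temperature: favor fidelity to the source letter over creativity.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model, chosen only for illustration
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": original_letter},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

# A public servant still reviews the draft before it goes out; the episode
# frames AI as assisting staff, not replacing their judgment.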
    Beth Simone Noveck directs the Governance Lab (GovLab) at New York University's Tandon School of Engineering. As the inaugural U.S. Deputy Chief Technology Officer and leader of the White House Open Government Initiative under President Obama, she crafted innovative strategies to enhance governmental transparency, cooperation, and public engagement. Noveck authored "Wiki Government," a seminal work advocating for the use of digital tools to revolutionize civic interaction. Her roles have included Chief Innovation Officer for New Jersey and Senior Advisor for the Open Government Initiative, earning her wide acclaim and numerous accolades for her contributions to the field. Noveck's work emphasizes the transformative potential of technology in fostering more open, transparent, and participatory governance structures.
    Open Government Initiative 
    The GovLab
    Wiki Government
    Beth Noveck TED Talk: Demand a more open-source government
     

    • 37 min.
    Jean-Enno Charton: Operationalizing AI Ethics

    Join Professor Kevin Werbach and Jean-Enno Charton, Director of Digital Ethics and Bioethics at Merck KGaA, as they explore the ethical challenges of AI in healthcare and life sciences. Charton delves into the evolution of Merck's AI ethics program, which stemmed from their bioethics advisory panel addressing complex ethical dilemmas in areas like fertility research and clinical trials. He details the formation of a dedicated digital ethics panel, incorporating industry experts and academics, and the development of the Principle at Risk Analysis (PARA) tool to identify and mitigate ethical risks in AI applications. Highlighting the significance of trust, transparency, and pragmatic solutions, Charton discusses how these principles are applied across Merck's diverse business units. Listen in for a thorough examination of the intersection of bioethics, trust, and AI.
    Jean-Enno Charton is the Chief Data and AI Officer at Merck KGaA, a global pharmaceutical and life sciences company. He chairs the Digital Ethics Advisory Panel, focusing on ethical data use and AI applications within the company. Charton led the development of Merck's Code of Digital Ethics, which guides ethical principles such as autonomy, justice, and transparency in digital initiatives. A recognized speaker on digital ethics, his work contributes to responsible data-driven technology deployment in the healthcare and life sciences sector.
    Merck Code of Digital Ethics
    IEEE Ethically Aligned Design
    Principle-at-Risk Analysis
     

    • 35 min.
    Dominique Shelton Leipzig: Building Trust When Every Company is an AI Company

    Join Professor Kevin Werbach and Dominique Shelton Leipzig, an expert in data privacy and technology law, as they share practical insights on AI's transformative potential and regulatory challenges in this episode of The Road to Accountable AI. They dissect the ripple effects of recent legislation, and why setting industry standards and codifying trust in AI are more than mere legal checkboxes; they are the bedrock of innovation and integrity in business. Moving from theory to practice, the episode examines what it truly means to govern AI systems that are accurate, safe, and respectful of privacy. Kevin and Dominique navigate the high-risk scenarios outlined by the EU and discuss how companies can future-proof their brands by adopting AI governance strategies.
    Dominique Shelton Leipzig is a partner and head of the Ad Tech Privacy & Data Management team and the Global Data Innovation team at the law firm Mayer Brown. She is the author of the recent book Trust: Responsible AI, Innovation, Privacy and Data Leadership. Dominique co-founded NxtWork, a non-profit aimed at diversifying leadership in corporate America, and has trained over 50,000 professionals in data privacy, AI, and data leadership. She has been named a "Legal Visionary" by the Los Angeles Times, a "Top Cyber Lawyer" by the Daily Journal, and a "Leading Lawyer" by Legal 500. 
    Trust: Responsible AI, Innovation, Privacy and Data Leadership
    Mayer Brown Digital Trust Summit
    A Framework for Assessing AI Risk
    Dominique’s Data Privacy Recommendation Enacted in Biden’s EO
     

    • 34 min.
