The Road to Accountable AI

Kevin Werbach

Artificial intelligence is changing business, and the world. How can you cut through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on the legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world today.

  1. Dean Ball: The World is Going to Be Totally Different in 10 Years

    2 days ago

    Kevin Werbach interviews Dean Ball, Senior Fellow at the Foundation for American Innovation and one of the key shapers of the Trump Administration's approach to AI policy. Ball reflects on his career path from writing and blogging to shaping federal policy, including his role as Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary drafter of the Trump Administration's recent AI Action Plan. He explains how he has developed influence through a differentiated viewpoint: rejecting the notion that AI progress will plateau and emphasizing that transformative adoption is what will shape global competition. He critiques both the Biden administration's "AI Bill of Rights" approach, which he views as symbolic and wasteful, and the European Union's AI Act, which he argues imposes impossible compliance burdens on legacy software while failing to anticipate the generative AI revolution. By contrast, he describes the Trump administration's AI Action Plan as focused on pragmatic measures under three pillars: innovation, infrastructure, and international security. Looking forward, he stresses that U.S. competitiveness depends less on being first to frontier models than on enabling widespread deployment of AI across the economy and government. Finally, Ball frames tort liability as an inevitable and underappreciated force in AI governance, one that will challenge companies as AI systems move from providing information to taking actions on users' behalf.

    Dean Ball is a Senior Fellow at the Foundation for American Innovation, author of Hyperdimensional, and former Senior Policy Advisor at the White House OSTP. He has also held roles at the National Science Foundation, the Mercatus Center, and Fathom. His writing spans artificial intelligence, emerging technologies, bioengineering, infrastructure, public finance, and governance, with publications at institutions including Hoover, Carnegie, FAS, and American Compass.

    Transcript: https://drive.google.com/file/d/1zLLOkndlN2UYuQe-9ZvZNLhiD3e2TPZS/view
    America's AI Action Plan
    Dean Ball's Hyperdimensional blog

    38 min
  2. David Hardoon: You Can't Outsource Accountability

    September 18

    Kevin Werbach interviews David Hardoon, Global Head of AI Enablement at Standard Chartered Bank and former Chief Data Officer of the Monetary Authority of Singapore (MAS), about the evolving practice of responsible AI. Hardoon reflects on his perspective straddling both government and private-sector leadership roles, from designing the landmark FEAT principles at MAS to embedding AI enablement inside global financial institutions. Hardoon explains the importance of justifiability, a concept he sees as distinct from ethics or accountability. Organizations must not only justify their AI use to themselves, but also to regulators and, ultimately, the public. At Standard Chartered, he focuses on integrating AI safety and AI talent into one discipline, arguing that governance is not a compliance burden but a driver of innovation and resilience. In the era of generative AI and black-box models, he stresses the need to train people in inquiry: interrogating outputs, cross-referencing, and, above all, exercising judgment. Hardoon concludes by reframing governance as a strategic advantage: not a cost center, but a revenue enabler. By embedding trust and transparency, organizations can create sustainable value while navigating the uncertainties of rapidly evolving AI risks.

    David Hardoon is the Global Head of AI Enablement at Standard Chartered Bank, with over 23 years of experience in data and AI across government, finance, academia, and industry. He was previously the first Chief Data Officer at the Monetary Authority of Singapore and CEO of Aboitiz Data Innovation.

    MAS FEAT Principles on Responsible AI (2018)
    Veritas Initiative – MAS-backed consortium applying FEAT principles in practice
    Can AI Save Us From the Losing War With Scammers? Perhaps (Business Times, 2024)
    Can Artificial Intelligence Be Moral? (Business Times, 2021)

    37 min
  3. Karine Perset: Building Bridges for Global AI Governance

    September 11

    Kevin Werbach interviews Karine Perset, Acting Head of the OECD’s AI and Emerging Technology Division, about the global effort to shape responsible AI. Perset explains how the OECD—an intergovernmental organization with 38 member countries—has become a central forum for governments to cooperate on complex, interdependent challenges like AI. Since launching its AI foresight forum in 2016, the OECD has spearheaded two cornerstone initiatives: the OECD Recommendation on AI, the first intergovernmental AI standard, adopted in 2019, and OECD.AI, a policy observatory that tracks global trends, policies, and metrics. Perset highlights the organization’s unique role in convening evidence-based dialogue across governments, experts, and stakeholders worldwide. She describes the challenge of reconciling diverse national approaches while developing common tools, like a global incident-reporting framework and over 250 indicators that measure AI maturity across investment, research, infrastructure, and workforce skills. She underscores both the urgency and the opportunity: AI systems are diffusing rapidly across all sectors, powered by common algorithms that create shared risks. Without aligned safeguards and interoperable standards, countries risk repeating one another’s mistakes. Yet if governments can coordinate, share data responsibly, and support one another’s policy development, AI can strengthen economic resilience, innovation, and public trust.

    Karine Perset is the Acting Head of the OECD AI and Emerging Digital Technologies Division, where she oversees the OECD.AI Policy Observatory, the Global Partnership on AI (GPAI) and its integrated network of experts, as well as the OECD Global Forum on Emerging Technologies. She oversees the development of analysis, policies, and tools in line with the OECD AI Principles, and helps governments manage the opportunities and challenges that AI and emerging technologies raise. Previously she was Advisor to ICANN’s Governmental Advisory Committee and Counsellor to the OECD’s Science, Technology and Industry Director.

    OECD.ai

    30 min
  4. DJ Patil: AI's Steering Wheel Challenge

    September 4

    Kevin Werbach interviews DJ Patil, the first U.S. Chief Data Scientist under the Obama Administration, about the evolving role of AI in government, healthcare, and business. Patil reflects on how the mission of government data leadership has grown more critical today: ensuring good data, using it responsibly, and unleashing its power for public benefit. He describes both the promise and the paralysis of today’s “big data” era, where dashboards abound, but decision-making often stalls. He highlights the untapped potential of federal datasets, such as the VA’s Million Veterans Project, which could accelerate cures for major diseases if unlocked. Yet funding gaps, bureaucratic resistance, and misalignment with Congress continue to stand in the way. Turning to AI, Patil describes a landscape of extraordinary progress: tools that help patients ask the right questions of their physicians, innovations that enhance customer service, and a wave of entrepreneurial energy transforming industries. At the same time, he raises alarms about inequitable access, job disruption, complacency in relying on imperfect systems, and the lack of guardrails to prevent harmful misuse. Rather than relentlessly stepping on the gas in the AI "race," he emphasizes, we need a steering wheel, in the form of public policy, to ensure that AI development serves the public good.

    DJ Patil is an entrepreneur, investor, scientist, and public policy leader who served as the first U.S. Chief Data Scientist under the Obama Administration. He has held senior leadership roles at PayPal, eBay, LinkedIn, and Skype, and is currently a General Partner at Greylock Ventures. Patil is recognized as a pioneer in advancing the use of data science to drive innovation, inform policy, and create public benefit.

    Transcript
    Ethics of Data Science, Co-Authored by DJ Patil

    43 min
  5. Kay Firth-Butterfield: Using AI Wisely

    June 26

    Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world’s first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She traces the field from its early days, when it was dominated by philosophical debates, to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without safeguards is eroding public confidence. Drawing on her work with Fortune 500 firms and her own cancer journey, she argues for human-centered AI, especially in high-stakes areas like healthcare and law. She also underscores the equity issues tied to biased training data and lack of access in the Global South, noting that AI is now generating data based on historical biases. Despite these challenges, she remains optimistic and calls for greater focus on sustainability, access, and AI literacy across sectors.

    Kay Firth-Butterfield is the founder and CEO of Good Tech Advisory LLC. She was the world’s first C-suite appointee in AI ethics and was the inaugural Head of AI and Machine Learning at the World Economic Forum from 2017 to 2023. A former judge and barrister, she advises governments and Fortune 500 companies on AI governance and remains affiliated with Doughty Street Chambers in the UK.

    Transcript
    Kay Firth-Butterfield Is Shaping Responsible AI Governance (Time100 Impact Awards)
    Our Future with AI Hinges on Global Cooperation
    Building an Organizational Approach to Responsible AI
    Co-Existing with AI - Firth-Butterfield's Forthcoming Book

    30 min
  6. Dale Cendali: How Courts (and Maybe Congress!) Will Determine AI's Copyright Fate

    June 19

    Kevin Werbach interviews Dale Cendali, one of the country’s leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on generative AI training, what counts as infringement in AI outputs, and what constitutes sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution.

    Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm’s nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute’s Copyright Restatement project and sits on the Board of the International Trademark Association.

    Transcript
    Thomson Reuters Wins Key Fair Use Fight With AI Startup
    Dale Cendali - 2024 Law360 MVP
    Copyright Office Report on Generative AI Training

    40 min
  7. Brenda Leong: Building AI Law Amid Legal Uncertainty

    June 12

    Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

    Brenda Leong is Director of ZwillGen’s AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

    Transcript
    AI Audits: Who, When, How...Or Even If?
    Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda

    37 min
  8. Shameek Kundu: AI Testing and the Quest for Boring Predictability

    June 5

    Kevin Werbach interviews Shameek Kundu, Executive Director of the AI Verify Foundation, to explore how organizations can ensure AI systems work reliably in real-world contexts. AI Verify, a government-backed nonprofit in Singapore, aims to build scalable, practical testing frameworks to support trustworthy AI adoption. Kundu emphasizes that testing should go beyond models to include entire applications, accounting for their specific environments, risks, and data quality. He draws on lessons from AI Verify’s Global AI Assurance pilot, which matched real-world AI deployers—such as hospitals and banks—with specialized testing firms to develop context-aware testing practices. Kundu explains that the rise of generative AI and widespread model use has expanded risk and complexity, making traditional testing insufficient. Instead, companies must assess whether an AI system performs well in context, using tools like simulation, red teaming, and synthetic data generation, while still relying heavily on human oversight. As AI governance evolves from principles to implementation, Kundu makes a compelling case for technical testing as a backbone of trustworthy AI.

    Shameek Kundu is Executive Director of the AI Verify Foundation. He previously held senior roles at Standard Chartered Bank, including Group Chief Data Officer and Chief Innovation Officer, and co-founded a startup focused on testing AI systems. Kundu has served on the Bank of England’s AI Forum, Singapore’s FEAT Committee, the Advisory Council on Data and AI Ethics, and the Global Partnership on AI.

    Transcript
    AI Verify Foundation
    Findings from the Global AI Assurance Pilot
    Starter Kit for Safety Testing of LLM-Based Applications

    37 min

Ratings & Reviews

5 out of 5
23 ratings
