Artificial intelligence is rapidly transforming the consumer financial services industry. From underwriting and fraud detection to customer engagement and collections, financial institutions are increasingly deploying advanced AI tools to automate processes, personalize services, and improve operational efficiency. Today we are releasing, on our Consumer Finance Monitor Podcast, a discussion of what may be the next major technological shift for the industry, Agentic AI in Consumer Financial Services: AI systems capable of acting autonomously, making decisions, and interacting directly with consumers.

The discussion featured Professor Oren Bar-Gill of New York University School of Law, along with Ballard Spahr partners Joseph Schuster and Adam Maarec. It was hosted by Alan Kaplinsky, founder of the Consumer Financial Services Group, its practice group leader for 25 years, and now Senior Counsel. The panel examined how agentic AI differs from earlier forms of automation, the benefits it offers financial institutions and consumers, and the significant legal and regulatory risks it may create. Below are the key takeaways from the discussion.

What Is Agentic AI?

Agentic AI refers to AI systems that can independently take actions on behalf of users or organizations. Unlike traditional automation, which performs predefined tasks, or generative AI, which primarily produces content, agentic AI systems can:

· Make autonomous decisions
· Interact directly with consumers
· Initiate actions such as transactions or communications
· Learn from prior interactions

In financial services, these systems may soon conduct customer service interactions, initiate collections calls, execute payments, or manage purchasing tasks for consumers. While these capabilities promise major efficiencies, they also raise complex legal questions regarding accountability, fairness, and consumer protection.
Understanding AI-Driven Consumer Harm

Professor Bar-Gill framed the discussion by examining potential consumer harms associated with AI-powered decision-making. Drawing on his recent book with Cass Sunstein, Algorithmic Harm: Protecting People in the Age of Artificial Intelligence, he explained that the impact of AI depends largely on the type of market in which it operates. The book is available on Amazon.

Sophisticated vs. Unsophisticated Markets

Bar-Gill distinguishes between:

· Sophisticated markets, where consumers are generally able to make informed decisions
· Unsophisticated markets, where consumers are more likely to misunderstand complex products

In sophisticated markets, AI-driven personalization, such as individualized pricing, can increase efficiency and expand access to products by offering lower prices to consumers with lower willingness to pay. In contrast, in markets involving complex financial products, such as credit cards, mortgages, or insurance, AI-powered personalization may harm consumers who misjudge product costs or benefits. For example, if a consumer mistakenly overestimates the value of a financial product, an AI system may set the price just below that mistaken valuation, leading the consumer to pay more than the product is actually worth.

Algorithmic Price Discrimination

One area of growing concern is AI-enabled price discrimination, where algorithms tailor prices to each consumer's willingness to pay. Examples cited during the discussion included:

· Airlines experimenting with AI-based pricing strategies
· Online retail platforms offering individualized prices for identical products
· Insurance companies using algorithms to optimize premiums

While pricing based on individual risk, such as in insurance underwriting, is widely accepted, pricing based on willingness to pay raises significant consumer protection concerns.
As these practices expand, they are likely to attract increased attention from regulators and lawmakers, particularly at the state level.

AI Use Cases in Consumer Finance

The panel also highlighted several areas where AI is already being deployed across the consumer financial services lifecycle.

Marketing and Customer Acquisition

Financial institutions are using AI to analyze large data sets and create highly personalized marketing campaigns. Large language models can generate customized messaging tailored to specific demographic groups or individual consumers. While this personalization improves targeting and engagement, it also creates compliance challenges related to:

· Misleading advertising
· Disclosure requirements
· Potential discriminatory targeting

Underwriting and Credit Decisions

AI-driven underwriting tools allow lenders to analyze alternative data, such as cash-flow information, to assess creditworthiness. These tools may expand access to credit for consumers who previously lacked traditional credit histories. However, they also raise fair lending concerns under laws such as the Equal Credit Opportunity Act and its implementing regulation, Regulation B. Because many AI models operate as "black boxes," institutions may struggle to explain how decisions are made, an issue that can complicate discrimination analyses and regulatory oversight.

Fraud Detection

AI is particularly powerful in fraud detection, where pattern recognition is essential. Advanced models can analyze transaction behavior in real time to identify suspicious activity while minimizing unnecessary transaction declines. These tools also allow financial institutions to communicate with customers instantly, confirming transactions or investigating suspicious activity through automated interactions.
Servicing and Collections

Agentic AI may soon conduct both inbound and outbound customer interactions, including:

· Customer service conversations
· Dispute resolution
· Collections calls

In some cases, AI-driven voice systems can conduct conversations that are indistinguishable from human interactions. While this technology may improve efficiency and reduce costs, it raises legal concerns about consumer deception, harassment, and compliance with debt collection laws.

Core Legal Risks

Despite the novelty of the technology, many of the key legal risks arise from existing laws, not new AI-specific statutes.

Liability for AI Actions

As Joseph Schuster emphasized, AI is a tool, not a liability shield. Institutions remain responsible for the actions of AI systems just as they would be for the actions of employees or third-party vendors. Traditional legal doctrines, including agency law, vicarious liability, and prohibitions on unfair or deceptive acts or practices, continue to apply.

UDAP Risks

AI systems interacting with consumers may create risks under federal and state UDAP laws if they:

· Provide inaccurate information ("hallucinations")
· Fail to deliver required disclosures
· Exhibit overconfidence in uncertain responses
· Engage in manipulative behavioral targeting

Fair Lending and Discrimination

AI models can unintentionally produce discriminatory outcomes, even when protected characteristics are not used as inputs. As Professor Bar-Gill noted, future litigation may increasingly focus on disparate impact analysis, which examines whether outcomes disproportionately affect protected classes regardless of the model's internal logic.

Governance and Risk Management

Given these risks, institutions are increasingly adopting governance frameworks for AI deployment.
Common practices include:

· AI governance committees with cross-functional participation
· Model inventories and risk-tiering systems
· Vendor due diligence for AI providers
· Data mapping and validation processes
· Continuous monitoring of AI outputs

Financial regulators are already asking supervised institutions detailed questions about how AI is being used. Institutions that implement structured governance processes are better positioned to respond to these inquiries.

The Rise of Agentic Commerce

One emerging application of agentic AI involves autonomous purchasing. For example, a consumer might instruct an AI assistant to plan and purchase supplies for a birthday party. The AI would then select vendors, place orders, and initiate payments using the consumer's stored payment credentials. But what happens if the AI makes a mistake, such as ordering supplies for 1,000 guests instead of 10? Such scenarios raise difficult questions involving:

· Consumer authorization
· Merchant liability
· Payment network rules
· Dispute resolution

These issues are only beginning to receive attention from regulators and industry participants.

Key Takeaways for Financial Institutions

The panel concluded with several recommendations for institutions exploring AI deployment.

First, distinguish beneficial uses from harmful ones. AI can deliver significant consumer benefits, but firms must remain vigilant about potential misuse or unintended harm.

Second, prioritize governance. Robust policies, oversight structures, and risk management processes are essential. Th