AI Risk Reward

Alec Crawford

I am your host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc., and this is AI Risk-Reward, a podcast about balancing the risk and reward of using AI personally, professionally, and as a large organization! We will discuss hot topics such as: Will AI take my job or make it better? When I ask ChatGPT work questions, is that even safe? From an ethical perspective, is it enough for big companies to anonymize private data before using it? (Probably not.) I discuss these issues with AI experts to answer burning questions and stay ahead of the curve on AI. I’d also like to give a shoutout to our podcast producer and audio engineering team at Troutman Street Audio. You can check them out on LinkedIn.

  1. 2D AGO

    Jack Hubbard on AI in Banking, Staying Safe With AI, and Building a Career Through Diverse Roles

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com , interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this episode, Alec speaks with Jack Hubbard, Chairman of St. Meyer and Hubbard, about his accidental path from aspiring sports broadcaster to longtime banker, consultant, and board member. Jack explains why community banks can no longer afford to delay AI adoption, noting that bankers are already using these tools and need secure, institution-approved options instead of ungoverned workarounds. He shares how AI can transform sales preparation and pre-call planning, while emphasizing that CEOs must learn the technology themselves if they want their organizations to use it effectively. The conversation also focuses on ethical AI use, including the need for clear policies, human oversight, role-specific training, and leadership accountability across the bank. Jack closes with practical career advice for younger bankers, encouraging them to find mentors, gain broad experience, attend banking schools, and commit to lifelong learning. 
Summary:
- Accidental Career Journey: Jack Hubbard reflects on the unexpected experiences that led him from college radio into a 53-year career in banking and consulting.
- AI in Community Banking: He argues that community banks must stop waiting on AI and instead provide safe, practical tools for bankers already experimenting with it.
- Leadership Responsibility: CEOs and senior leaders need hands-on AI understanding so they can fund, guide, and model adoption from the top.
- Ethics and Governance: Clear policies, human review, and strong training are essential to reduce data risks, compliance issues, and AI misuse.
- Banker Development: Jack encourages future bankers to seek mentors, pursue rotations, attend banking schools, and stay committed to reading and continuous learning.

Referenced in this episode:
Companies/Organizations: St. Meyer and Hubbard, Verapath, Northern Illinois University, Union Bank of Elgin, FTR, Harris Bank, BMO Harris, St. Charles Bank and Trust, Wintrust, Dynex Capital, Cornerstone Advisors, Performance Insights, RelPro, Vertical IQ, LinkedIn, Block, Peapack Gladstone Bank, Capital One, Fleet, American Bankers Association, Wharton School, University of Wisconsin, LSU School of Banking, Massachusetts Bankers, Perry School of Banking, Michigan Bankers Association, Selling Power, Barlow Research, Chicago Cubs
Books: Heart Spoken, Conversations with Prospects, I Know Jack: 53 Years of Banking Excellence
Movies: Animal House, Caddyshack

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    49 min
  2. APR 21

    Matthew Rosenquist on AI, Cyber Risk, and the Future of Defense

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com , interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this deep dive episode, Alec speaks with Matthew Rosenquist, cybersecurity strategist and CISO, about how AI is rapidly reshaping both cyber defense and cyber offense. Matthew explains how new AI models are dramatically accelerating vulnerability discovery and exploit creation, putting pressure on traditional patching, risk management, and incident response processes. He also shares practical guidance for consumers and businesses on defending against AI-powered phishing, deepfakes, account compromise, and unsafe use of public AI tools. The conversation highlights why strong fundamentals like multi-factor authentication, least-privilege access, segmented data practices, and careful verification matter more than ever in an AI-driven threat landscape. Alec and Matthew close by exploring the emerging risks of agentic AI and MCP-connected systems, emphasizing that companies must adopt AI security controls with urgency, discipline, and realistic expectations. 
Summary:
- AI-Driven Vulnerabilities: Matthew discusses how advanced AI models can find and exploit software flaws far faster than traditional security processes can handle.
- Consumer Cyber Hygiene: The episode stresses multi-factor authentication, account alerts, password discipline, and skepticism toward emails, texts, calls, and social media interactions.
- Deepfakes and Social Engineering: AI is making scams more personalized, scalable, and convincing, which means users must verify before trusting.
- Enterprise AI Risk: Companies need to be cautious with sensitive data in public AI tools and apply strong governance to internal AI deployments.
- Agentic AI Security: Granting broad permissions to AI agents creates major new attack surfaces, making least-privilege design and access controls essential.

Referenced in this episode:
Companies/Organizations: Verapath, Anthropic, Google, Western Union, Salesforce

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    51 min
  3. APR 14

    Antony Baker, CEO and Founder of FIFTEEN Group, on Using AI to Identify the Right People for Your Company

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com , interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this episode, Alec speaks with Antony Baker, CEO and Founder of FIFTEEN Group, about his unconventional path from championship sports to consulting and building AI-enabled business services. Antony explains how FIFTEEN Group was created to challenge traditional consulting models by combining talent assessment, process improvement, and practical AI adoption for mid-market companies. He emphasizes that successful AI implementation depends less on hype and more on human intelligence, training, change management, and starting with simple, high-friction business tasks that employees already dislike. The conversation also explores risks around governance, model changes, and the uncertainty created when organizations rely on rapidly evolving AI tools without strong controls. Alec and Antony close with a discussion on leadership, instinct, culture, and why hard work, talent, and adaptability remain essential even as AI becomes more embedded in business. 
Summary:
- Talent First: Antony Baker argues that strong people, work ethic, and the right cultural fit are the foundation for successful AI adoption.
- Practical AI Adoption: Companies get better results when they begin with simple use cases like meeting notes, email workflows, and reporting automation.
- Human and Artificial Intelligence: The episode highlights that AI performs best when paired with trained employees who know how to guide and educate the system.
- Governance Risk: Rapid model changes and limited user control can create serious challenges, especially for regulated industries and large enterprises.
- Entrepreneurial Mindset: Antony shares that resilience, learning through failure, and trusting instinct are critical to building durable businesses in fast-moving markets.

Referenced in this episode:
Companies/Organizations: FIFTEEN Group, Artificial Intelligence Risk, Inc., Nomura, SVB, PwC, EY, Barclays, Business AI Alliance, NatWest Markets, Microsoft, OpenAI, Claude, ChatGPT, Grok, Meta, UFC
Books: Principles
Movies: The Matrix
TV Shows: The Ultimate Fighter

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    55 min
  4. APR 7

    Aleks Jakulin of Data.Flowers on Governing AI Through Accountability and Resilience, Not Output Control

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this episode, Alec speaks with Aleks Jakulin, Founder and President of Data.Flowers, about why current AI governance approaches often focus too heavily on policing model outputs instead of building accountability around real-world actions and system resilience. Aleks argues that AI should be governed more like fire or other critical infrastructure, with strong safeguards, reporting mechanisms, and downstream institutional redesign rather than unrealistic attempts to fully control the technology itself. He also reflects on his early work in deep learning and computational conceptualization, explaining how machines can discover new concepts through interactions in data and why better data infrastructure will be essential for reliable AI systems. The conversation explores how AI is already breaking workflows in hiring, finance, education, and cybersecurity, and why organizations should prioritize resilience, accountability loops, and high-quality input data over superficial ethics frameworks. Alec and Aleks close by discussing the decentralized promise of open models, the need for incident reporting similar to aviation safety, and the long-term potential for AI to improve human flourishing through better communication, faster learning, and broader intelligence augmentation. 
Summary:
- AI Governance: Aleks argues that AI oversight should focus on accountability, resilience, and managing real-world consequences rather than policing every generated output.
- Data Infrastructure: High-quality, controllable data infrastructure is presented as the missing foundation for safer and more reliable AI adoption.
- System Resilience: Organizations need to redesign vulnerable processes in hiring, finance, education, and operations so they can withstand widespread AI use.
- Open Models: Aleks suggests AI is ultimately a decentralizing force, with open and local models expanding access and reducing dependence on centralized providers.
- Human Flourishing: The episode highlights AI’s potential to accelerate learning, improve visual communication, and support a more capable and intelligent society.

Referenced in this episode:
Companies/Organizations: Data.Flowers, Artificial Intelligence Risk, Inc., Columbia, Nvidia, NIST, OpenAI, Microsoft, OECD, MIT, FAA, NTSB, NASA, IRS, Google

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    1h 9m
  5. MAR 31

    Is AI Making Us Stupid? Michael Erlihson, PhD, Head of AI at DriveNets

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this episode, Alec welcomes Dr. Michael Erlihson, Math PhD, AI influencer, and Head of AI at DriveNets, for an insightful conversation on the evolving risks and opportunities in artificial intelligence. Dr. Erlihson shares his journey from a science-focused family in Russia to leading AI initiatives in Israel, emphasizing the foundational role of mathematics in modern AI. The discussion explores the theme "AI is making us stupid," drawing parallels to historical debates about technology’s impact on cognition, and offering strategies to ensure ongoing learning and critical thinking in an AI-driven world. Dr. Erlihson discusses his approach to reviewing scientific literature without AI tools, the importance of connecting historical math papers to today’s AI, and his work optimizing LLM inference costs. The episode closes with a practical lightning round covering AI’s impact on education, employment, data privacy, and the democratization of AI knowledge.
Summary:
- AI’s Cognitive Impact: Dr. Erlihson argues that while AI will make most people less knowledgeable, it can make a select few even smarter if used to augment ongoing learning.
- Mathematics in AI: Emphasizes the enduring importance of math in AI development, connecting historical mathematical insights to contemporary machine learning advances.
- Optimizing AI Infrastructure: Details DriveNets’ focus on reducing LLM inference costs to ensure the economic sustainability of AI deployment.
- Education & Employment: Raises critical concerns about the future of traditional education and white-collar employment as AI accelerates automation and self-learning.
- Data Privacy Risks: Highlights the underestimated risks of personalizing AI with private data and advocates for stronger safeguards and user control.

Referenced in this episode:
Companies/Organizations: DriveNets, Artificial Intelligence Risk, Inc., RCOM, NVIDIA, AMD, Google, AWS, Intel, DarwinAI, Apple
Podcasts: Data Science Decoded, ExplAInable
Movies: Snow White and the Seven Dwarfs, Terminator

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    44 min
  6. MAR 24

    Deep Dive: Trust, Quantum Computing, and the Future of AI Risk with Peter Mancini, Founder of A8A8

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this episode, Alec sits down with Peter Mancini, founder of A8A8, and a seasoned data science expert who has leveraged AI since 2005. Peter shares his unconventional entry into artificial intelligence and reflects on key lessons learned from years of deploying AI and quantum computing in high-stakes environments, including work for the US Army and financial institutions. The conversation explores the critical importance of trust, metacognition, and continuous risk assessment throughout the AI lifecycle, with practical anecdotes ranging from model uncertainty in banking to emergent cybersecurity vulnerabilities. Peter discusses the profound implications of AI’s collaborative nature, the ethical dilemmas posed by AI-generated content, and the evolving intersection of AI, quantum computing, and blockchain. The episode concludes with concrete recommendations for transparency, explainability, and incident response, emphasizing the need for vigilance against both known and unforeseen risks, including elusive black swan events.
Summary:
- Trust and Verification: Peter emphasizes that over-trusting AI models without robust verification is a primary and often overlooked risk.
- Metacognition in Risk Management: He advocates for ongoing critical thinking, group validation, and policy over rigid frameworks to manage AI risks.
- AI-Driven Cybersecurity Threats: Real-world examples illustrate how AI can inadvertently expose sensitive associations and aid adversaries, highlighting the need for advanced guardrails.
- Quantum Computing Integration: Peter discusses how quantum computing accelerates probabilistic analysis but may also expose encryption vulnerabilities and new risk vectors.
- Ethical and Societal Impacts: The episode covers manipulation risks, deepfake challenges, and the essential role of transparency and explainability for both users and developers.

Referenced in this episode:
Companies/Organizations: A8A8, Artificial Intelligence Risk, Inc., US Army, Fidelity Investments, Rocket Mortgage, OpenAI, Google, Meta, Microsoft
Movies: Blade Runner

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    1h 12m
  7. MAR 17

    What’s Working in AI Use Cases Now: Lucas Erb, LinkedIn Top Voice & AIexperts.com Founder

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this episode, Alec welcomes Lucas Erb, Founder of AIexperts.com and seasoned advisor on AI strategy, who shares his journey from early computer science interests to consulting at Deloitte, and ultimately founding his own firm. Lucas discusses the evolution of AI adoption, emphasizing the critical gap in mid-market business AI enablement and describing how his company demystifies automation and agent-based solutions for this segment. Key practical examples are explored, focusing on AI’s real-world impact—particularly in sales automation and productivity—rather than generic tool adoption. The conversation also dives deep into the ethical and social challenges of AI, highlighting the ongoing risks of bias and the necessity for thoughtful, transparent implementation. Alec and Lucas conclude with insights into future workforce implications, AI for good initiatives, and advice for young professionals navigating the rapidly changing technology landscape.
Summary:
- AI Journey: Lucas Erb recounts his path from early technical curiosity to founding AIexperts.com, highlighting his time at HP and Deloitte.
- Mid-Market Enablement: He identifies a critical gap in AI adoption for midsize businesses and shares how his firm provides practical, ROI-driven automation.
- Ethical Challenges: The episode addresses pressing issues around model bias, data selection, and the importance of ongoing evaluation to ensure fairness.
- Future of Work: Discussion centers on the shifting landscape for new graduates and the need for leaders to shape a responsible AI-driven workforce.
- AI for Good: Lucas underscores the importance of broad participation in AI ethics and safety, stressing that collective action is necessary to keep pace with innovation.

Referenced in this episode:
Companies/Organizations: AIexperts.com, Artificial Intelligence Risk, Inc., Deloitte, HP, Anthropic, Accenture, McKinsey, Harvard University (AI for Human Flourishing Program), NASDAQ, MIT, Global AI Ethics Institute, Xerox PARC, Apple
Movies: Inception, Jurassic Park

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    40 min
  8. MAR 10

    Deep Dive: AI Policy and Risk Governance with Asad Ramzanali, Director of AI and Tech Policy

    In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn. In this deep dive episode, Alec welcomes Asad Ramzanali, Director of AI and Tech Policy at the Vanderbilt Policy Accelerator, for a comprehensive discussion on the current landscape of AI policy and risk governance. Asad explains how AI’s broad and general-purpose nature requires sector-specific regulatory strategies, emphasizing that existing frameworks must adapt to both new and exacerbated risks. The conversation covers the challenges of benchmarking and evaluating large models, the balance between federal and state governance, and the ongoing debate over regulation versus innovation. Asad highlights the importance of direct regulatory interventions, robust enforcement mechanisms, and maintaining public trust, particularly as AI adoption accelerates across public and private sectors. The episode closes with reflections on economic disruption, business model risks, and future research priorities in AI policy.
Summary:
- Defining AI Risk: Asad stresses the need for adaptable, use-case-driven frameworks due to AI’s general-purpose scope.
- Sectoral Regulation: Different regulators must address AI risks where they specifically arise, especially in finance, health, and national security.
- Benchmarking Challenges: Evaluating AI models requires independent, evolving methodologies, not just self-reported metrics from companies.
- Regulation vs. Innovation: The current regulatory environment is far from overreaching, and well-crafted policies can actually foster safer innovation.
- Accountability and Public Trust: Clear liability, enforcement, and transparency are critical for democratic legitimacy and effective AI risk management.

Referenced in this episode:
Companies/Organizations: Vanderbilt Policy Accelerator, Artificial Intelligence Risk, Inc., Vanderbilt University, FDA (U.S. Food and Drug Administration), FCC (Federal Communications Commission), NIST (National Institute of Standards and Technology), OpenAI, Anthropic, Google, NOAA (National Oceanic and Atmospheric Administration), Hamilton Project (Brookings Institution), Global AI Ethics Institute
Movies: Terminator

Copyright © 2026 by Artificial Intelligence Risk, Inc.

    50 min
4.8 out of 5 (16 Ratings)
