The AI Grapple

Unravel the complexities of AI with The AI Grapple Podcast, hosted by Kate vanderVoort. Dive into thought-provoking discussions on the most critical AI issues shaping our world. Perfect for marketers and business professionals, this podcast is your guide to integrating AI responsibly and ethically into your organization. Join us as we navigate the future of technology and its profound impact on humanity.

  1. MAR 2

    Ep 50: Mari Smith on Trust, AI, and Staying Human in the Future of Social Media

    For our milestone 50th episode of The AI Grapple, Kate vanderVoort (Founder and CEO, AI Success Lab) sits down with social media pioneer Mari Smith, widely known as the “Queen of Facebook.” With nearly two decades at the forefront of social media marketing, Mari has witnessed the rise, peak and reinvention of platforms like Facebook. Now, as AI reshapes marketing, business and human interaction itself, she finds herself at a crossroads, asking the same question many marketers are grappling with: what does authentic connection look like in a world where AI generates most of what we see? This episode explores the future of social media, the erosion of trust online, Meta’s acquisition of Manus, agentic AI, and how Human Design can help business leaders stay grounded and aligned in an AI-saturated world.

    In This Episode We Explore

    The Trust Crisis in Social Media
    Mari reflects on how the early days of Facebook enabled genuine connection and direct access to thought leaders. Today, with AI-generated content flooding platforms and algorithms shaping perception, trust is declining. Key themes discussed:
    • The prediction that 90 percent of online content could soon be AI-generated
    • Why community may be the last true moat in business
    • The shift from social media to “interest media” and potentially “intent media”
    • The importance of mindful scrolling versus mindless consumption
    • Why discernment is becoming a critical business skill

    Meta’s Acquisition of Manus and the Rise of Agentic AI
    Kate and Mari unpack Meta’s acquisition of Manus and what it signals for the future of AI integration within social platforms. They discuss:
    • The difference between generative AI and agentic AI
    • Why agentic AI is a much bigger shift for marketers than simple content tools
    • How AI agents may soon manage advertising, workflows and customer interactions
    • The implications for small businesses and ad specialists
    • Security, trust and the risks of giving AI deeper access to our systems
    This conversation highlights how AI is rapidly moving from content assistant to autonomous operator.

    Human Design and Authentic Marketing in an AI World
    Mari shares how her deep study of Human Design has reshaped the way she approaches business and marketing. Topics include:
    • What Human Design is and how it differs from traditional personality systems
    • The concept of inner authority and body-based decision-making
    • Why traditional urgency marketing can conflict with personal alignment
    • How understanding your energy type can improve marketing strategy
    • The importance of saying no to opportunities that do not feel aligned
    Rather than outsourcing identity to AI or blindly following marketing formulas, Mari advocates returning to self-awareness and personal authority.

    Staying Human in an AI-Saturated World
    As AI tools become embedded into everything, the conversation turns to responsibility. Key reflections include:
    • AI as a mirror of human consciousness
    • The danger of outsourcing thinking and creativity
    • The difference between augmentation and replacement
    • Why live video and in-person experiences may increase in value
    • The growing need for trusted guides, coaches and human leadership
    Mari shares her optimistic vision of a future where empowered individuals collaborate more deeply and bring their uniquely human strengths forward.

    Practical Takeaways for Marketers
    • Build real community, not just audience size
    • Prioritise human-to-human conversations in DMs and private groups
    • Use AI as a co-creative partner, not a replacement for your voice
    • Develop discernment before reacting or amplifying content
    • Lean into what energises you rather than rigid posting formulas
    • Protect your authenticity even when using AI tools

    Mari’s Favourite AI Tools
    • ChatGPT (including a custom GPT trained on her Human Design profile)
    • Fathom AI for meeting notes
    • Granola AI for audio-based summaries
    • Descript for video editing and content repurposing

    About Mari Smith
    Mari Smith is a world-renowned social media thought leader and online marketing strategist. Known as the Queen of Facebook, she has helped entrepreneurs and global brands build authentic, high-impact marketing strategies for over 20 years. Today, she blends her marketing expertise with the transformative lens of Human Design, guiding creators and business leaders to align with their true voice and thrive authentically in an AI-driven world.

    Connect with Mari Smith
    • Organisation: Mari Smith International, Inc.
    • Website: www.marismith.com
    • Facebook (Page): https://www.facebook.com/marismith
    • Facebook (Profile): https://www.facebook.com/maris
    • Instagram: https://www.instagram.com/mari_smith
    • LinkedIn: https://www.linkedin.com/in/marismith/

    Final Reflection
    As AI becomes embedded into social media, marketing and everyday life, the question is no longer whether AI will change business. It already has. The real question is this: will we use AI to replace our humanity, or to amplify it? Episode 50 of The AI Grapple invites you to grapple with exactly that.

    58 min
  2. FEB 26

    Ep 49: Raising AI-Literate Humans: What Schools and Parents Must Get Right Now

    AI is already shaping how young people learn, think, and engage with the world. The real question is whether education systems are preparing them for that reality. In this episode of The AI Grapple, Kate vanderVoort (Founder and CEO, the AI Success Lab) is joined by educator, author, and AI literacy advocate Lindy Hockenbary (also known as LindyHoc). Lindy brings nearly two decades of experience in instructional technology and curriculum design, working directly with K–12 schools, educators, and EdTech companies to make technology usable, relevant, and human-centred. Lindy’s work focuses on one clear message: AI literacy does not belong on the sidelines. It belongs in the core curriculum.

    What We Cover in This Episode

    Why AI literacy belongs in every subject
    Lindy explains why AI is not a “tech topic” but an extension of foundational learning. From pattern recognition in early maths to writing, research, and civics, AI intersects with every subject students already study.

    What AI literacy actually means
    Moving beyond buzzwords, Lindy breaks AI literacy down into three practical skills: understanding how AI works, evaluating its outputs, and using it with intention. These skills sit within broader information literacy and are essential for discernment in an AI-driven world.

    There’s no such thing as digital citizenship, just citizenship
    Online and offline life are no longer separate. Lindy challenges the idea that digital skills can be taught in isolation and explains why civics education must now include algorithms, data, and the influence of technology on democracy.

    How algorithms shape young people’s beliefs and behaviour
    From search results to social media feeds, students are constantly influenced by systems they don’t fully understand. Lindy shares why teaching kids how these systems work is key to protecting their agency and voice.

    AI, personalisation, and the classroom
    Kate and Lindy explore the promise of personalised learning, including how AI can reduce teacher workload and support students with anxiety or different learning needs. They also unpack the risks of overuse, screen fatigue, and losing collaboration in the classroom.

    Fear, misinformation, and job loss myths
    Lindy addresses common concerns among educators and parents, including fears about AI replacing teachers. She explains why those fears are fading as AI literacy increases and how understanding the technology reduces anxiety.

    Why AI literacy reduces misuse
    Research shows that when people understand AI better, they use it more selectively. Lindy explains the “magical thinking trap” and why low AI literacy often leads to over-reliance rather than thoughtful use.

    What parents and educators can do right now
    Practical ideas for starting conversations early, modelling healthy AI use, and using tools that encourage curiosity rather than shortcuts. This includes shared learning, asking better questions, and using AI to support thinking instead of replacing it.

    What becomes possible if we get this right
    Lindy paints a hopeful picture of a future where strong AI literacy creates more creativity, narrows equity gaps, and gives young people the space and confidence to tackle complex global problems.

    Tools and Resources Mentioned
    • NotebookLM – A powerful learning tool for summarising, exploring, and studying content
    • SchoolAI – Guardrailed AI tools designed for educators and students
    • Suno – AI music generation for creative learning and engagement
    • ChatGPT (Study Mode) – Using AI to guide thinking rather than provide answers
    • Varsity Tutors – Parent-Powered AI Webinar
    • The Good Robot podcast – Exploring different perspectives on AI

    About Lindy Hockenbary
    Lindy Hockenbary is an educator, author of A Teacher’s Guide to Online Learning, and a 2025 Women in AI honoree recognised by the ASU+GSV Summit. With a background in classroom teaching, instructional technology, and professional learning, she works globally with educators and EdTech companies to ensure AI tools support real learning needs. Her work centres on AI literacy, information literacy, and rethinking civics education for an AI-shaped world.

    Connect with Lindy
    • Website: https://lindyhoc.com
    • LinkedIn: https://www.linkedin.com/in/lindyhockenbary/

    Listen to the Episode
    🎧 Tune in to The AI Grapple with Lindy Hockenbary on your favourite podcast platform. If this episode sparked ideas or questions, share it with a parent, educator, or leader navigating AI right now.

    56 min
  3. FEB 16

    Ep 48: Australia’s AI Strategy Explained by the Head of the National AI Centre

    In this episode of The AI Grapple, Kate vanderVoort (Founder and CEO, the AI Success Lab) is joined by Lee Hickin, Executive Director of the Australian Government’s National AI Centre, for a wide-ranging and practical conversation about AI adoption in Australia. With more than 30 years in senior technology leadership roles across Microsoft and AWS, Lee brings a rare perspective that bridges enterprise technology, public policy and real-world business impact. Together, Kate and Lee unpack where Australia truly sits on AI adoption, why our national approach has prioritised innovation alongside safety, and what business leaders need to understand to move beyond experimentation into meaningful use of AI. This episode is essential listening for leaders navigating AI strategy, governance, data risk and long-term competitiveness.

    Key themes and takeaways

    Where Australia really stands on AI adoption
    • Why global AI rankings can distort the Australian story
    • What per-capita adoption reveals about real-world usage
    • The risk of fear-driven narratives slowing progress
    • Why AI adoption is not a zero-sum global race

    From experimentation to scale
    • Why many organisations get stuck in the “AI dabble” phase
    • The growing complexity gap between tools and outcomes
    • Where SMEs and large enterprises face different blockers
    • How clarity around business problems unlocks momentum

    Understanding the National AI Strategy
    • What the National AI Strategy is designed to achieve
    • Why Australia chose a principles-based, pro-innovation approach
    • How this differs from heavy regulatory models overseas
    • What this means in practice for Australian businesses

    Governance, risk and responsibility
    • Why most AI failures are human, not technical
    • What “good enough” AI governance actually looks like
    • The role of AI literacy, fluency and leadership accountability
    • Why responsibility cannot be outsourced to vendors or government

    Data security, privacy and sovereignty
    • Separating AI risk from existing data security challenges
    • Common misconceptions about AI training and data leakage
    • Why sovereignty is about ecosystems, not isolation
    • How poor data practices get amplified by AI systems

    Tech stack lock-in and innovation
    • Why enterprise comfort with existing platforms can limit progress
    • When staying inside your current stack makes sense
    • How to think about best-fit AI tools for different use cases
    • A clear framework for AI adoption: using AI, transforming with AI, or building with AI

    International investment and Australia’s AI ecosystem
    • What global AI companies investing in Australia really signals
    • How Australia can benefit without losing local capability
    • The importance of a strong domestic AI ecosystem
    • Opportunities for Australian AI companies on the global stage

    Environmental impact and sustainability
    • The real environmental costs of AI and data centres
    • Australian innovation in sustainable AI infrastructure
    • Why AI-specific data centres require new design thinking
    • Where Australia could lead globally in green AI solutions

    Looking ahead
    • What success looks like if Australia gets AI adoption right
    • How AI may reshape work and opportunity for future generations
    • Why confidence, capability and clarity are key to progress

    Connect and learn more
    • Lee Hickin on LinkedIn: https://www.linkedin.com/in/leehickin/
    • National AI Centre on LinkedIn: https://www.linkedin.com/company/national-ai-centre/
    • National AI Centre website: https://www.industry.gov.au/national-artificial-intelligence-centre

    1h 11m
  4. FEB 8

    Ep 47: Is AI Actually Good for the Economy? An Economist’s Clear Answer

    In this episode of The AI Grapple, Kate vanderVoort (Founder and CEO of the AI Success Lab) is joined by economist Dr Bill Conerly to unpack what AI means for business, jobs, and the wider economy. Rather than hype or fear, Bill brings an economic lens to what is actually happening now and what signals matter most. They explore why the real impact of AI will come from specialised tools embedded into everyday workflows, not from people chatting to bots. Bill explains how general purpose technologies historically lower costs, shift prices, and improve living standards, even when disruption feels uncomfortable in the short term. The conversation also tackles job displacement, entry-level roles, ageing workforces, and what young professionals should focus on in an uncertain market. Toward the end, Kate and Bill discuss AI risk, interpretability, and whether extreme predictions about AI deserve serious concern. The episode closes with a grounded but optimistic view of how AI could improve productivity, choice, and quality of life if organisations implement it responsibly.

    Key Highlights

    • Most businesses are still in the experimentation phase
    Bill explains that while awareness of AI is high, clarity is not. Many leaders still see AI as synonymous with chatbots, which limits progress. As a result, activity is fragmented, with small trials rather than deliberate strategy.

    • The biggest economic impact will not come from chatbots
    Large language models are helpful, but Bill argues they are not where the major productivity gains will come from. The real shift is happening through AI embedded into specific business tasks that remove friction and save time without requiring people to learn new tools.

    • Specialised AI tools are already delivering measurable gains
    Examples from healthcare, sales, and the trades show AI quietly saving time and improving accuracy. Doctors reclaim minutes per patient, sales teams avoid manual CRM updates, and contractors generate faster, more accurate estimates, all without needing AI expertise.

    • AI will increasingly fade into the background
    Rather than being something people consciously “use,” AI will operate ambiently inside existing systems. Like GPS or recommendation engines, it becomes infrastructure, not a headline feature, which accelerates adoption.

    • AI fits the pattern of past general purpose technologies
    Bill frames AI as comparable to steam power or electricity. These technologies reshaped economies by lowering the cost of producing goods and services across many industries, rather than replacing one job or function at a time.

    • Why falling prices matter more than job numbers
    From an economic view, Bill focuses on purchasing power rather than employment counts alone. Productivity gains tend to push prices down faster than wages, which historically improves living standards, even when transitions are uncomfortable.

    • There may be financial froth, but real demand underneath
    While some AI investments and valuations may prove unsustainable, Bill believes the core economics are sound. AI-enabled services create genuine value, meaning the infrastructure will remain even if some companies fail.

    • Expect rapid business churn in the short term
    Lower barriers to building software will lead to many new companies appearing and disappearing quickly. This pattern mirrors earlier technology waves where experimentation was high and long-term winners emerged over time.

    • Job disruption will be visible before benefits
    Early job losses, especially in knowledge work, are easier to spot than gradual productivity gains. This timing gap fuels fear, even though new work and improved services tend to follow.

    • Older workers may feel the pressure more acutely
    Bill notes that people later in their careers doing repetitive knowledge tasks may find it harder to adapt or retrain. Some may opt for early retirement, raising real social and economic concerns during the transition.

    • Young professionals need adaptability, not certainty
    Entry-level roles in high AI exposure fields are shrinking. Bill suggests two responses: choosing work that is slower to automate or learning how to identify and improve repetitive processes using AI tools.

    • AI exposes broken systems rather than causing the problem
    Many frustrations blamed on AI stem from inefficient workflows that existed long before AI arrived. The technology simply removes the ability to ignore them.

    • Serious risks deserve attention, not panic
    Bill takes concerns around AI interpretability seriously, noting that experts still cannot fully explain why models behave as they do. While extreme doomsday scenarios are unlikely, thoughtful governance and oversight are necessary.

    • A realistic, optimistic future is possible
    If implemented well, AI can reduce waste, improve productivity, and give people more choice in how they work and live. The real opportunity lies not just in efficiency, but in better quality of life.

    Links and Resources

    Guest: Dr Bill Conerly
    • Website: www.ConerlyConsulting.com
    • LinkedIn: http://www.linkedin.com/in/businomics
    • X (Twitter): https://x.com/BillConerly
    • YouTube: https://www.youtube.com/@DrBillConerly

    Host: Kate vanderVoort
    • LinkedIn: https://www.linkedin.com/in/katevandervoort/
    • Website: www.aisuccesslab.com
    • Facebook Group: https://www.facebook.com/groups/aisuccessslab

    43 min
  5. JAN 26

    Ep 46: Remote Work in the Age of AI: Who Wins, Who Loses, and Why

    In this episode of The AI Grapple, Kate vanderVoort, Founder and CEO of the AI Success Lab, sits down with Chris Nolte, Founder and CEO of Kayana Remote Professionals, to unpack what’s really happening at the intersection of AI, remote work, and human capability. Chris brings the rare lens of an investor-operator who has led and scaled businesses across finance, retail, prop tech, healthcare, and real estate. As an early beta tester of OpenAI and a long-time advocate for global talent, he shares grounded insights into why AI adoption often stalls, and what leaders are getting wrong about jobs, productivity, and automation. This is a practical, human-centred conversation about execution, not hype.

    Meet Chris Nolte
    Chris Nolte is the Founder and CEO of Kayana Remote Professionals, a company helping growth-minded businesses, nonprofits, solopreneurs, and PE and VC-backed portfolio companies scale using top-tier Filipino talent, supported by AI-enabled matching and workflow automation. Before Kayana, Chris spent 17 years running a family office where he bought, built, and operated companies with long-term capital. He has served as President and CEO of Verlo Mattress, co-founded AneVista Group, and advised startups through Dragonfly Group. His work today sits squarely at the intersection of AI, remote talent, and the future of work.

    Why AI Tools Don’t Automatically Change How Work Gets Done
    One of the central themes of this conversation is what Chris calls the human execution gap. Many organisations invest heavily in AI tools, only to find that very little actually changes. Chris explains why this happens and why the real barrier isn’t technology, but the expectation of full automation without redesigning roles, workflows, or accountability. Kate and Chris explore why AI still needs human judgment, why unfinished automations are everywhere, and why execution breaks down when leaders expect tools to replace thinking.

    AI, Repeatable Work, and the Future of Remote Roles
    As AI takes on more repeatable, task-based work, the nature of many roles is shifting fast. Chris shares why this shift doesn’t spell the end of remote work, but rather raises expectations for what remote professionals contribute. Entry-level, low-context roles are disappearing, while higher-level work is becoming accessible far earlier in a career. The conversation reframes the fear around AI and jobs, focusing instead on how remote workforces can move up the value chain rather than being pushed out.

    Global Talent, AI, and the Levelling of Opportunity
    A powerful thread in this episode is how AI is reshaping global opportunity. Chris explains why AI is acting as a great enabler for talented people in countries that historically lacked access to education, capital, or global markets. With AI closing gaps in language, research, and communication, opportunity is becoming more evenly distributed, even if outcomes are not. Kate and Chris discuss what this means for competition, wages, and the reality that professionals are no longer competing locally, but globally.

    What Makes Remote Professionals Irreplaceable
    With AI available to everyone, differentiation now comes from distinctly human qualities. Chris outlines why professionalism, discernment, curiosity, and output-focused thinking matter more than ever. He explains why remote professionals must actively learn how to operate at a professional standard, and why trust is built through consistency, judgment, and ownership, not just technical skill. Kate adds real-world examples from her own team, showing how AI-supported workflows free leaders from bottlenecks while raising quality and expectations.

    AI, Custom GPTs, and Scaling Expertise
    The episode dives into how custom GPTs and AI workflows are changing knowledge transfer inside businesses. Kate shares how embedding her expertise into AI systems allows her remote team to work in her voice and style, reducing rework and approvals. Chris builds on this by explaining how content, education, and even books are changing as AI becomes part of how people learn and apply information. This section offers a glimpse into how expertise can now scale without burning out the expert.

    Productivity, Pace, and the Reality of Change
    AI is moving faster than people can adapt, and both Kate and Chris acknowledge the tension this creates. They discuss why productivity gains don’t come from layering AI on top of broken processes, and why many organisations are stuck between fear, regulation, and falling behind. The conversation also touches on global differences in AI adoption and the long arc of change businesses need to prepare for.

    Looking Ahead: The Future of Work That Never Sleeps
    To close, Chris shares his view on what’s coming next. He describes a future where even small and mid-sized businesses operate across time zones, supported by AI and distributed teams that follow the sun. Work doesn’t stop, even when leaders do. The result is faster execution, higher output, and more opportunity for people who know how to work with AI rather than around it.

    Key Takeaways
    • AI adoption fails when leaders expect automation without changing how work is designed
    • Repeatable, low-context roles are fading, while higher-value work is becoming accessible earlier
    • Remote professionals compete on a global stage, with AI as the common denominator
    • Human judgment, curiosity, and professionalism matter more, not less
    • AI scales expertise best when paired with strong workflows and clear standards

    Connect with Chris
    Learn more about Chris’s work and Kayana Remote Professionals:
    • Kayana Website: https://www.hirekayana.com
    • Kayana Capital Website: https://kayanacapital.com/
    • LinkedIn: https://www.linkedin.com/in/chrisnolte/

    49 min
  6. JAN 12

    Ep 45: How Smart Leaders Navigate Fear, Trust, and Change with AI with Sarah Daly

    In this episode of The AI Grapple, Kate vanderVoort is joined by Sarah Daly (Founder, AI360 Review), AI Strategist and Researcher, to explore what’s really holding organisations back from successful AI adoption. Rather than focusing on tools or trends, this conversation goes deep into trust, leadership responsibility, workforce impact, and the human systems that determine whether AI succeeds or fails at scale. Sarah brings insights from six years of doctoral research into trust in AI at work, alongside her enterprise experience advising boards and senior leaders. Together, Kate and Sarah unpack why AI is not a technology problem, why people are already trusting AI more than they realise, and what organisations must do to navigate disruption honestly and responsibly.

    Key Topics Covered

    1. Why Trust Is the Real Issue in AI Adoption
    Sarah explains that while public narratives focus on distrust, people are already placing deep trust in AI, often without realising it. From sharing personal information with AI tools to relying on outputs without verification, trust is already present but poorly calibrated. The challenge for organisations is not whether people trust AI, but whether they trust it in the right ways.

    2. The Human Foundations of AI Performance
    At AI360 Review, Sarah’s work begins with people, not platforms. She shares why technology is often easier to control than human systems, and how trust can be deliberately designed through environment, leadership behaviour, and culture. When the right conditions exist, even AI sceptics can become strong advocates.

    3. Strategy Before Tools
    Rather than positioning AI as the strategy, Sarah argues it must support existing organisational goals. The starting point is always the problem being solved and the value being created. From there, organisations must consider governance, capability building, culture, education, innovation processes, and fit-for-purpose technology. This approach is formalised in the AI360 framework, which assesses AI readiness across six organisational dimensions.

    4. Leadership, Governance, and Risk
    A recurring theme in the conversation is leadership clarity. When leaders lack confidence or avoid decisions, teams work around restrictions, often using AI in uncontrolled ways. Sarah reframes AI risk as a management issue, not a binary decision, and stresses that strong governance enables experimentation rather than shutting it down.

    5. Australia’s AI Sentiment and the National AI Plan
    Kate and Sarah discuss Australia’s low trust levels in AI compared to global peers, particularly in the workplace. Sarah shares why enterprise sentiment varies widely depending on enablement and leadership support. They also explore Australia’s national AI plan, with Sarah supporting the decision to embed AI governance within existing regulatory structures rather than creating new bodies.

    6. AI as a Thinking Partner
    The conversation shifts to how AI is changing how people think, write, and make decisions. Sarah highlights the difference between using AI as a creative partner versus outsourcing thinking entirely. Kate introduces discernment and personal responsibility as essential skills in the age of AI, especially given how readily people believe AI-generated outputs.

    7. Workforce Impact and Difficult Conversations
    One of the most powerful sections of the episode focuses on workforce disruption. Sarah speaks candidly about automation, role changes, and job loss, and why avoiding these conversations damages trust. She advocates for transparency, agency, and AI literacy so employees can create value for their organisation and their future careers.

    8. Consumer Backlash and Lessons from Early Movers
    Sarah shares lessons from organisations that moved too fast without accountability, including well-known AI failures. These examples show why companies must own AI-driven decisions, test rigorously, and protect customer experience. Second movers, particularly in Australia, have the advantage of learning from these mistakes.

    9. Transparency and Ethical Use of AI
    The episode explores whether organisations should disclose AI use publicly. Sarah explains how expectations shift when AI becomes embedded in everyday work, while stressing that transparency around customer data, privacy, and protection remains essential. Over time, AI disclosures may become as standard as privacy policies.

    10. A Human-Centred Case Study: IKEA
    Sarah shares an inspiring example from IKEA, where AI voice tools were introduced into call centres. Instead of job losses, staff were retrained as in-store interior designers, expanding customer experience and creating transferable skills for employees. This case shows what’s possible when organisations lead with people, not fear.

    11. What the Future Could Look Like
    Looking ahead, Sarah remains optimistic. While human drivers like autonomy, mastery, and purpose remain constant, AI has the potential to reshape how people work, think, and create meaning. Used well, AI can augment human capability rather than diminish it, opening new possibilities for work and life.

    12. Sarah’s AI Toolkit
    Rather than a single favourite tool, Sarah uses a mix of:
    • Microsoft Copilot
    • ChatGPT
    • Claude
    Each serves a different purpose, reinforcing the idea that effective AI use is about intentionality, not loyalty to one platform.

    Resources & Links
    • AI360 Review: https://www.ai360review.com/
    • Connect with Sarah on LinkedIn: https://www.linkedin.com/in/sarah-daly-au/

    Final Thoughts
    This episode is a must-listen for leaders navigating AI change, not just from a technical standpoint, but from a human one. Sarah Daly brings clarity, research-backed thinking, and real-world examples that challenge organisations to lead with courage, transparency, and responsibility as AI reshapes how we work. If you’re responsible for strategy, culture, or people, this conversation will give you a clearer lens on what really matters in AI adoption.

    47 min
  7. JAN 5

    Ep 44: Building Business Resilience Through Better Data with Davis DeRodes

    In this episode, Kate vanderVoort, CEO, AI Success Lab, sits down with Davis DeRodes, Head of Data Science Innovation at Fusion Risk Management, for a clear and practical look at the role data plays in business resilience. Davis has a rare gift for breaking down technical concepts, helping leaders understand how better data, smarter systems and simple planning can protect organisations from disruption. Davis explains why resilience is no longer just an enterprise issue and shares tangible steps small to medium businesses can take right now to prepare for the rapid change AI is bringing. They talk about AI-generated scenarios, data simulations, model transparency, synthetic data, employee-facing agents and how organisations can approach data in ways that set them up for long-term stability. This episode is perfect for leaders who want a grounded understanding of how data supports smart decisions, resilient systems and confident use of AI. Key Themes What enterprise resilience really means and why every organisation now needs it How AI-generated scenarios work and why they outperform traditional tabletop exercises The difference between data science and decision science How small and medium businesses can transform their data into a resilience asset The role of structured vs unstructured data in an AI-driven world What model context protocol (MCP) means for how AI accesses business systems Practical steps for leaders to strengthen resilience today Future trends in data collaboration, governance and synthetic data Why the “business brain” approach gives companies more control What work looks like when AI becomes a close collaborator Insights Data science vs decision science Davis explains the distinction in a way that helps non-technical leaders understand what data is actually for in a business and why waiting for perfect accuracy can slow teams down. 
    AI-generated scenarios: He walks through how Fusion uses AI to create highly tailored disruption scenarios that expose weak points organisations would never have spotted on their own.
    Monte Carlo simulations: Davis describes modern simulation techniques that replace slow, expensive tabletop exercises with fast, repeatable, data-driven insights.
    Resilience for smaller businesses: He outlines simple, accessible steps any organisation can take to strengthen resilience, including mapping revenue drivers, centralising key data and understanding dependencies.
    Data governance as a superpower: Why businesses that invest early in clean, structured data see major efficiency gains later.
    Synthetic data, future risks and the pollution of the internet: A thoughtful conversation on how AI trains itself, the risks of AI training on AI, and why high-quality, walled-off data sources will become even more valuable.
    AI as an employee: How organisations will soon manage agents much like staff members, including permissions, access and responsibilities.

    Links Mentioned
    LinkedIn – Davis DeRodes: https://www.linkedin.com/in/davis-derodes/
    Fusion Risk Management: https://www.fusionrm.com/
    Kaggle synthetic datasets: https://www.kaggle.com/datasets
    Google AI Studio: https://aistudio.google.com/

    If you enjoyed this episode, share it with a colleague who needs practical clarity on AI and data. Subscribe to The AI Grapple on your favourite podcast platform so you never miss an episode.

    49 min
  8. DEC 29, 2025

    Ep 43: The Truth About AI, Sustainability, and Trust: Time Will Tell

    In this episode of The AI Grapple, Kate vanderVoort (Founder of the AI Success Lab) is joined by sustainability author, consultant and speaker John Pabon to explore one of the most pressing and uncomfortable questions facing AI adoption today: its impact on the environment, trust and society. With more than 20 years working across public policy, consulting and sustainability strategy, John brings a calm, pragmatic voice to a conversation often dominated by fear or hype. Together, Kate and John unpack what businesses actually need to consider as AI becomes embedded into operations, reporting and decision-making.

    Meet the Guest: John Pabon
    John Pabon is a sustainability expert with a background spanning the United Nations, McKinsey, AC Nielsen, and a decade living and working in China. He is the author of Sustainability for the Rest of Us: Your No BS 5 Point Plan for Saving the Planet and is widely known as Australia’s only independent greenwashing expert. John works with organisations to move sustainability out of marketing spin and into real, strategic action, with a strong focus on transparency, governance and trust.

    The Environmental Impact of AI: What We Know and What We Don’t
    One of the most common concerns Kate hears in AI training sessions is about energy use, data centres, and AI’s carbon and water footprint. John explains why these concerns are valid, particularly given the rapid expansion of data centres and the resources required to cool them. At the same time, he cautions against alarmist thinking. AI’s environmental impact is still being measured in different ways, and the technology is evolving quickly. The bigger challenge right now is uncertainty, and the pressure on companies to scale AI fast while still meeting sustainability targets.

    Sustainability Is More Than the Environment
    A key theme in the conversation is that sustainability is not just about emissions or energy use.
    John emphasises the importance of the social and governance sides of sustainability, especially as AI becomes more influential in reporting, decision-making and communication. From fabricated reports to unverified claims, AI introduces new risks when expertise is missing. This is where governance, oversight, and what Kate calls “expert in the loop” become critical to avoiding misinformation and reputational damage.

    Greenwashing, Greenhushing, and AI
    John breaks down greenwashing in simple terms: when organisations use the language of sustainability without the substance to support it. He explains why AI creates fresh opportunities for greenwashing, particularly when companies make vague or exaggerated claims about “responsible” or “sustainable” AI without evidence. The conversation also introduces the idea of greenhushing, when companies say nothing at all out of fear of getting it wrong. John argues that silence erodes trust just as much as misleading claims, and that openness, honesty and progress matter more than perfection.

    Can AI Support Sustainability Instead of Undermining It?
    Despite the risks, John is clear that AI also holds real promise. From supply chain traceability to emissions reporting, AI can help businesses understand what is actually happening inside their operations, especially where sustainability impacts have traditionally been hard to measure. Used well, AI can support better decision-making, reduce inefficiencies and help organisations focus on what truly matters rather than chasing trends.

    Trust, Transparency, and Consumer Backlash
    As public awareness of AI grows, Kate and John discuss the very real possibility of consumer backlash, particularly when AI use conflicts with a company’s stated values. John stresses that trust is built through transparency: explaining not just what a company is doing with AI, but why. People don’t expect organisations to have all the answers.
    They do expect honesty, clarity and a willingness to take responsibility.

    Regulation, Education, and Personal Responsibility
    The episode also explores the uneven global approach to AI regulation, from Europe’s safety-first stance to America’s innovation push. John and Kate agree that education has not kept pace with adoption, leaving many people unsure how to use AI responsibly. John shares how he personally uses AI as a thinking partner in his consulting work, while remaining cautious about outsourcing expertise or creative judgement. Both emphasise personal responsibility: how individuals and organisations choose to engage with AI matters.

    A Hopeful Look Ahead
    The episode closes on an optimistic note. John shares his vision of a future where sustainability is so embedded in business that every purchase becomes sustainable by default. In that future, AI plays a supporting role, helping organisations get there faster and more effectively without leaving people behind.

    Connect with John Pabon
    To learn more about John’s work, visit https://www.johnpabon.com
    Social Media Links
    TikTok/Instagram: @johnapabon
    LinkedIn: https://www.linkedin.com/in/johnpabon

    43 min
