The AI Grapple

Unravel the complexities of AI with The AI Grapple Podcast, hosted by Kate vanderVoort. Dive into thought-provoking discussions on the most critical AI issues shaping our world. Perfect for marketers and business professionals, this podcast is your guide to integrating AI responsibly and ethically into your organization. Join us as we navigate the future of technology and its profound impact on humanity.

  1. 5D AGO

    Ep 47: Is AI Actually Good for the Economy? An Economist’s Clear Answer

    In this episode of The AI Grapple, Kate vanderVoort, Founder and CEO of the AI Success Lab, is joined by economist Dr Bill Conerly to unpack what AI means for business, jobs, and the wider economy. Rather than hype or fear, Bill brings an economic lens to what is actually happening now and what signals matter most. They explore why the real impact of AI will come from specialised tools embedded into everyday workflows, not from people chatting to bots. Bill explains how general purpose technologies historically lower costs, shift prices, and improve living standards, even when disruption feels uncomfortable in the short term. The conversation also tackles job displacement, entry-level roles, ageing workforces, and what young professionals should focus on in an uncertain market. Toward the end, Kate and Bill discuss AI risk, interpretability, and whether extreme predictions about AI deserve serious concern. The episode closes with a grounded but optimistic view of how AI could improve productivity, choice, and quality of life if organisations implement it responsibly.

    Key Highlights

    • Most businesses are still in the experimentation phase. Bill explains that while awareness of AI is high, clarity is not. Many leaders still see AI as synonymous with chatbots, which limits progress. As a result, activity is fragmented, with small trials rather than deliberate strategy.

    • The biggest economic impact will not come from chatbots. Large language models are helpful, but Bill argues they are not where the major productivity gains will come from. The real shift is happening through AI embedded into specific business tasks that remove friction and save time without requiring people to learn new tools.

    • Specialised AI tools are already delivering measurable gains. Examples from healthcare, sales, and the trades show AI quietly saving time and improving accuracy. Doctors reclaim minutes per patient, sales teams avoid manual CRM updates, and contractors generate faster, more accurate estimates, all without needing AI expertise.

    • AI will increasingly fade into the background. Rather than being something people consciously “use,” AI will operate ambiently inside existing systems. Like GPS or recommendation engines, it becomes infrastructure, not a headline feature, which accelerates adoption.

    • AI fits the pattern of past general purpose technologies. Bill frames AI as comparable to steam power or electricity. These technologies reshaped economies by lowering the cost of producing goods and services across many industries, rather than replacing one job or function at a time.

    • Why falling prices matter more than job numbers. From an economic view, Bill focuses on purchasing power rather than employment counts alone. Productivity gains tend to push prices down faster than wages, which historically improves living standards, even when transitions are uncomfortable.

    • There may be financial froth, but real demand underneath. While some AI investments and valuations may prove unsustainable, Bill believes the core economics are sound. AI-enabled services create genuine value, meaning the infrastructure will remain even if some companies fail.

    • Expect rapid business churn in the short term. Lower barriers to building software will lead to many new companies appearing and disappearing quickly. This pattern mirrors earlier technology waves where experimentation was high and long-term winners emerged over time.

    • Job disruption will be visible before benefits. Early job losses, especially in knowledge work, are easier to spot than gradual productivity gains. This timing gap fuels fear, even though new work and improved services tend to follow.

    • Older workers may feel the pressure more acutely. Bill notes that people later in their careers doing repetitive knowledge tasks may find it harder to adapt or retrain. Some may opt for early retirement, raising real social and economic concerns during the transition.

    • Young professionals need adaptability, not certainty. Entry-level roles in fields with high AI exposure are shrinking. Bill suggests two responses: choosing work that is slower to automate, or learning how to identify and improve repetitive processes using AI tools.

    • AI exposes broken systems rather than causing the problem. Many frustrations blamed on AI stem from inefficient workflows that existed long before AI arrived. The technology simply removes the ability to ignore them.

    • Serious risks deserve attention, not panic. Bill takes concerns around AI interpretability seriously, noting that experts still cannot fully explain why models behave as they do. While extreme doomsday scenarios are unlikely, thoughtful governance and oversight are necessary.

    • A realistic, optimistic future is possible. If implemented well, AI can reduce waste, improve productivity, and give people more choice in how they work and live. The real opportunity lies not just in efficiency, but in better quality of life.

    Links and Resources

    Guest: Dr Bill Conerly
    Website: www.ConerlyConsulting.com
    LinkedIn: http://www.linkedin.com/in/businomics
    X (Twitter): https://x.com/BillConerly
    YouTube: https://www.youtube.com/@DrBillConerly

    Host: Kate vanderVoort
    LinkedIn: https://www.linkedin.com/in/katevandervoort/
    Website: www.aisuccesslab.com
    Facebook Group: https://www.facebook.com/groups/aisuccessslab

    43 min
  2. JAN 26

    Ep 46: Remote Work in the Age of AI: Who Wins, Who Loses, and Why

    In this episode of The AI Grapple, Kate vanderVoort, Founder and CEO of the AI Success Lab, sits down with Chris Nolte, Founder and CEO of Kayana Remote Professionals, to unpack what’s really happening at the intersection of AI, remote work, and human capability. Chris brings the rare lens of an investor-operator who has led and scaled businesses across finance, retail, prop tech, healthcare, and real estate. As an early beta tester of OpenAI and a long-time advocate for global talent, he shares grounded insights into why AI adoption often stalls, and what leaders are getting wrong about jobs, productivity, and automation. This is a practical, human-centred conversation about execution, not hype.

    Meet Chris Nolte
    Chris Nolte is the Founder and CEO of Kayana Remote Professionals, a company helping growth-minded businesses, nonprofits, solopreneurs, and PE and VC-backed portfolio companies scale using top-tier Filipino talent, supported by AI-enabled matching and workflow automation. Before Kayana, Chris spent 17 years running a family office where he bought, built, and operated companies with long-term capital. He has served as President and CEO of Verlo Mattress, co-founded AneVista Group, and advised startups through Dragonfly Group. His work today sits squarely at the intersection of AI, remote talent, and the future of work.

    Why AI Tools Don’t Automatically Change How Work Gets Done
    One of the central themes of this conversation is what Chris calls the human execution gap. Many organisations invest heavily in AI tools, only to find that very little actually changes. Chris explains why this happens and why the real barrier isn’t technology, but the expectation of full automation without redesigning roles, workflows, or accountability. Kate and Chris explore why AI still needs human judgment, why unfinished automations are everywhere, and why execution breaks down when leaders expect tools to replace thinking.

    AI, Repeatable Work, and the Future of Remote Roles
    As AI takes on more repeatable, task-based work, the nature of many roles is shifting fast. Chris shares why this shift doesn’t spell the end of remote work, but rather raises expectations for what remote professionals contribute. Entry-level, low-context roles are disappearing, while higher-level work is becoming accessible far earlier in a career. The conversation reframes the fear around AI and jobs, focusing instead on how remote workforces can move up the value chain rather than being pushed out.

    Global Talent, AI, and the Levelling of Opportunity
    A powerful thread in this episode is how AI is reshaping global opportunity. Chris explains why AI is acting as a great enabler for talented people in countries that historically lacked access to education, capital, or global markets. With AI closing gaps in language, research, and communication, opportunity is becoming more evenly distributed, even if outcomes are not. Kate and Chris discuss what this means for competition, wages, and the reality that professionals are no longer competing locally, but globally.

    What Makes Remote Professionals Irreplaceable
    With AI available to everyone, differentiation now comes from distinctly human qualities. Chris outlines why professionalism, discernment, curiosity, and output-focused thinking matter more than ever. He explains why remote professionals must actively learn how to operate at a professional standard, and why trust is built through consistency, judgment, and ownership, not just technical skill. Kate adds real-world examples from her own team, showing how AI-supported workflows free leaders from bottlenecks while raising quality and expectations.

    AI, Custom GPTs, and Scaling Expertise
    The episode dives into how custom GPTs and AI workflows are changing knowledge transfer inside businesses. Kate shares how embedding her expertise into AI systems allows her remote team to work in her voice and style, reducing rework and approvals. Chris builds on this by explaining how content, education, and even books are changing as AI becomes part of how people learn and apply information. This section offers a glimpse into how expertise can now scale without burning out the expert.

    Productivity, Pace, and the Reality of Change
    AI is moving faster than people can adapt, and both Kate and Chris acknowledge the tension this creates. They discuss why productivity gains don’t come from layering AI on top of broken processes, and why many organisations are stuck between fear, regulation, and falling behind. The conversation also touches on global differences in AI adoption and the long arc of change businesses need to prepare for.

    Looking Ahead: The Future of Work That Never Sleeps
    To close, Chris shares his view on what’s coming next. He describes a future where even small and mid-sized businesses operate across time zones, supported by AI and distributed teams that follow the sun. Work doesn’t stop, even when leaders do. The result is faster execution, higher output, and more opportunity for people who know how to work with AI rather than around it.

    Key Takeaways
    • AI adoption fails when leaders expect automation without changing how work is designed
    • Repeatable, low-context roles are fading, while higher-value work is becoming accessible earlier
    • Remote professionals compete on a global stage, with AI as the common denominator
    • Human judgment, curiosity, and professionalism matter more, not less
    • AI scales expertise best when paired with strong workflows and clear standards

    Connect with Chris
    Learn more about Chris’s work and Kayana Remote Professionals:
    Kayana Website: https://www.hirekayana.com
    Kayana Capital Website: https://kayanacapital.com/
    LinkedIn: https://www.linkedin.com/in/chrisnolte/

    49 min
  3. JAN 12

    Ep 45: How Smart Leaders Navigate Fear, Trust, and Change with AI with Sarah Daly

    In this episode of The AI Grapple, Kate vanderVoort is joined by Sarah Daly, AI Strategist and Researcher and founder of AI360 Review, to explore what’s really holding organisations back from successful AI adoption. Rather than focusing on tools or trends, this conversation goes deep into trust, leadership responsibility, workforce impact, and the human systems that determine whether AI succeeds or fails at scale. Sarah brings insights from six years of doctoral research into trust in AI at work, alongside her enterprise experience advising boards and senior leaders. Together, Kate and Sarah unpack why AI is not a technology problem, why people are already trusting AI more than they realise, and what organisations must do to navigate disruption honestly and responsibly.

    Key Topics Covered

    1. Why Trust Is the Real Issue in AI Adoption
    Sarah explains that while public narratives focus on distrust, people are already placing deep trust in AI, often without realising it. From sharing personal information with AI tools to relying on outputs without verification, trust is already present but poorly calibrated. The challenge for organisations is not whether people trust AI, but whether they trust it in the right ways.

    2. The Human Foundations of AI Performance
    At AI360 Review, Sarah’s work begins with people, not platforms. She shares why technology is often easier to control than human systems, and how trust can be deliberately designed through environment, leadership behaviour, and culture. When the right conditions exist, even AI sceptics can become strong advocates.

    3. Strategy Before Tools
    Rather than positioning AI as the strategy, Sarah argues it must support existing organisational goals. The starting point is always the problem being solved and the value being created. From there, organisations must consider governance, capability building, culture, education, innovation processes, and fit-for-purpose technology. This approach is formalised in the AI360 framework, which assesses AI readiness across six organisational dimensions.

    4. Leadership, Governance, and Risk
    A recurring theme in the conversation is leadership clarity. When leaders lack confidence or avoid decisions, teams work around restrictions, often using AI in uncontrolled ways. Sarah reframes AI risk as a management issue, not a binary decision, and stresses that strong governance enables experimentation rather than shutting it down.

    5. Australia’s AI Sentiment and the National AI Plan
    Kate and Sarah discuss Australia’s low trust levels in AI compared to global peers, particularly in the workplace. Sarah shares why enterprise sentiment varies widely depending on enablement and leadership support. They also explore Australia’s national AI plan, with Sarah supporting the decision to embed AI governance within existing regulatory structures rather than creating new bodies.

    6. AI as a Thinking Partner
    The conversation shifts to how AI is changing how people think, write, and make decisions. Sarah highlights the difference between using AI as a creative partner versus outsourcing thinking entirely. Kate introduces discernment and personal responsibility as essential skills in the age of AI, especially given how readily people believe AI-generated outputs.

    7. Workforce Impact and Difficult Conversations
    One of the most powerful sections of the episode focuses on workforce disruption. Sarah speaks candidly about automation, role changes, and job loss, and why avoiding these conversations damages trust. She advocates for transparency, agency, and AI literacy so employees can create value for their organisation and their future careers.

    8. Consumer Backlash and Lessons from Early Movers
    Sarah shares lessons from organisations that moved too fast without accountability, including well-known AI failures. These examples show why companies must own AI-driven decisions, test rigorously, and protect customer experience. Second movers, particularly in Australia, have the advantage of learning from these mistakes.

    9. Transparency and Ethical Use of AI
    The episode explores whether organisations should disclose AI use publicly. Sarah explains how expectations shift when AI becomes embedded in everyday work, while stressing that transparency around customer data, privacy, and protection remains essential. Over time, AI disclosures may become as standard as privacy policies.

    10. A Human-Centred Case Study: IKEA
    Sarah shares an inspiring example from IKEA, where AI voice tools were introduced into call centres. Instead of job losses, staff were retrained as in-store interior designers, expanding customer experience and creating transferable skills for employees. This case shows what’s possible when organisations lead with people, not fear.

    11. What the Future Could Look Like
    Looking ahead, Sarah remains optimistic. While human drivers like autonomy, mastery, and purpose remain constant, AI has the potential to reshape how people work, think, and create meaning. Used well, AI can augment human capability rather than diminish it, opening new possibilities for work and life.

    12. Sarah’s AI Toolkit
    Rather than a single favourite tool, Sarah uses a mix of Microsoft Copilot, ChatGPT, and Claude. Each serves a different purpose, reinforcing the idea that effective AI use is about intentionality, not loyalty to one platform.

    Resources & Links
    AI360 Review: https://www.ai360review.com/
    Connect with Sarah on LinkedIn: https://www.linkedin.com/in/sarah-daly-au/

    Final Thoughts
    This episode is a must-listen for leaders navigating AI change, not just from a technical standpoint, but from a human one. Sarah Daly brings clarity, research-backed thinking, and real-world examples that challenge organisations to lead with courage, transparency, and responsibility as AI reshapes how we work. If you’re responsible for strategy, culture, or people, this conversation will give you a clearer lens on what really matters in AI adoption.

    47 min
  4. JAN 5

    Ep 44: Building Business Resilience Through Better Data with Davis DeRodes

    In this episode, Kate vanderVoort, CEO of the AI Success Lab, sits down with Davis DeRodes, Head of Data Science Innovation at Fusion Risk Management, for a clear and practical look at the role data plays in business resilience. Davis has a rare gift for breaking down technical concepts, helping leaders understand how better data, smarter systems and simple planning can protect organisations from disruption. Davis explains why resilience is no longer just an enterprise issue and shares tangible steps small to medium businesses can take right now to prepare for the rapid change AI is bringing. They talk about AI-generated scenarios, data simulations, model transparency, synthetic data, employee-facing agents and how organisations can approach data in ways that set them up for long-term stability. This episode is perfect for leaders who want a grounded understanding of how data supports smart decisions, resilient systems and confident use of AI.

    Key Themes
    • What enterprise resilience really means and why every organisation now needs it
    • How AI-generated scenarios work and why they outperform traditional tabletop exercises
    • The difference between data science and decision science
    • How small and medium businesses can transform their data into a resilience asset
    • The role of structured vs unstructured data in an AI-driven world
    • What model context protocol (MCP) means for how AI accesses business systems
    • Practical steps for leaders to strengthen resilience today
    • Future trends in data collaboration, governance and synthetic data
    • Why the “business brain” approach gives companies more control
    • What work looks like when AI becomes a close collaborator

    Insights
    Data science vs decision science: Davis explains the distinction in a way that helps non-technical leaders understand what data is actually for in a business, and why waiting for perfect accuracy can slow teams down.
    AI-generated scenarios: He walks through how Fusion uses AI to create highly tailored disruption scenarios that expose weak points organisations would never have spotted on their own.
    Monte Carlo simulations: Davis describes modern simulation techniques that replace slow, expensive tabletop exercises with fast, repeatable, data-driven insights.
    Resilience for smaller businesses: He outlines simple, accessible steps any organisation can take to strengthen resilience, including mapping revenue drivers, centralising key data, and understanding dependencies.
    Data governance as a superpower: Why businesses that invest early in clean, structured data gain massive efficiency later.
    Synthetic data, future risks and the pollution of the internet: A thoughtful conversation on how AI trains itself, the risks of AI training on AI, and why high-quality, walled-off data sources will become even more valuable.
    AI as an employee: How organisations will soon handle agents just like staff members, including permissions, access and responsibilities.

    Links Mentioned
    LinkedIn – Davis DeRodes: https://www.linkedin.com/in/davis-derodes/
    Fusion Risk Management: https://www.fusionrm.com/
    Kaggle synthetic datasets: https://www.kaggle.com/datasets
    Google AI Studio: https://aistudio.google.com/

    If you enjoyed this episode, share it with a colleague who needs practical clarity on AI and data. Subscribe to The AI Grapple on your favourite podcast platform so you never miss an episode.

    49 min
  5. 12/29/2025

    Ep 43: The Truth About AI, Sustainability, and Trust: Time Will Tell

    In this episode of The AI Grapple, Kate vanderVoort (Founder of the AI Success Lab) is joined by sustainability author, consultant, and speaker John Pabon to explore one of the most pressing and uncomfortable questions facing AI adoption today: its impact on the environment, trust, and society. With more than 20 years working across public policy, consulting, and sustainability strategy, John brings a calm, pragmatic voice to a conversation often dominated by fear or hype. Together, Kate and John unpack what businesses actually need to consider as AI becomes embedded into operations, reporting, and decision-making.

    Meet the Guest: John Pabon
    John Pabon is a sustainability expert with a background spanning the United Nations, McKinsey, AC Nielsen, and a decade living and working in China. He is the author of Sustainability for the Rest of Us: Your No BS 5 Point Plan for Saving the Planet and is widely known as Australia’s only independent greenwashing expert. John works with organisations to move sustainability out of marketing spin and into real, strategic action, with a strong focus on transparency, governance, and trust.

    The Environmental Impact of AI: What We Know and What We Don’t
    One of the most common concerns Kate hears in AI training sessions is about energy use, data centres, and AI’s carbon and water footprint. John explains why these concerns are valid, particularly when it comes to the rapid expansion of data centres and the resources required to cool them. At the same time, he cautions against alarmist thinking. AI’s environmental impact is still being measured in different ways, and the technology is evolving quickly. The bigger challenge right now is uncertainty, and the pressure on companies to scale AI fast while still meeting sustainability targets.

    Sustainability Is More Than the Environment
    A key theme in the conversation is that sustainability is not just about emissions or energy use. John emphasises the importance of the social and governance sides of sustainability, especially as AI becomes more influential in reporting, decision-making, and communication. From fabricated reports to unverified claims, AI introduces new risks when expertise is missing. This is where governance, oversight, and what Kate calls “expert in the loop” become critical to avoid misinformation and reputational damage.

    Greenwashing, Greenhushing, and AI
    John breaks down greenwashing in simple terms: when organisations use the language of sustainability without the substance to support it. He explains why AI creates fresh opportunities for greenwashing, particularly when companies make vague or exaggerated claims about “responsible” or “sustainable” AI without evidence. The conversation also introduces the idea of greenhushing, when companies say nothing at all out of fear of getting it wrong. John argues that silence erodes trust just as much as misleading claims, and that openness, honesty, and progress matter more than perfection.

    Can AI Support Sustainability Instead of Undermining It?
    Despite the risks, John is clear that AI also holds real promise. From supply chain traceability to emissions reporting, AI can help businesses understand what is actually happening inside their operations, especially where sustainability impacts have traditionally been hard to measure. Used well, AI can support better decision-making, reduce inefficiencies, and help organisations focus on what truly matters rather than chasing trends.

    Trust, Transparency, and Consumer Backlash
    As public awareness of AI grows, Kate and John discuss the very real possibility of consumer backlash, particularly when AI use conflicts with a company’s stated values. John stresses that trust is built through transparency: explaining not just what a company is doing with AI, but why. People don’t expect organisations to have all the answers. They do expect honesty, clarity, and a willingness to take responsibility.

    Regulation, Education, and Personal Responsibility
    The episode also explores the uneven global approach to AI regulation, from Europe’s safety-first stance to America’s innovation push. John and Kate agree that education has not kept pace with adoption, leaving many people unsure how to use AI responsibly. John shares how he personally uses AI as a thinking partner in his consulting work, while remaining cautious about outsourcing expertise or creative judgement. Both emphasise personal responsibility: how individuals and organisations choose to engage with AI matters.

    A Hopeful Look Ahead
    The episode closes on an optimistic note. John shares his vision of a future where sustainability is so embedded into business that every purchase becomes sustainable by default. In that future, AI plays a supporting role, helping organisations get there faster and more effectively, without leaving people behind.

    Connect with John Pabon
    To learn more about John’s work, visit https://www.johnpabon.com
    TikTok/Instagram: @johnapabon
    LinkedIn: https://www.linkedin.com/in/johnpabon

    43 min
  6. 12/22/2025

    Ep 42: Maker, Shaper or Taker? David Espindola’s Guide to Smart AI Strategy

    In this episode, Kate vanderVoort (Founder and CEO at the AI Success Lab) speaks with futurist, author and technologist David Espindola, founder of Brainyus and author of Soulful: You in the Future of Artificial Intelligence. With more than 30 years in the tech industry, David has guided organisations through major waves of disruption. His work now focuses on human and AI collaboration, ethical adoption and how businesses can prepare for rapid change.

    What We Cover

    David’s AI journey
    David shares how his early work in technology set the stage for exploring AI long before it hit the mainstream. He explains the shift from AI being an academic topic to something every industry now has to face head-on. His first book, The Exponential Era, explored the convergence of fast-growing technologies, with AI standing out as the most powerful force shaping business and society.

    Why AI is different from past technology waves
    While tech change isn’t new, the speed and scale of AI is. David highlights how robotics, quantum computing and AI are blending, creating a level of disruption few leaders are ready for.

    The Maker, Shaper, Taker model
    David breaks down one of the most practical strategic models in this space:
    • Makers build frontier AI models.
    • Shapers fine-tune models on their own data and culture.
    • Takers use AI built into existing tools.
    Most businesses don’t even realise these options exist. The conversation explores how smaller organisations can gain an advantage by choosing their place in this model with intention.

    The human side of AI adoption
    Kate and David dig into the fear, uncertainty and culture challenges that show up inside organisations. David shares how one client used an AI champion, clear policies and structured training to build confidence, capability and responsible use. He stresses the importance of trust, transparency and honest conversations about job changes.

    Workforce changes and agentic AI
    David discusses the shift ahead as agentic AI becomes part of everyday workflows. With half of entry-level roles at risk, he talks through the long-term impact on talent pipelines and how leaders should prepare their people now.

    Education’s turning point
    Both Kate and David explore the role of AI in learning and how personalised tutoring could transform the way people develop skills. They look at why bans don’t work, how critical thinking becomes even more important and what students need in order to thrive in an AI-driven world.

    David’s podcast and working with Zena, his AI colleague
    David shares the story behind his podcast Conversations with Zena and what happened when he trained an AI agent on his books, writing, values and language. He talks through the challenges of three-way conversations with AI, how context shapes quality and the surprising moments where Zena raised questions he didn’t expect.

    Global AI ethics, regulation and the geopolitical tension ahead
    The discussion covers the EU’s AI Act, US innovation, China’s influence and the need for shared approaches to safety, human rights and access.

    What’s possible if we get this right
    David closes with an optimistic view of what AI could unlock: abundance, less manual work, more meaningful creativity, and more time for humans to grow, reflect and connect. He also speaks to the risks and the need for strong global safeguards.

    Links and Resources
    David Espindola’s Website: davidespindola.com
    Brainyus: brainyus.com
    Book: Soulful: You in the Future of Artificial Intelligence
    Podcast: Conversations with Zena, My AI Colleague

    Connect with David
    LinkedIn: https://www.linkedin.com/in/davidespindola/
    Instagram: https://www.instagram.com/despindola23/
    X: https://twitter.com/despindola23
    YouTube: https://www.youtube.com/@despindola23

    Connect with Kate at the AI Success Lab
    AI Success Lab
    AI Success Lab Facebook Community
    LinkedIn

    47 min
  7. 12/16/2025

    Ep 41: Raising Future-Ready Kids: The Family AI Game Plan with Amy D. Love

    In this episode of The AI Grapple, Kate vanderVoort speaks with Amy D. Love, founder of the international movement Discovering AI and best-selling author of Raising Entrepreneurs and Discovering AI: A Parent’s Guide to Raising Future-Ready Kids. A former Fortune 500 Chief Marketing Officer and Harvard MBA, Amy has turned her focus to helping families prepare their children for life and success in the age of AI.

    Amy and Kate dive into why families, not just schools or governments, are critical to AI readiness. They explore the need for practical, values-led guidance in navigating AI with kids and discuss how the FAMILY AI GAME PLAN is empowering parents to raise children who are not only aware of AI, but equipped to thrive alongside it. This episode is packed with practical strategies, real-life anecdotes, and thoughtful reframes that challenge the way we think about parenting, education, and technology.

    What We Cover:
    • Why families are the frontline of AI education
    • The vision behind Discovering AI and Amy’s shift from tech exec to children’s advocate
    • Moving from fear to confidence as a parent in the age of AI
    • A walkthrough of the FAMILY AI GAME PLAN, and how any family can use it
    • “Create more, consume less”: why this mantra matters now more than ever
    • The hidden risks of leaving AI education solely to schools or governments
    • Real-life family AI activities that promote creativity, ethics and digital literacy
    • What’s possible if every family gets this right in a single generation

    About Amy D. Love:
    Amy D. Love is the founder of Discovering AI, an international movement helping families prepare children to thrive in an AI-powered world. With a background as a Fortune 500 CMO and a Harvard MBA, Amy has advised AI leaders and policymakers on aligning tech with human values. She is the author of the best-selling Raising Entrepreneurs and the newly released Discovering AI. Her signature FAMILY AI GAME PLAN offers parents a practical framework to guide children’s use of AI with confidence, creativity and care.

    Resources and Links:
    Website: www.discoveringai.org and www.discoveringai.com
    Books: Raising Entrepreneurs – Available on Amazon
    Discovering AI: A Parent’s Guide to Raising Future-Ready Kids – Available on Amazon
    Free resources, MindSpark activities and the FAMILY AI GAME PLAN are available on the Discovering AI website

    Connect with Kate:
    Website: www.aisuccesslab.com
    LinkedIn: Kate vanderVoort

    Subscribe & Review:
    If you enjoyed this episode, please subscribe, rate and leave a review on your favourite podcast platform. Share this episode with a fellow parent or educator who’s navigating the world of AI with kids.

    45 min
  8. 12/08/2025

    Ep 40: From Fear to Confidence: Guiding Teams Through AI Adoption with Leadership Expert Neil Tunnah

    In this episode, Kate vanderVoort, Founder of the AI Success Lab, speaks with Neil Tunnah – former elite rugby coach, global leadership consultant and founder of The Performance Chain Group. Neil works with organisations across Australia and North America to help leaders build behavioural consistency, navigate uncertain environments and guide their people through rapid AI-driven change.

    Neil brings grounded thinking and honest reflection to some of the biggest leadership challenges of this moment. Together, we explore why clarity is the currency of trust, how fear spreads when leaders avoid hard conversations, and why AI won’t replace good leaders but will absolutely expose the weak ones. He also shares lessons from elite sport on resilience, habit-building and culture that apply directly to today’s workplaces. The discussion moves through strategy, psychology, culture and the realities facing teams on the ground. Neil also speaks openly about raising kids in this era and what the future of learning could look like with AI in the mix.

    What We Cover:
    • How AI is disrupting leadership and why behavioural consistency matters more than ever
    • Why many leaders are confused about AI strategy – and how that confusion cascades through organisations
    • Creating clarity when the truth is that leaders don’t have all the answers yet
    • The danger of top-down AI strategies that ignore frontline experience
    • Human friction points organisations keep missing when adopting AI
    • The cultural gaps that stop AI projects from gaining traction
    • Fear, job security and why avoidance only increases anxiety
    • Lessons from elite sport that shape how leaders can develop resilience and habits that actually stick
    • How AI can enhance coaching, development and performance conversations
    • Why the future of learning needs to shift away from memorising and towards real personalised development
    • Raising children during an AI-driven transformation and building the foundations they’ll need
    • What the “re-engineering” of workplaces and society might look like over the next few years

    Guest Bio:
    Neil Tunnah is a former elite rugby coach turned global leadership consultant and founder of The Performance Chain Group. He helps organisations across Australia and North America navigate change, embed behavioural consistency and lead well in an AI-shaped world. Known for a no-fluff approach to people and performance, Neil works at the intersection of culture, behaviour and leadership. He’s also a dad of two, a gym regular and still deeply connected to the rugby community.

    Connect with Neil:
    • LinkedIn: https://www.linkedin.com/in/neil-tunnah-0a2071122/
    • The Performance Chain Group

    Listen & Subscribe:
    If you’re a leader, marketer or business professional wanting to understand how to navigate the human side of AI adoption, this episode offers timely, grounded guidance. Listen on your favourite podcast platform and follow the show for future conversations on practical AI in business.

    51 min
