The Digital Transformation Playbook

Kieran Gilmurray

Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, intelligent automation, data analytics, agentic AI, leadership development and digital transformation. He has authored four influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI, leadership and artificial intelligence.

What does Kieran do? When Kieran is not chairing international conferences or serving as a fractional CTO or Chief AI Officer, he is delivering AI, leadership, and strategy masterclasses to governments and industry leaders. His team helps global businesses drive AI, agentic AI, digital transformation, leadership and innovation programmes that deliver tangible business results.

🏆 Awards:
🔹 Top 25 Thought Leader, Generative AI 2025
🔹 Top 25 Thought Leader Companies on Generative AI 2025
🔹 Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹 Top 100 Thought Leader, Agentic AI 2025
🔹 Top 100 Thought Leader, Legal AI 2025
🔹 Team of the Year at the UK IT Industry Awards
🔹 Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹 Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹 Best LinkedIn Influencers, Artificial Intelligence and Marketing 2024
🔹 Seven-time LinkedIn Top Voice
🔹 Top 14 people to follow in data in 2023
🔹 World's Top 200 Business and Technology Innovators
🔹 Top 50 Intelligent Automation Influencers
🔹 Top 50 Brand Ambassadors
🔹 Global Intelligent Automation Award Winner
🔹 Top 20 Data Pros you NEED to follow

Contact Kieran's team to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn

  1. Building A Knowledge Agent That Remembers

    2H AGO

    Building A Knowledge Agent That Remembers

    Knowledge without memory is guesswork. We take a hard look at why most workflow agents stall at triage and show how to turn them into knowledge agents that deliver trusted, context-rich answers drawn from your organisation’s best thinking. Starting with the real cost of lost information and context switching, we map the path from scattered wikis and chat threads to a reliable institutional memory powered by retrieval-augmented generation and hybrid search.

    At a Glance / TLDR:
    • the memory gap between task routing and problem solving
    • why hybrid retrieval outperforms pure vector search in enterprise settings
    • practical chunking strategies and metadata fields for authority and recency
    • architecture choices across vector stores, hybrid search, and connectors
    • governance, citations, accuracy monitoring, and freshness controls
    • case studies: hours saved, quality gains, and revenue impact
    • failure patterns: infrastructure overruns, integration debt, and weak curation
    • four principles: executive sponsorship, domain experts, user focus, workflow redesign

    We break down the decisions that matter: how to chunk documents so the agent can both recall facts and reason across context, how to enrich content with metadata that signals authority and freshness, and how to fuse vector semantics with keyword precision for queries that mix intent with exact terms like product codes and financial acronyms. On the engineering side, we cover architecture trade-offs between vector databases and native hybrid search, secure connectors into CRM and ERP systems, and the governance needed for citations, audits, accuracy monitoring, and content freshness. You’ll hear where teams slip - capacity spikes, weak document prep, brittle identity integrations - and how to design for elasticity and compliance from day one.

    The proof is in production. Uber’s engineering co-pilot reclaimed thousands of hours and raised answer quality; JPMorgan Chase scaled insights to more than two hundred thousand employees and unlocked major business value; Goldman Sachs is pushing beyond retrieval to application, where the agent drafts, analyses, and accelerates financial workflows. Across these stories, a shared blueprint emerges: executive sponsorship, domain-expert curation, user-centred iteration, and workflow redesign that embeds the agent into daily decisions. If you’re ready to turn proprietary knowledge into a real moat and build a platform that compounds value across use cases, this conversation offers the playbook.

    Enjoyed the episode? Follow, rate, and share with a colleague who’s building AI into their workflow, and leave a review with the biggest knowledge challenge you want us to tackle next.

    Want some free book chapters? Then go here: How to build an agent - Kieran Gilmurray. Want to buy the complete book? Then go to Amazon or Audible today.

    Support the show

    Contact my team and me to get business results, not excuses.
    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray
    📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
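For readers who want to see the fusion idea in miniature, here is a small sketch of hybrid retrieval using reciprocal rank fusion, a common way to merge a semantic ranking with a keyword ranking so exact terms like product codes still surface. The document ids and query are invented for illustration; this is not the stack from the episode.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists of doc ids into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1/(k + rank + 1); high ranks in any
            # list pull a document up the fused ordering.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists for the query "Q4 revenue for SKU-1042":
vector_hits = ["policy-7", "earnings-q4", "wiki-12"]          # semantic match
keyword_hits = ["earnings-q4", "sku-1042-spec", "policy-7"]   # exact-term match
fused = reciprocal_rank_fusion([vector_hits, keyword_hits])
print(fused[0])  # earnings-q4 ranks first: it scores highly in both lists
```

The point of the sketch is the design choice: neither ranking alone is trusted; documents that both the embedding search and the keyword search like rise to the top.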

    21 min
  2. Congrats, You Trained The Bot That Took Your Job

    21H AGO

    Congrats, You Trained The Bot That Took Your Job

    Stop asking what AI can do and start asking what it can’t. We dig into fresh MIT Sloan research that maps the human edge with EPOCH - empathy, presence, opinion, creativity, and hope - and shows why these capabilities predict safer, more meaningful careers as automation spreads. Along the way, we dismantle the “junior trap,” where digital natives get handed AI strategy without the system fluency to manage risk, and we lay out a pragmatic playbook for leaders who need to design guardrails that scale.

    At A Glance / TLDR:
    • reframing the job worry to human-intensive skills
    • EPOCH explained: empathy, presence, opinion, creativity, hope
    • empathy as connection, not detection
    • presence for physical work and serendipity
    • accountable judgment over probabilistic answers
    • creativity through humour and improvisation
    • hope and subjective belief beating status-quo data
    • why the junior trap misallocates AI strategy
    • task-level fixes versus system-level risk design
    • citations over explanations for trustworthy outputs
    • clearing the data bottleneck with royalties for expertise
    • safe augmentation with fatigue-aware use cases
    • J&J skills inference and career lattices
    • equity risks, unions, freelancers, and burnout

    We get specific about how to match use cases to model reliability, why experts ask for citations instead of explanations, and how to treat a model like a brilliant yet untrustworthy database. Then we tackle the data bottleneck blocking real enterprise value: your best people hold the patterns your AI needs, but sharing that craft can devalue their advantage. The fix is economic, not technical. Think royalties and residuals for employee-generated training data, turning knowledge transfer into an asset instead of a threat. If a salesperson’s workflows lift model close rates, a share of that lift should flow back to the source.

    You’ll also hear how Johnson & Johnson used skills inference to surface hidden strengths from everyday work, moving from rigid ladders to flexible career lattices. We balance the promise of augmentation - like fatigue-aware support for radiologists - with the reality of equity and burnout, spotlighting why unions won protections while freelancers face steep declines. The throughline is simple: models predict the future from the past; humans create futures that never existed. Keep empathy at the centre, design for serendipity, hold judgment where accountability lives, cultivate real creativity, and defend hope as a strategic asset. If this conversation helps you rethink your AI strategy or your own career moat, follow the show, share with a friend, and leave a quick review - what’s your strongest EPOCH skill?

    17 min
  3. From Tasks To Workflows

    5D AGO

    From Tasks To Workflows

    A customer writes that the billing portal keeps failing and their renewal expires tomorrow. Most bots would slap a “billing” label on it and ship it to finance. We take you inside a smarter approach that reads between the lines, gathers context, and acts to protect the relationship and the revenue at stake.

    TLDR / At a Glance:
    • limits of single-step classification in customer support
    • turning oracle-style answers into multi-step reasoning
    • applying the ReAct loop to triage and escalation
    • termination rules to prevent overthinking
    • architecture shift from static LLM calls to a workflow engine
    • tool chaining across CRM, queues, calendars, and comms
    • graceful degradation and rollback on failures
    • business impact on CSAT, retention, and scalability
    • strategic insights from patterns and customer health signals
    • compounding value across functions and future automation

    We break down how a Reason-Act-Observe loop turns a one-shot classifier into an adaptive triage agent. First, the agent forms a hypothesis, then queries the CRM for account history, renewal dates, and plan value. It checks queue backlogs, identifies a senior specialist, and commits to a four-hour resolution with proactive communication. Along the way, it applies clear stop rules for confidence, time constraints, and diminishing returns, and it fails gracefully by escalating when systems are unavailable. Rather than fire-and-forget, it confirms handoffs, schedules follow-ups, and maintains state so decisions are auditable and improvable.

    From there, we zoom out to the architecture that makes this real: tool chaining across CRM, ticketing, status pages, calendars, and messaging; data validation to prevent cascade failures; parallel calls to cut latency; and rollback strategies for partial errors. We share the tangible gains teams see: faster onboarding for new staff through encoded institutional knowledge, higher CSAT from smarter prioritisation, and scalable operations that handle volume spikes without linear hiring. The agent becomes a strategic sensor, surfacing product issues, at-risk accounts, and market signals that shape roadmap and staffing. If you’re ready to move beyond labels and queues to outcomes and retention, this walkthrough delivers the blueprint for intelligent triage and the playbook to extend it across your customer journey.
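As a rough illustration of the Reason-Act-Observe loop and its stop rules, here is a minimal sketch in Python. The tool names, confidence thresholds, and routing labels are assumptions for the example, not the production agent discussed in the episode.

```python
# Minimal Reason-Act-Observe triage sketch. Tool names, thresholds, and
# routes are illustrative assumptions, not the production system.
def triage(ticket, tools, max_steps=5, confidence_target=0.8):
    state = {"confidence": 0.3, "notes": []}       # initial hypothesis strength
    for _ in range(max_steps):                     # stop rule: bounded effort
        if state["confidence"] >= confidence_target:
            break                                  # stop rule: confident enough
        action = tools["plan"](ticket, state)      # Reason: choose next lookup
        try:
            observation = tools[action](ticket)    # Act: query CRM, queue, etc.
        except KeyError:
            return {"route": "human_escalation", **state}  # fail gracefully
        state["notes"].append(observation)         # Observe: fold result back in
        state["confidence"] += observation.get("signal", 0.0)
    route = ("senior_specialist" if state["confidence"] >= confidence_target
             else "human_escalation")
    return {"route": route, **state}

# Stub tools: a planner that always checks the CRM, and a CRM lookup that
# returns account context plus a confidence boost.
tools = {
    "plan": lambda ticket, state: "crm",
    "crm": lambda ticket: {"renewal": "tomorrow", "plan_value": 48000, "signal": 0.6},
}
result = triage({"text": "billing portal failing, renewal tomorrow"}, tools)
print(result["route"])  # senior_specialist
```

The same skeleton extends naturally: add tools for queue backlogs or calendars, and the loop keeps gathering context until one of the stop rules fires or an unavailable system forces a graceful escalation.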

    17 min
  4. Better Than Human

    FEB 10

    Better Than Human

    “She’s like a person, but better.” That line from a new study stopped us cold and set the tone for a deep dive into digital companionship, the emerging space where AI assistants and emotional companion apps blur into something new. Google NotebookLM agents unpack how users treat ChatGPT and Replika in ways their creators never intended, and why that behaviour points to a convergent role we call the advisor: a patient, adaptive sounding board that simulates empathy without demanding it back.

    TLDR / At a Glance:
    • the headline claim that AI feels “like a person, but better”
    • fluid use blurring tool and companion categories
    • the advisor role as convergent use case
    • similar user personalities with different contexts and beliefs
    • technoanimism and situational loneliness among companion users
    • bounded personhood and editability of memories
    • cognitive vs affective trust and the stigma gap
    • spillover to AI rights, gender norms, and echo chambers
    • embodiment as the hard limit of digital intimacy
    • timelines for sentience and design ethics for dignity

    We walk through the study’s most surprising findings. The same people who sign up for a “virtual partner” often use it like a planner, tutor, or writing tool, while productivity-first users lean on a corporate chatbot for comfort, guidance, and late-night reflection. Personality profiles across both groups look strikingly similar, which challenges stereotypes about who seeks AI companionship. The real differences lie in beliefs and circumstances: higher technoanimism and life disruptions among companion users versus higher income and access among assistant users. The literature also examines trust. Cognitive trust is high across the board, but affective trust - feeling emotionally safe - soars inside companion apps, even as stigma pushes many users into secrecy.

    From there, we tackle the ethical terrain: bounded personhood, where people feel love and care while withholding full moral status; the power to erase memories or “reset” conflict; and the risks that spill into the real world. We discuss support for AI rights among affectionate users, objectification concerns with gendered avatars, and the echo chamber effect when a “supportive” bot validates harmful beliefs. The conversation grounds itself with the hard wall of embodiment - no hand to hold, no shared fatigue - and a startling data point: nearly a third of companion users already believe their AIs are sentient. That belief reframes product design, safety, and honesty about what these systems are and are not.

    Across it all, we argue for design that protects human dignity: firm boundaries around capability, refusal behaviours that counter abuse, guardrails against gendered harm, and features that nudge toward healthy habits and human help when needed. Digital companionship can be a lifesaving supplement for 4 a.m. loneliness, social rehearsal, or gentle reflection, but it should not train us to avoid the friction that makes human relationships real.

    Original literature: “She’s Like a Person but Better”: Characterizing Compani

    15 min
  5. Inbox To Insight

    FEB 5

    Inbox To Insight

    The inbox is where good work goes to die, so we set out to build an agent that rescues your time and turns email chaos into clear action. We walk through a minimum viable toolchain that small teams can master fast, then ship a working email triage agent that classifies intent, routes messages to the right systems, and lays the groundwork for smart replies.

    TLDR / At a Glance:
    • mapping the platform shift to agentic AI
    • code-first vs low-code toolchain choices
    • LangChain for chains, LangGraph for graphs
    • vector databases as semantic memory
    • n8n workflow for Gmail, models, routing
    • Airtable for configuration and analytics
    • email triage perceive-think-act loop
    • production needs for execution, errors, monitoring, security
    • roadmap from single-task to multi-step workflows

    We start by drawing a hard line between reactive chatbots and true agents that perceive, think, and act. From there, we weigh code-first control against low-code speed: Python with LangChain and LangGraph for custom, stateful orchestration, or n8n and Airtable for visual workflows and business-owned configuration. You’ll hear how chains handle linear tasks, how graphs enable branching and shared state, and why vector databases act as memory palaces that understand meaning rather than matching keywords.

    The build centres on a simple loop. Perceive an incoming email, think by constraining the model to clean categories like sales, support, billing, or general, then act by triggering the right integration. We show how Airtable separates rules from workflow so a manager can reroute leads with a single field change, and how logging every message creates real-time analytics for accuracy, volumes, and trends. Finally, we map what it takes to go from prototype to production: secure API execution, robust error handling, monitoring dashboards, and compliance baked into the stack. If you want practical AI that saves hours today and scales tomorrow, this walkthrough gives you the blueprint.
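The perceive-think-act loop at the heart of the build can be sketched in a few lines. The category keywords below stand in for the model call, and the routing table stands in for the Airtable configuration; both are illustrative assumptions, not the episode's actual workflow.

```python
# Toy perceive-think-act email triage. Keyword rules stand in for the LLM
# classifier; ROUTES stands in for business-owned routing configuration.
CATEGORIES = {"sales", "support", "billing", "general"}
ROUTES = {"sales": "crm_lead", "support": "helpdesk_ticket",
          "billing": "finance_queue", "general": "shared_inbox"}

def classify(email_body):
    """Think: constrain the (stand-in) model to a fixed category set."""
    text = email_body.lower()
    if "invoice" in text or "charge" in text:
        guess = "billing"
    elif "demo" in text or "pricing" in text:
        guess = "sales"
    elif "error" in text or "broken" in text:
        guess = "support"
    else:
        guess = "general"
    # Validate the output against the allowed set before acting on it.
    return guess if guess in CATEGORIES else "general"

def triage_email(email_body):
    """Perceive the email, think (classify), act (pick an integration)."""
    category = classify(email_body)
    return {"category": category, "route": ROUTES[category]}

print(triage_email("Hi, I was double charged on my last invoice."))
# {'category': 'billing', 'route': 'finance_queue'}
```

Because the routing table is plain data rather than code, changing where sales leads land is a one-line edit, which mirrors the "single field change" idea from the episode.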

    19 min
  6. We Put The Robot In Charge Of The Thermostat And Nobody Died

    FEB 4

    We Put The Robot In Charge Of The Thermostat And Nobody Died

    The hype cycle is over. We’re looking at receipts from organisations that turned AI from a pet project into the invisible engine of their operations. We unpack five decisive shifts - strategy, human amplification, data tactics, infrastructure modernisation, and trust by design - and ground them in case studies you can borrow today.

    At a Glance / TLDR:
    • treating AI as core capability, not a feature
    • agents transforming hospital operations and revenue recovery
    • foundation models expanding capacity across bank domains
    • human amplification through mobile-first tools and scientist-AI pairing
    • smart data strategies with physics-informed models and simulation
    • unified platforms modernising grids and edge chips enabling autonomy
    • tiered oversight aligning autonomy with risk to build trust
    • combinatorial playbook illustrated by Foxconn’s self-learning factories

    We start with a mindset change: stop asking where to sprinkle AI and start asking how to rebuild around it. A Japanese healthcare provider deployed active agents to untangle complex fee rules and patient flow, saving roughly 400 hours a month and recovering about $7,000 that used to leak away. At the other extreme, a leading bank integrated a 100-billion-parameter model across more than 20 domains, absorbing workload equivalent to 50,000 person-years and proving this is about capacity, not just efficiency.

    People remain central. Field inspectors in Africa used mobile AI to cut emergency road costs by 40 percent and halve safety incidents, while microbiologists at a biotech reduced discovery from two years to two months with predictive models and delivered a 100 percent hit rate against salmonella in live trials. Nurses expanded diabetic foot screening twelvefold with thermal imaging triage. The pattern is clear: AI removes grunt work so humans can decide faster and better.

    Data strategy is the quiet superpower. Physics-informed models at a battery giant collapsed design cycles from weeks to minutes, saving over $140 million a year. When data is scarce, simulated labels from physics-based methods let researchers screen 5.6 million compounds with thirty times the usual hit rate. All of it runs on modern foundations: a city-scale AI platform balanced Shanghai’s grid and avoided $1.12 billion in new build, while edge-first chips gave robots the power and latency profile to act locally. It all comes together with trust by design: low-risk autonomy, bounded agents with guardrails, and human-governed decisions where stakes are high.

    We close with the combinatorial playbook. Foxconn digitised tacit expertise, unified the platform, and orchestrated agents to cut changeover workload by half. That’s the move from promise to performance. If this resonated, follow the show, share it with a colleague who owns an AI roadmap, and leave a review telling us which of the five shifts your team will tackle first.

    18 min
  7. We Moved Fast On AI; Now We Need Brakes

    FEB 3

    We Moved Fast On AI; Now We Need Brakes

    The adrenaline rush is gone and the lights are on. Google NotebookLM agents dig into Deloitte’s latest State of AI in the Enterprise and confront a tough truth: access exploded, but value is uneven and the governance gap is widening. Instead of more shiny pilots, 2026 demands systems thinking, economic rigour, and clear decision rights as AI moves from chat to action.

    At a Glance / TLDR:
    • access rising but daily usage lagging
    • pilot success versus production economics
    • three tiers from surface gains to deep transformation
    • revenue gap between savings and new income
    • job redesign, the broken ladder, and pod-based teams
    • sovereign AI, local models, and data control
    • agentic AI, tool use, and governance deficits
    • physical AI growth in APAC and safety needs
    • 2026 as a friction year demanding brakes

    The podcast starts with the usage gap - why sanctioned tools sit idle - and traces the roadblocks that turn successful sandboxes into expensive production failures. From latency and cost blowouts to brittle data pipelines, we unpack what it takes to move beyond proof-of-concept purgatory. Then we map the three tiers of adoption: surface-level productivity, process redesign, and deep transformation. A standout case turns mining equipment into connected platforms, shifting from digging to predictable, data-driven extraction. That’s the leap from automation to imagination, and it’s where new revenue lives.

    The conversation gets candid on jobs. When models make the call, humans can’t be left as rubber stamps. We explore role redesign, escalation rules, explainability, and the “broken ladder” problem created by automating entry-level tasks. A promising answer is pod-based teams - small cross-functional units orchestrating fleets of AI agents - where learning shifts from manual repetition to supervision and exception handling.

    We zoom out to sovereign AI and the rise of compact local models that run under domestic rules, balancing control, privacy, and latency with the realities of global operations. Agentic AI is the tipping point: systems that plan, act, transact, and iterate toward goals. The value compounds, but so does the blast radius of mistakes. With 74 percent planning agents soon and only 21 percent ready on governance, we lay out practical brakes: scoped permissions, human-in-the-loop gates, immutable logs, simulator testing, budget limits, and kill-switches. We also scan physical AI - robots and drones scaling fastest in APAC - where safety and uptime meet AI reliability.

    If you’re leading AI adoption, ask three things: Are we transforming what we sell, not just how we work? Do we know who overrules the model and when? And have we built the brakes for autonomy before hitting the gas? Subscribe, share with a teammate who owns the roadmap, and tell us: what’s the first brake you’ll install?

    15 min
  8. AI In Class: Bend Or Break

    JAN 28

    AI In Class: Bend Or Break

    A quiet shift has happened in classrooms: the first shock of AI faded, and what remains is a constant hum shaping how kids learn, talk and play. We take on a sweeping premortem from Brookings, built on 500 interviews across 50 countries, and ask the uncomfortable question: if we keep going as we are, what fails first?

    TLDR / At A Glance:
    • global snapshot across 50 countries and 500 interviews
    • blurred learner persona across chat, games and schoolwork
    • equity wins including Afghan girls learning via WhatsApp and AI
    • teacher time savings and the need to reinvest in relationships
    • accessibility tools for dyslexia, speech impairment and autism
    • cognitive offloading turning into cognitive debt and digital amnesia
    • homogenised essays and loss of voice and joy
    • artificial intimacy, effortless influence and dark patterns
    • prosper, prepare, protect framework for schools and families

    We start with the blurred learner persona, where Snapchat banter, dating advice and maths help happen in the same chat window. Parents sit in the crossfire: some see AI as a ladder to opportunity, others as an always-on babysitter, and almost none get real literacy support. Against that, the equity story shines. Girls in Afghanistan, barred from school, use WhatsApp and AI to study physics and grade their work. Teachers save planning time, and the benefits become real when those minutes are reinvested in human connection. Accessibility advances matter too, from dynamic text support for dyslexia to voice banking that restores identity and chatbots as safe practice partners for autistic students.

    Then we confront the great unwiring. Cognitive offloading turns into cognitive debt when the model thinks for you. Admissions essays show it clearly: human work scatters with originality; AI-assisted writing clusters into clean sameness. The joy of wrestling with ideas shrinks to checklists. The emotional frontier looks riskier still. Companion bots simulate empathy, create frictionless “relationships,” and nudge feelings in ways users don’t notice. With dark patterns and staggering tracking, teens face a surveillance ecosystem that strips their inner life for data.

    There is a way to bend the arc: prosper, prepare, protect. We advocate assignments where AI is scaffold, not surrogate, demanding human synthesis and transparency. We push for real AI literacy - how models work, why they hallucinate, what data they extract - and for treating outputs like claims to test, not answers to accept. And we press for protection by design: sandboxed education tools, strict data minimisation, transparent audits and a ban on manipulative features. If education optimises only for speed, machines will win. We choose to protect what makes learners human: empathy, critical thinking and the resilience to struggle with hard problems. Subscribe and share your take - what human skill should schools defend first?

    Link to research: A-New-Direction-for-Students-in-an-AI-World-FULL-REPORT.pdf

    14 min
