The Kirkpatrick Podcast

Kirkpatrick Partners

Welcome to the Kirkpatrick Podcast, where we bridge traditions and trends in learning and performance evaluation. Whether you're a seasoned learning professional or just starting out, join us as we dive into the Kirkpatrick Model like never before. Through stories and insights, we're fusing time-honored methods with cutting-edge innovations to help you navigate the ever-evolving world of learning and performance evaluation. Subscribe now to stay up-to-date with our weekly episodes and gain practical strategies to enhance your training programs. Don't miss out—be part of the learning revolution!

  1. 5D AGO

    The Hidden Cost of Decentralized Measurement

    Organizations rarely struggle because they lack data. They struggle because the data they have cannot tell a coherent story. Across large enterprises, teams measure success in different ways: one department tracks engagement, another measures efficiency, another focuses on operational output. Each team's metrics may be valid within its own context, yet when leaders try to interpret the organization as a whole, the pieces do not connect. The result is not bad data. It is fragmented intelligence.

    In this episode, Vanessa explores a common but rarely discussed problem: decentralized measurement. Many organizations intentionally give teams freedom to define their own metrics and evaluation approaches. Early on, this autonomy can create ownership, relevance, and speed. But over time, the same flexibility that drives local success can quietly undermine organizational learning. When teams measure performance using different definitions, frameworks, and interpretations, leaders cannot see patterns across the organization. Success in one area cannot easily be replicated in another. Failures do not produce transferable lessons. Dashboards multiply while trust in the data slowly declines.

    The conversation explores why fragmented measurement eventually becomes a leadership problem and how organizations can move toward something more powerful: a shared evaluation language that preserves local relevance while enabling enterprise intelligence. Vanessa explains why the Kirkpatrick Model remains one of the most scalable frameworks for this challenge. Rather than forcing identical metrics across teams, it establishes a shared orientation to performance so that insights become comparable, portable, and actionable.

    Takeaways:
    1. Local optimization does not equal organizational learning. Teams can improve their own results without producing knowledge the organization can use.
    2. Fragmented measurement erodes leadership trust in data. When dashboards conflict, leaders default to instinct, politics, or anecdotes.
    3. Decentralized measurement creates structural visibility problems. This is not a people issue; it is a systems design issue.
    4. Shared evaluation frameworks reduce cognitive load for leaders. When metrics follow a consistent logic, decision-making becomes faster and clearer.
    5. Performance intelligence requires shared language. Organizations need common definitions of behavior, results, and impact.
    6. Frameworks create alignment without removing autonomy. Teams can measure what matters locally while still contributing to enterprise insight.

    Listen to the episode to explore how organizations can shift from fragmented measurement to performance intelligence that scales across the business.

    Learn more about the Kirkpatrick Model
    Watch the Show on YouTube!
    Submit your Questions/Stories: Do you have a question you would like us to answer on the Kirkpatrick Podcast? Have a story about how the Kirkpatrick Podcast or other Kirkpatrick events/programs have positively impacted your career? We would love to hear from you! Follow this link to submit your questions and/or stories, and we may just share them on the next episode of the Kirkpatrick Podcast.

    #KirkpatrickPodcast #LearningImpact #PerformanceImprovement #TrainingEvaluation #CultureOfEvaluation #KirkpatrickModel

    21 min
  2. MAR 2

    A Culture of Evaluation: The Missing Link Between Strategy and Results

    Most organizations believe they have a culture of evaluation. They run surveys. They build dashboards. They report metrics. And yet performance stays flat. In this episode, we challenge a dangerous misconception: measurement volume is not the same as evaluation maturity. In fact, constant measurement without learning creates fatigue, defensiveness, and performative reporting.

    A true culture of evaluation is not about collecting more data. It's about how leaders respond when the data reveals something uncomfortable. Do they get curious—or defensive? Do teams reflect—or explain away? Does bad news spark learning—or silence? We explore why evaluation often becomes a justification tool instead of a decision tool, and how that shift quietly erodes trust, innovation, and organizational performance.

    Using the Kirkpatrick Model as a foundation, this conversation reframes evaluation as a cultural practice—not a technical function. When done correctly, evaluation reinforces learning rather than judgment. It becomes embedded in planning conversations, leadership meetings, progress reviews, and strategic decisions. Most importantly, building a culture of evaluation is not the responsibility of the learning team alone. It must be modeled and reinforced from the executive level down. If evaluation feels heavy in your organization, that is not a metrics issue. It is a cultural signal.

    Takeaways:
    1. Stop equating measurement with maturity. Collecting more data does not improve performance unless it drives different decisions.
    2. Replace defensiveness with disciplined curiosity. How leaders react to uncomfortable data determines whether evaluation strengthens or weakens culture.
    3. Evaluate in real time, not after the fact. Delayed evaluation increases waste and reduces your ability to course-correct.
    4. Embed evaluation into leadership conversations. It should influence planning, resourcing, and strategic adjustments—not sit in a report.
    5. Build shared ownership. A culture of evaluation cannot live in one department. It must be reinforced across the organization.

    If you are serious about connecting learning to performance, this episode will challenge how you think about evaluation—and what it truly requires. Listen now and subscribe for more conversations on performance, leadership, and results.

    16 min
  3. FEB 23

    From Training Evaluation to Enterprise Performance Intelligence

    For decades, organizations have used the Kirkpatrick Model to evaluate training. And for decades, many have misunderstood what it was actually designed to do. We measured reaction surveys. We tracked completions. We reported learning scores. But somewhere along the way, evaluation became an after-the-fact reporting exercise instead of a strategic performance lens. The model became smaller than its intent.

    In this episode, Vanessa introduces the reintroduction of the Kirkpatrick Model—not as a replacement, not as a reinvention, but as a return to intent. The updated model makes visible what has always mattered but was often ignored: the performance environment, leadership expectations, systems, incentives, and the collaborative nature of results.

    Organizations today are more interconnected, more complex, and more dependent on systems than on isolated interventions. If we continue treating evaluation as a post-training event, we will continue misdiagnosing performance problems and overburdening learning teams with accountability they cannot control. This episode reframes the model as what it was always meant to be: a framework for understanding whether performance is being enabled—and where it is breaking down.

    Takeaways:
    1. Stop treating the levels as steps. Use them as perspectives that reveal different performance signals.
    2. Make the performance environment visible. Behavior change lives inside systems, not courses.
    3. Shift from ROI to return on performance. Ask whether behavior changed and value was created—not just whether money was saved.
    4. Embrace collaborative ROI. Results rarely come from one intervention or one function.
    5. Move evaluation upstream. Use it to shape leadership decisions before solutions are launched.

    If you've ever felt constrained by how the Kirkpatrick Model has been described, this episode is your permission to let that go. Listen now and step into the new era of enterprise performance intelligence.

    18 min
  4. FEB 16

    The Most Dangerous Question in Learning Evaluation

    Most learning leaders think they're asking the right question: "Did it work?" It sounds accountable. Efficient. Executive-ready. But this single question may be the very thing preventing your organization from improving performance.

    Binary questions create binary answers. Yes or no. Pass or fail. Keep it or cut it. But performance doesn't behave like a light switch. It behaves like a system. When we reduce evaluation to a verdict, we lose the most valuable insight: why something succeeded, where it broke down, and what conditions made the difference. Without that insight, leaders don't actually make better decisions. They just make faster ones.

    In this episode, we challenge the default evaluation mindset and explore how shifting from "Did it work?" to more strategic questions transforms L&D from a reporting function into a performance consultancy. We examine:
    - Why binary thinking creates false clarity
    - How verdict-driven evaluation shuts down improvement
    - What executives actually need from evaluation data
    - How the Kirkpatrick Model was designed to surface insight, not just ROI
    - Why guidance matters more than judgment

    If your evaluation efforts end in a score, a dashboard, or a single ROI number, you may be providing closure—but not clarity. And when stakes are high, leaders don't need closure. They need guidance.

    Takeaways:
    - Stop delivering verdicts. Start delivering insight. Replace yes/no answers with analysis of what changed and why.
    - Evaluate systems, not events. Performance unfolds over time and depends on support structures.
    - Frame the Levels as questions, not checkboxes. Each level surfaces a different leadership decision.
    - Expose performance breakdowns. Show where conditions supported or hindered success.
    - Shift from learning reporter to performance advisor. Provide recommendations, not just results.
    - Use evaluation iteratively. Let insights inform future design, investment, and execution decisions.

    Listen now and discover how better questions lead to better performance decisions—and stronger organizational results.

    21 min
  5. FEB 9

    The Real Reason Training Gets Blamed for Performance Problems

    Most organizations believe they evaluate training. In reality, they document it—after it's already too late to matter.

    One of the biggest misconceptions we see is the belief that evaluation happens after training. Post-program surveys, completion reports, and dashboards are treated as proof of value. But by the time those data points appear, the most important decisions have already been made: goals were defined (or not), success was loosely interpreted, metrics were chosen without context, and environmental constraints were ignored. When evaluation enters the process too late, it loses its power to influence performance. It can describe what happened—but it can't change what happens next.

    True evaluation is not validation. It's sense-making. It's the discipline that forces clarity about what success actually looks like in real work, what behaviors must change, what systems will enable or block that change, and what leaders must do differently to support it. Without that clarity upfront, training becomes the default solution—even when the real issue is time, leadership behavior, broken systems, or unrealistic expectations.

    In this episode, we challenge the industry's obsession with retrospective evaluation and make the case for moving evaluation to the beginning of the process—and wrapping it around the entire design and delivery lifecycle. We explore why activity metrics quietly erode credibility, how learning teams end up paying an "ignorance tax" for problems they didn't create, and why evaluation is the only lever learning functions truly own that can protect—and expand—their influence.

    Takeaways:
    - Stop treating evaluation as proof; start using it as a decision tool.
    - Define success in observable behaviors and business metrics before design begins.
    - Identify environmental constraints early—or accept that performance won't change.
    - Document recommendations on the record to avoid being blamed for systemic failures.
    - Use evaluation to influence leadership behavior, not just learner experience.

    If evaluation feels disconnected from performance in your organization, it's likely because it's entering the conversation far too late. 🎧 Listen to the full episode and subscribe to the Kirkpatrick Podcast to continue rethinking how learning influences results.

    20 min
  6. FEB 2

    What the Kirkpatrick Model Was Never Supposed to Be—and Why That Matters Now

    For decades, many organizations have believed they were "doing Kirkpatrick." In reality, they were completing forms.

    In this episode, we challenge one of the most persistent misconceptions in learning and performance: that the Kirkpatrick Model is a linear, post-training evaluation checklist. That version of the model may be familiar, but it was never the intent. And more importantly, it limits the impact learning can have on real performance.

    The Kirkpatrick Model was designed to help organizations understand what is changing, what is not, and why. It was meant to guide inquiry, conversation, and decision-making—not validate activity after the fact. Yet over time, in the name of scalability and efficiency, the model was oversimplified. Levels became boxes. Questions became surveys. Evaluation became something we completed, not something we used.

    When that happens, learning teams shift from improving performance to defending programs. We measure satisfaction instead of capability. We report outputs instead of outcomes. And we miss the very insights that would allow us to design better solutions in the first place. In this episode, we explore what the Kirkpatrick Model was never meant to be—and how reclaiming its original intent can fundamentally change how we approach learning, leadership, and performance.

    Takeaways:
    - Stop treating evaluation as a post-event requirement and start using it as a performance diagnostic.
    - Replace standardized tools with intentional questions tied to real business decisions.
    - Shift from validating effort to understanding behavior, environment, and results.
    - Recognize that discomfort in evaluation often signals where the most valuable insights live.
    - Use the Kirkpatrick Model as a tool for influence, not just reporting.

    If you've ever felt constrained by how the Kirkpatrick Model is typically taught, this conversation will feel both clarifying and freeing. 🎧 Listen now and subscribe to the Kirkpatrick Podcast for deeper conversations on evaluation, leadership, and organizational performance.

    5 min
  7. JAN 30

    Pressure Doesn't Reveal Leaders—It Exposes Their Training

    Pressure doesn't create failure. It reveals it.

    In this episode of The Kirkpatrick Podcast, we explore a leadership truth many organizations avoid: when performance breaks down under pressure, the root cause is almost never motivation or intent—it's preparation, practice, and the absence of meaningful evaluation.

    Our conversation with Ray Resendez, Senior VP of Government Solutions at ELB Learning and a former Army officer, forces a reckoning with how leaders are developed—both in high-stakes environments and in modern organizations. From combat decision-making to business leadership, the throughline is clear: when leaders haven't practiced the behaviors required under pressure, instincts fail and emotions take over.

    We talk candidly about why most leadership training doesn't translate into performance, how organizations confuse activity with readiness, and why data—not gut instinct—is the missing link in leadership decision-making. We also challenge the assumption that learning automatically equals capability, especially in an era where AI and tools can mask skill gaps rather than close them.

    This episode matters because organizations today are operating under constant pressure—market volatility, talent shortages, remote work, and rapid change. Leaders are expected to perform flawlessly, yet few are evaluated on the behaviors that actually drive results.

    Takeaways:
    - Stop assuming leaders will "figure it out" under pressure—unpracticed behaviors collapse when stakes are high.
    - Training without rehearsal and feedback does not create readiness.
    - Emotions and ego are the biggest performance risks when decisions aren't grounded in data.
    - Behavior (Level 3) is the most overlooked—and most powerful—leading indicator of results.
    - Performance dashboards should guide conversations, not punish people.
    - Evaluation is not about proving success; it's about preventing failure.

    If you're responsible for developing leaders, improving performance, or making decisions that impact others, this conversation reframes what readiness really means. 🎧 Listen to the full episode and subscribe to The Kirkpatrick Podcast for grounded, performance-focused leadership conversations.

    51 min
  8. JAN 19

    From Numbers to Narratives: The New Way to Prove Learning's Value

    Financial ROI has long been the gold standard for proving learning impact—but what if the most meaningful results can't be captured in a spreadsheet?

    In this episode of The Kirkpatrick Podcast, Vanessa Alzate and Dr. Amy Heaton explore how to move beyond traditional ROI calculations to measure what truly matters: the human, behavioral, and cultural outcomes that shape real performance. Most ROI frameworks rely on self-reported productivity gains or financial return. But as they explain, the true impact of learning lives in stories, not just statistics. Through an integrated return model, the Kirkpatrick approach combines qualitative and quantitative research—surveys, interviews, focus groups, and behavioral observations—to uncover the full picture of learning effectiveness.

    From the boardroom to the battlefield, not every success can—or should—be measured in dollars. For organizations like the military, healthcare, and government, success means readiness, safety, and human outcomes. When you blend data with dialogue, you find truth in both numbers and narratives.

    You'll learn:
    - Why ROI alone gives an incomplete view of learning impact.
    - How to combine quantitative data with qualitative insight.
    - What "triangulating truth" looks like in evaluation.
    - Why behavior, culture, and performance tell a fuller story.
    - How to apply this mindset in your own evaluation strategy.

    🎯 Key Takeaway: ROI shows return. Evaluation shows reality.

    Listen now to learn how to measure what really matters—and prove your impact in the language both people and performance understand.

    50 min

Ratings & Reviews

3.8
out of 5
5 Ratings

