EvalNetwork

EvalNetwork Podcast

Program Evaluation and Research Consulting

Episodes

  1. 14 MAY

    The Power of Retrospective Pretests to Address Common Survey Research Challenges

    James Pann interviews Melanie Hwalek, Ph.D., a program evaluation consultant, to discuss the retrospective pretest (RPT) design, focusing on its practical applications and the findings from her recent research detailed in the paper, “Designing a Questionnaire with Retrospective Pre-Post Items: Format Matters.” RPT is particularly useful for evaluating changes in participants’ perceptions or self-assessments following interventions such as workshops or training sessions, and it can address common survey research challenges encountered by program evaluation consultants and researchers.

    Historical Background and Evolution of RPT
    Melanie traces the origins of RPT back to 1947, when it was first used to evaluate training impacts on soldiers’ attitudes. She highlights significant milestones in RPT’s development, including its discussion in Campbell and Stanley’s seminal 1963 book on quasi-experimental designs, which solidified its methodological relevance.

    Advantages of Retrospective Pretest Surveys
    Practicality: Melanie emphasizes RPT’s practicality, particularly where pretesting is unfeasible or participants are unaware of the intervention until it happens. The method consolidates data collection at a single point in time, simplifying logistics and reducing potential biases associated with traditional pre/post-testing.
    Reduction of response shift bias: A significant advantage of RPT is its ability to mitigate response shift bias, which occurs when participants’ understanding of the measured concept changes because of the intervention. For example, participants might realize after a training session that they knew less than they initially thought. RPT asks participants to reassess their prior state of knowledge or attitudes post-intervention, leading to potentially more accurate change measurements. This can prevent misleading outcomes like the boomerang effect, in which participants report decreased knowledge or skills post-intervention not because the intervention failed, but because their enhanced understanding reveals a previous overestimation of their capabilities.

    Disadvantages and Limitations
    Despite its benefits, RPT has limitations, including its reliance on autobiographical memory, which can be unreliable over long periods. It may also be unsuitable for children or for interventions where defining a clear “before” state is difficult.

    Insights from Dr. Hwalek’s Study on Retrospective Pretest Layouts
    Melanie’s recent study investigated the impact of different RPT questionnaire layouts on data quality. The study involved 1,941 caregivers participating in training workshops and compared six layouts to see which minimized errors such as inattentiveness and the boomerang effect. Key findings:
    * Best layout: Layout 1 was the most effective. It placed the questions in the center, with “before” responses on the left and “now” responses on the right. This layout significantly reduced inattentiveness and minimized the boomerang effect, indicating that it helped participants understand the survey and respond accurately.
    * Implications for evaluators: These findings underscore the need to consider survey design carefully in RPTs to enhance data reliability and validity.

    Conclusion
    The interview with Dr. Hwalek provides comprehensive insights into the retrospective pretest design, reinforcing its utility in evaluating the impact of interventions and assisting program evaluation consultants in addressing common survey challenges.
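    For readers who want to see the scoring mechanics, here is a minimal sketch (hypothetical data and column names, not from Dr. Hwalek’s study) of how RPT responses are typically scored: each participant rates each item twice at a single post-intervention sitting, once for “before” in retrospect and once for “now”; the change score is the difference, and a negative change is the boomerang pattern described above.

```python
# Hypothetical scoring of retrospective pretest (RPT) data.
# Each respondent answers every item twice at one post-intervention
# sitting: "before" (their prior state, rated in retrospect) and "now".
# All column names and values are illustrative.

import pandas as pd

responses = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "item": ["confidence"] * 4,
    "before": [2, 3, 4, 5],   # retrospective pre rating (1-5 scale)
    "now":    [4, 4, 3, 5],   # current rating (1-5 scale)
})

# Perceived change: positive values indicate self-reported improvement.
responses["change"] = responses["now"] - responses["before"]

# A negative change score is the "boomerang" pattern: the respondent
# reports knowing/doing less after the intervention than before it.
responses["boomerang"] = responses["change"] < 0

print(responses)
print(f"Boomerang rate: {responses['boomerang'].mean():.0%}")
```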

  2. 02/10/2023

    Empowering Change: David Fetterman on Using Evaluation to Build a Better World

    David Fetterman is a leading expert in empowerment evaluation, an approach that emphasizes collaboration, participation, and capacity building. He has written extensively on the topic, and his work has been used in a wide range of settings, including government agencies, nonprofit organizations, and businesses. David focuses on helping people evaluate their own programs and initiatives and supports them in learning how to use data to improve their work. He considers empowerment evaluation essential for building a more just and equitable society, and it is an approach every program evaluation consultant should know. David is known for his innovative work, commitment to social justice, and ability to make complex ideas accessible to a wide audience.

    Understanding Empowerment Evaluation: A Shift in Traditional Evaluation Models
    Empowerment evaluation revolutionized the way we view and conduct evaluations. David explains how this approach places the people being evaluated in control, transforming them from subjects to leaders of the evaluation process. This shift uncovers real, often overlooked issues and ensures more sustainable results by fostering a sense of ownership and self-efficacy among participants.

    Integration of Empowerment Evaluation in Organizations: A Case Study
    David discusses the integration of empowerment evaluation within organizational structures. He highlights a case in which a school system successfully internalized evaluation, making it a natural and credible part of its daily operations and program implementation. This integration allowed timely, data-driven mid-course corrections, ensuring the longevity and effectiveness of evaluative practices within the organization.

    Empowering Communities through Participatory Evaluation: An Oregon Case Study
    The episode further explores the impact of empowerment evaluation on community development. Fetterman shares an inspiring story from Oregon, where children armed with collected cigarette butts advocated for smoke-free policies. This participatory approach, Fetterman notes, empowers communities to set and achieve their own goals, fostering independent growth and development.

    Microskills for Facilitating Empowerment Evaluation Meetings
    He delves into the essential microskills required to facilitate evaluation meetings effectively. From personal and communication skills to technical advising, these microskills ensure that the evaluation process is respectful, inclusive, and constructive, laying the foundation for meaningful and lasting change.

    Navigating Cultural and Political Realities in Project Facilitation
    The episode also touches on the challenges of navigating diverse cultural and political landscapes in project facilitation. David shares personal experiences, emphasizing the importance of listening and adapting to local perspectives to ensure the success and acceptance of projects abroad.

    Utilizing Evaluation and Empowerment for Positive Community Change
    David highlights the transformative power of evaluation and empowerment in enacting positive community change. He shares a compelling story of a community member’s journey from opposition to advocacy for a teenage pregnancy prevention program, underscoring the importance of understanding and addressing underlying concerns within the community.

    Maintaining Self-Awareness and Reflection in Project Work
    David also discusses the critical role of self-awareness and reflection in project work, sharing insights on receiving and learning from feedback, working closely with community partners, and keeping the focus on strengthening the community rather than asserting control.

  3. 19/09/2023

    From Lecture Halls to Real-World Calls: Tiffany Berry’s Evaluation Insights

    Whether you're an educator, a student, or simply someone passionate about youth development and educational programs, this podcast episode with Tiffany Berry, Ph.D., promises to give you insights into the complex world of evaluation. She is the Dean and a full research professor in the Division of Behavioral & Organizational Sciences at Claremont Graduate University, where she also received her Ph.D. Her research interests include educational program evaluation, educational curricula, comprehensive school reform, after-school services, social-emotional learning, and other areas. She has published over 75 technical evaluation reports and peer-reviewed articles in leading evaluation and youth development journals. See XXXXX for a detailed description of the interview.
    OUTLINE:
    0:00 Overview of episode
    2:07 Evaluation models and conceptual models she uses
    8:56 Common challenges Tiffany encountered evaluating educational programs
    11:59 Challenges in measuring youth development outcomes
    17:36 Changing her evaluation approach due to unforeseen circumstances or changes in the program
    23:00 How to stay current in the field of evaluation
    28:10 Advice for those interested in becoming an evaluator
    32:41 Advantages and disadvantages of getting a graduate degree in evaluation versus other disciplines
    38:15 Credentialing in evaluation in the US
    44:32 The uniqueness of Claremont’s evaluation program
    49:40 Claremont’s doctoral program in evaluation
    55:09 Building a professional network in the evaluation field
    CONNECT WITH JAMES:
    - Subscribe to this YouTube channel: https://www.youtube.com/user/jamesmpa...
    - LinkedIn: https://www.linkedin.com/in/pannjames/
    Please reach out with comments and questions. Thanks!

  4. 11/04/2023

    Mindfulness Meets Evaluation: Insights from Jim McDavid

    In this episode, I talk with Jim McDavid, Ph.D., about his experience with mindfulness and meditation practice, how it has influenced him, and how it affects the way he views and practices evaluation. Our conversation also covers practical wisdom, Jim’s interest in the environment, and challenges associated with determining cause and effect in evaluation. Jim is Professor Emeritus at the School of Public Administration, University of Victoria, which he joined as faculty in 1980. He received the University of Victoria Alumni Teaching Award as well as the university’s highest academic honor, the UVic Distinguished Professorship Award. He has contributed significantly to the field of evaluation, and I reached out to him because of his work on mindfulness and evaluation.
    00:00 Introduction
    01:57 Jim’s definition of mindfulness
    05:20 The ethical dimension of mindfulness
    08:49 How Jim’s mindfulness practice has evolved over time
    11:46 Difference between Transcendental Meditation and Vipassana
    15:33 The spiritual dimension of meditation
    22:10 Subjectivity of cause and effect
    29:34 How mindfulness opens up the mind
    35:52 Mindfulness can support evaluation practice
    40:32 Connection to practical wisdom
    44:45 Importance of being present and really listening in evaluation contexts
    49:20 Judgment and evaluation
    50:46 Starting a mindfulness practice
    For more go to: https://evalnetwork.com/mindfulness-meets-evaluation/

  5. 02/03/2023

    Maximize Your Survey Response Rates: Expert Insights from Sheila Robinson

    In this episode, James Pann, Ph.D., interviews Sheila Robinson, Ed.D., about surveys and response rates. We focus on the significance of response rates in surveys and the steps that can be taken to maximize them. Sheila is a career educator and professional learning designer with experience in K-12 public education and higher education, a certified program evaluator, and a Certified Presentation Specialist (CPS™) with particular interests in survey design, data visualization, and presentation design. Read more here: https://evalnetwork.com/maximize-your-survey-response-rates
    Timeline
    00:40 Why Sheila is interested in survey research
    03:50 Why survey response rate is critical
    05:15 The optimal response rate
    07:37 Ensuring accurate demographic representation of the sample
    09:30 How to improve response rate
    12:01 The survey invitation message is critical
    13:26 Transparency about the time needed to complete the survey is important
    15:19 Survey reminders can be used tactically to improve the response rate
    16:43 Incentives should be used carefully
    20:46 Timing of incentives and the principle of reciprocity
    23:08 Building survey completion time into program activities can increase response rate
    26:17 Importance of piloting the survey prior to use
    28:15 Who sends the survey is important
    33:20 How to reach Sheila
    EPISODE LINKS:
    - Sheila's website: https://www.sheilabrobinson.com/
    - Sheila's LinkedIn: https://www.linkedin.com/in/sheilabrobinson/
    CONNECT WITH JAMES:
    - Subscribe to this YouTube channel: https://www.youtube.com/user/jamesmpann?sub_confirmation=1
    - LinkedIn: https://www.linkedin.com/in/pannjames/
    Please reach out with comments and questions. Thanks!
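    To make the arithmetic concrete, here is a small sketch with entirely hypothetical counts and group labels: it computes a response rate and compares the demographic mix of respondents to the invited population, the kind of representativeness check Sheila discusses at 07:37.

```python
# Hypothetical response-rate and representativeness check for a survey.
# All counts and group labels are illustrative, not from the episode.

invited = 400
completed = 132
response_rate = completed / invited
print(f"Response rate: {response_rate:.1%}")  # 33.0%

# Compare the demographic mix of respondents against the full invited
# population to spot nonresponse bias (e.g., one group under-responding).
population_share = {"staff": 0.50, "faculty": 0.30, "students": 0.20}
respondent_share = {"staff": 0.62, "faculty": 0.28, "students": 0.10}

for group, pop in population_share.items():
    gap = respondent_share[group] - pop
    flag = " <- underrepresented" if gap < -0.05 else ""
    print(f"{group:>8}: population {pop:.0%}, respondents "
          f"{respondent_share[group]:.0%} (gap {gap:+.0%}){flag}")
```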

  6. 28/03/2022

    Young Adult Cancer Survivors Increase Mindfulness and Connection During Nature Treks with David Victorson

    James Pann, Ph.D., interviews David Victorson, Ph.D., of True North Treks, a nonprofit organization whose mission is to empower young adults and caregivers affected by cancer to “find direction through connection” and mindfulness.
    As a child, David grew up surrounded by nature and its many restorative benefits. When he went on to complete his postdoctoral fellowship in psychosocial oncology as a psychologist, he saw an opportunity to bring nature’s gifts to the young adult cancer patients he was seeing. In 2008, he co-founded True North Treks to fill some of the unmet needs of these cancer survivors and their caretakers and help them get their lives back on track. The reconnecting power of nature, coupled with mindfulness and meditation, laid the basis for these restorative journeys.
    David goes on to discuss one of the most reported unmet needs: isolation. Many young cancer patients and survivors feel like they don’t know anybody like them. These treks allow the opportunity for deep social connection with others going through the same or similar experiences. These needs and solutions developed into three key points.
    True North Treks’ 3 Crucial Connections
    1) Connection with nature (after going through something as unnatural as cancer treatment);
    2) Connection with peers who get it and have walked a similar path;
    3) Connection with oneself through mindful awareness practices, such as meditation and yoga.
    While it may sound like a therapy session at first, David emphasizes the lack of an explicit group therapy aspect. The guides are trained never to question the participants about their cancer and instead simply sit back and allow them to speak their minds. Often, the participants will immediately start talking about their cancer experience on their own. The guides, primarily mental health professionals, are taught to be themselves and simply bring mindfulness coaching. The participants benefit from the mindfulness and yoga experience and from being with each other in the outdoors. That said, a “therapeutic” aspect tends to emerge on its own when the participants find themselves with several others just like themselves.
    As one of the 3 Crucial Connections, David defines what mindfulness means on the treks: simply the act of tuning into our present-moment experience with qualities of openness, curiosity, and self-kindness. Beyond the definition, David and the coaches try to get participants to practice mindfulness regularly throughout the treks. This could start simply by observing and noticing things, talking about experiences in a group setting, and sitting with uncomfortable emotions. David notes that participants respond to mindfulness practice in different ways. Some may be completely open-minded, while others may be very skeptical. Some may bring mindfulness to the little things in life, like a cup of coffee, while others carry the new awareness to a more significant aspect of their life, like their cancer journey.
    Analysis and Outcomes of the Treks
    When he’s not on a trek, David is at Northwestern University, where he does outcomes research and other academic activities. This has helped him develop a more focused and practical study and outcome analysis of the treks. In a recently published study of True North Treks, they used a version of the Patient-Reported Outcomes Measurement Information System (PROMIS), which he helped to develop at Northwestern University. They were even able to utilize some of the participants’ blood...

  7. 17/11/2021

    The CIPP Evaluation Framework with Guili Zhang

    James Pann interviews Guili Zhang about the Context, Input, Process, and Product (CIPP) evaluation model and other evaluation-related areas. Dr. Zhang is Department Chair and Professor of Research and Evaluation at East Carolina University. She received a Ph.D. in Research and Evaluation Methodology from the University of Florida and postdoctoral advanced training in large-scale data analysis from Stanford University. She has presented and published extensively and has led the evaluation of many programs and projects funded by agencies such as the U.S. Department of Education and the National Science Foundation. Guili’s book, The CIPP Evaluation Model, coauthored with Daniel Stufflebeam, is the authoritative book on the CIPP Model, one of the most influential and widely used evaluation frameworks. Dr. Zhang is very active in the American Evaluation Association and is currently a Board of Directors Member-at-Large.
    Timeline
    00:00 – Introduction
    00:18 – How she got involved with the CIPP model
    01:43 – When she sent her evaluation report that used the CIPP model to Dan Stufflebeam
    02:38 – Dan asks Guili to write a book about the CIPP model with him
    05:53 – Collaborating on the writing of the book at a distance
    06:46 – Guili’s concise explanation of the CIPP model
    09:01 – CIPP model as an effective way to teach about evaluation in general
    10:30 – Unique advantages of using the CIPP model
    12:20 – A common-sense approach to evaluation that can be used by many
    13:21 – CIPP model as a living, evolving framework
    14:44 – Updated CIPP model checklists linked to in the book
    15:56 – What non-evaluation students can bring to their future work by learning evaluation
    18:22 – The best way to learn how to do evaluation
    19:35 – Evaluation resources she suggests
    21:50 – How evaluation can be used to improve our world
    23:41 – What Guili would like to accomplish while an AEA board member
    25:28 – Books she likes to give as a gift to friends and colleagues
    Episode Links
    - The CIPP Evaluation Model: How to Evaluate for Improvement and Accountability
    - Western Michigan University, The Evaluation Center
    - American Evaluation Association
    - American Family Education Institute
    - Guili’s LinkedIn
    - Guili’s Twitter
    Connect with James
    - Subscribe to YouTube channel
    - LinkedIn
    - Twitter
    Please reach out with comments and questions. Thanks!
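    As a rough aide-mémoire, the sketch below organizes the four CIPP components as a small data structure, pairing each with the guiding question commonly associated with it in Stufflebeam's framework (the phrasing is paraphrased, not quoted from Guili's book):

```python
# The four CIPP components paired with the guiding question each one
# addresses (paraphrased; see Zhang & Stufflebeam's book for the
# authoritative formulation).

CIPP_MODEL = {
    "Context": "What needs to be done?",        # needs, assets, problems
    "Input":   "How should it be done?",        # plans, resources, strategy
    "Process": "Is it being done as planned?",  # implementation monitoring
    "Product": "Did it succeed?",               # outcomes, impact, worth
}

def frame_evaluation_questions(program: str) -> list[str]:
    """Generate one orienting question per CIPP component for a program."""
    return [f"{component} evaluation of {program}: {question}"
            for component, question in CIPP_MODEL.items()]

for line in frame_evaluation_questions("an after-school tutoring program"):
    print(line)
```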

  8. 11/07/2021

    What’s the difference between research and evaluation? with Dana Wanzer

    The difference between research and evaluation, the pros and cons of professionalization, the definition of evaluation, and other evaluation-related topics with Dana Wanzer, Ph.D., interviewed by James Pann, Ph.D. Dana is an assistant professor of psychology in evaluation in the psychology department at the University of Wisconsin-Stout. Dana teaches evaluation courses to students in the MS in Applied Psychology program, as well as statistics and intro psychology. Her research focuses on the evaluation profession, including defining evaluation, data visualization practices in evaluation, the role of politics in evaluation, and more.
    “In one regard, that I think our research is pointing towards is, we need to professionalize to better communicate to others outside of our field, who we are, what we do, and how we are supposed to do this work. Because if we don’t have those professional boundaries, then I mean, and we see this all the time, then funders get to dictate how evaluation is done. And it doesn’t always align with our set of competencies, our ethical guiding principles, the personal frameworks and approaches that we use in evaluation, they’ll just say, no, this is what we expect. And that may not align with what you want to do as an evaluator or what you think is maybe even ethical as an evaluator.”
    Outline
    00:00 – Dana’s study on the difference between research and evaluation as perceived by evaluators and educational researchers
    01:01 – Why she decided to study this topic
    04:10 – The methodology of her research
    07:19 – What she found in her study
    10:26 – Professionalization in the field of evaluation
    12:14 – Common misunderstandings about what evaluation is and the impact
    16:32 – Dana’s explanation of evaluation depends on who she is talking to
    20:30 – The benefit of evaluators having subject matter expertise related to the evaluand
    21:25 – Agenda differences between evaluators and researchers
    24:53 – Do we need a better word than “evaluation” to describe what we do?
    28:42 – How Dana would convince a researcher that evaluation is different from research
    30:44 – Evaluation theories she uses
    33:19 – Social science theories she utilizes
    36:22 – What she recommends to students who want to learn about evaluation
    38:45 – How she hopes evaluation will benefit students who are not primarily focused on evaluation
    42:08 – How mindfulness can support evaluation education and practice
    48:49 – Resources for beginning evaluators
    53:38 – How to connect with Dana
    Episode Links
    - Dana’s website
    - Dana’s Twitter
    - Dana’s LinkedIn
    - Evaluland Podcast
    - Dana’s October 2020 American Journal of Evaluation article
    Podcast Info
    - Podcast website
    - Apple Podcast
    Connect with James
    - Subscribe to YouTube channel
    - LinkedIn
    - Twitter
    Please reach out with comments and questions. Thanks!

  9. 17/12/2020

    The Importance of How We Define Evaluation with Amy Gullickson

    In this interview, I speak with Amy Gullickson, Acting Co-Director and Senior Lecturer at the Centre for Program Evaluation at the Melbourne Graduate School of Education. She is also Chair of the International Society for Evaluation Education. We talk about the best definition of evaluation and why it is important to have a clear one. Amy also shares some of her favorite resources for people just starting to learn evaluation.
    What is Amy’s definition of evaluation?
    Amy explained that it’s important for us to think about the implications of the definition, which she does in detail in her article, “The Whole Elephant: Defining Evaluation.” She defines evaluation as the generation of a credible and systematic determination of merit, worth, and/or significance of an object through the application of defensible criteria and standards to demonstrably relevant empirical facts. Amy states that it is the implications of the definition that matter: it’s worth exploring what you (or your clients, or stakeholders) think evaluation is, because that will shape what they expect you to deliver and what may or may not be appropriate. Amy believes a definition of evaluation must include valuation. This is our task as evaluators, and it has been overshadowed by social science research. We’ve got much work to do to become as informed (and have as much empirical evidence about what good looks like) in our valuation practice as we are in our research practice.
    Why Amy thinks it’s important to have a clear definition of evaluation
    People often think evaluation and research are the same thing. Amy explains why it is important to understand the difference and to have a clear definition. She gives an example: research asks questions such as what the p value (the probability of a Type I error) is and how big the change was. Evaluation asks, “so what?” Did the program actually reach the people who matter most? Was the change big enough to make a difference? Does that p value actually mean anything? The task defines the knowledge, skills, and attributes necessary to accomplish it. If evaluation is just applied social science, then there is no need for skills and knowledge related to valuation, which Amy sees as a significant flaw in common evaluation practice. You might not get to a summative judgment every time (and for good reason: it might not be appropriate to do so), but if we take the valuation process out of the definition, then we allow the implicit values of the most powerful to determine what good is and what evidence is. We then become complicit in upholding systems that oppress the global majority, in effect giving our blessing to programs and systems that actually create harm. Amy explains that this is exactly counter to what most people say they aspire to when they engage in evaluation.
    How Amy believes evaluator competencies relate to how someone might define evaluation
    Most competency sets have more than 60 competencies (Amy tells us the Australian Evaluation Society has 94). Canada has decided that anyone who can demonstrate an acceptable level of skill on a percentage of the competencies in each domain can be credentialed as an evaluator. But are all competencies equally important?
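    Amy’s point that a p value alone cannot answer the “so what?” question is easy to demonstrate with a quick simulation (all numbers hypothetical): with a large enough sample, a practically trivial difference is still highly statistically significant.

```python
# With a large sample, a trivially small effect is still "statistically
# significant" -- the p value alone cannot answer evaluation's "so what?"
# question. All numbers are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
control = rng.normal(loc=50.0, scale=10.0, size=n)
treated = rng.normal(loc=50.3, scale=10.0, size=n)  # +0.3 points (~0.03 SD)

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / 10.0  # using the known SD

print(f"p value:   {p_value:.2e}")   # far below 0.05
print(f"Cohen's d: {cohens_d:.3f}")  # tiny effect, unlikely to matter
```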

  10. 12/11/2020

    Evaluative Thinking with Thomas Archibald

    Thomas Archibald, Ph.D., is an Associate Professor and Extension Specialist in the Department of Agricultural, Leadership, and Community Education at Virginia Tech. His practice and research focus primarily on program evaluation and evaluation capacity building. He serves as the Chief of Party/Director of the Feed the Future Senegal Youth in Agriculture project, funded by USAID/Senegal, which is increasing youth engagement in Senegal’s economic growth. Tom is also an Associate Editor for Evaluation and Program Planning, an Editorial Board Member of New Directions for Evaluation and of the American Journal of Evaluation, a Board Member of the Eastern Evaluation Research Society (EERS), and Program Co-Chair of the American Evaluation Association (AEA) Organizational Learning and Evaluation Capacity Building Topical Interest Group. We discuss the following areas:
    – Evaluative thinking and how it relates to evaluation capacity building
    – How valuing is an essential part of evaluative thinking
    – What evaluation can teach organizations not traditionally served by it, and what evaluators can learn from them
    – Specific steps for evaluators to become more reflective in their practice
    – The importance of evaluative thinking, critical thinking, practical wisdom, reflective practice, intuition, and mindfulness
    – How social science theories, including theories of power and an understanding of power dynamics, can inform evaluation
    – How the study of evaluative thinking and evaluation can assist individuals in a broad range of disciplines
    – Can evaluation save the world, or does it at least have a role in improving it?
    – The future of evaluation: new and emerging approaches
    – Books that Tom highly recommends
    You can connect with Tom on LinkedIn or @tgarchibald on Twitter. Enjoy!
