Psych Tech @ Work

Charles Handler

Science 4-Hire is now Psych Tech @ Work! - a podcast about safe innovation at the intersection of psychological science, technology, and the future of work. Psych Tech @ Work promotes safe technological innovation and human/machine partnerships as an essential force in creating equilibrium between psychology and commerce. Maintaining this balance in a time of unprecedented change is essential for ensuring that the future of work is ethical, positive, and prosperous. Creating such a future requires a deep level of interdisciplinary collaboration. With the goal of educating, engaging, and inspiring others through thoughtful and practical discussions with guests from a wide variety of backgrounds and specialties, Psych Tech @ Work provides a smorgasbord of food for thought and practical takeaways about the issues that will make or break the future of work! charleshandler.substack.com

  1. Overcoming Obstacles to AI Adoption Through Creative Play

    2 days ago

    Overcoming Obstacles to AI Adoption Through Creative Play

    “The problem with AI adoption isn’t just technical—it’s emotional. Creativity lowers the barrier of fear, and that opens the door to skill building.” – Jimmy Lepore Hagan

    Newsflash! After a much-needed hiatus, Psych Tech @ Work is back with a vengeance! During the break I have been heads down in my lab, experimenting and playing with AI. SHE’S ALIVE! This episode marks the debut of my self-created AI podcast co-host, Mayda Tokens. It took me three weeks to make her, and during the process I explored the human side of effectively collaborating with AI. Making Mayda required me to flex my creativity, critical thinking, flexibility, and perseverance. My Mayda experience prepared me firsthand for a great conversation with Jimmy about creativity, AI, and the human psyche.

    In this episode of Psych Tech @ Work, I welcome my new friend and fellow New Orleanian Jimmy Lepore Hagan. Together we explore why creativity is the missing link in many corporate AI readiness programs — and how it can be leveraged to help individuals and teams move from fear to fluency in a rapidly transforming world. Jimmy brings his bold, experience-driven perspective to the conversation, making the case that creative courage is not a soft skill — it's a strategic asset. We also discuss Jimmy’s new framework for enabling AI adoption through creativity, and my addition to the delivery of his hands-on workshop designed to help HR teams, L&D leaders, and talent professionals build AI fluency through creative exploration.

    Summary

    Creative thinking isn’t just about making art — it’s about rewiring our brains to embrace ambiguity, take risks, and explore the unknown. In this episode, we discuss how cultivating creativity can de-risk the AI learning curve, helping professionals feel more confident engaging with emerging tools. In an era of automation, the ability to experiment, play, and fail safely is what separates those who adapt from those who resist.
    These traits are not innate — they can be developed, and doing so can radically change how individuals approach new technology. The episode also highlights a workshop experience that puts this theory into action: a fun, safe, and high-impact program designed to build creative fluency first — and then apply it to AI. This approach helps teams lower psychological barriers to AI experimentation and opens the door to real skills development.

    Themes We Explore

    * Creativity as an Onramp to AI Readiness: Creativity builds the core capacities — curiosity, experimentation, and comfort with failure — that directly translate to AI learning and application.
    * Why Psychological Safety Is a Prerequisite: Without a safe space to explore, innovation doesn’t happen. We talk about how to build the cultural conditions that support real experimentation with new tech.
    * Learning to Play (Again): Many professionals have been conditioned out of creativity. Jimmy explains how low-stakes exercises can reawaken this muscle and prepare the brain for change.
    * Failure as Fuel: We unpack the idea that failure is not just acceptable — it’s critical for both creative and AI development. Practicing failure makes success possible.
    * Designing for Transformation: Hear how we’re applying these concepts in a new experiential workshop, helping HR and L&D leaders guide their organizations through tech transformation with humanity and purpose.

    The Last Word

    Despite the hype, many organizations struggle to operationalize AI adoption. Often, the barrier isn’t technical — it’s emotional and behavioral. Employees hesitate to engage because they fear doing it wrong or looking incompetent. This episode introduces a radical but practical solution: creativity. By focusing first on human traits like courage, curiosity, and psychological safety, organizations can build a foundation for real AI fluency and sustainable innovation. I have to give a direct and shameless plug for our workshop.
    Our workshop combines science, storytelling, and hands-on exercises to help teams build the mindsets and skills needed for the future of work. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    1 hr 12 min
  2. Creativity Is the Gateway to AI Transformation

    Aug 18

    Creativity Is the Gateway to AI Transformation

    My creative experience building an AI podcast co-host says it all. Hear all about it on the next episode of the Psych Tech @ Work Podcast - coming soon!

    AI skills are essential but daunting

    AI adoption is accelerating—over 70% of companies report they’re actively integrating AI tools into their workflows. But for the people expected to use those tools, it’s a different story. Most professionals say they feel unprepared or even anxious about using AI on the job. Traditional training often falls short with AI skills because it focuses on tools, not mindset. And the stakes are high: as AI becomes embedded in everyday work, careers will increasingly rely on comfort and expertise with AI. This gap, and the demand for innovative strategies to close it, has been top of mind for me. Good news: my fascination with AI led me to a solution! (More on this later.)

    Creativity unlocks AI skills

    I recently gave a talk at a meeting of the New Orleans AI Philosopher’s group (AKA NOAI) on AI and the future of our local economy. At this event I saw a talk by Jimmy Lepore Hagan—an artist, designer, and educator—who shared a fresh, unique, and noteworthy approach to AI adoption. Jimmy’s talk was about the value of creativity in lowering fear of AI. He demonstrated concepts from a workshop series he has developed, featuring low-stakes creative exercises grounded in design thinking that help people build comfort, confidence, and curiosity when working with AI. As a workplace psychologist I immediately saw the potential for a collaboration: applying Jimmy’s hands-on educational model to my world to help people leaders solve a difficult problem. As someone who’s spent decades applying psychological science to the development and measurement of human traits in the workplace, I have experience understanding the impact of creativity on outcomes that are directly related to work performance.
    As I processed all of this, I took a step back and reviewed foundational research that shaped my earlier work—this time through the lens of AI. The connections stood out immediately. Traits like divergent thinking, cognitive flexibility, and creative self-efficacy have long been linked to performance, but they also play a critical role in how people approach new, uncertain technologies. The evidence is clear: creativity and experiential learning do more than build skills—they tap into deeply human strengths that make people more open, adaptable, and ready to thrive in the face of change.

    My dance with AI says it all

    It became pretty clear to me that a collaboration with Jimmy could really have some legs. To get the ball rolling I invited Jimmy to be a guest on my podcast, Psych Tech @ Work. To prepare, I wanted to gain some firsthand experience with using creativity to sharpen my AI skills. I suck at coding, and the requirement to use Python for this definitely gave me some anxiety, but I knew ChatGPT could somehow have my back. Thus came the idea to challenge myself (and have some fun) by building an AI podcast co-host, Mayda Tokens. Mapping out and executing a workflow to bring Mayda to life threw me plenty of curveballs. Some of ChatGPT’s more noteworthy and frustrating shenanigans included:

    * Multiple times ChatGPT relentlessly tried, and continually failed, to solve technical issues, and would not give up until I suggested that we were going in circles in a blind alley and should explore alternative methods. This prompt led immediately to a set of viable alternatives that would never have been explored if I hadn't decided to pull the plug.
    * When I backed ChatGPT into a corner, I was flabbergasted when, instead of hallucinating a solution or looking for another option, it simply refused to help me. This was a head-scratching result that must have exposed a ghost in the machine, because its prime directive is NEVER to say NO!
    * As I explored different options for Mayda’s voice, my text-to-speech output randomly switched to Japanese and then to emoji.
    * As we hit dead ends trying to figure out how to bring Mayda into my podcast studio, I stupidly followed its instructions to run to Best Buy and Guitar Center to buy unnecessary hardware that neither place actually sold.

    In the three weeks it took to bring Mayda to life, I became hyper-focused—borderline obsessed—with working through many obstacles. The dopamine hit I got each time we solved a challenge together reminded me that my brain chemistry is essential for accessing and applying uniquely human traits like creativity, critical thinking, resilience, and tolerance for ambiguity. The interplay between my human biology and psychology was essential for winning the day, and my experience building Mayda really hammered home the value of creative collaboration with AI.

    Our workshop is the gateway to fearless AI skills

    Learn how we’re helping companies build fearless, AI-ready teams. Viewing AI as a dance partner is the paradigm that serves as the foundation of our workshop. Instead of lectures, videos, and formulaic exercises, we use creative, hands-on activities that help people relate to AI in a way that feels playful, safe, and real. In our workshop, participants explore AI through:

    * Improvised dialogue with generative models
    * Creative prompt challenges
    * Group problem-solving sprints
    * Human-AI art collaborations
    * Guided reflection and peer feedback

    By mapping each of these design-thinking-centric, hands-on exercises to psychological principles—like creative self-efficacy, openness to experience, and experiential learning—the workshop becomes more than fun. It becomes a stealth learning experience where participants not only gain essential AI skills but also undergo cognitive changes that empower them to believe in the value of partnering with AI.
    We believe our workshop can be a difference-maker for companies navigating AI transformation—and a real competitive advantage for those that are bold enough to think differently about AI adoption. To learn more about our workshop and the collaborative ideas behind it, and to meet Mayda Tokens, visit our workshop page and be sure to listen to our conversation about it on the next edition of my Psych Tech @ Work podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    5 min
  3. May 12

    Scaling AI Innovation for Hiring: Lessons from the Frontlines

    Guest: Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management

    “We have to stress-test innovation in the messiness of real-world hiring, not just ideal lab conditions.” -Christine Boyce

    In this episode of Psych Tech @ Work, I’m joined by my longtime friend Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management, to explore how innovation — especially around AI — is reshaping hiring and talent development at scale, and why solving for trust, transparency, and operational realities matters more than ever.

    Summary

    At the heart of this conversation is the reality that scaling AI innovation in hiring brings massive complexity. While AI offers incredible promise, solving for accuracy, fairness, and operational reality becomes exponentially harder when you're dealing with a large number of unique clients. Christine Boyce, through her work at ManpowerGroup & Right Management, operates at the intersection of these challenges every day. Unlike internal talent acquisition leaders who focus on one organization's needs, Christine must help innovate across a vast client portfolio. Each client presents different barriers — from data limitations, to ethical concerns, to regulatory pressures — and innovation must be modular, defensible, and adaptable to succeed. This vantage point gives Christine a unique, big-picture view of how AI adoption really plays out across industries and markets. We dive into the practical challenges of innovating responsibly: earning trust, scaling solutions across diverse environments, and balancing speed with fairness. Christine’s work highlights how innovation must be deeply disciplined if it is to achieve true scale and impact.

    The Core Challenge: Scaling Accuracy and Fairness

    At the heart of using AI for hiring lies the challenge of achieving accuracy and fairness at scale.
    AI’s true value isn’t just its ability to make individual decisions — it’s in processing vast amounts of data and automating judgment across thousands of candidates. However, scale magnifies both strengths and weaknesses: minor biases can grow into systemic problems, and small inefficiencies can snowball into major failures. Staffing firms like ManpowerGroup offer critical real-world lessons:

    * Scale forces discipline: Every AI tool must be rigorously vetted for fairness, transparency, and defensibility before deployment.
    * Real-world variation stresses the system for the better: Tools must flexibly adapt to diverse jobs, industries, and candidate pools. This improves the overall path of innovation and drives great learnings across the board.
    * Speed must not erode trust: Productivity gains must still respect ethical standards and candidate experience.
    * External accountability keeps AI honest: Clients demand transparency, validation, and explainability before adoption.

    Real Barriers to AI Adoption: What Clients Are Facing

    Despite AI's potential, Christine identifies several persistent hurdles that she faces when serving her diverse slate of clients:

    * Resistance to Behavior Change: Even demonstrably valuable AI tools often struggle against entrenched workflows and distrust of automation.
    * Ethical and Trust Concerns: Clients demand AI systems that are transparent, explainable, and defensible, fearing reputational or regulatory risks.
    * Vendor Noise Overload: Saturation by "AI-washed" vendors makes it hard to differentiate true innovation from hype.
    * Mismatch Between Hype and Practical Needs: Clients need tools that solve today’s operational problems — not just futuristic visions disconnected from reality.
    * Fear of Creeping AI Adoption: Organizations worry about AI capabilities being embedded into systems without visibility or intentionality.
    * Compliance and Regulation Anxiety: Global and local regulations (like the EU AI Act or pending US laws) create urgency for proven, compliant AI solutions.
    * Talent Data Readiness: Without clean, structured internal data, even the best AI solutions struggle to deliver meaningful results.

    These challenges aren't isolated — they reveal the broader realities companies must manage when trying to adopt AI responsibly at scale. Ultimately, client concerns shape AI innovation because they are critical for the adoption of these technologies, determining how staffing firms and vendors must design, validate, and deploy solutions. There’s an inherent tension between the drive for scale and the need for trust, fairness, and operational reality. Christine’s experience demonstrates that true innovation in AI for hiring isn't just about introducing new tools — it’s about creating resilient, transparent systems that can adapt to real-world complexity. Managing the tension between speed, scale, trust, and fairness represents the path to a bright future. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    52 min
  4. Apr 15

    Responsible AI In 2025 and Beyond – Three pillars of progress

    "Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob Pulver

    My guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices. Bob was my guest about a year ago, and in this episode he drops back in to discuss what has changed in the fast-paced world of AI across three pillars of responsible AI usage:

    * Human-Centric AI
    * AI Adoption and Readiness
    * AI Regulation and Governance

    These are the themes that we explore in our conversation, along with our thoughts on what has changed and evolved in the past year.

    1. Human-Centric AI

    Change from Last Year:
    * Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.

    Reasons for Change:
    * Increasing comfort level with AI and experience with the benefits that it brings to our work
    * Continued exploration and development of low-stakes, low-friction use cases
    * AI continues to be seen as a partner and magnifier of human capabilities

    What to Expect in the Next Year:
    * Increased experience with human-machine partnerships
    * Increased opportunities to build superpowers
    * Increased adoption of human-centric tools by employers

    2. AI Adoption and Readiness

    Change from Last Year:
    * Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.
    * Significant growth in AI educational resources and adoption within teams, rather than just individuals.
    Reasons for Change:
    * Improved understanding of AI's benefits and limitations, reducing fears and resistance.
    * Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.

    What to Expect in the Next Year:
    * More systematic frameworks for AI adoption across entire organizations.
    * Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.

    3. AI Regulation and Governance

    Change from Last Year:
    * Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., the EU AI Act, California laws).
    * Momentum to hold vendors of AI increasingly accountable for ethical AI use.

    Reasons for Change:
    * Growing awareness of risks associated with unchecked AI deployment.
    * Increased push to stay on the right side of AI via legislative activity at state and global levels addressing transparency, accountability, and fairness.

    What to Expect in the Next Year:
    * Implementation of stricter AI audits and compliance standards.
    * Clearer responsibilities for vendors and organizations regarding ethical AI practices.
    * Finally, some concrete standards that will require fundamental changes in oversight and create messy situations.

    Practical Takeaways: What should we be doing to move the ball forward and realize AI’s full potential while limiting collateral damage?

    Prioritize Human-Centric AI Design
    * Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology’s sake.
    * Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.
    Build Robust AI Literacy and Education Programs
    * Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.
    * Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.

    Strengthen AI Governance and Oversight
    * Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.
    * Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.

    Monitor AI Effectiveness and Impact
    * Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.
    * Evaluate Human Impact Regularly: Regularly assess the impact of AI tools on employee experience, fairness in decision-making, and organizational trust.

    Email Bob: bob@cognitivepath.io

    Listen to Bob’s awesome podcast: Elevate Your AIQ

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    55 min
  5. Mar 18

    The Reality of Skills-Based Hiring Rests on Three Essential Pillars - with Jason Tyszko

    “We have to move beyond the idea that a skills-based job description is enough—there needs to be validation, assessment, and a clear pathway for job seekers to prove their abilities.” -Jason Tyszko

    In this episode of Psych Tech @ Work, I sit down with Jason Tyszko, Senior Vice President of the U.S. Chamber of Commerce Foundation, to discuss what it really takes to make skills-based hiring a reality. Jason oversees the Foundation’s T3 Innovation Network, a public-private initiative aimed at creating a more equitable and inclusive job market. T3 focuses on using digital tools to improve communication between different parts of the job market, ensuring that all learning is recognized and valued. T3’s mission to bridge gaps between employers and workers via the advancement of skills-based hiring makes Jason one of the world’s foremost authorities on the subject.

    Our conversation is a must for anyone interested in understanding the REALITIES required for true skills-based hiring. Most conversations on the subject are more hype than substance, but not this one! Jason takes us deeper into the reality of what it will take to make skills-based hiring more than just an empty buzzword. To ground our conversation in a dose of reality, Jason boils success with skills-based hiring down to these three pillars:

    * Interoperable Skills Data
      * To make skills-based hiring a reality, we need standardized, structured, and widely accepted skills data that flows seamlessly across education providers, employers, and workforce systems.
      * Without interoperability, skills data remains fragmented, making it difficult for employers to assess candidates meaningfully.
    * Employer Engagement and Adoption
      * Employers must align job descriptions, hiring processes, and internal mobility pathways around skills rather than degrees or traditional credentials.
      * Many organizations support skills-based hiring in theory but fail to implement it fully due to ingrained legacy practices.
    * Technology Infrastructure and Ecosystem Readiness
      * AI, job-matching platforms, and hiring tools must be built to recognize and evaluate skills accurately, rather than simply filtering candidates based on outdated proxies like job titles or degrees.
      * Systems should support skills validation, assessment, and transparent career pathways to ensure fair and effective hiring decisions.

    Jason explains how these pillars support and enable five critical but often overlooked elements that are essential to making skills-based hiring work:

    1. Learning and Employment Records (LERs) & the LER Resume Standard
    * What it is: LERs are digital, verifiable records of a person’s skills, training, certifications, and work experience. Instead of relying on traditional resumes or self-reported skills, LERs allow employers to see a structured, validated record of a candidate’s capabilities.
    * Why it matters: Today’s hiring systems don’t talk to each other. Skills data is trapped in different platforms (learning management systems, certifications, HR software). LERs allow skills-based hiring to function at scale by ensuring a candidate’s credentials are portable and universally recognized.
    * LER Resume Standard: This is a newly developed resume format built to process LERs, ensuring HR tech systems can read, compare, and use skills-based data more effectively.

    2. Durable Skills
    * What it is: Unlike technical skills (which can quickly become outdated), durable skills are long-lasting, transferable skills like critical thinking, adaptability, leadership, and collaboration.
    * Why it matters: Most AI-driven hiring tools over-prioritize technical skills, but durable skills are what truly drive career success. Without a way to assess and validate them, companies risk hiring for short-term needs instead of long-term potential.

    3. The Interoperability Layer
    * What it is: A technical framework that allows skills data from different platforms to connect and work together—like an API that helps job boards, HR systems, and learning platforms “speak the same language.”
    * Why it matters: Right now, skills-based hiring is fragmented because every company and HR tech provider uses different skills taxonomies and formats. An interoperability layer standardizes how skills data is shared, making it easier for employers to evaluate candidates based on a common skills framework.

    4. Employer-Led Recognition
    * What it is: A system where workers’ skills are validated by their employers and colleagues, not just through certifications or formal education. This could involve peer endorsements, manager assessments, or internal training validations.
    * Why it matters: Most skills-based hiring focuses on externally validated credentials (e.g., certificates, degrees), but many people develop critical skills on the job. Without a structured way to recognize and verify these skills, businesses overlook talent that is already in their workforce.

    5. Skills Wallets
    * What it is: A digital, user-controlled repository where individuals can store, manage, and share verified records of their skills, credentials, and learning experiences.
    * Why it matters: Unlike traditional resumes or degree transcripts, Skills Wallets give workers full ownership of their skills data, making it portable across jobs, industries, and learning platforms. This enables lifelong learning and career mobility in ways that existing hiring systems do not support.

    Skills-based hiring has the potential to transform the workforce, but it won’t succeed without system-wide changes in HR technology, workforce data, and employer incentives. Jason’s insights reveal the often-ignored challenges and solutions that can make this shift truly scalable and effective.
    If you’re in talent strategy, workforce development, or HR technology, this episode provides a realistic roadmap for making skills-first hiring work.

    * Learn more about the T3 Innovation Network: t3networkhub.org
    * Contact Jason

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    59 min
  6. Mar 12

    Are These 4 AI Mistakes Sabotaging Your Talent Strategy?

    In our recent LinkedIn Live session, my esteemed colleague Neil Morelli, founder of Workplace Labs, and I present a philosophical but practical approach to the adoption of HR tech tools. Check out the full video of the presentation attached to this post and our accompanying slides (found at the bottom of the post). Here is a quick overview of the ideas that form the foundation of the presentation.

    “The highest-level goal of the talent acquisition (TA) function is to ensure that an organization has the right people, in the right roles, at the right time, to drive business success.” -ChatGPT 4o & your hosts’ combined 50 years of experience

    Talent leaders are feeling the pressure to execute

    Modern hiring problems such as resource constraints, candidate scarcity and overload, the move to skills-based hiring, and avoiding bias have talent leaders feeling the pressure to find fast solutions! Relieving these pressures often creates a temptation to put tools before strategy. AI is a great example of this. The stakes are high, and AI offers a compelling solution - or does it? AI is complex, and making decisions about it requires a strong foundation of knowledge and careful planning. In this presentation we discuss four common mistakes in the adoption of HR tech, with a focus on AI tools (are there any other types these days?). We discuss how a tools-first mentality is often the root cause of these four common mistakes and offer guidance on how to avoid them.

    1. Missing AI’s ‘creeping normality’: As technology becomes more entrenched in your processes and vendors add new, easily accessible functionalities, adoption often occurs with little oversight or consideration. When it comes to solving problems related to talent supply or overload, AI recruitment platforms are increasingly embedding “talent matching” functionalities that create risk without any substantial rewards.

    2. Chasing Skills Without Definition or Direction: We can all agree that skills-based hiring has merit.
    But it requires alignment on what a skill means to your organization and a holistic view of where skills matter and why. Merely removing resumes from the evaluation process, or adopting tools (AI or otherwise) that claim to support skills-based hiring without a holistic strategy, is a dead-end street.

    3. Failing to evaluate your firm’s culture and climate for adopting AI-based tools: There is a maturity required for the successful adoption of AI-based tools. Understanding your firm’s readiness for AI-based tools, and ensuring that you are ready to go all in, is essential. Education on, and knowledge of, AI across the entire organization is a big part of successful adoption.

    4. Letting vendors dictate strategy and adoption: Most vendors do offer products that can have an impact, and their messages make it tempting to jump right in. But before biting on a shiny new object, the adoption of any AI-based tool should be preceded by an in-house strategy. Vendors must be held to a standard evaluated by domain experts using a framework built on the principles of ethical and effective use of AI.

    At the end of the presentation we provide a case study that will probably feel pretty relatable to any talent acquisition professional. Here we tell a story of how mistakes are made and provide insights to help create the awareness needed to avoid them. No one is perfect - but AI alone will not create perfection. Keeping things in perspective, and following a thoughtful and methodical process that is not driven by fear, is essential to the successful adoption of AI technologies.

    Download our slides here

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    50 min
  7. Feb 28

    Recruiting Tech’s Past, Present and Future- W/Jeff Taylor: OG, Founder @ Monster.com & Boomband

    "The hiring industry is at a breaking point—AI is putting pressure on old systems that were never designed for this level of automation." –Jeff Taylor

In this episode of Psych Tech @ Work, I am joined by Jeff Taylor, serial entrepreneur, founder of Monster.com, and founder of Boomband, a revolutionary new platform looking to turn hiring on its ear. Few people have shaped the hiring industry as profoundly as Jeff, whose vision transformed job search from a niche experiment into an industry standard. Jeff’s journey, from building the first large-scale job board to continuously innovating in the talent acquisition space, gives him a unique perspective on where hiring technology has been and where it’s headed, making him the perfect guest to explore the next big disruptions in talent acquisition and how AI is reshaping the hiring process.

In our time together we reminisce about the story behind Monster’s memorable Super Bowl ads (who can forget the kid saying, “I want to claw my way up to middle management!”?) and the formative impact my job at Monster (circa 2000) has had on my career.

But enough about me! Our conversation explores the rapid acceleration of AI in recruiting, from automating sourcing and matching to the potential risks of AI-generated applications flooding hiring systems. Jeff happily shares his candid thoughts on why hiring technology has stagnated, how AI is creating new challenges for recruiters, and what companies must do to stay ahead in an increasingly automated hiring landscape. We also discuss the core concepts behind Boomband, Jeff’s new social hiring platform.

Topics Covered:

* Monster.com’s origin story and how it transformed hiring and created the “job board” industry.
* The shift from traditional job search to AI-driven sourcing and candidate matching, and what this means for the future of hiring.
* The pros and cons of AI-generated resumes and job applications: are we heading toward an overload of unqualified applicants?
* The failure of legacy hiring systems to keep up with modern job-seeker behavior.
* The potential for AI to create more personalized and predictive hiring experiences, and Boomband, Jeff’s new venture focused on creating a new paradigm for hiring (again!).

Takeaways:

* Job boards revolutionized hiring, but they haven’t evolved fast enough. The core concept of posting jobs and waiting for applications hasn’t fundamentally changed in decades.
* AI is making job search more efficient but also more chaotic. Automated resume generation and mass applications are overwhelming recruiters and breaking traditional applicant tracking systems.
* Legacy hiring technology is struggling to adapt. The demand for AI-powered sourcing and skills-based hiring is exposing the limitations of old-school job posting and resume-matching platforms.
* The next frontier of hiring is predictive and personalized. Jeff envisions AI-driven career pathing, real-time job market intelligence, and new ways to match candidates based on abilities, not just experience.

Jeff’s perspective on AI-driven hiring, the changing nature of job search, and where hiring technology must go next makes this conversation a must-listen for anyone interested in the future of work.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    48 min
  8. Feb 14

    AI’s Role in Redefining the Future of Psychometric Assessments (and Hiring)

    “The future of assessments is about customization at scale. AI allows us to generate and adapt assessments in real-time, making them more relevant to specific job roles.” –Ben Williams

Introduction: In this episode of Psych Tech @ Work, I sit down with Ben Williams, Managing Director of Sten 10, to discuss how AI is reshaping the field of psychometric assessments and hiring processes. Our conversation dives into the evolving landscape of AI-driven assessments, the ethical considerations of using AI in hiring, and the challenges of maintaining transparency and fairness while incorporating new technologies. Ben shares insights into blending AI with traditional assessment tools and how this impacts the future of selection processes.

Key Topics Covered:

* The role of AI in automating and customizing assessments
* Emerging challenges in trust, fairness, and explainability in AI-powered hiring
* The importance of designing job-specific psychometric tools that align with organizational needs
* AI’s potential in generating, scoring, and validating assessments
* Future implications of AI for entry-level and senior hiring roles

Summary: We explore AI’s role in streamlining psychometric assessments while addressing the challenges of maintaining transparency and fairness. Ben describes how Sten 10 has integrated AI to make assessment processes faster and more personalized without losing the critical human oversight needed for ethical hiring practices. We also discuss prompt engineering, AI literacy, and the limitations of AI-generated assessments.

One significant takeaway is the growing importance of designing highly contextual and customized assessments using AI while ensuring they remain interpretable and meaningful. We touch on real-world examples, including how AI can generate coaching tips and personality profiles, as well as concerns about over-reliance on AI outputs. The conversation also highlights emerging roles related to AI governance and the need for regulatory oversight to ensure fair hiring practices.

Key Takeaways:

* AI augments, but doesn’t replace, human oversight: While AI is making assessments faster and more scalable, human validation remains critical to ensuring fairness.
* Custom psychometric assessments are the future: Moving beyond off-the-shelf tools, companies can develop highly specific and job-relevant assessments using AI.
* Prompt engineering for assessments: Organizations can create better assessment tools by focusing on AI prompt development and optimization.
* AI literacy is essential for hiring professionals: As AI becomes more embedded in hiring, HR professionals need to understand its benefits and limitations to apply it responsibly.
* Trust and explainability are key: Companies must prioritize transparency to gain candidate trust and meet regulatory standards.

Conclusion: AI’s role in hiring is evolving rapidly, and the opportunities for innovation are endless. However, as Ben notes, the path forward requires a careful balance between technological advances and human control. By designing psychometric tools through AI and human collaboration, organizations can achieve a fairer and more effective hiring process.

Take It or Leave It? Articles:

* “Ineffective Human-AI Interactions and Solutions” — Oxford Review
  Summary: This article delves into the factors influencing human-AI collaboration, including cognitive load and decision control. Ben highlights how integrating AI into familiar tools like Slack and Word can reduce friction and improve adoption.
* “AI and Public Perception: What Americans Really Think” — Center for Data Innovation
  Summary: A survey reveals mixed feelings about AI, with curiosity decreasing and negative emotions on the rise. Ben critiques the contradictions in public attitudes toward AI and how these perceptions could shape its future adoption in hiring.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

    1 hr 1 min


