Product Rising

Women In Product

Meet product and technology leaders who are shaping what comes next in a world driven by AI, rapid innovation, and global complexity. Every week, we explore the ways bold women and men are navigating change, leading with impact, and redefining the future of product leadership.

  1. 4D AGO

    Leading the Transition to AI in Product with Gyanda Sachdeva

Gyanda joins Carmen Palmer for this episode in our monthly series, In The Lead. Gyanda Sachdeva is VP of Product Management at LinkedIn, where she currently leads the Consumer Experience team. She has over 15 years of product experience across a wide variety of domains, including advertising, subscriptions, marketplaces, and payments. In this episode, Gyanda shares her insights into managing product teams through the transition to AI and the changing role of the PM. She also shares her personal career journey and the challenges of growing from an APM through individual contributor, group manager, director, and now VP. In The Lead is a monthly series on Product Rising sharing thought-provoking conversations with a wide range of industry leaders, hosted by Carmen Palmer, CEO of Women In Product. Leading the Transition to AI in Product 00:00 Meet Gyanda at LinkedIn 02:34 Career Journey to Product 05:01 Long Tenure Lessons 08:11 Unlearning as a Leader 11:02 AI Shift Moment 14:35 Experimentation Over Roadmaps 19:36 Full Stack Building Culture 22:50 Trust Guardrails and Agents 24:44 Associate Product Builder Program 25:48 Early Adopters Drive ROI 26:20 Mentorship and Product University 28:26 Leaders Get Everyone In 30:42 Scaling AI Enablement 34:13 Keeping Up With Velocity 37:27 Diversity Access and Role Models 40:25 Three Day AI Jumpstart 43:30 Product Launch Gone Wrong 46:26 Daily Tools and Language Barriers 48:12 Embrace the Skill Shift 📚Resources: Statistics on job skills change by 2030: https://economicgraph.linkedin.com/research/work-change-report A guide for new grads on job trends: https://news.linkedin.com/2026/Grads-Guide-2026 ✨Where to find Gyanda: On LinkedIn 💫 Where to find Carmen: On LinkedIn 🙋🏻‍♀️Where to find Women in Product: On LinkedIn Website https://womenpm.org/ Join the Community https://womenpm.org/wip-community/

    51 min
  2. MAY 5

    Rebecca Hinds, PhD: Avoiding the AI Productivity Trap

    In this episode, host Shannon Peavey speaks with Professor Rebecca Hinds, PhD, a Stanford-trained organizational behavior expert and author of Your Best Meeting Ever. Rebecca, who runs the Glean Work AI Institute, explains why the future of AI at work is not about squeezing more productivity out of individuals, but about strengthening organizations as a whole. She shares her system for applying design principles to create effective meetings and illuminates the ways these can help leaders decide when technology enhances collaboration and when it risks undermining human trust, creativity, and emotion. She also highlights a growing body of research on how AI can expand access to insight and make work more effective – though today, she says, many organizations are getting it wrong. Instead of imposing AI from the top down, she argues that companies should empower employees to experiment, and that they should tolerate and even celebrate failure. Importantly, she says, organizations need to establish thoughtful guardrails that allow people to discover how these tools can truly help teams become more effective. 
01:56 First things first, the bad news on AI and meeting culture 03:12 The good news on AI and meeting culture 05:29 Why leaders need to think about meetings holistically 07:10 Seven design principles to apply to meeting design 10:16 The “Four-D, CEO” test for meetings 13:41 Where AI can excel 18:02 Thinking differently about human roles 19:18 Why we should worry about “digital twins” 20:45 Keeping human emotions in mind 23:05 What can happen if you deprioritize people 23:30 Psychological safety at work, and with AI at work 26:09 The critical need to have AI policies 28:30 Enabling employees to find the value in AI 30:54 Tolerating, even celebrating failure 33:48 Collaboration with AI: an individual experience 36:36 The future is managing agents 37:38 Hope for a future of unparalleled insights 40:26 Using AI to help the organization, rather than the individual 42:48 Sharing resources and research 📚Resources: Rebecca Hinds https://www.rebeccahinds.com/ Stanford University http://www.stanford.edu The Glean Work AI Institute https://www.workai.institute Rebecca’s book, “Your Best Meeting Ever” https://www.rebeccahinds.com/book Organizational psychologist Bob Sutton https://bobsutton.net/about-bob/ Harvard Business School professor Amy Edmondson https://amycedmondson.com/about/ Wharton School professor Ethan Mollick https://x.com/emollick?lang=en Charter newsletter https://www.charterworks.com/ 🌟 Where to find Rebecca: On LinkedIn: https://www.linkedin.com/in/rebecca-hinds/ 🙋🏻‍♀️Where to find Women in Product: On LinkedIn https://www.linkedin.com/company/women-in-product/posts/  Website https://womenpm.org/ Join the Community https://womenpm.org/wip-community/

    44 min
  3. APR 28

    Jen Gennai: Leading in AI’s Human Era

    In 2017, before generative AI became a household term, Jen Gennai drafted Google’s original AI Principles. She was part of the effort to define what responsible AI should mean inside one of the most influential technology companies in the world. Now, years later, she is asking a more uncomfortable question. While companies race to deploy AI and governments work toward regulatory frameworks, who is seriously grappling with what this technology is doing to people and how they think, learn, and communicate?  Jen argues that we are making progress on global rules and regulations. But we may NOT be moving fast enough on the human consequences. As an AI responsibility expert and consultant, she spends her time training leaders not just to adopt AI, but to build resilient cultures, to capture gains without eroding human capacities that make those gains meaningful. 02:29 How Google’s original AI Principles came about 07:13 Working cross-industry to up-level the market 08:02 History may not repeat, but it rhymes 09:35 Regulation: rules versus principles 13:08 How federal law could solve some problems 18:18 AI’s “harm categories” 24:03 Why we need to think more about human impact 26:49 Skills for the future: Resilience, analytics, communication, creative problem-solving 31:38 Why we need more focus on training programs 38:30 Should you say your business is AI-first? Maybe not. 
40:43 Defining what “good” looks like 42:20 Leading means building psychological safety 📚Resources: T3 https://t3-consultants.com/ Google AI Safety Principles https://ai.google/principles/ NIST AI Risk Management Framework https://www.nist.gov/itl/ai-risk-management-framework EU AI Act https://artificialintelligenceact.eu/ Latanya Sweeney’s 2016 keynote at the Grace Hopper Celebration https://www.youtube.com/watch?v=UBzP0NouiGo Article describing Latanya Sweeney’s findings on racism in Google ads https://racismandtechnology.center/wp-content/uploads/latanya-sweeney-discrimination-in-online-ad-delivery.pdf The U.S. Government’s AI Literacy Framework https://www.dol.gov/newsroom/releases/eta/eta20260213 MIT’s Technology Review https://www.technologyreview.com/ HBR https://hbr.org/ 🌟 Where to find Jen: On LinkedIn https://www.linkedin.com/in/jen-gennai-b333933/ 🙋🏻‍♀️Where to find Women in Product: On LinkedIn https://www.linkedin.com/company/women-in-product/posts/  Website https://womenpm.org/ Join the Community https://womenpm.org/wip-community/

    45 min
  4. APR 21

    Practitioners Wanted: AI Ethics with AG Consulting's Angel Evan

What does it actually mean to build ethical, safe, and responsible AI inside a real company with real deadlines, revenue pressure, and competing priorities? In this episode, Shannon Peavey sits down with Angel Evan, AI Ethicist and Practice Lead at AG Consulting Partners. Angel works directly with Fortune 500 companies and major technology organizations to operationalize AI ethics inside product development teams. His focus is not on abstract principles but on process, execution, and decision-making under pressure. Together, Shannon and Angel explore: why AI ethics cannot live only in policy documents or philosophical debates; how organizations can establish common starting points even when morality is subjective; the role of product managers, engineers, and leaders in translating values into shipping decisions; what it looks like to embed ethical thinking into roadmaps, governance, and release cycles; and why learning together across teams is more powerful than waiting for perfect consensus. Angel brings deep knowledge of philosophy and ethical theory, but his work is grounded in practical application. How do you turn abstract frameworks into something that actually guides a sprint? How do you navigate competing incentives? How do you move from aspiration to implementation? If you are a product leader, technologist, or executive trying to build AI responsibly without slowing innovation to a halt, this conversation offers a candid look at what it takes to move from ideas to impact. 00:00 Introduction 01:59 AI Ethics, Responsibility, Safety: Focusing on the human condition 02:50 Turning baseline principles into real products 04:13 What is fairness?
05:50 The path from data science 06:50 How we calibrate “right” and “wrong” in organizations 09:20 Finding a common starting point 12:14 Finding the threshold between risk and productivity 15:42 What is AI literacy? 17:05 Building your “ethical reasoning” muscles 22:30 Cautious optimism 📚Resources: AG Consulting https://agconsultingpartners.com/ The Stanford University Human-centered AI (HAI) Index https://hai.stanford.edu/ai-index Angel Evan’s Machines & Meaning podcast https://angelevan.com/podcast/ The U.S. Government’s AI Literacy Framework https://www.dol.gov/newsroom/releases/eta/eta20260213 University of Edinburgh https://www.ed.ac.uk/ 🌟 Where to find Angel: On LinkedIn https://www.linkedin.com/in/angel-evan/ 🙋🏻‍♀️Where to find Women in Product: On LinkedIn https://www.linkedin.com/company/women-in-product/posts/  Website https://womenpm.org/ Join the Community https://womenpm.org/wip-community/

    29 min
  5. APR 14

    Marty Cagan on AI Product Coaching

Marty Cagan joins Carmen Palmer for this episode in our monthly series, In The Lead. Marty is a renowned product executive and author, and is widely considered a thought leader in the field of Product Management. He is the founder of the Silicon Valley Product Group (SVPG), where he advises companies on how to create successful products using the practices of world-class tech organizations. Marty is the author of influential books on product management and product teams, including Inspired: How to Create Tech Products Customers Love; Empowered: Ordinary People, Extraordinary Products; and Transformed: Moving to the Product Operating Model. Recently, Marty and the team at SVPG have been exploring the use of LLMs as product coaches. As always, Marty has strong views on AI technology enabling true product builders, the increasing importance of empowered product managers, and how AI models can assist in the transition by coaching PMs and product leaders. In The Lead is a monthly series on Product Rising sharing thought-provoking conversations with a wide range of industry leaders, hosted by Carmen Palmer, CEO of Women In Product.
Your AI Product Coach 00:00 Reckoning for Product 03:11 GenAI Changes the Stakes 07:11 Feature vs Empowered Teams 09:57 Build to Learn vs Earn 12:18 Prototyping Tools Boom 13:37 New PM Litmus Test 16:05 Why Coaching Fails 20:57 AI Coaching Tipping Point 23:58 Prompting for Product Model 27:30 Load Strategic Context 28:19 Strategic Context Inputs 29:23 Project Model Prompts 30:52 AI Coaching Limits 34:58 Why Humans Still Matter 38:42 What Coaching Looks Like 39:25 Building Product Sense Fast 42:01 Frameworks For Real Work 45:23 Adoption Curve Reality 49:13 Career Advice And Wrap 📚Resources: SVPG Product Coaching and AI Configuring Your Model As Product Coach - an example of how to get started using an AI Model to provide product coaching ✨Where to find Marty: On LinkedIn Silicon Valley Product Group (SVPG) Website & Newsletter sign up  💫 Where to find Carmen: On LinkedIn  🙋🏻‍♀️Where to find Women in Product: On LinkedIn  Website https://womenpm.org/ Join the Community https://womenpm.org/wip-community/

    54 min
  6. APR 7

    AI Safety in Practice with OpenAI's Tonia Osadebe

In this episode of Women in Product, host Shannon Peavey sits down with Tonia Osadebe, AI Safety Lead at OpenAI, for a candid and practical conversation about what it really means to build responsibly in fast-moving AI environments. Tonia shares her path into AI Safety, from her early work on AI agents and machine learning fairness to stepping into a role focused squarely on evaluating and mitigating risk in frontier systems. She reflects on her time at Google, including being part of the team navigating the widely discussed “glue on pizza” AI search result moment, and what those high-visibility incidents teach teams about iteration, accountability, and resilience. At the heart of this conversation is a simple but powerful idea: safety work is collaborative. Tonia explains how cross-functional teams come together to define acceptable risk, make principled tradeoffs, and agree to improve systems over time rather than striving for perfection before launch. 02:29 Where does safety start in a project? 03:50 We’re mitigating - not eliminating - risk 08:14 Trying not to break everything 09:26 Let’s talk about glue on pizza 11:34 What we don’t know YET 14:06 How teams collaborate 19:02 We are not the “fun police” 21:35 Tension is expected 22:15 Measure where you can 24:05 Sharing ownership 25:15 Reflecting the real world 29:00 Access will change the way we dream 32:50 Building safety into roadmap 📚Resources: OpenAI - https://openai.com/ Google AI Safety Principles - https://ai.google/safety/ Google’s Pizza-Glue Scandal https://www.wired.com/story/google-cut-back-ai-overviews-before-pizza-glue/ The U.S.
Government’s AI Literacy Framework https://www.dol.gov/newsroom/releases/eta/eta20260213 🌟 Where to find Tonia: On LinkedIn - https://www.linkedin.com/in/tonia-osadebe-a9b1a014/ Where to find our host: Shannon Peavey - https://www.linkedin.com/in/spmad/ 🙋🏻‍♀️Where to find Women in Product: On LinkedIn https://www.linkedin.com/company/women-in-product/posts/  Website - https://womenpm.org/ Join the Community - https://womenpm.org/wip-community/

    38 min
  7. MAR 31

    AI Ethics in Action: Alaska Airlines’ Shelby Tallent

What does it mean to be the person responsible for AI ethics inside a 30,000-person company? Shelby Tallent lives this every day. As the leader of AI ethics, responsibility, and compliance for Alaska Airlines, Shelby works at the intersection of technology, governance, and human trust. Her career across Amazon, Nordstrom, and TeleSign has shaped a perspective that blends policy rigor with product execution. In conversation with host Shannon Peavey, Shelby shares why AI ethics is not about slowing innovation but about guiding it. She explains how ethical value systems become practical decision frameworks, how individuals can hold their ground when goals conflict, and why keeping humans in the loop is not optional. AI should not be looked at as a way to “get us out of things,” she says; rather, we should let it expand our capacity to do what once felt impossible. 00:00 Introduction 01:49 How Alaska Airlines structures the AI Safety & Compliance role 02:18 The ways responsibilities map to company values 04:45 Where foundational principles for AI implementation originate 05:50 Navigating different AI rules per country 07:32 The “9-to-5” of AI Responsibility 13:02 Types of risk and how we mitigate 16:30 A path of many hats 23:00 Keeping humans in the loop 29:30 Why we should be optimistic 33:00 Shelby’s challenge to your thinking and approach 📚Resources: Alaska Airlines - https://www.alaskaair.com/ International Association of Privacy Professionals https://iapp.org/ Cloud Security Alliance https://cloudsecurityalliance.org/ The EU AI Act - https://artificialintelligenceact.eu/ GDPR - https://gdpr-info.eu/ AI.gov - https://www.ai.gov/ Microsoft Copilot https://copilot.microsoft.com/  FigJam https://www.figma.com/figjam/ 🌟 Where to find Shelby: On LinkedIn https://www.linkedin.com/in/shetallent/ Where to find our host: Shannon Peavey - https://www.linkedin.com/in/spmad/ 🙋🏻‍♀️Where to find Women in Product: On LinkedIn
https://www.linkedin.com/company/women-in-product/posts/  Website https://womenpm.org/ Join the Community https://womenpm.org/wip-community/

    34 min
  8. MAR 24

    The Future Has Arrived: AI Safety, Ethics & Responsibility

If there is one thing we can agree on, it’s that AI is everywhere. It promises incredible gains in productivity, creativity, deep analysis, and progress in all kinds of fields from medicine to music. But AI also brings anxiety, concern, and a great deal of unknowns. We wanted to find out exactly who is thinking about the future of AI in terms of morality, ethics, and psychological and physical safety. (Is anyone working on this? Hello?) We are happy to report that indeed, they are. And we want you to meet them. In an exclusive new Product Rising podcast series, host Shannon Peavey explores the world of AI Ethics, Safety & Responsibility through conversations with experts working in AI policy, compliance, accountability, research, and safety. She gets into the details to illuminate the ways these men and women are working hard to help shape a future we all want to live in. Whether you’re an individual contributor, leader, founder, or advisor, listen in to learn about this incredibly important field that touches all aspects of tech and, quite possibly, will have a profound impact on all of our lives from here. 00:20 What this series illuminates 01:07 The AI truth is, we don’t know it all 02:34 It’s time to come to terms with AI 03:40 Good people out there working on hairy problems 05:07 Time to work on minimizing harm 06:18 Our guests: consumer, LLMs, tech, policy 08:45 Risk mitigation takes all kinds of backgrounds 09:57 Causes for optimism 11:54 The legacy of Google’s AI principles 14:20 First guest Shelby Tallent (Alaska Airlines) 16:04 Our series starts March 24 and continues for 8 weeks: join us! 📚Resources: Women in Product - http://womenpm.org Product Rising on Apple Podcasts https://podcasts.apple.com/us/podcast/product-rising/id1584224561 Product Rising on Spotify https://open.spotify.com/show/7bcyVhpdhw0hbiRr6O4h1U Product Rising on YouTube https://www.youtube.com/playlist?list=PLijcNLDj_QE2ge6Wqeh0MLx_qMKwot--9 The U.S.
government’s AI Literacy Framework https://www.dol.gov/newsroom/releases/eta/eta20260213 Thank you so much to our guests: Shelby Tallent, Alaska Airlines https://www.linkedin.com/in/shetallent/ Jen Gennai, T3 https://www.linkedin.com/in/jen-gennai-b333933/ Tonia Osadebe, OpenAI https://www.linkedin.com/in/tonia-osadebe-a9b1a014/ Rebecca Hinds, PhD https://www.linkedin.com/in/rebecca-hinds/ Angel Evan, AG Consulting https://www.linkedin.com/in/angel-evan/ Tracy Pizzo Frey, Restorative AI https://www.linkedin.com/in/tracy-frey/ Sara Tangdall, Nike https://www.linkedin.com/in/sara-tangdall/ Where to find our hosts: Shannon Peavey https://www.linkedin.com/in/spmad/ Elizabeth Ames https://www.linkedin.com/in/elizabethames/ 🙋🏻‍♀️Where to find Women in Product: On LinkedIn https://www.linkedin.com/company/women-in-product/posts/  Website https://womenpm.org/ Join the Community https://womenpm.org/wip-community/

    16 min
