Should AI be in the room? Not "can it take notes" — but what actually happens when an AI system is listening to your leadership meetings, hiring interviews, and performance check-ins? When is that powerful? When is it a risk? And what guardrails do you need before you invite it in?

My guest is Artem Koren, co-founder and Chief Product Officer at Sembly AI, a meeting intelligence platform he founded in 2019. Artem has spent years thinking about how AI can enrich the way humans work — rather than replace the human judgment at the centre of it.

We dig into the "invisible colleague" problem — what happens to trust and psychological safety when people don't know whether AI is listening. We discuss why EU data protection regulations are making works councils push back against AI note-takers in performance reviews and hiring. And we unpack the hiring question: why most managers have never been trained to interview well, how AI can reduce bias and improve decision quality, and why AI should inform decisions but never make them.

🎙️ TOPICS COVERED
• The real fears leaders have about AI in meetings — and whether they're justified
• The invisible colleague problem: what it means when AI attends without full transparency
• GDPR, EU data protection, and AI note-takers in sensitive workplace conversations
• Psychological safety by design: building trust when AI is present
• AI in hiring: reducing bias, enforcing structure, and what the research shows
• Who's accountable when AI sits in your interview and you still make a bad hire?
• Why general LLMs (ChatGPT) are not the same as purpose-built meeting intelligence tools
• What good AI-assisted leadership looks like in 3–5 years

🔑 KEY INSIGHTS
→ AI in a meeting is like a hidden participant on Zoom — the discomfort is valid, and the answer is transparency, not avoidance.
→ Trust in AI tools mirrors trust in people: you need context, track record, and clarity on who sees what.
→ Under GDPR, employees and candidates can request the data collected on them — which changes what "behind closed doors" means for hiring.
→ AI consistently outperforms untrained human interviewers at predicting performance — because structure beats instinct. But the more AI is visibly involved, the lower candidate acceptance rates become. The optimal design: AI in the background, human conversation in the foreground.
→ AI should never decide who gets hired, promoted, or managed out. It should surface what humans miss and organise evidence for better human decisions.
→ Uploading CVs into ChatGPT and asking "who should I hire?" is not meeting intelligence — it's uncontrolled data exposure with no guardrails.

💬 GUEST
Artem Koren | Co-Founder & CPO, Sembly AI
sembly.ai

📌 RESOURCES MENTIONED
• Sembly AI — sembly.ai
• SHRM: only 23% of managers have had formal interview training
• EU General Data Protection Regulation (GDPR)

This episode is for HR leaders, hiring managers, executives, and anyone deciding where AI belongs in high-stakes people conversations.

Keywords: AI meeting intelligence, Sembly AI, AI in hiring, interview bias, psychological safety AI, GDPR AI meetings, AI note-taker, structured interviews, AI hiring tools, meeting AI, invisible colleague, AI leadership, works council AI, EU data protection AI, AI accountability

Follow Konstanty Sliwowski on LinkedIn
https://www.linkedin.com/in/sliwowskik/

For more insights check out www.schoolofhiring.com and newsletter.schoolofhiring.com