Human x Intelligent

Madalena Costa

In a world where technology transforms faster than we can make sense of it, Human × Intelligent invites you to pause, think and design the future with intention. We explore the intersection of humanity and intelligence: how leaders, creators and systems can co-create meaningful impact. Conversations, frameworks and ideas that unite purpose, ethics and innovation. The future of product is human × intelligent.

  1. Figma → Claude → Figma: The AI workflow product designers should know

    3 DAYS AGO

    Figma → Claude → Figma: The AI workflow product designers should know

    This episode was originally going to be about something else, but a conversation over the weekend reminded me of a workflow I use often when designing and developing applications. So instead, I decided to share one Human × Intelligent workflow I keep coming back to: Figma → Claude → Figma.

    Rather than treating AI as a chatbot outside the workflow, this setup connects Claude directly to the design environment using the Model Context Protocol (MCP). That means the model can analyze interfaces, reason about product systems and help accelerate design thinking.

    In this episode, I talk about:

    - What changes when AI connects directly to design tools
    - Why context makes AI much more useful for product design
    - Real workflows I use: UX audits, design system extraction, dashboard analysis and component generation
    - The difference between the official Figma MCP and Figma Console MCP
    - What worked well, what didn’t work so well and what I’m still experimenting with
    - Why I think the real shift is AI becoming part of the workspace

    This is not about replacing designers; it’s about building better collaboration between human judgment and intelligent systems. If you’re exploring AI workflows for product design, design systems or complex SaaS products, this episode should give you a practical mental model for where things are heading.

    Example workflows mentioned in the episode:

    - UX audit of a flow: Analyze selected screens for hierarchy, cognitive load, accessibility, spacing consistency, CTA clarity and user flow friction.
    - Design system extraction: Analyze selected UI and identify typography scale, color tokens, spacing tokens, component patterns and layout grid.
    - Reusable component generation: Convert layouts into base components, variants and nested structures optimized for scale.
    - Dashboard refactoring: Audit dashboards for information hierarchy, data density, scanning patterns, visual grouping and progressive disclosure.
    - Retention system mapping: Map a product UI to triggers, actions, rewards, feedback loops and habit formation patterns.

    Setup steps:

    1. Sign up for a Figma Pro seat and Claude Pro or Max
    2. Install Node
    3. Install Claude Code
    4. Create a Figma token
    5. Enable Figma Dev MCP mode
    6. Configure the Figma MCP server
    7. Install Figma Console MCP locally
    8. Install the design systems MCP assistant
    9. Install the Desktop Bridge plugin
    10. Install the Figma MCP server in Claude Desktop
    11. Restart Claude Desktop
    12. Run 'check Figma status'

    ---

    Links:
    - Episode page:
    - Madalena on LinkedIn: /madalenafigueirasdacosta
    - Subscribe: https://substack.com/@humanxintelligent

    ---

    🎙️ Human × Intelligent explores how humans and intelligent systems evolve together, across product, behavior and culture.

    ---

    #AIAdoption #EnterpriseAI #HumanInTheLoop #ResponsibleAI #AIGovernance #AIWorkflows #AITrust #AILeadership
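    Installing an MCP server in Claude Desktop typically means adding an entry to `claude_desktop_config.json`. A minimal sketch, with assumptions clearly flagged: the server name `figma`, the community `figma-developer-mcp` package, the `--stdio` flag and the `FIGMA_API_KEY` variable are illustrative, not the episode's exact setup — which entry you need depends on which Figma MCP server you install (the official Figma Dev Mode MCP, for example, runs inside the Figma desktop app instead), so check that server's documentation:

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "figma-developer-mcp", "--stdio"],
      "env": {
        "FIGMA_API_KEY": "<your Figma token>"
      }
    }
  }
}
```

    After saving the config, restart Claude Desktop so it picks up the new server, then ask Claude to check Figma status to confirm the connection.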

    19 min
  2. AI adoption in teams: The #1 sign you’re moving too fast (trust breaks here) | Krystel Leal

    3 MAR

    AI adoption in teams: The #1 sign you’re moving too fast (trust breaks here) | Krystel Leal

    'Most AI pilots don’t fail in the demo. They fail inside the workflow.'

    In this episode of Human × Intelligent, Madalena Costa speaks with Krystel Leal, a fractional AI deployment lead working at the intersection of enterprise AI implementation, customer success and real-world AI adoption.

    Krystel shares a simple signal that reveals when teams are moving too fast with AI: if no one can explain why the AI produced an output, the team doesn’t understand the guardrails, the workflow or the problem being solved.

    We explore what actually changes when AI starts working inside a team, where AI trust breaks and why human judgment and ownership still matter in AI-driven organizations. The conversation also breaks down one of the most common mistakes teams make today: delegating decisions to AI instead of delegating tasks.

    ---

    In this episode, we explore:

    - What changes first when AI works in a team: behavior vs mindset
    - Why enterprise AI pilots often fail after the demo
    - The difference between delegating tasks and delegating decisions
    - The biggest signal that a team is moving too fast with AI
    - Why human-in-the-loop is an ownership problem and not a checkbox
    - How fear and misconceptions appear when teams start using AI daily
    - Why companies must become AI education systems
    - How human communication principles apply to AI prompting
    - Why 'made by humans' may become a differentiator in an AI-driven world

    ---

    Key takeaway: AI does not replace judgment. The most successful teams use AI as a thinking partner and not as a decision maker.

    ---

    About the guest

    Krystel Leal is a fractional AI deployment lead who spent years working in Silicon Valley tech startups before specializing in enterprise AI implementation. She works with organizations to turn stalled AI pilots into real production systems, redesigning workflows, ownership structures and verification processes so AI adoption actually delivers value.

    Her core belief: most AI investments fail not because of the technology, but because the system around them was never built.

    Connect with Krystel on LinkedIn.

    ---

    🎙️ Human × Intelligent explores how humans and intelligent systems evolve together, across product, behavior and culture. Hosted by Madalena Costa.

    ---

    Links:
    - Episode page: https://humanxintelligent.com/episodes/if-you-cant-explain-why-ai-output-happened-youre-moving-too-fast
    - Krystel on LinkedIn: https://www.linkedin.com/in/krysteleal/
    - Subscribe for more Human × Intelligent: https://substack.com/@humanxintelligent

    37 min
  3. The agentic leader: how leadership changes when your 'team' is a mix of humans and agents

    11 FEB

    The agentic leader: how leadership changes when your 'team' is a mix of humans and agents

    Episode 11 (season finale) - The agentic leader: How organizational design changes when your team is a mix of humans and agents

    AI is no longer just transforming products. It’s transforming organizations, leadership and professional identity. In the Season 1 finale of Human × Intelligent, Madalena introduces the concept of the agentic leader, a new model of leadership for a world where your team is no longer fully human. As organizations adopt autonomous systems, agents and AI-enabled workflows, leadership shifts from managing tasks to designing environments.

    In this episode, you’ll hear:

    - The full arc of Season 1: agency, autonomy, multi-agent systems, intent and verifiability
    - The Agentic Governance Framework and its three pillars: the Decision Boundary Matrix, legibility and reversibility
    - How leadership changes across Product, Engineering, Marketing and Operations
    - Why Human × Intelligent companies are built on accountability, not automation
    - What becomes more valuable as intelligence becomes a commodity

    This is the most reflective episode of the season. It’s a synthesis, a manifesto and a threshold. Season 2 begins at the end of the month and will feature guests and short perspectives on what it means to be a Human × Intelligent company and why it matters.

    🎙 If this season helped you think differently about AI, leadership and systems design, share it with someone building the future of work.

    ---

    💬 Join the conversation

    Have something to say about AI, creativity or what it means to stay human in an intelligent world? We would love to hear from you.

    👉 Join the conversation: https://forms.gle/HLAczyaxqRwoe6Fs6
    👉 Visit the website: humanxintelligent.com
    👉 Connect on LinkedIn: /humanxintelligent
    👉 Follow on Instagram: @humanxintelligent
    📩 For collaboration or guest submissions: madalena@humanxintelligent.com

    7 min
  4. The verifiability gap: How trust survives when systems act without asking

    4 FEB

    The verifiability gap: How trust survives when systems act without asking

    As AI-powered products become more autonomous, intelligence is no longer the hard part. Trust is. In this episode of Human × Intelligent, Madalena explores the verifiability gap, the invisible space between:

    1. what AI systems do
    2. what users understand
    3. what product teams can actually observe and validate

    You’ll learn:

    - Why trust breaks before AI systems fail
    - The 3 control layers inside every agentic product (professionals, users and AI)
    - Why 'human-in-the-loop' should be a workflow, not an approval step
    - How trust, transparency, explainability and feedback work together as system infrastructure
    - Practical UX and product strategy patterns to retain users in autonomous systems

    This episode connects the dots between signals, personalization, retention and agency. It gives teams concrete ways to design AI systems that are fast and trustworthy.

    Next week: the season finale, Episode 11: The agentic leader, on how leadership and organizational design change when your team is a mix of humans and agents. Season 2 starts at the end of the month.

    🎙 If this episode helped you think differently about trust in AI-powered products, share it with someone building systems that act on behalf of humans.

    ---

    💬 Join the conversation

    Have something to say about AI, creativity or what it means to stay human in an intelligent world? We would love to hear from you.

    👉 Join the conversation: https://forms.gle/HLAczyaxqRwoe6Fs6
    👉 Visit the website: humanxintelligent.com
    👉 Connect on LinkedIn: /humanxintelligent
    👉 Follow on Instagram: @humanxintelligent
    📩 For collaboration or guest submissions: madalena@humanxintelligent.com

    8 min
  5. The interface of intent: How humans stay in control when systems act

    29 JAN

    The interface of intent: How humans stay in control when systems act

    AI systems no longer just respond. They plan, decide and act, often without asking. In this episode of Human × Intelligent, we explore a critical question for the age of agentic AI: how do humans stay in control once systems can act on our behalf?

    The answer isn’t more prompts, smarter models or bigger dashboards. It’s the interface of intent, the layer that makes autonomy understandable, predictable and governable.

    In this episode, we cover:

    - Why prompts stop working once systems become autonomous
    - The difference between instructions and delegation
    - Why dashboards explain the past but fail the future
    - How visibility before action builds trust
    - Where designers must decide where autonomy stops

    This episode connects the dots between:

    - The age of agency
    - Designing autonomy without losing control
    - Multi-agent systems and coordination

    If you’re designing, building or leading AI-powered products, this episode will change how you think about control, trust and human agency.

    🎧 Next episode: The verifiability gap

    ---

    💬 Join the conversation

    Have something to say about AI, creativity or what it means to stay human in an intelligent world? We would love to hear from you.

    👉 Join the conversation: https://forms.gle/HLAczyaxqRwoe6Fs6
    👉 Visit the website: humanxintelligent.com
    👉 Connect on LinkedIn: /humanxintelligent
    👉 Follow on Instagram: @humanxintelligent
    📩 For collaboration or guest submissions: madalena@humanxintelligent.com

    7 min
  6. The multi-agent organization: From agentic drift to systemic coherence

    22 JAN

    The multi-agent organization: From agentic drift to systemic coherence

    Autonomy scales intelligence. But without coordination, it creates conflict. In this episode of Human × Intelligent, we explore the shift from single-model AI to multi-agent systems and why intelligence at scale starts to behave less like software and more like an organization. We break down what happens when multiple autonomous agents work together, where things go wrong and how to design for coherence instead of chaos.

    You’ll learn:

    - Why the 'single model' era breaks under complexity
    - How task decomposition enables distributed intelligence
    - What agent drift is and why it’s a structural risk, not a bug
    - A real travel app case study where agents competed instead of collaborating
    - The hidden token costs of multi-agent systems
    - A five-layer orchestration blueprint for coordinated intelligence

    Autonomy without coordination creates conflict. Coordination without intent creates noise. Intent turns systems into teams.

    🎧 Next episode: how we move beyond the chat box and design the interface of intent.

    ---

    Show notes / links

    > Follow Human × Intelligent for weekly episodes
    > Subscribe on your favorite podcast platform
    > Share this episode with someone building intelligent products

    📬 Follow the Substack for diagrams, orchestration blueprints and deep dives into multi-agent systems

    💬 Join the conversation

    Have something to say about AI, creativity or what it means to stay human in an intelligent world? We would love to hear from you.

    👉 Join the conversation: https://forms.gle/HLAczyaxqRwoe6Fs6
    👉 Visit the website: humanxintelligent.com
    👉 Connect on LinkedIn: /humanxintelligent
    👉 Follow on Instagram: @humanxintelligent
    📩 For collaboration or guest submissions: madalena@humanxintelligent.com

    Together, we’re shaping a new way of working, one reflection, one insight and one conversation at a time.

    8 min
  7. Autonomy is not freedom: How intelligent systems should act

    16 JAN

    Autonomy is not freedom: How intelligent systems should act

    Autonomy is no longer optional in intelligent systems. But without clear boundaries, it quickly turns from helpful to harmful. In this episode of Human × Intelligent, we explore what autonomy means in product design, why it’s often misunderstood and how to design systems that act with purpose rather than unpredictability.

    You’ll learn:

    - Why autonomy is not freedom, but structured initiative
    - The 4 levels of autonomy and how to choose the right one
    - The biggest risks of poorly designed autonomous systems
    - Practical principles to design autonomy that feels like a partnership, not a takeover

    Autonomy without alignment creates chaos. Autonomy with alignment creates flow.

    🎧 Next episode: how multi-agent systems coordinate, compete and collaborate, and why coherence is the next frontier of intelligent product design.

    Show notes / links

    - Follow Human × Intelligent for weekly episodes
    - Subscribe on your favorite podcast platform
    - Share this episode with someone building intelligent products
    - YouTube video I discussed during the episode: https://youtu.be/UdsFMJFuopg?si=Rk2qp8iGCN47_Vaw

    💬 Join the conversation

    Have something to say about AI, creativity or what it means to stay human in an intelligent world? We would love to hear from you.

    👉 Join the conversation: https://forms.gle/HLAczyaxqRwoe6Fs6
    👉 Visit the website: humanxintelligent.com
    👉 Connect on LinkedIn: /humanxintelligent
    👉 Follow on Instagram: @humanxintelligent
    📩 For collaboration or guest submissions: madalena@humanxintelligent.com

    Together, we’re shaping a new way of working, one reflection, one insight and one conversation at a time.

    8 min

Trailer

About

In a world where technology transforms faster than we can make sense of it, Human × Intelligent invites you to pause, think and design the future with intention. We explore the intersection of humanity and intelligence: how leaders, creators and systems can co-create meaningful impact. Conversations, frameworks and ideas that unite purpose, ethics and innovation. The future of product is human × intelligent.