Artificial Insights: How To Do AI Right

Daniel Manary

Candid conversations and real-world stories about building AI into products and businesses. Made for executives, founders, and product leaders who want to do AI right... from strategy to launch to scale. Each Friday, host Daniel Manary talks with CEOs, CTOs, CAIOs, product managers, researchers, and founders about bringing AI ideas to market, separating hype from lasting impact. He explores the hows, whats, and whys of artificial intelligence and digs into how this technology is changing the landscape of modern work and life. Guests share how they’ve worked with complex, messy data, built trust into automation, and launched AI-powered products customers rely on. Whether you’re starting a new AI initiative or scaling an existing product, you’ll hear insightful, hype-free lessons you can apply to your own business.

  1. Embodied AGI: Reimagining AI Through Robotics w/ Adeel Zaman, Founder in Stealth out of HF0, previously CTO & Co-Founder of DOZR

    1 day ago


    What happens when AI doesn’t just live in text and screens, but begins to reason and act in the physical world? Adeel Zaman, previously CTO and Co-Founder of DOZR, has spent his career moving from deep learning research to startups, scaling companies and tackling cold-start problems with machine learning. Now, backed by the HF0 residency, his focus is on "Embodied Intelligence" and how foundation models can learn physical tasks, adapt through feedback, and interact with humans in real time. In this conversation, Daniel and Adeel explore why embodied AGI may be a prerequisite for true general intelligence, how voice interaction could change human-machine collaboration, and what it means to give individuals, not just big labs, the ability to teach and shape their own AI models.

    🔑 What You'll Learn in This Episode
    ✅ Why embodied intelligence could be a prerequisite for reliable long-horizon agents and true "AGI"
    ✅ How real-time "reasoning out loud" can change human-robot collaboration on job sites
    ✅ Why reinforcement learning from language feedback is key when rewards aren't cleanly verifiable
    ✅ The case for individually owned AIs vs. globally shared weights

    🔗 Resources & Links
    📄 Google DeepMind’s RT-2 research overview: https://robotics-transformer2.github.io
    🤝 Connect with Adeel on LinkedIn: https://www.linkedin.com/in/adeelzam/
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    💬 Inspired by the idea of embodied AI? Share this episode with someone else who would be interested in the conversation!

    37 min
  2. From Tinkerers to Teams: Adopting AI with Aydin Mirzaee, Co-Founder & CEO @ Fellow.ai

    September 19


    Adopting AI inside an organization is rarely smooth. Most people are not natural tinkerers, and it takes more than enthusiasm to change how teams prepare, run, and follow up on meetings. For Aydin Mirzaee, CEO and co-founder of Fellow, the turning point was realizing that AI could take the work only the most organized people were willing to do and make it accessible to everyone. He calls this an AI “chief of staff”: a system that prepares meaningful follow-ups, captures decisions and actions, and threads context across meetings so teams can focus on higher-value work. In this episode, Daniel and Aydin explore what it looks like to move from manual meeting hygiene to AI-first workflows. They discuss reasoning-driven pre-meeting briefs, role-specific templates, and writing back into systems like Salesforce and Jira without extra human effort. Aydin also reflects on adoption realities: how to create room for experimentation, why surfacing high-value workflows matters more than tinkering, and what it means to build for models that don’t exist yet.

    🔑 What You’ll Learn in This Episode
    ✅ Why summaries are table stakes and the real gains come from workflow design
    ✅ How meeting-type templates help you capture the right signals
    ✅ What it takes to drive adoption when most people are not tinkerers
    ✅ Why builders need to build toward a model’s future capabilities, rather than current ones

    🔗 Resources & Links
    ✨ Fellow — AI meeting assistant: https://fellow.ai
    🤝 Connect with Aydin on LinkedIn: https://ca.linkedin.com/in/aydinmirzaee
    🎧 This New Way with Aydin Mirzaee: https://www.youtube.com/@aydin.mirzaee
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 If this conversation sparks an idea, share it with a colleague! It’s a grounded look at how AI reshapes work inside real teams.

    43 min
  3. Season 4 of Artificial Insights: The Big Questions Behind Doing AI Right

    Season 4 Trailer


    Welcome to Artificial Insights, where we talk to leaders and thinkers in AI about how to do AI right. On this podcast, we sit down every other Friday with people who build things with, and write things about, AI, and talk to them about what they do and why they do it. We've been doing this for just over a year now, and a core pattern has emerged: why you build with AI matters as much as how you build with AI. Join Daniel this season as he meets with and learns from a great lineup of guests from awesome companies like Canva, Fellow.ai, and Waha. We can't wait to introduce them all to you! In the meantime, check out some of our favorite episodes!

    🎧 Season 1, Episode 8 with Kris Braun on why human expertise and judgement remain essential, even with AI in the loop: https://rss.com/podcasts/manaryhaus/1788572/
    🎧 Season 2, Episode 1 with Patrick Belliveau on how curiosity, persistence, and early GPT models kickstarted an unexpected AI journey: https://rss.com/podcasts/manaryhaus/1875537/
    🎧 Season 2, Episode 6 with Mike Kirkup on what it’s like to demo an AI system you can’t fully predict: https://rss.com/podcasts/manaryhaus/2061508/
    🎧 Season 3, Episode 2 with Dr. Christopher Watkin on what “infinite efficiency” forces us to rethink about work, effort, and meaning: https://rss.com/podcasts/manaryhaus/2167772/

    🔗 Resources & Links
    🎧 Listen to more episodes: https://rss.com/podcasts/manaryhaus/
    🤝 Connect with Daniel on LinkedIn: https://www.linkedin.com/in/dmanary/
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    🚀 Share this trailer with a friend who’s just starting their AI journey. Thanks for listening!

    8 min
  4. AI Ethics Conversations That Shape How We Build: Artificial Insights Season 3 Recap

    September 5


    From ethics to student builders, this past summer season of Artificial Insights dug into how AI shapes what it means to be human and how the next generation is already learning to use it. Guests shared warnings about convenience, reflections on human worth, and hands-on lessons from shipping early projects. In this recap episode, Daniel looks back at highlights from every conversation:

    1️⃣ Sheldon Fernandez on the “instantaneous friend,” useful friction, and raising wise humans. Listen to the full episode: https://rss.com/podcasts/manaryhaus/2157598/
    2️⃣ Dr. Christopher Watkin on efficiency, meaning, and why friction still matters. Listen to the full episode: https://rss.com/podcasts/manaryhaus/2167772/
    3️⃣ Dr. K on the wisdom gap, the tyranny of convenience, and protecting agency. Listen to the full episode: https://rss.com/podcasts/manaryhaus/2178635/
    4️⃣ Dvir Zagury on aligning personalization with privacy and control. Listen to the full episode: https://rss.com/podcasts/manaryhaus/2182186/
    5️⃣ Aleks Santari on commoditized intelligence and adaptive, human-centred design. Listen to the full episode: https://rss.com/podcasts/manaryhaus/2185753/
    6️⃣ Aasha Khan on safe spaces for students to learn AI and building with purpose. Listen to the full episode: https://rss.com/podcasts/manaryhaus/2185718/

    It’s a chance to revisit the season’s biggest insights, and a reminder that doing AI right starts with protecting human worth, agency, and learning.

    🔑 What You’ll Learn in This Episode
    ✅ Why adding friction can protect judgment and trust
    ✅ How to decide which decisions must remain human
    ✅ Patterns for privacy, consent, and editable context
    ✅ Lessons from the next generation about how AI should and can be used

    🔗 Resources & Links
    🤝 Connect with Daniel on LinkedIn: https://www.linkedin.com/in/dmanary/
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    🎧 Looking back sharpens how we move forward. Share this episode with someone reflecting on AI’s role in their own work.

    15 min
  5. Back to School Special: When Schools Ban AI but the Job Market Demands It w/ Aasha Khan, Grade 12 Student at Cameron Heights Collegiate Institute, Founder of Youth Tech Labs

    August 29


    What happens when students are told not to use AI, but also told they’ll need it for their careers? For Aasha Khan, a Grade 12 student at Cameron Heights and founder of Youth Tech Labs, that tension defined her first encounters with AI. At school, the message was clear: avoid AI or risk suspension. At home, her father, a Chief AI Officer, encouraged her to explore the technology. The mixed signals left her, like many of her peers, caught between fear and curiosity. Aasha decided to create a safe space where high schoolers could learn AI together. Youth Tech Labs has since grown into a community that draws more than a hundred students, runs hands-on workshops, and hosts demo days where participants present their AI projects. Along the way, Aasha launched AskEve, an AI chatbot designed to break the stigma around menstruation and open up conversations often kept silent.

    This is the third and final part of our Back to School Special. If you missed them, check out episode four with Dvir Zagury on how curiosity led him from quantum foundations to health tech and personalized AI here: https://rss.com/podcasts/manaryhaus/2182186/ and episode five with Aleks Santari on how AI can fundamentally change the way interfaces are designed here: https://rss.com/podcasts/manaryhaus/2185753

    🔑 What You’ll Learn in This Episode
    ✅ Why unclear rules around AI leave students confused and divided
    ✅ How Youth Tech Labs helps students build real projects in a supportive environment
    ✅ Why empathy and creativity are central to student-led AI initiatives
    ✅ How projects like AskEve show AI’s potential for social good
    ✅ Why parents and teachers need to create safe spaces for youth to explore AI

    🔗 Resources & Links
    🤝 Connect with Aasha on LinkedIn: https://www.linkedin.com/in/aasha-khan-3a2294250/
    🌐 Check out Youth Tech Labs: https://youthtechlabs.ca
    🌐 Check out Gambit Changemakers: https://gambitco.io/#changemakers
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    🎒 Inspired by this Back to School Special? Share it with a parent, teacher, or student curious about how AI will shape the classroom and beyond.

    32 min
  6. Back to School Special: Why the Next Generation Is Asking What AI Should Do w/ Aleks Santari, Student @ Johns Hopkins University, Founder, & Philosopher

    August 27


    The next generation of builders isn’t just asking what AI can do, but what it should do. For Johns Hopkins student Aleks Santari, the most striking change AI brings is the commoditization of intelligence. When capabilities once reserved for experts become widely available, it reshapes education, work, and even how people see themselves. Aleks is exploring that reality firsthand through three projects: Flow, a health app that adapts to each person’s context; a snake-like surgical robot for eye surgery in a lab at Johns Hopkins; and an autonomous rover that gives him first-hand experience studying autonomous behavior. In this conversation, Daniel and Aleks discuss why people treat AI like a companion, the risks of doing so, and how dynamic user interfaces could make data more accessible and users less reliant on their devices. They also reflect on what happens when intelligence is no longer scarce, and why that might push us to rediscover the value of being human.

    This is episode two of our three-part Back to School Special, featuring students experimenting at the edge of AI. In case you missed it, check out Daniel's conversation with Dvir Zagury-Grynbaum on taking the leap from quantum physics to AI here: https://rss.com/podcasts/manaryhaus/2182186/

    🔑 What You’ll Learn in This Episode
    ✅ Why commoditized intelligence challenges how we define human value
    ✅ How students are building AI projects with real-world applications
    ✅ The risks of treating AI as a friend instead of a tool
    ✅ Why dynamic user interfaces could simplify health data
    ✅ What it means to apprentice in robotics while thinking like a philosopher

    🔗 Resources & Links
    🤝 Connect with Aleks on LinkedIn: https://www.linkedin.com/in/aleksantari/
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    💬 Feel inspired? Share this episode with someone asking how commoditized intelligence might reshape their work and identity.

    29 min
  7. Back to School Special: From Quantum Physics to Personalized AI w/ Dvir Zagury-Grynbaum, Physics Undergrad @ University of Waterloo

    August 25


    How do you bridge worlds as different as quantum research and AI product building? For Dvir Zagury-Grynbaum, the answer lies in curiosity. Still an undergraduate in physics at the University of Waterloo, Dvir has already worked at the Perimeter Institute, led AI design teams, and built tools that personalize decision-making. His project, thersona.com, learns from its users to help with everything from remembering birthdays to suggesting the right restaurant. He’s also applying causal inference and AI to diabetes management, helping people run “what if” simulations of their blood glucose hours into the future. In this special Back to School episode of Artificial Insights, Daniel and Dvir explore how quantum foundations connect with causal inference, why personalization raises important privacy questions, and how AI can be designed to reflect the way humans actually operate.

    🔑 What You’ll Learn in This Episode
    ✅ How quantum foundations overlap with causal inference
    ✅ How causal inference can power better health tech
    ✅ What Dvir learned building a personalized AI tool
    ✅ Why privacy and control shape user trust in AI tools

    🔗 Resources & Links
    🤝 Connect with Dvir on LinkedIn: https://www.linkedin.com/in/dvirzagury/
    ✨ Explore Persona: https://thersona.com
    💻 Learn about Project Goose (agent orchestration): https://github.com/block/goose
    🌐 Perimeter Institute for Theoretical Physics: https://perimeterinstitute.ca
    ✨ Gluroo (diabetes management startup): https://gluroo.com/
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    💬 Thinking about the future? Share this Back to School Special with someone who’s curious about how today’s students are shaping tomorrow’s AI.

    17 min
