Artificial Insights: How To Do AI Right

Daniel Manary

Candid conversations and real-world stories about building AI into products and businesses. Made for executives, founders, and product leaders who want to do AI right... from strategy to launch to scale. Each Friday, host Daniel Manary talks with CEOs, CTOs, CAIOs, product managers, researchers, and founders about bringing AI ideas to market, separating hype from lasting impact. He explores the hows, whats, and whys of artificial intelligence and digs into how this technology is changing the landscape of modern work and life. Guests share how they’ve worked with complex, messy data, built trust into automation, and launched AI-powered products customers rely on. Whether you’re starting a new AI initiative or scaling an existing product, you’ll hear insightful, hype-free lessons you can apply to your own business.

  1. Back to School Special: When Schools Ban AI but the Job Market Demands It w/ Aasha Khan, Grade 12 Student at Cameron Heights Collegiate Institute, Founder of Youth Tech Labs

    1D AGO

    What happens when students are told not to use AI, but also told they’ll need it for their careers? For Aasha Khan, a Grade 12 student at Cameron Heights and founder of Youth Tech Labs, that tension defined her first encounters with AI. At school, the message was clear: avoid AI or risk suspension. At home, her father, a Chief AI Officer, encouraged her to explore the technology. The mixed signals left her, like many of her peers, caught between fear and curiosity. Aasha decided to create a safe space where high schoolers could learn AI together. Youth Tech Labs has since grown into a community that draws more than a hundred students, runs hands-on workshops, and hosts demo days where participants present their AI projects. Along the way, Aasha launched AskEve, an AI chatbot designed to break the stigma around menstruation and open up conversations often kept silent.

    This is the third and final part of our Back to School Special. If you missed them, check out episode four with Dvir Zagury-Grynbaum on how curiosity led him from quantum foundations to health tech and personalized AI here: https://rss.com/podcasts/manaryhaus/2182186/ and episode five with Aleks Santari on how AI can fundamentally change the way interfaces are designed here: https://rss.com/podcasts/manaryhaus/2185753

    🔑 What You’ll Learn in This Episode
    ✅ Why unclear rules around AI leave students confused and divided
    ✅ How Youth Tech Labs helps students build real projects in a supportive environment
    ✅ Why empathy and creativity are central to student-led AI initiatives
    ✅ How projects like AskEve show AI’s potential for social good
    ✅ Why parents and teachers need to create safe spaces for youth to explore AI

    🔗 Resources & Links
    🤝 Connect with Aasha on LinkedIn: https://www.linkedin.com/in/aasha-khan-3a2294250/
    🌐 Check out Youth Tech Labs: https://youthtechlabs.ca
    🌐 Check out Gambit Changemakers: https://gambitco.io/#changemakers
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    🎒 Inspired by this Back to School Special? Share it with a parent, teacher, or student curious about how AI will shape the classroom and beyond.

    32 min
  2. Back to School Special: Why the Next Generation Is Asking What AI Should Do w/ Aleks Santari, Student @ Johns Hopkins University, Founder, & Philosopher

    3D AGO

    The next generation of builders isn’t just asking what AI can do, but what it should do. For Johns Hopkins student Aleks Santari, the most striking change AI brings is the commoditization of intelligence. When capabilities once reserved for experts become widely available, it reshapes education, work, and even how people see themselves. Aleks is exploring that reality firsthand through three projects: Flow, a health app that adapts to each person’s context; a snake-like surgical robot for eye surgery in a lab at Johns Hopkins; and an autonomous rover that gives him first-hand experience studying autonomous behavior.

    In this conversation, Daniel and Aleks discuss why people treat AI like a companion, the risks of doing so, and how dynamic user interfaces could make data more accessible and users less reliant on their devices. They also reflect on what happens when intelligence is no longer scarce, and why that might push us to rediscover the value of being human.

    This is episode two of our three-part Back to School Special, featuring students experimenting at the edge of AI. In case you missed it, check out Daniel's conversation with Dvir Zagury-Grynbaum on taking the leap from quantum physics to AI here: https://rss.com/podcasts/manaryhaus/2182186/

    🔑 What You’ll Learn in This Episode
    ✅ Why commoditized intelligence challenges how we define human value
    ✅ How students are building AI projects with real-world applications
    ✅ The risks of treating AI as a friend instead of a tool
    ✅ Why dynamic user interfaces could simplify health data
    ✅ What it means to apprentice in robotics while thinking like a philosopher

    🔗 Resources & Links
    🤝 Connect with Aleks on LinkedIn: https://www.linkedin.com/in/aleksantari/
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    💬 Feel inspired? Share this episode with someone asking how commoditized intelligence might reshape their work and identity.

    29 min
  3. Back to School Special: From Quantum Physics to Personalized AI w/ Dvir Zagury-Grynbaum, Physics Undergrad @ University of Waterloo

    5D AGO

    How do you bridge worlds as different as quantum research and AI product building? For Dvir Zagury-Grynbaum, the answer lies in curiosity. Still an undergraduate in physics at the University of Waterloo, Dvir has already worked at the Perimeter Institute, led AI design teams, and built tools that personalize decision making. His project thersona.com learns from its users to help with everything from remembering birthdays to suggesting the right restaurant. He’s also applying causal inference and AI to diabetes management, helping people run “what if” simulations of their blood glucose hours into the future.

    In this special Back to School episode of Artificial Insights, Daniel and Dvir explore how quantum foundations connect with causal inference, why personalization raises important privacy questions, and how AI can be designed to reflect the way humans actually operate.

    🔑 What You’ll Learn in This Episode
    ✅ How quantum foundations overlap with causal inference
    ✅ How causal inference can power better health tech
    ✅ What Dvir learned building a personalized AI tool
    ✅ Why privacy and control shape user trust in AI tools

    🔗 Resources & Links
    🤝 Connect with Dvir on LinkedIn: https://www.linkedin.com/in/dvirzagury/
    ✨ Explore Persona: https://thersona.com
    💻 Learn about Project Goose (agent orchestration): https://github.com/block/goose
    🌐 Perimeter Institute for Theoretical Physics: https://perimeterinstitute.ca
    ✨ Gluroo (diabetes management startup): https://gluroo.com/
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    💬 Thinking about the future? Share this Back to School Special with someone who’s curious about how today’s students are shaping tomorrow’s AI.

    17 min
  4. The Tyranny of Convenience and the Wisdom Gap in AI w/ Dr. K, Bioethicist, AI Theologian @ FaithTech, & Former U.S. Intelligence Officer

    AUG 22

    What happens when the technology you rely on gets better every day while your own capacity remains the same? For Dr. K, theologian, bioethicist, and former U.S. intelligence officer, this is a deeply human question. With two decades in applied ethics, 14 books, and a career spanning hospital ethics, ministry, and service in the intelligence community, she brings a rare perspective on AI’s impact on human worth and agency.

    In this conversation, Daniel and Dr. K explore the “wisdom gap,” the widening distance between human limits and accelerating AI capacity. They discuss the “tyranny of convenience,” the pull to let machines take on hard work, and the importance of preserving agency when tools become persuasive partners. The conversation also pushes into bigger questions. Where does our worth come from when we now share intellectual space with AI? Why is this moment unlike the printing press or past technologies? And how can leaders resist the easy path in order to choose what is right?

    🔑 What You’ll Learn in This Episode
    ✅ What the “wisdom gap” means for humans in an AI-driven world
    ✅ Why convenience can erode agency if left unchecked
    ✅ How persuasive technology shapes decisions without us realizing
    ✅ Why AI will never be worse than it is today, and what that means for work and meaning
    ✅ How theology and ethics can guide us through this AI moment

    🔗 Resources & Links
    🌐 Learn more about FaithTech: https://faithtech.com
    📄 Read Tim Wu’s essay The Tyranny of Convenience: https://www.nytimes.com/2018/02/16/opinion/sunday/tyranny-convenience.html
    📚 Explore the Center for Humane Technology: https://www.humanetech.com
    📩 Subscribe to the Artificial Insights newsletter for key takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    💬 Have a moment of insight? Share this episode with someone wrestling with the deeper questions of AI, worth, and what it means to be human.

    43 min
  5. Infinite Efficiency and Human Value w/ Dr. Christopher Watkin, ARC Future Fellow & Associate Professor @ Monash University

    AUG 15

    What happens when the work you’ve built your identity around can be done faster, and sometimes better, by AI? For Dr. Christopher Watkin, philosopher, theologian, and associate professor at Monash University, AI’s greatest impact may be the questions it forces us to ask: What is work for? Where do we find value when productivity is no longer scarce? And what does this moment reveal about what it means to be human?

    In this episode, Daniel and Dr. Watkin discuss “humanity of the gaps,” the risk of defining ourselves only by what AI can’t yet do, and why the ease AI brings to work is both a gift and a challenge. They explore how AI shifts work from process to product, and how this moment can open rare opportunities for deeper public conversations about meaning, value, and the good life.

    🔑 What You’ll Learn in This Episode
    ✅ Why AI makes old philosophical questions impossible to ignore
    ✅ How “infinite efficiency” changes the purpose of work
    ✅ What “humanity of the gaps” reveals about our self-definition
    ✅ Why effort, friction, and process still matter in a world of perfect output
    ✅ How AI can help us see assumptions we didn’t know we had

    🔗 Resources & Links
    🤝 Connect with Dr. Watkin on LinkedIn: https://www.linkedin.com/in/christopher-watkin
    🌐 Explore Dr. Watkin’s work: https://christopherwatkin.com
    🐦 Follow Dr. Watkin on X: https://x.com/DrChrisWatkin
    📺 Watch Dr. Watkin's AI relational audit video: https://www.youtube.com/watch?v=GN62ekMtlJg&ab_channel=ChristopherWatkin
    📩 Subscribe to the Artificial Insights newsletter for highlights: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    💬 Learned something worth sharing? Pass this episode along to someone asking deep questions about AI and its place in our lives.

    49 min
  6. Before and After ChatGPT: Using AI Without Losing Ourselves w/ Sheldon Fernandez, Former CEO @ DarwinAI

    AUG 8

    Before ChatGPT, Sheldon Fernandez knew what it was to wrestle with a sentence until it worked. As an AI ethics speaker, former AI CEO, and theologian, he’s seen what’s gained, and what’s lost, when the work of critical thinking is just a click away. Now, as his children grow up in a world where AI can answer every question and affirm every feeling, he’s asking what that means for how we learn, relate, and make decisions. Sheldon brings a rare mix of technical expertise and philosophical insight to questions at the heart of AI and humanity.

    In this episode, Daniel and Sheldon talk about the pace of AI’s progress, why the temptation to outsource thinking is so strong, and how to keep hold of what is uniquely human in a time when that’s harder to define. They explore AI’s role in education, decision-making, and the skills we need to thrive in an AI-powered world.

    🔑 What You’ll Learn in This Episode
    ✅ How growing up before AI changes the way you use it
    ✅ Why constant validation from AI can get in the way of honest feedback
    ✅ How AI "shifts" where we do our critical thinking
    ✅ The risks of replacing human relationships with AI conversations
    ✅ How theology and technology meet in questions of consciousness and the sacred

    🔗 Resources & Links
    🤝 Connect with Sheldon on LinkedIn: https://www.linkedin.com/in/sheldonfernandez/
    📽️ Watch Sheldon's talk on The Theological Implications of Artificial Intelligence: https://www.youtube.com/watch?v=wDPcnnltmf8
    📩 Subscribe to the Artificial Insights newsletter for summaries and takeaways: https://manary.haus/podcast/#haus
    👉 Have a guest in mind? Reach out to Daniel at daniel@manary.haus
    🚀 Learn something new? Leave a review and share it with someone thinking about AI’s future.

    40 min
  7. Doing AI Right: Lessons from 11 Leaders Who’ve Seen What Works (and What Doesn’t) w/ Daniel Manary

    AUG 1

    AI is everywhere, but not every implementation works. In this Season 2 recap of Artificial Insights, Daniel revisits the most powerful moments from conversations with eleven leaders building AI in the real world. These guests have seen what happens when AI is rushed, misused, or built without purpose, as well as what it takes to create AI that lasts. Across three themes, their voices reveal patterns worth paying attention to:

    1️⃣ The risks of rushing into AI and what happens when pressure overrides purpose
    🎧 Jennifer Moss, author of "Why Are We Here?", speaking on the false pressures executives feel to adopt AI before they have a plan. Full episode here.
    🎧 Bijan Vaez from Merchkit, speaking on how feeding AI really bad data turned out to be the bigger problem. Full episode here.
    🎧 Carlos Almeida from Optave, speaking on the pitfalls of the fast prototype and why you shouldn't depend on an AI solution built in a weekend. Full episode here.
    🎧 Jonathan Green of Serve No Master, speaking on why you shouldn't just fire your entire customer support team and replace them with AI to save money. Full episode here.

    2️⃣ How to build AI on solid ground to solve real problems and earn trust
    🎧 Patrick Belliveau from GambitCo, speaking on how solving not for cancer but for a very specific problem was the key to success. Full episode here.
    🎧 Alex Millar from GovAI, speaking on how everything is, technically, a wrapper; the challenge is finding how you add value. Full episode here.
    🎧 Mike Kirkup from Arlo, speaking on how easy it is to get to 50% with AI models, but the hard part is getting to 99%. Full episode here.
    🎧 Tair Asim from Sync, speaking on how good inputs are non-negotiable when building RAG-based AI products. Full episode here.

    3️⃣ Why rethinking work defines the future and why transparency matters
    🎧 Atif Khan of MessagePoint, speaking on how becoming AI-first requires a major shift in thinking. Full episode here.
    🎧 Chyngyz Dzhumanazarov of Kodif, speaking on how the future of customer service will probably be AI-first, with the human touch as a premium service. Full episode here.
    🎧 Nicolas Tobis of Relias, speaking on how, with AI, we can now train and measure empathy. Full episode here.

    This episode closes Season 2 with a big insight: the companies that thrive aren’t asking, “What can this model do?” They’re asking, “What do we need to understand about our data, our customers, and ourselves to make this work?”

    14 min
