ABCs for Building The Future

Robert Ta

If we don't fully understand ourselves, how can AI understand us? We're bootstrapping epistemicme.ai to solve this in the open, and giving you the nitty-gritty, behind-the-scenes details of startup life. We feature founders, entrepreneurs, researchers, scientists, and builders interested in building a better future together: people doing big things, with big stories to tell, from the frontlines. And we share our own story in real time, with radical transparency, as we build this global open-source venture in public. Join us: abcsforbuildingthefuture.substack.com

  1. You are the alignment problem

    MAR 6

    You are the alignment problem


    38 min
  2. The Philosophy That'll Change How You See Reality

    JAN 23

    The Philosophy That'll Change How You See Reality

    Introducing Epistemic Me: Open Source AI for Personalized Belief Systems

    In this episode, the founders of Epistemic Me discuss the inception and development of their project—a tool designed to deeply understand users' beliefs and optimize personalized recommendations. They delve into philosophical concepts like the self and interconnectedness, the role of an open-source approach, and their vision for the future. Key highlights include the project's potential applications in health, mental well-being, and science, and the importance of epistemology in evolving AI systems. The conversation also touches on the next steps for launching the open-source GitHub repository and community involvement.

    00:00 Are We Living in a Simulation?
    01:20 Introducing the Podcast and Hosts
    01:59 What is Epistemic Me?
    04:51 The Origin Story of Epistemic Me
    01:09 Explaining Epistemic Me to a 5-Year-Old
    21:22 Personal Journeys and Philosophical Insights
    40:43 Purpose and Identity
    46:36 The Process of Epistemology
    47:12 Operating System Analogy
    47:29 Maslow's Hierarchy and Self-Discovery
    48:13 Future of Epistemic Me: 5 to 10 Years
    49:01 Personal Recommendation Engines
    50:32 Data-Driven Science
    52:36 Success in 5 to 10 Years
    57:38 Open Source and Ethical Considerations
    01:12:59 Long-Term Vision: 500 Years Ahead
    01:26:33 Community and Contributor Call
    01:33:28 Closing Thoughts and Future Plans

    Get full access to ABCs for Building The Future at abcsforbuildingthefuture.substack.com/subscribe

    1h 35m
  3. Your Cells Are Goal-Seeking Agents (And What That Means for AI Alignment)

    JAN 9

    Your Cells Are Goal-Seeking Agents (And What That Means for AI Alignment)


    1h 43m
  4. Why Your Product Has Too Many Features (Ex-Atlassian Engineer Explains)

    10/27/2025

    Why Your Product Has Too Many Features (Ex-Atlassian Engineer Explains)

    What if the biggest threat to your product isn't what you haven't built yet, but everything you already have? I sat down with Rich—early at Atlassian, who spent years in the trenches building products used by millions—and we went deep on something most founders don't want to hear. Your users don't want more features. They want clarity. They want tools that solve real problems without drowning them in options they'll never touch. This conversation hit me hard because I'm building Clarity right now, and Rich's insights forced me to question everything. Are we building for users, or are we building to feel productive? Are we creating value, or are we creating noise? If you're a founder, product leader, or engineer who's ever felt the pressure to "just ship one more thing," this is your intervention. Let's get into it.

    1. The Feature Bloat Trap: How Success Becomes Your Burden

    Every product team faces this paradox. You build something people love. They start using it. Then come the requests: "Can you add this? What about that?" Before you know it, your elegant solution has morphed into a Swiss Army knife that nobody knows how to use anymore. Rich lived this at Atlassian. "You're not building for your existing customers anymore. You're building for the imagined customer that might come in the future," he told me. "And that's when you lose focus."

    The psychology here is brutal, and I think about it every single day while building. Adding features feels like progress. It feels like growth. It gives your team something to do, your sales team something to pitch, and your investors something to point to in their decks. But here's the truth: each new feature is a cognitive tax on every user who has to navigate around it just to get to what they actually need. I keep asking myself: are we building this because users need it, or because we need to feel like we're moving forward?

    Application for builders: Before adding your next feature, pause. Ask yourself that question. The best product decisions often involve saying no, not yes. And saying no is f*****g hard when everyone around you is screaming for more.

    2. The Oracle Years: When Engineering Excellence Meets Corporate Bloat

    Rich spent fourteen years at Oracle and PeopleSoft. Fourteen years watching what happens when feature accumulation becomes institutional religion. Large enterprise software companies don't just add features—they acquire them, bolt them on, and create labyrinths of functionality that require armies of consultants to navigate. What struck me most was this moment of reflection: "I tended to be in the back seat most of the time," Rich said. "I would probably suggest moving up to the front, maybe even getting towards the driver's seat and figuring out how to control and do something more meaningful with your career."

    This wasn't just career advice. This was about ownership. When you're deep in a large organization, it's easy to become a feature factory worker—executing someone else's roadmap, optimizing someone else's metrics, building toward someone else's vision. You lose sight of the why behind what you're building. And here's the irony that kills me: many of these organizations are drowning in talented engineers who know exactly what needs to be cut, simplified, and reimagined. But the incentive structures reward addition, not subtraction.

    Application for builders: If you find yourself building features you don't believe in, it's time to either influence the roadmap or find a different seat. Your best work happens when you care about the outcome, not just the output. I learned this the hard way at my first startup in crypto gaming, when I was just executing without conviction. Never again.

    3. The Clarity Insight: What Journaling Teaches Us About Product Design

    One of the most validating moments in our conversation came when Rich reflected on what I'm building with Clarity—my tool for personal performance management through journaling and self-reflection. "The thing that appealed to me when you were talking about it is like, it's really like a personal performance management app," Rich said. "If you write in a journal, the only time a journal is really useful is if you're deliberate about it. You have discipline in writing in it regularly, and then your mind will clarify."

    But here's the brutal truth about most journals: they become write-only systems. You pour your thoughts onto pages, close the book, and never look back. All that valuable reflection becomes historical noise. Gone. Useless. Rich compared Clarity to using Granola for meeting notes—having the ability to search through past conversations, find patterns, and extract insights months or years later. That transforms raw data into actionable knowledge. That's the whole f*****g point. This is the opposite of feature bloat. Instead of adding more ways to capture information, we're focused on making existing information more valuable. Instead of building forty different input methods, we're building one great way to retrieve insight when you need it most.

    Application for builders: Before you write a single line of code for that next feature, stop. Ask yourself one question: "Does this help users find clarity, or does it add to their cognitive burden?" The best products reduce mental overhead. They don't increase it. And if we're honest with ourselves, most features we ship increase it. We add complexity in the name of capability, and our users pay the price in confusion.

    4. The Jobs-to-be-Done Lens: What Are You Really Solving?

    Midway through our conversation, I pushed Rich to articulate the core job that Clarity is solving. I love using Bob Moesta and Teresa Torres's frameworks for this. What's the fundamental outcome users are hiring the product to deliver? His answer stopped me in my tracks: "The promise of AI is about truly being your assistant over your life. Things fail—our memories are going to fail. We forget things over time. We're not perfect. If you have some history of everything you've done or things you've talked about, and if you could make use of that somehow... it's like a living autobiography."

    A living autobiography. Holy s**t. This is what great product thinking looks like. Not "we're building a journaling app" or "we're adding AI features." But rather: "We're solving the fundamental problem that human memory is unreliable, and valuable experiences disappear into the void unless we have systems to preserve and surface them." When you understand the job at this level, feature decisions become obvious. Does this help create a living autobiography? Does it surface forgotten insights? Does it reduce the burden of remembering? If not, it doesn't belong in the product.

    What I'm taking from this: I wrote down our one job on a sticky note and put it above my desk: Create a living autobiography of who you're becoming. Every feature request now gets evaluated against that job. If it doesn't serve that core purpose, it's noise. It's a distraction.

    5. The Experience Over Output Paradox: Why We Build the Wrong Things

    The most philosophical moment came when Rich talked about his relationship with hobbies, particularly photography. He confessed something I deeply relate to: he often feels the need to see progress, validation, and recognition for the things he creates—even when he knows he should just enjoy the process. "I feel like I'm wired that way where like, I need to see some sort of progress, some validation, some recognition for the things that you do," he admitted. "I know every human wants to feel confident. They want to feel like they belong."

    F**k, I felt that. This is the trap that destroys products. We build for validation—from users, investors, the market, ourselves—rather than building for genuine utility. We add features because we want to show progress. We complicate interfaces because simplicity doesn't feel like enough work. But the best products respect that value comes from experience, not from feature count. A journal is valuable because of the clarity you gain while writing, not because of the pages you fill. A camera is valuable because of how it changes what you see, not because of megapixel counts.

    Rich's self-awareness here is the mark of a mature product thinker: recognizing the impulse to optimize for the wrong metrics and constantly pulling yourself back to what actually matters. I've been thinking about this a lot lately. How much of what I build is for me to feel like I'm making progress, versus what actually serves the user? How often do I add complexity because I need external validation that I'm "doing something"?

    Application for builders: We need to build cultures where "we decided not to build that" is celebrated as much as "we shipped this new feature." The best product teams are disciplined about what they say no to. That's the real flex.

    Key Takeaways: The Anti-Feature Manifesto

    Your product doesn't need more features. It needs more focus.

    1. Question Every Feature Request. Most come from imagined future customers, not actual needs. Ask: "Who is this really for?"
    2. Value Subtraction Over Addition. Removing confusion beats adding functionality. Every feature taxes user attention.
    3. Design for Retrieval, Not Capture. Information without insight is noise. Make the past useful, not just stored.
    4. Understand the Core Job. If you can't say it in one sentence, you're building on shaky ground. Ours: living autobiography.
    5. Build for Experience, Not Validation. Stop building for your ego. Build for user value.
    6. Take the Driver's Seat. Own the direction. Don't be a passenger in your own life.

    The pressure to add more never stops. The market always demands more features. Competitors always tout expanding capabilities. Resist. The products that endure stay focused.

    2h 3m
  5. Building AI Products Right

    10/13/2025

    Building AI Products Right

    What if the biggest mistake in AI development isn't what you're building—but how you're measuring it? In this episode of ABCs for Building the Future, host Robert sits down with Hamel Husain — machine learning engineer with 25+ years across GitHub, Airbnb, and his own consulting practice, and creator of the AI Evals course that's trained nearly 50 OpenAI employees. Hamel shares his evolution from pioneering code understanding models at GitHub to breaking free from corporate America — and makes a compelling case that we're measuring AI products all wrong. We don't need more automated eval vendors or hallucination scores. We need teams who know how to look at their data, count their failures, and iterate like investigative journalists. If you're a founder, engineer, or product leader drowning in eval tooling demos and wondering why your AI product still feels broken — this conversation is a masterclass in cutting through the hype, escaping the matrix of engineering elitism, and building products that actually work.

    1. Error Analysis Before Automation: The 30-Minute Practice That Beats Every Vendor Tool

    "The first step in doing eval is first understand what is wrong with your system. So doing some data analysis of your system to figure out what is broken."

    Hamel's career-defining insight didn't come from a research paper or a vendor pitch. It came from watching client after client get stuck in the same place: building AI products that "work" but don't work well. The pattern was identical every time. Teams would glue together RAG pipelines, add tool calling, get excited about the demo—then hit a wall. "How do we make it actually good?" they'd ask. And Hamel would ask back: "How are you measuring what's improving?"

    The breakthrough: Most teams jump straight to evals (or worse, eval vendors) without understanding their actual failure modes. They're solving the wrong problem.

    What actually works (see the sketch at the end of this summary):
    * Pull 100 traces from your production system
    * Take notes on what breaks (be specific: "interrupts user mid-thought," not "bad UX")
    * Categorize your notes into failure types
    * Count them

    That's it. No LLM judges, no automated hallucination scores, no vendor dashboards.

    Reflection: Hamel shared a story about auditing a recruiting email platform. The AI-generated messages were generic LinkedIn spam: "Based on your background at Epistemic Me..." When he pointed out he'd immediately delete these, the team said "everything works." But they weren't actually trying to recruit anyone. They weren't measuring conversion. They had convinced themselves the product worked because they hadn't looked at the data. If you've ever felt like your AI product is "almost there" but can't articulate why it's not—this 30-minute practice will tell you more than three months of tooling evaluation.

    2. The Dogfooding Delusion: Why Most Teams Are Just Testing (Not Using)

    "Are you dogfooding on that level? Where you are the expert. You are using it in anger all the time... Its output affects your livelihood."

    When Anthropic revealed that Claude Code's success came partly from intensive internal use, Twitter exploded: "See, you don't need evals!" But Hamel caught what everyone missed—a crucial distinction between real dogfooding and performance theater.

    Real dogfooding (Anthropic engineers with Claude Code):
    * They ship production code using it daily
    * When it breaks, their work stops
    * The output affects their livelihood
    * They're both the domain expert AND the user

    Fake dogfooding (most teams):
    * "Using" the product occasionally to "test it"
    * Not actually trying to achieve the goal
    * Wouldn't pay for it themselves
    * Building an AI health coach but not trying to lose weight with it

    The gap is everything. In fake dogfooding, you convince yourself the product works because you're not feeling the pain of it failing.

    Reflection: Hamel's litmus test is disarmingly simple: Would you pay for this with your own money? Does using it save or cost you real time? If not, you're not dogfooding—you're testing. And that's a fundamentally different feedback loop, one that lets broken products feel functional far longer than they should.

    The dangerous corollary: Everyone thinks they're dogfooding. The teams building recruiting tools, fitness coaches, coding assistants—they all believe they're using their products "in anger." But ask them about conversion rates, about whether they'd recommend it to a friend, about whether it's actually solving their problem—and the story changes.

    3. Data Science as Investigative Journalism: The Mindset That Changes Everything

    "Good data science in big businesses looks like investigative journalism. You're figuring out, teasing out the story of what exactly happened here and what can I learn."

    This reframe completely shifted how I think about AI product work. It's not about running queries or computing metrics—it's about investigation.

    An investigative journalist:
    * Conducts interviews (qualitative insight)
    * Analyzes public records and data (quantitative patterns)
    * Connects dots others miss
    * Synthesizes into a narrative that drives action

    An effective AI product developer:
    * Examines traces and user conversations (qualitative)
    * Measures failure patterns and metrics (quantitative)
    * Identifies root causes, not symptoms
    * Communicates what's broken and why in ways that mobilize teams

    Why this matters: Most engineers are trained to dismiss storytelling as "soft skills" or "non-technical fluff." But when you're working with stochastic systems that can't be reduced to deterministic tests, your ability to investigate, synthesize, and communicate becomes the core competency. You can't fix what you can't explain. And you can't explain what you haven't investigated.

    Reflection: Hamel describes how at companies experiencing margin leakage or churn spikes, the best data scientists operate exactly like reporters: they gather evidence from multiple sources (qualitative interviews, quantitative metrics), look for patterns, rule out false leads, and ultimately tell a coherent story about what happened and why it matters. The person who can explain why your AI health coach sounds paternalistic is worth infinitely more than someone who can compute its "tone score." If you're hiring for AI product roles, look for people who can tell stories with data—not just run statistical analyses.

    4. Silicon Valley's Status Games: Why Engineers Defend the Matrix (And How to Escape)

    "Engineers will defend the matrix. They will defend the matrix to the death... You kind of have to decide, like you have to see that. If you want to get outside the matrix."

    This might be the most uncomfortable truth Hamel shared, and the most liberating.

    The Silicon Valley hierarchy (unspoken but real):
    * VC-backed unicorn founder (peak status)
    * Bootstrapped product founder
    * Senior engineer at FAANG
    * Course creator / educator (low status, sometimes openly mocked as "course bro")

    The absurd reality: Hamel's first course on fine-tuning LLMs generated 5x his consulting income while he slept. More critically, it gave him leverage and freedom—the very things most engineers claim to want but are trained to dismiss. "I would go to sleep and then wake up in the morning and be like, all these sales while I was sleeping. That totally rewired my brain. I can never go back to a job."

    Why this matters: Engineers are systematically taught to mock anything "non-technical"—marketing, communication, teaching, writing. If you're good at explaining concepts or building community, you're dismissed as "not a real engineer" or "the marketing guy." But these are exactly the skills that enable you to escape trading time for money.

    The trap Hamel identifies: At companies, if you're a strong developer who's also good at writing, communication, or devrel—you'll often get lower pay, hit career ceilings faster, and face subtle elitism from peers. The message is clear: spike on coding only, or you're not serious. But if you want to run your own business? Those "soft skills" are suddenly the highest-leverage activities you can do.

    Reflection: Hamel traces this back to his consulting days at Accenture, where consultants would laugh at their clients over dinner: "Can you believe they don't know what they're doing?" It's a different matrix—the consulting matrix—where you're convinced you could never work at "those companies." He only broke free after going to law school (which he hated), resetting his brain, and realizing: there are multiple matrices. And most engineers defend theirs without even seeing it.

    The litmus test: If you're good at writing, teaching, or explaining—that's not weakness. The question isn't whether it has "social status" in your team's Slack. The question is: does it give you freedom?

    5. When Evals Actually Matter (And When They're Just Theater)

    "You shouldn't do evals unless you're getting some value out of it. If you do an eval you get some immediate value out of it. Otherwise you shouldn't do it."

    After all the discussion about what evals aren't, Hamel is crystal clear about when they become genuinely valuable—and when they're just performance.

    Use evals when:
    * You've identified a specific, recurring failure through error analysis
    * The fix isn't obvious (you'll need to iterate)
    * You have enough examples to validate your measurement approach
    * The eval provides a signal you trust to guide rapid iteration

    Skip evals when:
    * You found a simple bug you can just fix (wrong tool in an API call, a syntax error)
    * You're trying to automate away understanding your product
    * You're chasing generic metrics without product context (hallucination scores, toxicity)
    * You haven't looked at your actual data
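    To make the error-analysis practice from section 1 concrete, here is a minimal sketch in Python, assuming traces have been exported as plain dicts; the field names and failure categories are hypothetical illustrations, not any specific tool's API. The human step—reading each trace and writing a specific note—can't be automated; the toy rules below only stand in for that judgment so the counting step is runnable.

    ```python
    from collections import Counter

    def categorize(trace: dict) -> str:
        # Stand-in for your hand-written notes: name the failure specifically.
        output = trace.get("output", "")
        if not output:
            return "empty response"
        if trace.get("cut_user_off"):  # hypothetical flag from your notes
            return "interrupts user mid-thought"
        if "based on your background" in output.lower():
            return "generic template, no real personalization"
        return "ok"

    def error_analysis(traces: list[dict]) -> Counter:
        # Pull ~100 traces, categorize each note, then simply count.
        return Counter(categorize(t) for t in traces[:100])

    # error_analysis(traces).most_common() shows which failure mode to
    # attack first -- before any LLM judge or vendor dashboard.
    ```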

    1h 22m
  6. Every Job Can Be A Climate Job With Louisa Henry

    09/08/2025

    Every Job Can Be A Climate Job With Louisa Henry

    What if the most powerful lever for saving the planet isn't a new technology — but a mindset shift at your current job? In this transformative episode of ABCs for Building the Future, host Robert sits down with Louisa Henry — executive coach, former product leader at Gusto and Airtable, and creator of the podcast Any Job Can Be a Climate Job. Louisa shares her personal evolution from Fortune 100 product roles to climate activism — and makes a compelling case that we don't need every job to be in climate tech. We need every worker, in every role, to think like a climate changemaker. If you're a founder, technologist, or executive searching for purpose in your work — this conversation is a masterclass in reimagining your role, your rituals, and your ripple effect.

    1. From Chase to Change: Louisa's Climate Awakening

    "I was looking at my annual plan and realized — this isn't aligned with my values. I'm not doing what matters most to me."

    Louisa's career was a blueprint of tech success: Airtable, Gusto, JPMorgan Chase. But the pandemic, a family tragedy, and a growing internal dissonance pulled her in a new direction. After walking away from a promising health tech job offer, Louisa leaned into uncertainty and grief — and emerged with clarity. Climate wasn't just a cause. It was her cause.

    Reflection: If you've ever felt misaligned with your work, Louisa's pivot is a reminder: discontent can be a compass, not a curse.

    2. Why Any Job Can Be a Climate Job

    "You don't have to work at a climate startup to be a climate leader. Every company has an impact — and every employee has influence."

    The central thesis of Louisa's new podcast — and mission — is disarmingly simple: climate change isn't someone else's job. Whether you're in finance, marketing, product, or HR, there are always leverage points: policies, vendors, product energy consumption, company culture. She shares a powerful story of one ad tech employee who asked, "Where is our biggest energy waste?" The answer led to a new, greener, faster product — and a shift in company direction.

    Reflection: Don't underestimate your domain knowledge. Climate action isn't just about values — it's about understanding systems, incentives, and change from within.

    3. The Power of Presence: Rituals That Rewire the Mind

    "If you can be present in every moment — that's the secret to life."

    Louisa's leadership transformation is inseparable from her spiritual one. A key turning point came when she embraced mindfulness practices — daily journaling, 5:15 AM meditations, walking meetings, and the tattoo-level clarity of "stop and be." Meditation gave her something the corporate world rarely does: meta-attention. The ability to notice what she was paying attention to — and decide where it should go.

    Reflection: Founders and execs often optimize everything but their own mind. Presence is a performance enhancer — and an ethical compass.

    4. Building Climate-Conscious Community from the Inside Out

    "We don't need every company to be a climate company. But we do need every company to care."

    Through her podcast and in-person events, Louisa is building a growing ecosystem of climate-conscious professionals — not activists outside the system, but intrapreneurs reshaping it from within. At SF Climate Week, her founder circles were oversubscribed within hours — a sign that leaders are hungry for connection, clarity, and courage. She also calls out the tension between capitalism and sustainability — and how meaningful change often begins with "non-obvious" allies inside the system.

    Reflection: The climate movement isn't just about information. It's about belonging. If you're lonely in your convictions, find your circle — or create one.

    5. Your Micro-Choices Create Macro Change

    "The world is soft clay. You can mold it."

    Louisa offers practical pathways to climate action:
    * Audit your team's resource consumption
    * Advocate for greener vendor policies
    * Influence product energy usage
    * Shift budgets — personal or professional — toward regenerative systems
    * Practice intentional consumption (wait 24 hours before every online purchase)
    * Remember: eco-anxiety is real. Be kind to yourself. You don't need to be perfect. You just need to start.

    Reflection: Real impact isn't one heroic act. It's consistent, aligned micro-decisions that compound into culture.

    🎧 Resources and Further Listening
    * Follow Louisa on LinkedIn
    * Follow Louisa on Substack
    * Louisa's Podcast: Any Job Can Be a Climate Job
    * Listen on Apple
    * Listen on Spotify
    * Are you a senior leader or founder navigating pressure, growth, and culture strain? Louisa can help!
    * Her episode on climate action from within ad tech (with Gabe): Listen here
    * 30-day free trial for the meditation app she recommends: Waking Up
    * Book on attention and meditation: Emotional Intelligence by Daniel Goleman
    * Foundational environmental insight: Braiding Sweetgrass by Robin Wall Kimmerer

    Get full access to ABCs for Building The Future at abcsforbuildingthefuture.substack.com/subscribe

    1h 13m
  7. 13-Year-Old Immigrant to AI Thought Leader

    07/24/2025

    13-Year-Old Immigrant to AI Thought Leader

    Opening Hook: What if the key to high-impact leadership in the AI age isn't in your credentials — but in your ability to listen to your body, trust your joy, and design your own definition of success?

    🎧 Episode Context: In this episode of ABCs for Building the Future, host Robert Ta sits down with Dr. Serena Huang — former data scientist turned founder, global speaker, and passionate advocate for inclusive, data-driven well-being at work. Serena's journey, from a 13-year-old immigrant to a corporate leader and entrepreneur, is a masterclass in growth, self-awareness, and courageous change. If you're an entrepreneur, tech leader, or visionary navigating reinvention in the age of AI, this episode is packed with tools, reflections, and frameworks to help you thrive.

    🔍 Key Themes & Insights:

    1. Leaving the Ladder: Why Joy Beats Job Titles

    After climbing the corporate ranks, Serena found herself spending her days "stakeholder-managing every single minute" — far from the data work and public speaking she loved. A reflective moment in a 1:1 meeting with the CHRO marked her unexpected pivot.

    "I wondered… could I make a living doing just things that bring me joy?" – Dr. Serena Huang

    2. Disciplined Freedom: Rebuilding Structure as a Founder

    Serena traded corporate discipline for entrepreneurial chaos — and then returned to structure on her own terms.

    "I needed that initial freedom… but at the core, I'm a structured person. I block time now for workouts, reflection, and uninterrupted lunches."

    Freedom isn't the absence of structure. It's the ability to create your own — based on what energizes and grounds you.

    3. The Myth of Overnight Success

    Serena's post-corporate career took off fast — 30 cities, keynotes, global clients — but it wasn't luck. It was momentum built over years of unpaid speaking, content creation, and behind-the-scenes resilience.

    "There's no such thing as an overnight success. People didn't see the 10 years of work before I made my first sale."

    What slow, invisible work are you investing in today that will compound later?

    4. AI, Inclusion, and Wellbeing: The Missing Link

    Through her research and new book, Serena explores how AI and data can be used to measure — and improve — inclusion and wellbeing at work. Spoiler: they're deeply connected.

    "You can't have wellbeing without inclusion."

    Using AI to analyze meeting invites, calendar patterns, and Slack sentiment can reveal exclusion risks and burnout signals long before surveys do (a minimal sketch of this idea appears at the end of this summary).

    🔗 Explore the intersection of DEI, wellbeing, and AI

    5. Rewriting Worth: From Achievement to Authenticity

    Serena shares how years of therapy helped her separate identity from output — and choose self-compassion over self-optimization.

    "You are worthy simply as you are. You don't need to achieve a thing."

    Success is not a substitute for self-worth. Build your life around your values — not just your goals.

    🔗 Resources and Links:
    * Serena's company: Data with Serena
    * Serena's book: The Inclusion Equation
    * Pie of Life framework
    * Sidebar Summit: sidebar.com

    Get full access to ABCs for Building The Future at abcsforbuildingthefuture.substack.com/subscribe
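    As a thought experiment on insight #4, here is a minimal sketch of mining calendar metadata for the signals Serena describes, assuming meetings are available as plain dicts; every field name and threshold is a hypothetical illustration, not a reference to her actual methods or tooling.

    ```python
    from collections import Counter
    from datetime import datetime

    def inclusion_and_burnout_signals(meetings: list[dict], team: list[str]) -> list[str]:
        """meetings: [{'start': datetime, 'attendees': [names, ...]}, ...]
        Returns coarse flags that could surface long before a survey would."""
        flags = []
        # Exclusion risk: someone invited to far fewer meetings than peers.
        invites = Counter(a for m in meetings for a in m["attendees"])
        avg = sum(invites[p] for p in team) / max(len(team), 1)
        for person in team:
            if invites[person] < 0.5 * avg:
                flags.append(f"{person}: invited to roughly half as many meetings as peers")
        # Burnout signal: a heavy share of meetings outside working hours.
        after_hours = [m for m in meetings if m["start"].hour >= 19 or m["start"].hour < 7]
        if meetings and len(after_hours) / len(meetings) > 0.2:
            flags.append("over 20% of meetings fall outside 7:00-19:00")
        return flags
    ```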

    1h 17m
  8. AI Context Engineering = AI Personalization Engineering

    07/15/2025

    AI Context Engineering = AI Personalization Engineering

    Can an AI truly understand you if it doesn't understand your beliefs? That's the provocative question guiding Episode 23 of ABCs for Building the Future. In a sweeping conversation that blends cognitive science, product design, and philosophical rigor, hosts Robert and Jonathan introduce a bold new framing for personalizing AI: Epistemic Evals. This blog distills the episode's most compelling insights for founders, developers, and health-tech innovators who want to move beyond token-level tuning and toward true understanding—at scale.

    🎙️ Context: Building AI That Understands People—Not Just Prompts

    This week's episode is a build-in-public deep dive into Robert and Jonathan's latest developments on their open-source SDK, Epistemic Me. Their mission? Make belief systems a first-class citizen in AI alignment and personalization.

    Core themes include:
    * The emerging category of "epistemic evals"
    * The three layers of memory in personalized AI agents
    * How beliefs determine alignment and behavior change
    * A live product demo of their self-modeling evaluation system
    * Mapping philosophical complexity to practical AI architecture

    🧭 1. Why AI Needs to Start With Belief Systems

    "We don't think you can solve hyper-personalization or AI alignment without modeling a user's belief system." — Robert

    The team starts with a powerful claim: personalization begins at the belief level. If an AI doesn't understand what you believe—about health, money, relationships, or yourself—it cannot make meaningful recommendations. And without alignment on beliefs, there can be no trustworthy AI. Robert and Jonathan argue that the most effective AI agents will be belief-adaptive, not just data-reactive. This goes beyond tone and formatting preferences; it's about modeling how a user sees the world and shaping responses accordingly.

    🔍 Application: AI agents in health, education, or coaching domains should model user belief systems over time—not just answer questions on demand.

    🧠 2. Introducing Epistemic Evals: The Top Layer of Alignment

    "Epistemic evals happen at the belief and self-model level. They're how we measure if the AI actually understands the user." — Jonathan

    Traditional application-level evaluations (e.g., "Did the agent return the right recipe?") aren't enough for deeply personal domains. Enter epistemic evals—a new category for evaluating how well an AI models a user's worldview, belief system, and internal logic. Inspired by user-centric evaluation papers and grounded in neuroscience (e.g., Friston's free energy principle), epistemic evals look at:
    * The agent's representation of a user's belief states
    * How beliefs affect perceived recommendation efficacy
    * Whether the agent's suggestions align with the user's internal model of causality

    📊 Application: Use epistemic evals to unlock truly personalized agents for longevity, mental health, or financial coaching.

    🧱 3. Building With Memory: Working, Episodic, Semantic

    "You can't personalize without memory—and not just one kind of memory." — Jonathan

    In a standout section, Jonathan lays out their AI architecture, adapted from human cognition:
    * Working Memory: Recent chat turns, current session inputs.
    * Episodic Memory: Personal user events and past experiences (e.g., "last time I felt like this").
    * Semantic Memory: General knowledge + structured belief systems (e.g., "I believe fasting boosts clarity").

    By treating belief systems as dynamic, timestamped objects within this memory stack, they're able to surface beliefs that are relevant to the current user query. The result: responses that feel less canned, more attuned. (A minimal sketch of this memory stack and the evaluation loop appears at the end of this summary.)

    🧠 Application: Personalization doesn't scale unless you have a structured memory framework. Start there.

    🎯 4. From Evaluation to Recommendation: A New Loop for Personalization

    "It all comes down to recommendations. And those have to match the user's causal worldview." — Jonathan

    Using a live demo of their self-management agent, the team shows how epistemic evals map directly to behavior change. They've built a layered feedback loop:
    * Input (user query)
    * Context (beliefs + past states)
    * Output (response + recommendation)
    * Evaluation (did it fit the user's belief model?)

    This evaluation loop isn't just for accuracy—it's for empathy. By understanding what users value and how they think change happens, agents can recommend plans users are more likely to follow.

    ⚙️ Application: Whether you're building a coaching bot or a customer success agent, measure success by belief-congruent recommendations, not just click-through rates.

    🔗 Resources & Further Reading
    * User-Centric Evals Paper
    * Friston's Free Energy Principle (Wikipedia)
    * Inside Out (Pixar) as an agentic metaphor
    * LLM AI Evaluation Course
    * Open Source SDK: Epistemic Me

    Why Epistemic Me Matters

    "How can AI understand us if we don't fully understand ourselves?" We solve for this by creating programmatic models of self, modeling belief systems, which we believe are the basis of defense against existential risk. In the longevity tech space, we create tools that meet users where they are, helping them make better decisions, form healthier habits, and align with their deepest values.

    ABCs for Building The Future is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

    Get Involved

    Epistemic Me is building the foundational tools to make this vision a reality—and we're doing it in the open. Here's how you can join the movement:
    * Check out the GitHub repo to explore our open-source SDK and start contributing.
    * Subscribe to the podcast for weekly insights on technology, philosophy, and the future.
    * Join the community. Whether you're a developer, researcher, or someone passionate about the intersection of AI and humanity, we want to hear from you. Email me anytime!

    FAQs

    Q: Why does this matter for AI?
    A: Because without shared values, we can't align AI. Belief systems that scale and unify are essential to building tools that serve humanity, not destroy it.

    Q: What is Epistemic Me?
    A: It's an open-source SDK designed to model belief systems and make AI more human-aligned.

    Q: Who is this podcast for?
    A: Entrepreneurs, builders, developers, researchers, and anyone who's curious about the intersection of technology, philosophy, and personal growth. If you've ever wondered how to align AI with human values—or just how to understand yourself better—this is for you.

    Q: How can I contribute?
    A: Visit epistemicme.ai or check out our GitHub to start contributing today.

    Q: Why open source?
    A: Transparency and collaboration are key to building tools that truly benefit humanity.

    Q: Why focus on beliefs in AI?
    A: Beliefs shape our understanding of the world. Modeling them enables AI to adapt to human nuances and foster shared understanding.

    Q: How does Epistemic Me work?
    A: Our open-source SDK uses predictive models to help developers create belief-driven, hyper-personalized solutions for applications in health, collaboration, and personal growth. Think of it as a toolkit for understanding how people think and making better tools, apps, or decisions because of it.

    Q: How is this different from other AI tools?
    A: Most AI tools are about predictions and automation. Epistemic Me is about understanding—building models that reflect the nuances of human thought and behavior. And it's open source!

    Q: How can I get involved?
    A: Glad you asked! Check out our GitHub.

    Q: Who can join?
    A: Developers, philosophers, researchers, scientists, and anyone passionate about the underpinnings of human beliefs and interested in solving for AI alignment.

    Q: How to start?
    A: Visit our GitHub repository, explore our documentation, and become part of a project that envisions a new frontier in belief modeling.

    Q: Why open-source?
    A: It's about harnessing collective intelligence for innovation, transparency, and global community involvement in shaping belief-driven solutions.

    P.S. If you haven't already checked out my other newsletter, ABCs for Growth—that's where I share personal reflections on growth, applied emotional intelligence, leadership, and influence.

    P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product? Follow me on YouTube, Threads, Twitter, or LinkedIn.

    Get full access to ABCs for Building The Future at abcsforbuildingthefuture.substack.com/subscribe
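    To ground sections 3 and 4 above, here is a minimal sketch of the three-layer memory stack and the belief-congruence evaluation step, assuming a simple in-memory store; the class and field names are hypothetical illustrations, not the Epistemic Me SDK's actual API.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Belief:
        statement: str        # e.g. "fasting boosts clarity"
        updated_at: datetime  # beliefs are dynamic, timestamped objects

    @dataclass
    class MemoryStack:
        working: list[str] = field(default_factory=list)      # recent chat turns
        episodic: list[str] = field(default_factory=list)     # past personal events
        semantic: list[Belief] = field(default_factory=list)  # structured belief system

        def relevant_beliefs(self, query: str) -> list[Belief]:
            # Naive keyword overlap stands in for real retrieval; the point
            # is that beliefs are surfaced per-query from the semantic layer.
            words = set(query.lower().split())
            return [b for b in self.semantic
                    if words & set(b.statement.lower().split())]

    def build_context(memory: MemoryStack, query: str) -> dict:
        # Input -> Context step of the loop: beliefs + past states.
        return {
            "query": query,
            "recent_turns": memory.working[-5:],
            "beliefs": [b.statement for b in memory.relevant_beliefs(query)],
        }

    def epistemic_eval(recommendation: str, context: dict) -> bool:
        # Output -> Evaluation step: a crude belief-congruence check --
        # does the recommendation reference any surfaced belief at all?
        rec = recommendation.lower()
        return any(word in rec for b in context["beliefs"] for word in b.lower().split())
    ```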

    1h 28m
