3reate

In a world of distractions, creativity and innovation change the direction of society. The future isn’t built by the loudest voices in the room; it’s built by doers: the artists who code, the scientists who sculpt, and the technologists who dream. 3reate goes beyond the headlines, providing the blueprint for the future. We bring you weekly deep dives and curated interviews with the hidden architects of the innovation economy. Creativity is your ultimate advantage. Support the pod: https://ko-fi.com/3reate https://patreon.com/3reate Listen on YouTube: https://www.youtube.com/@3reate Listen on Apple Podcasts: https://podcasts.apple.com/us/podcast/3reate/id1723426314 Listen on Spotify: https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF Our intro music: “Into the night” by @prazkhanal | Our outro music: “Filler Drop” by @keyframeaudio

  1. Why AI Destroyed Social Media and How You Fight Back – Devin Gaffney of Graze Social

    1D AGO

    Why AI Destroyed Social Media and How You Fight Back – Devin Gaffney of Graze Social

    As AI-generated noise floods our social feeds, big platforms are effectively throwing up their hands and declaring "slop bankruptcy." Today, we explore the friction between technology and society with Devin Gaffney, CEO of Graze Social, who is fundamentally rewiring the attention economy. We dive into the architecture of outrage, the illusion of "credible exit" in federated networks, and how Graze Social acts as a "Photoshop for algorithms." You'll learn why platforms are designed to trigger "forest fires" of engagement, and how empowering individual users to design and monetize their own custom feeds might be our best defense against the coming wave of AI slop. Stop consuming the hype and start understanding the mechanism.

    Devin Gaffney is CEO and Co-founder of Graze Social, a platform for custom algorithms on the open social web that has served 10M+ users and delivered 28B+ posts. His career spans civic tech and social platforms — from misinformation research at Northeastern and Oxford to ML infrastructure at Meedan — with work in WIRED, The Atlantic, and PLOS One.

    Find Devin: https://www.graze.social https://www.devingaffney.com/ LinkedIn: https://www.linkedin.com/in/devin-gaffney/ Watch us on YouTube: https://youtu.be/WmQA7uNHj54

    Time Stamps:
    (00:00) Episode Preview: Automating behavior on social platforms.
    (01:47) Devin's background in tech and academia.
    (08:23) Exploring Bluesky.
    (11:00) Understanding the decentralized social networking model.
    (16:45) How outrage fuels the attention economy.
    (25:55) Graze: Building custom algorithms without code.
    (34:15) Developing an open ad network economy.
    (45:40) AI's impact on social media algorithms.
    (51:10) Big tech platforms declaring slop bankruptcy.
    (58:55) Cognitive bias and the outrage machine.
Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://youtu.be/2wEMD8EvB9I?si=G3iUBE-z4Mx0Ng-Y

    1h 5m
  2. The AI Mindset Mistake: Why 90% Fail (And How To Win)

    3D AGO

    The AI Mindset Mistake: Why 90% Fail (And How To Win)

    We tackle the elephant in the room: Data Governance. Andrew and Nathan sit down to unpack the real "AI Mindset" necessary for the modern creator, developer, and executive. We move beyond the hype of flawless AI automation and dig into the messy reality of the software development lifecycle. From fixing memory management crashes caused by AI-written code to understanding why an LLM needs you to hold its hand through every context shift, we explore what it actually takes to build reliable tools alongside artificial intelligence. Is your proprietary data actually as sacred as you think it is? We deconstruct the hoarding mentality that paralyzes companies and offer actionable frameworks for exposing your data models securely. Whether it's safely utilizing foundational models or bridging the friction between gatekeeping IT departments and eager product managers, this episode provides the blueprint for scaling AI responsibly.

    AI Data Governance Executive Summary: AI Data Governance is currently misunderstood as a strictly technical challenge when it is primarily a cultural and management problem. Organizations artificially throttle their own AI potential by treating all internal data as sacred, highly proprietary, and untouchable. True AI governance requires taking a realistic inventory of your data's actual value, dismantling internal IT gatekeeping, and finding secure ways to empower non-technical teams. By exposing data schemas rather than raw PII and fostering an environment of psychological safety, companies can securely leverage foundational models to multiply their workforce's productivity.

    Key Points:
    - Reevaluate Data Sanctity: Companies default to hoarding data, but executives must ask hard questions: Is this data actually unique? What happens if it leaks? Do we even need to be collecting this PII in the first place?
    - Expose Schemas, Protect Raw Data: You don't always need to feed sensitive data into an LLM to get value. Empower employees by exposing the data model or schema to the AI, allowing it to write queries and build reports without ever touching the underlying raw data.
    - The "Build vs. Buy" Trust Factor: If you already trust third-party enterprise vendors with your cloud hosting or IT security, you can likely trust foundational AI model providers by implementing proper enterprise agreements and boundaries.
    - Governance is a Management Issue: Employees hoard data and block AI integration when they lack psychological safety. If your culture punishes people for making mistakes or breaking things during experimentation, they will refuse to adopt the AI tools necessary to scale the business.

    The AI Mindset Executive Summary: The "AI Mindset" requires a fundamental shift away from expecting perfection or "magic" from generative AI. Because generative AI is inherently non-deterministic, it will inevitably hallucinate or introduce bugs — much like traditional software development. To succeed with AI, creators and engineers must treat the technology like a highly capable but completely uncontextualized collaborator. This means embracing an iterative loop of prompting, applying critical thinking to manage edge cases, and focusing on the massive productivity gains of "what could go right" rather than being paralyzed by what could go wrong.

    Key Points:
    - Embrace Non-Deterministic Outputs: Generative AI is not a deterministic calculator; it operates on statistics. If you spend all your time trying to force it into rigid deterministic filters, you defeat the purpose of using it.
    - The Context Deficit: Unlike humans, who carry vast amounts of implied cultural and institutional knowledge, AI only knows exactly what you tell it in its current context window. You must explicitly set the stage, outline contraindications (what not to do), and explain the "why."
    - Master the Iterative Loop: Building with AI requires a constant cycle of zooming in and zooming out. You must focus the AI on a narrow, specific problem (like a login screen), and then zoom out to critically think about how that fix impacts the broader system.
    - Critical Thinking is the Ultimate Skill: AI cannot self-prompt effectively. It requires a human in the loop who can anticipate edge cases, ask hard questions, and steer the creative or developmental process.

    Watch on YouTube: https://www.youtube.com/live/IEb1_aAHo9I

    Time Stamps:
    (00:00:00) Pre-show banter and minor technical difficulties
    (00:01:45) Why Gen AI fails customer-facing products
    (00:05:30) Transitioning AI proof of concepts into production
    (00:10:00) Debugging AI code and unexpected edge cases
    (00:15:45) Giving up the expectation of AI perfection
    (00:17:40) Focusing on what can go right instead
    (00:22:00) Understanding why AI lacks human implicit context
    (00:24:45) Mastering the iterative loop of AI prompting
    (00:36:05) Reevaluating the true value of internal data
    (00:41:30) How to expose data models to AI safely
    (00:45:40) Why data governance is a management problem
    (00:51:00) Using AI tools to multiply worker productivity
    (00:55:45) Wrapping up with fun May Day trivia

    Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://www.youtube.com/@3reate
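    The "expose schemas, protect raw data" idea discussed in this episode is concrete enough to sketch. Here is a minimal, hypothetical Python example (the table, columns, and prompt wording are invented for illustration, and `build_query_prompt` is not a real library function): the model is handed structure only, so it can draft queries and reports while no actual rows or PII ever leave your boundary.

    ```python
    # Hypothetical sketch: give an LLM the table structure, never the rows.
    CUSTOMER_SCHEMA = """
    TABLE customers (
        id INTEGER PRIMARY KEY,
        signup_date DATE,
        plan TEXT,   -- 'free' | 'pro' | 'enterprise'
        region TEXT
    )
    """.strip()

    def build_query_prompt(schema: str, question: str) -> str:
        """Build an LLM prompt containing only the schema, never raw data."""
        return (
            "You are a SQL assistant. Using ONLY the schema below, write a "
            "query that answers the question. Do not invent columns.\n\n"
            f"Schema:\n{schema}\n\nQuestion: {question}"
        )

    prompt = build_query_prompt(
        CUSTOMER_SCHEMA,
        "How many pro-plan signups did we get per region last quarter?",
    )
    # The prompt carries structure, not data: no names, emails, or rows are
    # sent to the model, and the generated SQL runs inside your own systems.
    ```

    The design choice mirrors the episode's point: trust boundaries stay intact because the sensitive material (row contents) never enters the model's context, only metadata about its shape does.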

    57 min
  3. The End of the LLM and What’s Next – Ian Hamilton, CEO of Synthetic Cognition

    APR 6

    The End of the LLM and What’s Next – Ian Hamilton, CEO of Synthetic Cognition

    Right now, the tech world is caught in an endless loop of throwing massive compute power at Large Language Models, hoping brute force will magically spark Artificial General Intelligence (AGI). But what if the foundational computing architecture is entirely wrong? In this episode, we sit down with Ian Hamilton, CEO of Synthetic Cognition Labs, who is walking away from standard models to build true AGI. Ian dismantles complex ideas, detailing why current AI is essentially faking memory and why the path forward lies in hyperdimensional computing. By exploring the friction between biology and technology, we examine how mapping the neural networks of a fruit fly provides a better roadmap for continuous learning than a billion-dollar GPU cluster. You'll learn the critical difference between LLM tokenization and human "analogy-making," and why breaking the AI scale monopoly might require us to nuke everything we know about computing and start over. If you are tired of the AI buzzword salad and want to decode the future, this is your blueprint.

    Follow Ian: https://www.linkedin.com/in/ianchamilton1/ Check out: https://syntheticcognitionlabs.com/ Watch us on YouTube: https://www.youtube.com/watch?v=Rd0SpOb5gMo

    Time Stamps:
    (00:00) Preview
    (00:58) Ian's introduction
    (04:15) Why static LLMs fail at continuous learning
    (07:14) The coding loop and AI memory walls
    (16:30) Hyperdimensional computing and non-Von Neumann architecture
    (17:44) Biological inspiration from fruit fly neural networks
    (24:00) Sparse distributed memory and human-like analogy
    (39:20) Bridging the hardware gap with LLM emulation
    (51:40) The danger of the AI scaling monopoly

    Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://youtu.be/2wEMD8EvB9I?si=G3iUBE-z4Mx0Ng-Y

    1h 3m
  4. Why Hiding Your Mistakes Destroys Innovation

    APR 3

    Why Hiding Your Mistakes Destroys Innovation

    What do a broken toilet on a lunar spacecraft and the catastrophic Chernobyl nuclear disaster have in common? They both serve as ultimate masterclasses in how we handle complex systems and critical information. In this episode, Andrew and Nathan dive into a recent NASA launch, highlighting the fascinating reality of troubleshooting space plumbing on a live, global broadcast. While it might seem embarrassing, that baseline of absolute transparency is exactly why humanity can successfully reach the moon. We juxtapose NASA's open problem-solving with the fatal secrecy of the Soviet Union's nuclear program, where ego, covered-up design flaws, and siloed data led to one of the worst human-made disasters in history.

    Whether you are writing code, leading an interdisciplinary team, or building the technologies of the future, hoarding information guarantees failure. We explore why the corporate "cover-up" culture halts progress, the undeniable power of open-source development, and how publicly owning our mistakes is the only way to build true collective wisdom. Listen now to uncover why humility, integrity, and honesty remain the most important tools in any creator's toolkit. Stop consuming the hype and start understanding the mechanisms of progress. Subscribe to 3reate for more deep dives into the friction between science, technology, and art!

    Watch on YouTube: https://www.youtube.com/live/vOOvv3xtsTQ?si=RQxnU4dq7XdbXMZL

    Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://www.youtube.com/@3reate

    28 min
  5. Social Media is Legally Bad Now

    MAR 29

    Social Media is Legally Bad Now

    Has your casual scrolling turned into an unbreakable habit? You're not alone, and it's certainly not by accident. Recent landmark court rulings have declared that platforms like Meta and YouTube intentionally design their systems to be highly addictive. The era of innocent social media is officially over. In this episode, we dive deep into the legal and psychological reckoning currently facing the tech industry. We explore the hidden architecture of recommendation algorithms, detailing how they trap users in endless echo chambers, prioritize watch-time over truth, and fuel bizarre conspiracy theories.

    We also tackle the terrifying rise of AI-generated content and deepfakes. With the "uncanny valley" rapidly disappearing, we are entering a digital landscape where seeing is no longer believing. How do we navigate a world where digital trust is fundamentally broken? We discuss the urgent, counter-intuitive need to return to physical, analog verification systems to combat fraud. Finally, we provide a practical blueprint for breaking free from the infinite scroll. Learn how to handle digital withdrawals, set intentional boundaries, and replace toxic platform engagement with meaningful routines. Understand the invisible forces fighting for your attention. Hit play to decode the algorithm, and subscribe for more deep dives into the friction between technology and human psychology!

    Watch on YouTube: https://www.youtube.com/live/6oQ9wGegnec

    Time Stamps:
    (00:00) We're Live?!
    (00:10) Social media legally ruled intentionally addictive.
    (01:55) Questioning YouTube's role as social media.
    (05:25) Recommendation algorithms push users toward conspiracies.
    (11:20) Science versus belief in modern conspiracies.
    (14:50) AI disinformation and the shrinking uncanny valley.
    (16:15) Why we desperately need analog trust systems.
    (20:05) The real danger of gating powerful AI.
    (23:45) Actionable strategies to break digital addictions.

    Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://www.youtube.com/@3reate

    28 min
  6. The AI Layoff Lie And What’s Actually Happening

    MAR 21

    The AI Layoff Lie And What’s Actually Happening

    We dismantle the AI layoff excuse. We explore why executives who are separated from the daily work are using artificial intelligence as a smokescreen for overhiring and poor strategy. From the differences between conversational and agentic AI to a breakdown of the abrupt Digg.com beta shutdown, we reverse-engineer the realities of the modern tech ecosystem. We go beyond the headlines to provide a blueprint for the future. You'll learn why AI is an amplifier that requires rigorous human processes, not a magic bullet that can run a company on autopilot. We explain why the practitioner is more valuable than ever in catching AI hallucinations, refining that final 20% of complex code, and building exceptional products. If you want to survive the hype and learn how to actually leverage AI as a 10x tool without losing your mind, hit play.

    Time Stamps:
    (00:00) The absurdity of tech's AI layoff excuse.
    (01:28) Generative versus agentic AI workflow differences.
    (04:21) Scaling AI code versus traditional software engineering.
    (06:40) Why non-technical managers misunderstand AI capabilities.
    (12:00) Burning it down: The paradigm shift reality.
    (14:30) AI isn't magic; human validation remains essential.
    (18:48) Digg.com's abrupt shutdown and the myth of AI moderation.
    (26:00) How to actually build software and PRDs using AI.
    (30:50) AI as an amplifier for developer productivity and noise.

    Watch on YouTube: https://www.youtube.com/watch?v=reWfqWk85mo

    Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://www.youtube.com/@3reate

    31 min
  7. When Do We Burn It All Down?

    MAR 14

    When Do We Burn It All Down?

    We’ve engineered a world of unparalleled comfort, but at what cost? From the stagnation of the US healthcare system to the fragility of our agricultural crops, our societal resistance to "deviance" is quietly setting us up for failure. In this episode, we unpack the dangerous illusion of monopolies and monocultures. We trace how systems originally designed for massive "zero-to-one" growth—like the national power grid or the tech dominance of Intel and AMD—become vertically integrated traps that stifle true innovation. When the switching costs feel too high, it is easy to stay on a sinking ship just because it's familiar. But you don't necessarily have to burn it all down to build something better. We explore the architectural blueprints for change: why building parallel systems beats waiting for collapse, the historical necessity of sharing innovation dividends, and why true diversity is the ultimate survival strategy against systemic failure. Step out of the comfort zone and start building the future. Subscribe for weekly deep dives into the hidden mechanisms running our world, and join the conversation in the comments below.

    Time Stamps:
    (00:00:00) Debate origins and the current global state.
    (00:02:44) Personality distributions and their societal impacts.
    (00:10:00) Why the US healthcare system resists change.
    (00:13:54) Monocultures fail: the danger of zero deviance.
    (00:23:43) Building parallel systems instead of burning down.
    (00:29:31) Utility monopolies and the Cuba power crisis.
    (00:34:16) Tech duopolies: the Intel and AMD stagnation.
    (00:38:58) Agricultural monocrops and losing natural diversity.
    (00:44:00) Titanic analogy: taking uncomfortable leaps for survival.

    Watch on YouTube: https://youtu.be/HvMpRPyEpRs?si=xdoTfUHCZp-o0pZm

    Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://youtu.be/2wEMD8EvB9I?si=G3iUBE-z4Mx0Ng-

    46 min
  8. Bombing on Stage: The Ultimate Resilience Blueprint with Michelle Plante

    MAR 9

    Bombing on Stage: The Ultimate Resilience Blueprint with Michelle Plante

    Today, we reverse-engineer the mind of Michelle Plante, a hidden operator turning life's raw observations into standup comedy. From finding her comedic voice after seven years of sobriety to facing the brutal, instant feedback of a live audience, she reveals the raw human experiment of real-time performance. We also dive deep into the collision of analog habits and modern focus. You'll learn how the tactile friction of writing on physical paper engages different neural pathways than typing, forcing us out of digital loops and into the present moment. We discuss the structural blueprints for building resilience when things fail instantly, why corporate top-down mandates always backfire against human nature, and how to harness observational humor as a tool to navigate the changing landscape of creativity. This isn't just a loose conversation; it's an actionable guide. Plus, Michelle shares her simple, screen-free morning journaling protocol to ground your day before the world demands your attention.

    Watch us on YouTube: https://www.youtube.com/watch?v=tAEx8jVDXCo Listen to Michelle's last 3reate podcast: https://3reate.com/podcast/why-we-need-more-discomfort-and-less-ai-with-michelle-plante-ep-44-3reate/ Find Michelle: https://www.michelleplante.com/ Instagram: https://www.instagram.com/michelleplantecomedy/

    Time Stamps:
    (00:00:00) Hello!
    (00:01:52) Discovering standup comedy through sobriety.
    (00:06:17) The neurological power of physical writing.
    (00:11:41) Daily journaling habits for being present.
    (00:16:03) Real-time feedback and building resilience.
    (00:26:07) Why human behavior resists top-down force.
    (00:30:11) Reading the room and intentional pacing.
    (00:37:34) The tactile art of roasting coffee.

    Support the pod: https://3reate.com https://ko-fi.com/3reate https://patreon.com/3reate Listen: https://podcasts.apple.com/us/podcast/3reate/id1723426314 https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF https://youtu.be/2wEMD8EvB9I?si=G3iUBE-z4Mx0Ng-Y

    43 min

Ratings & Reviews

5 out of 5 (4 Ratings)
