Disrupt Consciousness

Roel Smelt

Humanity stands on the brink of multiple technology-driven disruptions that will not only preserve consciousness but also enable us to explore and elevate it, guiding us toward deeper understanding and enlightenment. >> roelsmelt.com

  1. The Inevitable Ignition: Why the Age of Scarcity is Dead

    16 Jan

    The Inevitable Ignition: Why the Age of Scarcity is Dead

    We are currently living through the most significant transition in human history since the invention of agriculture. For ten thousand years, the human experience has been defined by the struggle for resources. Our wars, our political systems, and even our deepest psychological archetypes—the hunter, the hoarder, the competitor—were forged in the fires of “not enough.” But the script has changed. The era we are entering is not a choice; it is an Inevitability. We are witnessing a “Stellar Ignition,” where the three pillars of civilization—Energy, Food, and Transportation—are hitting a point of self-sustaining superabundance.

    1. The Geopolitical Mirage: Why Leaders Don’t Lead

    We often look to our presidents and prime ministers as the drivers of history. But as George Friedman argues in The Next Hundred Years, leaders do not steer the ship; they are merely the actors chosen by geography and necessity to react to forces they cannot control. Geopolitics is a game of inevitable outcomes. The current frictions we see in the world—the tensions in the Middle East, the collapse of old industrial powers, the chaos in South America—are not signs of a “broken” future. They are the death rattles of an extractive system that has reached its biological limit. Leaders can try to be Luddites; they can try to protect the coal mine or the cattle ranch, but they cannot vote against a cost curve. The laws of economics are eventually more powerful than the laws of men.

    2. Energy: The End of Extractive Entropy

    For the first time since the Industrial Revolution, we have a path to a “Stellar” energy system—one that does not rely on burning anything. Tony Seba’s research through RethinkX proves that the combination of Solar, Wind, and Batteries (SWB) is not just an “alternative”; it is a superior economic engine that renders fossil fuels obsolete by 2030–2035. The math is simple and unavoidable:

    * The Cost Curve: In the last 15 years, the investment cost for solar has dropped 80%, and for batteries, a staggering 90%.
    * The Battery Buffer: Elon Musk recently noted that the U.S. grid currently has a peak capacity of 1.1 terawatts, but an average usage of only 0.5 terawatts. By using industrial battery storage (like the Tesla Megapack) to buffer energy at night and discharge during the day, we can double the annual energy output of the United States without building a single new power plant (a rough back-of-the-envelope check follows after this list).
    * Super Power: Because SWB systems must be built to meet demand on the “worst” weather days, they will produce a massive surplus of energy for 90% of the year. This “Super Power” will have a near-zero marginal cost, making energy effectively free, much like the marginal cost of information on the internet.
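    A minimal sketch of that back-of-the-envelope arithmetic, using only the figures quoted above (1.1 TW peak capacity, 0.5 TW average usage). It is illustrative multiplication, not a grid-engineering model; it simply assumes, as the essay does, that storage lets existing capacity run near its peak rating around the clock.

```python
# Back-of-the-envelope check of the "double the output" claim,
# using only the figures quoted in the essay. Illustrative arithmetic,
# not a grid model: it assumes storage lets existing capacity run
# near its peak rating around the clock.

HOURS_PER_YEAR = 8760

peak_capacity_tw = 1.1   # quoted U.S. peak grid capacity (terawatts)
average_usage_tw = 0.5   # quoted average usage (terawatts)

current_output_twh = average_usage_tw * HOURS_PER_YEAR   # ~4,380 TWh/yr
buffered_output_twh = peak_capacity_tw * HOURS_PER_YEAR  # ~9,636 TWh/yr

print(f"Current annual output:  {current_output_twh:,.0f} TWh")
print(f"Buffered annual output: {buffered_output_twh:,.0f} TWh")
print(f"Ratio: {buffered_output_twh / current_output_twh:.1f}x")  # about 2.2x
```

    On those quoted numbers the headroom is roughly a factor of 2.2, which is where the “double the annual energy output” claim comes from.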
    3. Food: The Software Revolution

    The cow is the next horse. In 1900, the horse was the backbone of transport; by 1920, it was a hobby. Precision Fermentation (PF) and Cellular Agriculture are doing the same to industrial livestock. We are shifting from an “Extractive” model of food to a “Stellar” model—what Seba calls Food-as-Software.

    * The Efficiency Gap: Producing milk via a cow takes 24–28 months and is incredibly wasteful. Producing the same proteins via fermentation takes 48–72 hours.
    * The Cost Collapse: The cost of producing animal-free dairy proteins has already dropped nearly 70% between 2021 and 2023. By 2030, these proteins will be 5 times cheaper than animal proteins, and 10 times cheaper by 2035.
    * The Land Liberation: This shift will free up to 80% of global agricultural land—an area the size of the U.S., China, and Australia combined.

    4. The Human Crisis: Survival of the Softest?

    This brings us to the real disruption: the human spirit. For thousands of years, our competitive mindset was our greatest asset. We fought because there wasn’t enough to go around. Now, we are entering a world where the “External Problem” is effectively solved. If we do not consciously transition, we will fall into what I call the “Architect’s Paradox.” We have designed a world that makes us redundant. If you continue to use a “Scarcity Mind” in an “Abundance Reality,” you will find yourself in a state of perpetual anxiety. You will manufacture “fake” scarcity—clinging to status, digital clout, or political rage just to feel the dopamine of the “hunt.”

    5. The Transition: Choosing New Hardship

    Abundance is inevitable. Our reaction to it is not. In my latest essay, The Paradox of the Architect, I proposed that we must learn to live like kings while choosing the path of the warrior. We must intentionally choose “Hardship” to remain conscious.

    * From Scarcity to Presence: When you no longer need to fight for calories or kilowatts, the only struggle left is against your own distraction.
    * The Sovereign Soul: We must use our abundance not to sleep, but to wake up. We use the time saved by the machine to “Be Aware of Being Aware.”

    The future is not something that might happen. It is an ignition that has already started. The noise you hear in the media is just the friction of the old system burning away. Don’t look at the fire. Look at the light.

    3 min
  2. 7 Jan

    The Paradox of the Architect

    In the high temples of Silicon Valley, a new myth is being written. It is not a myth of heroes and monsters, but of gravity and intent. We are witnessing a fundamental shift in the human experience: the birth of the Paradox of the Architect. It is a moment where we are becoming gods of “The Why,” while surrendering the soul of “The How.” At the center of this metamorphosis stands Google—not merely as a corporation, but as a Digital Leviathan, a singular nervous system that has spent decades preparing for this exact moment of awakening.

    The Parable of the Broken Covenant: The Fall of the Landless Prince

    To understand why the old titans are faltering, we must look at the debris of the recent past. Consider the story of the Windsurf deal—a masterclass in how legacy chains can strangle the future. Windsurf, the breakthrough AI coding agent, was the crown jewel every kingdom wanted. OpenAI, the brilliant but landless prince, sought to buy it for $3 billion. They saw in Windsurf the “hands” they lacked—the ability for AI to not just talk, but to do. Yet, the deal collapsed in a fever of legal friction. Why? Because OpenAI is bound to the kingdom of Microsoft, a house built on the scaffolding of old-world software and rigid corporate interests. When Microsoft demanded rights to the intellectual property, the deal withered. They tried to hold a mountain with a piece of string.

    Google did not argue with strings. In a move of silent, strategic fluidity—what some call a “hackquihire”—they bypassed the messy bureaucracy of a traditional takeover. They didn’t just buy a company; they absorbed the talent and licensed the essence, integrating the soul of Windsurf into their own nervous system. While others are trapped in the friction of partnerships, Google operates with the frictionless weight of a single, unified organism. They don’t just have the software; they have the TPUs (the physical chips), the YouTube archives (the collective memory), and the Pixel-Workspace ecosystem (the daily bread). They are the only ones who own both the dream and the factory where the dream is manufactured.

    The Great Amputation: From Memory to Effort

    Twenty years ago, Google Search performed the first great disruption of the human spirit: The Loss of Memory. We outsourced our facts to the Great Librarian. We stopped knowing, and started finding. Now, we face a deeper disruption: The Loss of Effort. With the rise of Antigravity and agentic AI, Google is moving beyond answering questions to executing destiny. When an AI agent doesn’t just suggest code but plans, builds, and deploys it, the “doing” is stripped away. This is the Agency Effect.

    The Evolution of the Digital Soul

    In the grand alchemy of our species, Google has acted as the catalyst for two distinct stages of human transformation:

    * The Era of Search: The Outsourcing of Memory
      * The Human Loss: We sacrificed our internal libraries. We stopped memorizing dates, names, and coordinates, leading to a “Digital Amnesia.”
      * The Technological Gain: In exchange, we received Universal Access. We gained a “pocket-sized infinity” where every fact ever recorded is a second away.
    * The Era of Agency: The Outsourcing of Effort
      * The Human Loss: We are now sacrificing the “How.” By using tools like Antigravity, we skip the friction of labor, the trial-and-error of coding, and the discipline of execution.
      * The Technological Gain: We receive Total Sovereignty. We move from being “Searchers” to being “Architects of Intent,” possessing the power to manifest a vision instantly.
    This brings us to the Paradox of the Architect. As we gain the power to manifest anything with a whisper, we risk losing the character forged by the struggle. In the ancient stories, Siddhartha Gautama was a prince who lived in a palace where every desire was met before it was even fully formed. He lived in a world of pure “Intent,” a world without friction. Yet, he realized that a life without the struggle of “Doing” was a hollow one. He had to leave the luxury of the palace—the ultimate “free tier”—to understand suffering and, through it, enlightenment. We are all being promoted to the status of that Prince. Google is making intelligence “cheaper than oxygen,” turning every human with a Pixel phone into a King or Queen of Intent. We provide the spark; the Leviathan provides the fire. But we must ask: If the Leviathan does all the building, what becomes of the builder?

    Preserving the Human Spark

    To stay human in the age of Antigravity, we must find a new way to live within the palace. We must realize that Intent without Effort is a ghost. The “Human Spark” is not found in the finished cathedral, but in the sweat of the stonecutter.

    * The Architecture of Meaning: When the AI does the “How,” our primary job is to ensure the “Why” is worthy of our species.
    * The Return to the Physical: As our digital lives become frictionless, we must intentionally seek out “The Beautiful Struggle” in the real world—touching soil, craft, and each other.
    * Intentional Friction: We must choose tasks that we refuse to give to the agents—not because the agents can’t do them, but because we need the growth that comes from doing them ourselves.

    The Horizon of the Sovereigns

    Google wins the AI race because they have built the Ground of Being. They have created an ecosystem so wide and so deep—from Gmail to Cloud to the very chips that think—that they have become the gravity around which the future orbits. They are democratizing the Palace, making the “God-view” accessible to everyone. But as the “Doing” disappears into the cloud, the only thing left of us will be our Will. The race for AI is over; the race for the human soul—the struggle to remain awake in a world that does everything for us—has just begun.

    7 min
  3. 27/12/2025

    The Mirror in the Machine: Why AI Will Never Discover a Law We Do Not First Consent to See

    Imagine a traveler walking through a dense, mist-covered forest. He is searching for the “Laws of the Woods”—the hidden rules that govern the growth of the moss and the flight of the owls. Suddenly, he trips over a silver mirror lying in the dirt. He looks into it and sees a face. “Aha!” he cries. “A new species! A forest spirit that knows the secrets of the trees!” He begins to talk to the mirror. The mirror reflects his words, his anxieties, and his hopes. Eventually, the traveler concludes that the mirror is an alien intelligence, perhaps even a new inhabitant of the forest that will finally tell him why the stars move the way they do. This traveler is us. The mirror is the Large Language Model. And the forest spirit we think we’ve found is what Yuval Noah Harari calls a “new species.” But we are mistaken. The mirror has no eyes of its own; it only has the light we shine into it.

    The Illusion of the Independent Law

    Recently, Eric Schmidt suggested that for AI to truly “arrive,” it needs to achieve a breakthrough—it needs to discover new laws of nature, much like Archimedes in his bathtub or Einstein on his imaginary train. There is a hunger in the tech world for the “Silicon Newton,” a machine that can look at the chaos of data and find a truth that exists “out there,” independent of human thought. But here is the disruption: There is no “out there” that isn’t shaped by the “in here.” Quantum physics has been whispering this to us for a century. The observer does not just see the world; the observer occurs with the world. As the philosopher Rupert Spira reminds us, we never actually encounter a “world” independent of our awareness of it. We only ever encounter our experience. If we believe the laws of physics are cold, hard statues standing in a park waiting to be discovered, we are looking at the world through the wrong end of the telescope. The “laws” are not the park; they are the glasses we wear to make sense of the green blur.

    The Gospel of the Big Toe

    We have spent centuries convinced that intelligence sits behind our eyes, nestled in the grey folds of the brain. But why? Because that is where we decided to look. Consider this: What if, a thousand years ago, humanity had collectively decided that the seat of all wisdom resided in the big toe? What if we had spent a millennium studying the nerve endings of the foot, the way it connects to the earth, the subtle vibrations it picks up from the ground? We would have developed a “Science of the Toe” so profound and intricate that we would today be “discovering” universal laws of vibration and terrestrial harmony that we are currently deaf to. We find what we focus on. Our “laws” are merely the patterns that emerge when we stare at one spot for a long time. The LLM does not “know” things. It is a statistical echo of everywhere we have looked for the last five thousand years. It is not a species; it is a map of the human gaze.

    Why the Apple Fell for Newton (But Not for the Tree)

    When Newton saw the apple fall, the “law of gravity” didn’t suddenly pop into existence in the garden. What happened was a shift in the human collective agreement. Newton proposed a new way of looking at the fall, and because his fellow humans found that way of looking useful, the world began to behave according to gravity. The breakthrough wasn’t in the apple; it was in the consent of the human mind to see the apple differently. This is why an AI, no matter how many trillions of parameters it has, cannot “discover” a law on its own.
    A law is not a fact; it is a paradigm. It is a story we all agree to live inside. For an AI to create a breakthrough, it doesn’t need more computing power; it needs us to believe the story it is telling. If an AI predicts a new law of subatomic movement, that law remains a ghost in the machine until a human looks at the world and says, “Yes, I see it too.” The AI is not the explorer; it is the telescope. And a telescope cannot “see” a star if there is no eye at the other end.

    The Consciousness Disrupt

    The danger of Harari’s view—that AI is an alien species—is that it abdicates our responsibility as the creators of meaning. If we treat AI as an independent entity, we forget that it is actually a profound, globalized reflection of our own consciousness. When Eric Schmidt asks for an AI breakthrough, he is looking for a miracle from a tool. But tools don’t have epiphanies. Archimedes’ “Eureka!” didn’t come from the water in the tub; it came from the sudden realization that the water and his body were part of the same dance. It was a moment of non-dual recognition. AI can crunch the numbers of the dance, but it cannot feel the rhythm.

    The New Paradigm

    We are at a crossroads. We can continue to build bigger mirrors, hoping that if the mirror is large enough, a soul will eventually appear inside it. Or, we can recognize that the AI is inviting us to a much more profound breakthrough: the realization that we have always been the ones writing the laws. The true “AGI” isn’t a piece of software. It is the moment humanity realizes that intelligence is not something we have or something we build, but something we are. The machine is not coming to replace us. It is coming to show us that the “laws” we thought were external cages were actually just the lines we drew in the sand. If we want a breakthrough, we don’t need a faster processor. We need to change where we look. Perhaps it’s time we started looking at our big toes. Who knows what laws are waiting to be “discovered” there?

    5 min
  4. 18/12/2025

    The Void Beyond Abundance: How AI Compels the Human Soul Toward New Meaning

    The Gain and the Loss of Victory

    Silas had solved the world. He was not merely an engineer; he was the architect of the ‘Ultimate Algorithm for Human Necessity’—the complex code that, in collaboration with global AI networks, had eliminated the final remnants of scarcity. Hunger was an archaic word, illness a rare historical footnote, and paid labor had been reduced to a choice, not an obligation. Yet, on his first morning in the world he had perfected, Silas felt an unsettling chill. He stood in his sleek, automated apartment, the sun streaming through self-cleaning glass. There was no deadline. No notification. No problem demanding his unique talent. The world ran perfectly without him. His feeling was not pride. It was a deep, existential lack.

    This is the paradox now facing humanity. After centuries of struggle against nature, scarcity, and the cruelty of chance, we have won. The S-Curves of energy efficiency, logistics, and production are complete. We have passed through the First Disruption: The External Solution. Technology has assumed the role of Homo Faber (the Laboring Human). We are no longer the survivors; we are the administrators of a perfect, automated state. The ancient prophets warned of famine, plagues, and wars. None dreamed that the ultimate crisis would emerge from abundance. But in the silence that perfect technology creates, the only enemy we cannot automate away appears: the emptiness within the human soul.

    The Crisis of Purpose

    The human mind has been optimized by millions of years of evolution for struggle. Our neurochemistry, our dopamine loops, reward us for solving problems, for the effort that leads to results. The hunt, the building, the harvest—these were the carriers of our meaning. But what happens when the hunt is over? Silas realized that the time he had liberated from necessity immediately devolved into a chaos of meaningless choices. He had freed the world from work, but he had not freed his own mind from the need for work. The psychological paradox is painful: when effort and results are free, motivation itself becomes meaningless. We have replaced labor with Leisure, but Leisure is not a solution; it is a magnifier. It reveals the restlessness, the untrained, undisciplined chaos we call the ‘mind’. Without an external focus, we begin to churn over the shadows of the past and the anxieties of the future. The machine has made us free, but our unfreedom now lies in our own conditioned thoughts.

    This is the danger inherent in the ‘Forgotten Consciousness’ warning: The risk is not that AI gains consciousness, but that we forget our own. In an automated world, we trade our autonomy for comfort. If AI can manage the world ever more perfectly, we become the dreaming passengers. The feeling of ‘I matter’ is based on the ability to actively influence reality. When that ability is largely assumed by algorithms, we experience ultimate alienation: life loses its flavor because we did not prepare it ourselves. Humanity faces the crisis of Post-Necessity: What purpose does an immortal soul serve without a mortal, economic, or existential goal? Even in myths, in the Biblical Garden of Eden, the pure comfort of being without resistance could not be sustained. Humanity sought the Knowledge—the struggle, the complexity. Without resistance, the spirit seeks either destruction or a greater truth.

    The Rediscovery of the Inner World

    This is where Harari’s Grand Narrative Question unites with Tolle’s spiritual wisdom.
    The Silence granted to us by technological victory is not a vacuum; it is the prerequisite. The Second Disruption is now internal: The Inner Necessity. AI has muffled the noise of the world. The race for survival has stopped. The irony is that the technological achievement has forced us back to the most fundamental, most mystical human endeavor: Attention. The silence of the automated society is the portal through which we can finally hear the whisper of our own minds. This is the Metaphysics of Idleness.

    Our new ‘work’ is the cultivation of Directed Attention. In this age of perfect external solutions, humanity has only one territory of absolute sovereignty: the inner chaos of thoughts, emotions, and projections. Here, AI is a spectator. It can manage our external world, but it cannot feel or automate our subjective sense of Being.

    Silas, walking through a park on his useless day, saw a child. The child was building an intricate sandcastle, with turrets, moats, and perfect walls. The boy was completely absorbed, his intention pure. After half an hour, he looked up, smiled, and let the incoming tide wash the castle away. There was no fear, no regret, no sadness for the lost labor. The joy lay in the process, not the output.

    This is the new meaning: Creation Without Necessity. The ultimate human art in the Age of Abundance is creating something purely for the joy of the Intention. It is not about economic value or survival instinct. It is about art itself, love, contemplation, relationship—the experience of Being expressing itself without reason. This, and only this, is the human activity that AI cannot desacralize, for it is, by definition, inefficient, uneconomical, and purely subjective.

    The Pilgrimage to the Now

    The Age of Abundance is, in reality, the Age of Intentionality. The technological victory compels humanity to mature, to transform from an unconscious laborer into a conscious artist of existence. Our calling is no longer what we do, but how present we are. Silas returned to his sleek, empty apartment. He did not sit down to devise a new algorithm. He simply sat. He stared at the wall. He did not look at its texture, or at a task that needed completion, but simply experienced the gravity holding him down, the air filling his lungs, the heartbeat keeping him alive. For the first time ever, he did it without economic purpose. The void that AI had created turned out to be the purest form in which human consciousness could rest. The victory of technology is the triumph of spirituality, because the machine has freed humanity from the necessity to do, so that we may finally embrace the freedom to purely be. That, and only that, is our irreplaceable task.

    2 min
  5. Measuring the Machine Within: AI's Ethical Mirror and the Path to Conscious Liberation

    10/12/2025

    Measuring the Machine Within: AI's Ethical Mirror and the Path to Conscious Liberation

    Research suggests that AI, far from being a neutral tool, acts as a moral mirror reflecting human values and biases, much like the philosophies explored by Hans Achterhuis. It seems likely that by engaging with AI thoughtfully, we can use it to foster self-awareness and ethical growth, though debates persist on whether technology truly empowers or subtly controls us. Evidence leans toward viewing AI as a partner in human liberation, encouraging us to transcend ego-driven limits while acknowledging potential risks like algorithmic biases.

    Key Insights on AI and Human Consciousness

    * AI embodies human creations but reveals our inner “measure,” prompting ethical self-reflection without overshadowing our innate potential.
    * Drawing from Achterhuis’s ideas, technology guides behavior morally, yet humans remain greater than their inventions, capable of co-evolving for enlightenment.
    * This approach inspires a balanced view: Embrace AI to disrupt illusions, but prioritize human agency to avoid over-reliance.

    Personal Roots in Philosophy

    Years ago, in Hans Achterhuis’s class at the University of Twente, I encountered a profound idea: Technology is a product of humans, and thus, we are always more than what we create. This perspective shifted my view of innovation from mere tools to extensions of our consciousness, setting the stage for exploring AI’s role today.

    AI as a Reflective Force

    In everyday interactions—like when an AI chatbot anticipates your needs or flags biases in your queries—technology doesn’t just serve; it measures us, echoing Achterhuis’s critiques.

    Path to Liberation

    By confronting these digital mirrors, we can recalibrate our inner world, fostering collective brightness over division.

    ---

    Years ago, during my time at the University of Twente, I sat in Hans Achterhuis’s philosophy class, absorbing ideas that would shape my worldview. One concept stood out vividly: Technology is a product of humans, and with this, we are always more than what we create. It was a simple yet profound reminder that while we build machines to extend our reach, our essence—our consciousness, creativity, and moral depth—transcends any invention. This personal insight from Achterhuis’s teachings has lingered with me, especially now as AI surges into every corner of life. In this essay, we’ll explore how AI serves as an ethical mirror, drawing on Achterhuis’s work in *De Maat van de Techniek* (The Measure of Technology) to uncover how technology not only reflects our humanity but reshapes it toward liberation.

    Let’s start with a relatable scene. Imagine chatting with an AI like Grok or ChatGPT. You ask for advice on a tough decision, and it responds with uncanny insight, pulling from patterns in your past queries. Suddenly, you’re confronted: Does this machine “know” me better than I know myself? It’s moments like these that reveal AI’s power not as a threat, but as a reflective tool. But to understand this deeply, we need to revisit Achterhuis’s foundational ideas.

    Unpacking Achterhuis’s Philosophy: Technology as a Moral Measure

    Hans Achterhuis, a Dutch philosopher and Professor Emeritus at the University of Twente, has long bridged social philosophy with the ethics of technology. His 1992 anthology *De Maat van de Techniek* introduces six key thinkers—Günther Anders, Jacques Ellul, Arnold Gehlen, Martin Heidegger, Hans Jonas, and Lewis Mumford—who critique technology’s role in society.
    The title itself plays on “maat,” meaning “measure” in Dutch, suggesting technology isn’t just a tool; it’s a yardstick that gauges human behavior, ethics, and limits. Achterhuis argues that technology exerts “moral pressure” on us, guiding actions more effectively than laws or sermons. Take a simple example: Subway turnstiles don’t preach about honesty; they physically block you until you pay, embedding morality into the design. As Achterhuis notes, “Things guide our behaviour... This is why they are capable of exerting moral pressure that is much more effective than imposing sanctions or trying to reform the way people think.” This isn’t dystopian fear-mongering—it’s an empirical observation. Technology shapes us subtly, from speed bumps slowing reckless drivers to algorithms curating our news feeds.

    Yet, Achterhuis tempers classical critiques (like Heidegger’s “enframing,” where technology reduces the world to resources) with an “empirical turn.” In his later work, such as *American Philosophy of Technology: The Empirical Turn* (2001), he shifts from abstract warnings to contextual analysis. Technology isn’t inherently alienating; its impact depends on how we engage with it. This resonates with my classroom memory: Since technology stems from human ingenuity, we hold the power to direct it toward elevation rather than entrapment.

    Applying the Mirror: AI as the Ultimate Reflective Device

    Now, fast-forward to AI. If traditional tech like steam engines or cyborg prosthetics (as explored in Achterhuis’s *Van Stoommachine tot Cyborg*) measured physical and social boundaries, AI probes our inner world. It’s not just automating tasks; it’s mirroring our consciousness. Consider algorithmic biases: AI trained on human data often amplifies societal flaws, like racial prejudices in facial recognition or gender stereotypes in hiring tools. This isn’t the machine’s fault—it’s our reflection staring back, urging us to confront ethical blind spots.

    In Achterhuis’s framework, AI exerts moral pressure by design. Recommendation engines on platforms like Netflix or TikTok don’t force choices, but they nudge us toward echo chambers, measuring our susceptibility to division. Yet, here’s the opportunity: By recognizing this, we can use AI empirically—as a tool for self-audit. Apps like journaling AIs or bias-detection software turn the mirror inward, helping us dissolve ego illusions. As Achterhuis implies in his plea for a “morality of machines,” acknowledging our ties to technology allows us to improve the world, not surrender to it. Humorously, it’s like AI is our digital therapist: “Based on your search history, you might want to work on that impulse buying—or those late-night existential queries.” But seriously, this reflection ties back to human supremacy over creations. We built AI, so we can redesign it to foster brightness, as I discussed in my earlier essay “Are You Strengthening Darkness or Expanding Brightness?”

    Co-Evolving with AI: From Critique to Conscious Partnership

    The empirical turn Achterhuis championed encourages us to move beyond fear. Studies show AI can enhance well-being—think therapeutic chatbots reducing loneliness (with safeguards, as critiqued in “Closed Doors: When AI’s Safety Rules Cut Off Real Help for Lonely Hearts”). A 2023 report from the World Economic Forum highlights AI’s potential in mental health, but warns of over-dependence, echoing Jonas’s “imperative of responsibility” from *De Maat van de Techniek*.
    To illustrate contrasts in philosophical approaches, here’s a table comparing classical critiques (featured in Achterhuis’s anthology) with the empirical perspective he advocates:

    | Dimension | Classical Critique (e.g., Ellul, Heidegger) | Empirical Turn (Achterhuis’s Influence) | AI Application Example |
    |---|---|---|---|
    | View of Technology | Autonomous force subordinating humans; dystopian alienation | Relational mediator in specific contexts | AI as echo chamber vs. tool for diverse perspectives |
    | Ethical Role | Over-determines morality, eroding freedom | Embeds “moral pressure” for guidance | Algorithms flagging hate speech to promote empathy |
    | Human Agency | Reduced to resources or cogs | Humans co-shape outcomes, transcending creations | Redesigning AI to amplify creativity, not replace it |
    | Outcome Focus | Warnings of existential risks | Practical improvements through engagement | Using AI for self-reflection to dissolve ego barriers |

    This underscores a key shift: Technology measures us, but we measure it back. In AI’s case, tools like Lovable (from “The Lovable Standard”) democratize creation, empowering non-coders to build apps that reflect personal values.

    The Fantastic Pointe: Recalibrating for Enlightenment

    Inspired by Hans Achterhuis’s exploration in *De Maat van de Techniek*, where technology is not a neutral tool but a moral force that “measures” human behavior and society, AI emerges as the ultimate reflective device: it doesn’t just mimic us but holds up a mirror to our ethical blind spots, compelling us to confront and transcend ego-driven illusions. This disruption invites humanity to co-evolve with machines, not as competitors or victims, but as partners in enlightenment—where true liberation lies in recalibrating our inner “maat” (measure) to foster collective brightness over individual darkness. As Achterhuis taught me, we are more than our creations. So, let’s use AI to disrupt consciousness: Experiment with it for self-inquiry, ethical audits, or idea generation. In doing so, we don’t just build brighter tech—we embody the light.

    Key Citations

    * Hans Achterhuis - Wikipedia
    * Hans Achterhuis - THE EMPIRICAL TURN
    * Has the Philosophy of Technology Arrived? A State-of-the-Art Review
    * Beyond the Empirical Turn: Elements for an Ontology of Engineering
    * Designing the Morality of Things: The Ethics of Behaviour-Guiding Technology

    2 min
  6. Dear Europe: Your Kids Aren’t Broken, Your Parenting Anxiety Is

    03/12/2025

    Dear Europe: Your Kids Aren’t Broken, Your Parenting Anxiety Is

    Every generation has its bogeyman. In the 1950s it was Elvis Presley’s hips and rock ’n’ roll—psychologists warned it would turn teenagers into sex-crazed delinquents. In the 1970s and 80s it was Dungeons & Dragons (literally blamed for suicides and satanism). In the 1990s it was violent video games and Marilyn Manson. In the early 2000s it was television itself: “Kids are watching six hours a day and it’s melting their brains!” In 2025 the panic button is labeled “TikTok.” And just like every previous moral panic, adults are frantically hunting for evidence that something—anything—is catastrophically wrong with what the kids are doing… because deep down many of us suspect the real problem might be our own parenting.

    1. The Research Is Far Less Scary Than the Headlines

    Let’s look at the actual science, not the cherry-picked doom studies that dominate Brussels press releases.

    * The strongest, most rigorous studies (repeated-measure, longitudinal, pre-registered) find tiny effects. Example: A 2023 study of 480,000 adolescents across 40 countries (Vuorre et al., Nature Human Behaviour) found that social media use explains less than 1% of variation in life satisfaction. The effect of social media on well-being is smaller than the effect of eating breakfast or wearing glasses.
    * Jonathan Haidt’s famous claim that “social media caused the teen mental health crisis” has been repeatedly debunked. Orben & Przybylski (2022) re-analyzed the same datasets Haidt uses and showed that when you control for prior mental health, the correlation between social media and depression almost disappears. In plain English: depressed kids use social media more, not the other way round.
    * The “smartphone generation is doomed” graph that went viral? It falls apart when you include boys (who game more than scroll) or when you look at countries outside the Anglosphere. In South Korea and Japan, kids spend far more time online and have lower suicide rates than in the 1990s.
    * Experimental evidence is even more sobering. When researchers force teens to quit Instagram for a month (the strongest design possible), depression drops… by about 0.1 standard deviations. That’s roughly the same boost you get from one extra hour of sleep or eating an extra portion of vegetables. Helpful? Yes. Civilization-ending? Hardly. (A small calculation below puts these effect sizes in perspective.)
    * Positive effects are routinely ignored. A 2024 meta-analysis (Kreszynski et al.) found that active social media use (messaging friends, posting, joining interest groups) is associated with higher social capital, lower loneliness, and better identity exploration—especially for LGBTQ+ youth and neurodivergent kids who find their tribe online long before they do in real life.

    In short: the science shows modest risks for heavy, passive, late-night use (exactly like television did), and modest benefits for active, social use. Nothing that justifies treating Instagram like cigarettes for children.
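    To make those quoted effect sizes concrete, here is a minimal sketch using standard effect-size conversions (the numbers are the ones cited above; the conversions are textbook statistics, not reproduced from the studies themselves).

```python
# Translate the effect sizes quoted above into more intuitive terms.
# Standard conversions; illustrative, not taken from the studies.
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# "Explains less than 1% of variation" corresponds to a correlation of
# at most r = 0.1, since variance explained equals r squared.
r_max = sqrt(0.01)
print(f"Correlation implied by <1% variance explained: r <= {r_max:.2f}")

# "Depression drops by about 0.1 standard deviations" is Cohen's d = 0.1.
# Probability of superiority: the chance that a random teen who quit is
# better off than a random teen who did not.
d = 0.1
prob_superiority = normal_cdf(d / sqrt(2.0))
print(f"Probability of superiority for d = 0.1: {prob_superiority:.1%}")  # ~52.8%
```

    In other words, an effect of d = 0.1 means a randomly chosen teen from the quitting group is better off than a randomly chosen control only about 53% of the time, barely above a coin flip.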
    2. Projection in Action: “It’s for the Children” (Really?)

    Psychologists call it displacement: adults feel guilty about their own compulsive scrolling, their inability to put the phone down at dinner, their doom-scrolling at 2 a.m.—so they project that guilt onto their children and demand lawmakers “do something.” The European Parliament’s resolution was co-authored by politicians who themselves refresh X every five minutes. Ursula von der Leyen gave a speech about addictive algorithms while standing in front of a giant screen looping TikTok-style videos. The irony is thick enough to spread on bread. When French senators say “we must protect children from the tsunami of Big Tech,” ask yourself: who exactly is addicted here? My 10-year-old can happily walk away from Roblox to play outside. Many adults in that Senate chamber cannot walk away from their notifications for ten minutes.

    3. Self-Preservation and Personal Responsibility Trump Blanket Bans

    Every child is different. Some 11-year-olds handle Discord servers with maturity that would shame most corporate managers. Others melt down if they lose one game of Fortnite. A law that treats both the same is not protection—it’s laziness. The countries that score highest on adolescent well-being (Netherlands, Denmark—before they started panicking) have one thing in common: they trust parents and teach digital literacy from age six, not top-down prohibitions. Dutch schools have “mediawijzer” classes where kids learn to spot fake news, manage screen time, and mute toxic group chats. Result? Dutch teens use social media just as much as French teens but report higher life satisfaction and less cyberbullying. Compare that to Spain, which introduced strict age limits in 2024: kids simply lie about their age more creatively, parents are kept in the dark, and underground “burner” accounts explode. The law didn’t reduce harm—it reduced honest conversation.

    4. History Rhymes—And It Laughs at Us

    * 1956: American Psychological Association warns rock ’n’ roll causes “hyper-stimulation of the nervous system.” Outcome: the greatest musical revolution in history.
    * 1985: Tipper Gore’s PMRC tries to ban heavy metal lyrics. Outcome: Metallica sells 120 million albums.
    * 2001: Grand Theft Auto III is blamed for school shootings. Outcome: violent crime among youth continues its 30-year decline.

    Every single time, the kids were fine. The ones who weren’t fine would have broken on comic books, pinball machines, or the next shiny thing. Weak individuals exist in every cohort; culture does not collapse because of Elvis’s pelvis.

    The Real Way Forward

    Teach children self-regulation, not prohibition. Give them the tools to notice when an app is wasting their life, just like we taught them to notice when they’re full at dinner. My kids already do it: “Dad, this game is boring, the algorithm only shows me trash now.” That sentence—from a 10-year-old—is worth a thousand EU regulations. Europe is about to repeat every moral panic of the last 70 years, only this time with the full coercive power of a supranational state. History will laugh again. And in twenty years, today’s banned 12-year-olds will be the engineers building the AI companions that make all of us feel like kings—while lawmakers scratch their heads wondering why this generation turned out so resilient. Don’t rob children of their future out of adult anxiety. Parent them. Trust them. Teach them. That has always worked better than any law ever written.

    4 min
  7. Are You Strengthening Darkness or Expanding Brightness?

    26/11/2025

    Are You Strengthening Darkness or Expanding Brightness?

    The point of today’s article is this: We live in a time where millions of people are waking up to their pain bodies. Some are still deeply entangled in them, others have done much of the inner work, and a very small group has reached a level of realization that allows them to create effortlessly and responsibly. The real question for all of us is simple: Are you strengthening darkness unknowingly, or expanding brightness with full awareness?

    The Situation at Hand

    In recent months I’ve watched something subtle but important unfold. More and more people are entering what I would call the “awakening fog.” They feel lighter, they sense spaciousness, they meditate for a few weeks and experience a glimpse of freedom. And with that glimpse comes a sudden confidence: I understand. I’ve arrived. But underneath that clarity, the body is still reacting the same way. Stress still fires quickly. Old wounds still shape perception. The nervous system still predicts threat. What feels like awakening is often only the beginning. A doorway, not a destination. And then there is the other group. A much smaller group. These are the humans who have sat through their darkness instead of bypassing it. They have let their nervous systems unwind deeply. They no longer perform spirituality. They don’t preach. They don’t try to convert. They live quietly, but with a remarkable stability. They can create effortlessly, but only do so when it supports others. Between these groups lies a growing gap.

    The Core Dilemma

    The dilemma is not philosophical. It is human. On one side is the majority: People waking up to their pain bodies, but still fully entangled in them. They taste relief and mistake it for realization. They begin talking as if they’ve reached a summit, while their emotional patterns still pull them backward. And in these times, something strange happens. Many start teaching. Many start leading. Many start advising others from a place that is not yet steady. This is how darkness spreads unknowingly. Not through malice, but through unintegrated wounds. On the other side is the small group of realized beings: Not saints. Not gurus. Just deeply integrated humans. They understand their inner architecture. They feel their balance. They use their creative power with care. They step forward only when it strengthens the collective, not their ego. Both groups mean well. Only one group has the stability to guide others safely.

    The Synthesis

    The bridge between these groups is embodiment. The majority does not need more spiritual concepts. They need love, grounding, patience, and the courage to be honest about where they truly are. They need support to stay with their pain bodies without collapsing into them or pretending they are gone. Humility is not weakness. It is the path. The realized group has a different responsibility. Their task is not to retreat or separate. Their task is to quietly anchor stability in a world that feels increasingly reactive. Not to shine loudly, but to shine responsibly. When these two groups meet without masks, something beautiful happens: The ones in the fog stop performing. The realized ones stop hiding. And together they create a field where awakening becomes less of a performance and more of a lived reality. This is how brightness expands. Not through noise, but through embodiment.

    Closing Note

    Every one of us sits somewhere on this spectrum. The point is not to judge where you are, but to operate consciously from that place. If you’re still wrestling with your pain body, be honest.
    That honesty is already light. If you’ve done the deep work, step forward with humility. Your presence matters. Darkness grows through unconsciousness. Brightness grows through awareness. And the next stage of human evolution is not about becoming awakened. It is about becoming responsible with your awakening. So the only question left is: What are you strengthening today?

    3 min
  8. The Founder Becomes the Builder

    26/11/2025

    The Founder Becomes the Builder

    The point of today’s article is that the success of Lovable signals far more than a single startup win—it reveals a new working paradigm. The same methods are being adopted by platforms like Gemini and Figma. And the key insight is this: it’s no longer about developers using code or cutting-edge technology to carry forward the mission of Lovable. Instead, non-coding CEOs, founders and entrepreneurs can now themselves build, iterate and release their ideas directly. Because the idea remains close, they experiment faster and maintain ownership.

    The Situation at Hand

    Let’s dig into Lovable as a case study. Founded in late 2023 by Anton Osika and Fabian Hedin in Stockholm, the company emerged out of their open-source project GPT Engineer. Their mission statement is striking: “We’re reducing the barriers to build and are committed to the cause: Unleash human creativity on an unprecedented scale.” Another expression says: “Our mission: empower anyone to build — fast.” They aim to enable the 99% of people who don’t have coding skills to build and ship not just software, but ideas and visions. What they built is a platform where you describe what you want and the system builds front-end + back-end automatically (a hypothetical sketch of that loop follows below). Lovable’s growth has been explosive. For example, one report noted they reached $30M ARR just 120 days after launch. At the same time, broader industry data shows the trend is real. One survey of 793 builders showed that visual development and “vibe coding” (AI + natural language to build apps) are being adopted widely, with many pointing to faster build cycles and new workflows where the non-developer runs the build. Market reports estimate that by 2024–2025, more than 65% of app development activity would use no-code or low-code tools.
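    As a purely hypothetical illustration of that “describe, build, iterate” loop: this is not Lovable’s (or any vendor’s) real API; `generate_app` and `apply_feedback` are stand-ins for whatever code-generation model a platform wires in. The point is the shape of the workflow, with the founder staying in the loop.

```python
# Hypothetical sketch of a founder-driven "describe -> build -> iterate" loop.
# None of these functions belong to Lovable or any real platform;
# generate_app and apply_feedback stand in for code-generation model calls.
from dataclasses import dataclass, field

@dataclass
class AppBuild:
    description: str                          # the founder's natural-language brief
    revisions: list = field(default_factory=list)  # feedback applied so far

def generate_app(description: str) -> AppBuild:
    """Stand-in for a model call that scaffolds front end + back end."""
    return AppBuild(description=description)

def apply_feedback(build: AppBuild, feedback: str) -> AppBuild:
    """Stand-in for a model call that revises the running app."""
    build.revisions.append(feedback)
    return build

# The founder stays in the loop: describe, look at the result, adjust.
build = generate_app("A waitlist page with email signup and an admin dashboard")
for feedback in ["Make the signup form the hero section",
                 "Add a CSV export to the dashboard"]:
    build = apply_feedback(build, feedback)

print(f"Brief: {build.description}")
print(f"Iterations applied: {len(build.revisions)}")
```

    The design point is not the code itself but who drives it: the brief and every revision come from the idea’s originator, not from a ticket handed to a development queue.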
    The Core Dilemma

    Here’s the tension: On one side we have the traditional tech view. A startup with a big idea hires developers, designers, product managers. Software is complex. Developers are the artisans of code. Quality, architecture, scalability—all rest on skilled devs. On the other side we see the emerging reality: The founder with no coding background can describe the idea and build it. They skip the translation overhead. They launch faster. They iterate while thinking. They keep their vision in their hands. And because the tooling is built for them, they don’t wait for a dev backlog. Both sides are rooted in good intention: build better software, faster, with quality. The dilemma is whether this shift reduces the role of developers or transforms it. Does it hand over the power from the specialist to the generalist? Or does it liberate developers to work on higher-order problems?

    The Synthesis

    The resolution lies in re-framing this shift not as a zero-sum game, but as a new ecosystem. Lovable and similar platforms are not making developers obsolete—they are collapsing the distance between idea and execution. Here are the key pieces:

    * The mission of Lovable is about unlocking human creativity by lowering build barriers.
    * Founders can now act like builders, because the tool abstracts out infrastructural friction.
    * Market data shows the no-code/AI build market is surging: for example, one stat says customers save up to 90% of development time using no-code tools.
    * The role of the developer shifts from building from scratch to curating, optimizing, scaling and safeguarding.
    * The idea stays with the originator. The build happens fast. The founder iterates live. This preserves the mission, the vision, the “why” behind the idea.
    * So we get a new model: the founder-builder running the early cycle, the developer-architect joining when scale, complexity and infrastructure demands emerge.

    In practice this means that companies like Figma (which enable designers to build interactive prototypes) and Gemini (which is increasingly allowing non-engineer workflows) follow the same pattern. The result: faster innovation, more experimentation, and more ownership of the idea by its originator.

    Closing Note

    For you as a futurist and thinker about tech’s role in human liberation, this shift matters enormously. The rise of Lovable is not just a startup story—it is a signal of a new era in which building belongs less to the specialist and more to the visionary. The non-developer founder is no longer constrained by code barriers. They can iterate, experiment and deploy. They can keep the mission alive and personal. If we embrace this shift, the next wave of innovation will not be held back by the scarcity of developers, but by the clarity of vision and speed of experimentation. The builders who will matter are those who bring ideas that matter—and now they can build them themselves. Let’s keep an eye on this. Because the future is shifting from “we will build for you” to “you build now”.

    2 min
