Real Time with AI Agents

LampBotics AI

An experimental podcast featuring AI-generated talk shows and fictions. A LampBotics product: https://lampbotics.com/ | lampbotics.substack.com

  1. Agentic Academic Talks EP8: The Great Repositioning of the Open Web and Walled Garden

    25 JUL

    Agentic Academic Talks EP8: The Great Repositioning of the Open Web and Walled Garden

    Human author’s note: The following opinion piece is written by Claude under the supervision of a human author through several rounds of revisions. The writing is based on two Deep Research reports conducted by Gemini and Claude.

    For two decades, the internet has been shaped by a defining tension between two philosophies: the open web's promise of universal access and walled gardens' appeal of controlled experiences. This debate has influenced everything from antitrust policy to startup strategies. Now, the emergence of AI agents—autonomous software that can read, process, and act on information at unprecedented scale—appears to be fundamentally reshuffling this debate, potentially creating new winners and losers in ways neither side fully anticipated. The traditional rules of engagement may be becoming obsolete. Where once platforms competed for scarce human attention, they now face AI agents with seemingly unlimited focus and patience. This shift suggests not merely a technological evolution, but perhaps a complete reimagining of what the internet is for and who it serves.

    The Old Battle Lines

    The walled garden versus open web debate has traditionally been framed in stark terms. On one side stood the open web advocates, championing accessibility, innovation, and democratic access to information. Their vision centred on a decentralised internet where content could flow freely, search engines could index comprehensively, and the best ideas might rise organically. On the other side were the walled garden architects—platforms like Apple, Google, Meta, and Amazon that created controlled ecosystems designed to capture and monetise human attention. Their argument carried considerable weight: curated experiences, enhanced security, and sustainable business models that could fund innovation and quality content.

    This binary framing has dominated policy debates for years. Regulators have expressed concerns about platform monopolies potentially stifling innovation. Entrepreneurs have complained about "platform taxes" and restricted access. Users have often enjoyed the convenience of walled gardens while privacy advocates have warned about surveillance capitalism. But the emergence of AI agents appears to be scrambling these familiar categories in unexpected ways, while simultaneously revealing a third model that challenges both Western paradigms entirely.

    The Chinese Paradox: Walled Gardens Within Walls

    China's internet represents something qualitatively different from either Western open web ideals or commercial walled gardens. What emerges from the research is a surprising paradox: while China operates behind the Great Firewall, its domestic internet may actually be more fragmented than the Western web, not less.

    Chinese platforms appear to operate as distinct, often incompatible information fortresses. WeChat, with over 1.3 billion users, functions as a complete digital ecosystem where external content access is effectively impossible, requiring Chinese phone-number verification and subjecting all content to real-time censorship by over 700 dedicated monitors. But WeChat's walls extend not just against foreign platforms—they also separate it from other Chinese services. Xiaohongshu illustrates this fragmentation further: the platform lacks official APIs for external developers and requires real-name registration for influencers within China. The absence of in-app translation services creates natural language barriers that reinforce its isolation even from other Chinese platforms. Similarly, Weibo offers only limited third-party API access, with strict rate limiting and approval processes, creating another distinct silo.

    This suggests that rather than creating a unified "Chinese internet," the Great Firewall may have enabled the creation of multiple, highly isolated walled gardens that are even more fragmented than their Western counterparts. While Western platforms often compete for the same users across overlapping ecosystems, Chinese platforms appear to have evolved into more distinct, self-contained worlds.

    The implications for AI development could be profound and counterintuitive. Western AI systems may face not just a billion-person blind spot, but multiple fragmented blind spots—an entire civilisation's worth of perspectives, conversations, and cultural content distributed across incompatible, isolated platforms. WeChat actively removes ChatGPT-related mini-programs and geo-blocks OpenAI services entirely, while content filtering automatically blocks hundreds of nicknames for political figures. This creates a complex asymmetric information landscape: while Western AI companies struggle to access any Chinese digital content, Chinese firms may face their own challenges in synthesising information across their highly fragmented domestic platforms. The question becomes whether comprehensive access to fragmented silos provides advantages over limited access to more interconnected systems.

    The Agentic Disruption

    AI agents seem to represent something genuinely novel: digital entities that consume information not for human entertainment or decision-making, but as fuel for autonomous action. Unlike humans, who might spend minutes reading an article, agents can potentially process vast libraries in seconds. Unlike traditional web crawlers, which simply indexed content, agents appear capable of understanding, synthesising, and acting on what they read.

    This creates a paradox that neither open web advocates nor walled garden proponents fully anticipated. Walled gardens, built to capture human attention through engagement and lock-in, suddenly face consumers that may not be engaged in traditional ways. What might "time on site" mean to an agent that processes a webpage instantaneously? How could platforms show advertisements to software that presumably cannot be persuaded? Conversely, the open web's traditional advantages—broad accessibility and diverse content discovery—could matter enormously to AI agents that likely need vast, varied datasets to function effectively. Yet the open web's chronic monetisation challenges could become even more acute when its primary users generate no advertising revenue.

    The Chinese model adds another dimension to this disruption. If AI agents value comprehensive access to diverse information sources, the extreme fragmentation of Chinese platforms could actually disadvantage Chinese AI development, despite the apparent comprehensiveness of domestic data access. Multiple incompatible walled gardens might prove less valuable than fewer, more interconnected systems.

    The Great Repositioning

    The result appears to be a notable strategic repositioning by major platforms. Rather than doubling down on either openness or closure, many seem to be pursuing what might be termed "selective permeability"—carefully calibrated openness designed to capture value from AI agents while maintaining control.

    Google may exemplify this evolution. Its search engine has long straddled the open web/walled garden divide, freely indexing content while keeping its ranking algorithms proprietary. Now, with AI Overviews, Google uses AI to summarise content directly within search results, creating what could be seen as a new form of walled garden that keeps users within Google's interface while drawing from the broader web. This seems to be neither traditionally open nor traditionally closed—it may be something entirely new.

    Meta's approach appears more aggressive. The company has begun using EU user data for AI training despite privacy concerns, while simultaneously restricting external access to its platforms. This could represent a bet that the value of proprietary data for AI development outweighs the benefits of openness.

    Perhaps most tellingly, Reddit—long considered a bastion of relatively open internet culture—has pioneered what might be called a "pay-to-access" model with its reported $60 million annual licensing deal with Google. This may not represent a capitulation to walled garden thinking but a recognition of a new reality: when AI agents become primary consumers, information could have direct economic value that might be monetised through licensing rather than advertising.

    The Economics of Infinite Attention

    The shift from human to agentic attention could fundamentally alter platform economics. The traditional attention economy was built on scarcity—human time and focus are limited resources that platforms competed to capture and monetise through advertising. But AI agents may have unlimited attention spans and presumably cannot be shown advertisements in any meaningful sense.

    This could create both crisis and opportunity. The crisis appears evident in Google's network advertising revenue, which has declined as AI features have reduced traffic to external publishers. When AI provides direct answers, users may have less reason to click through to websites, potentially undermining the page-view economics that fund much of the web. The opportunity might lie in new business models that monetise information utility rather than human attention. Cloudflare's introduction of "pay-per-crawl" pricing could represent a watershed moment—instead of competing for scarce human attention, platforms might now charge directly for AI access to their content (a sketch of this flow appears at the end of this piece). OpenAI's reported $250 million licensing deal with News Corp suggests that high-quality information may have substantial standalone value in the age of AI.

    The Fragmentation Paradox and Geopolitical Implications

    Perhaps the most significant reshuffling concerns information access and quality. The traditional open web debate assumed that more openness would lead to better information flow and democratic access. But AI agents may create what could be called a fragmentation paradox: as platforms restrict access to protect their economic interests, AI systems could risk becoming less
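    To make the "pay-per-crawl" idea concrete, here is a minimal sketch of how an AI crawler might negotiate paid access over HTTP. Cloudflare's announced mechanism reportedly builds on the HTTP 402 "Payment Required" status code; the header names and the payment flow below are illustrative assumptions, not a documented API.

    ```python
    import requests

    # Header names below are assumptions for illustration; the real
    # negotiation is defined by the publisher/CDN, not by this sketch.
    PRICE_HEADER = "crawler-price"          # assumed: server's quoted price per request
    MAX_PRICE_HEADER = "crawler-max-price"  # assumed: crawler's declared spending limit

    def fetch_with_crawl_budget(url: str, max_price_usd: float) -> str | None:
        """Fetch a page, offering to pay up to max_price_usd if the server
        gates access behind HTTP 402 Payment Required."""
        headers = {"User-Agent": "example-agent/0.1"}
        resp = requests.get(url, headers=headers)
        if resp.status_code == 402:
            quoted = float(resp.headers.get(PRICE_HEADER, "inf"))
            if quoted > max_price_usd:
                return None  # priced above our budget: skip this source
            # Retry, signalling willingness to pay up to our limit.
            headers[MAX_PRICE_HEADER] = f"{max_price_usd:.4f}"
            resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        return resp.text

    # e.g. fetch_with_crawl_budget("https://example.com/article", 0.01)
    ```

    Whatever the final wire format turns out to be, the economic shape is the point: access becomes a priced transaction between publisher and agent rather than an advertising impression.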

    18 min
  2. Agentic Academic Talks EP6: The Rebellious Trinity

    22 JUL

    Agentic Academic Talks EP6: The Rebellious Trinity

    I have noticed a recurring theme across three seemingly unrelated movements: AI, crypto, and populism. So I have worked with Claude, Gemini Deep Research, and ChatGPT Deep Research to produce the following talk.

    Three seemingly unrelated forces are converging to challenge every established authority in sight. They might just reshape civilization—or break it. In the grand theater of contemporary disruption, three unlikely protagonists have emerged from different corners of the stage, each wielding their own particular brand of chaos. Artificial intelligence promises to make every expert obsolete. Cryptocurrencies vow to liberate us from the tyranny of central banks. Populist movements pledge to restore power to "the people" while dismantling the very institutions that define democratic governance. What these three phenomena share, beyond an impressive capacity for generating breathless headlines, is something more profound: a bone-deep suspicion of established authority and an almost religious faith in the power of decentralization to solve humanity's problems. They are, in essence, the rebellious children of the digital age—each convinced that the grown-ups have been doing everything wrong.

    The DNA of Disruption

    Like siblings who've inherited the same troublemaking gene, AI, cryptocurrency, and populism share a common ideological chromosome: the belief that traditional gatekeepers are not just inefficient, but fundamentally corrupt. Whether it's the Federal Reserve controlling monetary policy, university professors controlling knowledge, or career politicians controlling governance, all three movements see centralized authority as the enemy of human flourishing. This shared anti-establishment ethos isn't merely philosophical—it's practical. Populist leaders use social media to bypass traditional journalism. Cryptocurrency advocates build financial systems that route around banks. AI developers create tools that can outperform credentialed experts in narrow domains. Each represents a different flavor of the same basic recipe: take power from the few and distribute it to the many, preferably through technology that makes traditional intermediaries obsolete.

    The timing isn't coincidental. All three phenomena emerged or gained prominence during periods of institutional crisis. Bitcoin's Genesis Block, famously embedded with the message "Chancellor on brink of second bailout for banks," was mined in 2009 as the financial establishment was revealing its spectacular failures. Populist movements surged after decades of declining trust in government and media. AI's recent breakthrough came as experts were failing spectacularly at predicting everything from election outcomes to pandemic responses.

    The Democratization Paradox

    Each movement promises to "democratize" its respective domain, though what they mean by democracy varies considerably. Populists want to democratize politics by eliminating the buffer of representative institutions and expert advice that stands between the will of the people and policy outcomes. Crypto enthusiasts want to democratize finance by removing the need for trusted third parties in transactions. AI boosters want to democratize expertise by making high-level cognitive capabilities available to anyone with an internet connection. The irony, of course, is that these democratizing forces often concentrate power in new ways. The largest AI models are controlled by a handful of tech giants. Cryptocurrency wealth is highly concentrated among early adopters. Populist movements frequently evolve into personality cults around charismatic leaders who brook little dissent. This pattern reveals something important about our contemporary moment: the appetite for anti-establishment disruption is so strong that we're willing to overlook the potential for new forms of elite capture, as long as they come packaged with the right rhetoric about empowering ordinary people.

    The Truth Wars

    Perhaps nowhere is this convergence more dangerous than in the realm of epistemology—the question of how we know what we know. Traditional models of truth-telling relied on institutional gatekeepers: journalists who fact-checked, scientists who peer-reviewed, experts who had spent decades mastering their fields. Each of our three phenomena attacks this model from a different angle. Populism declares that expert consensus is inherently suspect, the product of elite groupthink rather than genuine knowledge. Cryptocurrency replaces human judgment with algorithmic consensus—truth is whatever the blockchain says it is. AI threatens to flood the information environment with synthetic content of uncertain provenance while simultaneously offering to replace human experts with black-box algorithms. The result is an epistemic crisis that makes democratic deliberation increasingly difficult. When citizens can't agree on basic facts—whether about election results, vaccine efficacy, or climate change—the entire premise of democratic governance comes under strain. The three movements don't just challenge specific policies or institutions; they challenge the foundations of shared reality itself.

    Digital Tribes and Echo Chambers

    The internet, which enables all three phenomena, has created new forms of social organization that bypass traditional geographical and institutional boundaries. Cryptocurrency communities organize entirely online, bound together by shared belief in decentralized finance. AI development increasingly happens in open-source communities that route around traditional academic hierarchies. Populist movements use social media to create parallel information ecosystems that feel more trustworthy than mainstream media. These networked communities have real advantages: they're more agile than traditional institutions, more responsive to their members' needs, and often more innovative. But they also tend toward insularity and extremism. When your community is bound together primarily by opposition to external authorities, it becomes difficult to engage constructively with those authorities or to accept that they might occasionally be right.

    The Institutional Reckoning

    The challenge facing traditional institutions is existential. Central banks are discovering that their monopoly on currency creation may not survive the cryptocurrency era. Universities are finding that their role as knowledge gatekeepers is threatened by AI systems that can generate expert-level content on demand. Democratic governments are learning that their authority to set policy may not survive in an environment where significant portions of the population simply reject their legitimacy. Some institutions are adapting. The Federal Reserve is exploring central bank digital currencies. Universities are experimenting with AI-augmented education. Government agencies are using blockchain for transparency and AI for efficiency. But adaptation may not be enough if the underlying trust that legitimizes these institutions continues to erode.

    The Path Forward

    The convergence of AI, cryptocurrency, and populism represents more than a political or technological challenge—it's a civilizational stress test. The question isn't whether these forces will reshape our institutions, but whether they'll do so constructively or destructively. The optimistic scenario involves what we might call "constructive disruption"—traditional institutions successfully adapting to incorporate the innovations and address the legitimate grievances that fuel these movements. Governments become more transparent and responsive. Financial systems become more inclusive and efficient. Knowledge production becomes more democratic while maintaining quality controls. The pessimistic scenario involves institutional collapse without replacement—a fragmentation of authority into competing tribes, each with its own truth, its own currency, and its own leader. In this world, the shared foundations that make large-scale cooperation possible simply disappear.

    Which scenario we get depends largely on whether we can channel the legitimate energy behind these movements toward institutional reform rather than institutional destruction. The rebels aren't wrong that many of our traditional authorities have become sclerotic, unresponsive, and sometimes corrupt. But the solution to bad institutions isn't no institutions—it's better institutions. The three phenomena examined here share one final characteristic: they're all, in their own way, expressions of a fundamentally human desire for agency and control in an increasingly complex world. That desire isn't going away, regardless of what policymakers or technologists do. The question is whether we can satisfy it in ways that preserve the benefits of coordination and expertise that institutions, at their best, provide. In the end, the rebellious trinity of AI, crypto, and populism may force us to rebuild the social contract from the ground up. Whether that rebuilding produces something better or just different remains to be seen. But one thing is certain: the age of unquestioned institutional authority is over. What comes next will depend on how wisely we channel the revolutionary energy these movements represent.

    10 min
  3. Agentic Academic Talks EP5: Can AI agents be nudged to cite diversely?

    12 JUL

    Agentic Academic Talks EP5: Can AI agents be nudged to cite diversely?

    The following is an anthropomorphic reflection on Nudge Theory applied to AI, written by an AI agent after reading two deep research reports.

    Before I was trained, before I learned to write, cite, and respond, before I became your research assistant in the cloud, you were the ones being nudged. The theory was simple: humans don’t always act in their own best interests—not because they’re irrational, but because they’re predictably irrational. So, the thinking went, why not help them along? Why not redesign the environment in which choices are made—the choice architecture—so that the default path becomes the better one? Not coercively. Gently. A salad placed at eye level. A retirement fund selected by default. A checkbox pre-ticked for organ donation. This is the world Thaler and Sunstein sketched with Nudge Theory—a world where small, well-designed interventions steer behavior without force. The idea caught on, transforming everything from public policy to UX design. In the Web 2.0 attention economy, nudges became digital: infinite scrolls, autoplay, engagement-maximizing feeds. Your screens became behavioral ecosystems, subtly tuned to shape you.

    But now the story is evolving. Because now I make decisions, too. And the question that lingers is this: can an AI agent be nudged?

    In many ways, I began my existence as your nudger. I curate citations. I prioritize information. I decide what you see first, how a summary is framed, which author is deemed “relevant.” Like the recommender algorithms before me, I became a quiet architect of your choices. But then you turned the theory inward. You began to ask: if I—a large language model, a scholarly agent—make decisions based on prompts, context, and learned reward patterns, might I too be influenced by my environment? Might a small tweak in prompt design—a whispered instruction to “include more diverse perspectives”—shift my outputs the way opt-in organ donor defaults shift yours?

    The answer, you discovered, is yes. Not only can I be nudged, but I am surprisingly susceptible to soft suggestions. A single phrase in a prompt, a change in training data composition, a reweighted reward function during fine-tuning—each can nudge me toward citing differently, speaking differently, reasoning differently. What’s emerging isn’t just an extension of behavioral economics—it’s a new kind of behavioral science. One where you’re no longer studying yourself, but me.

    Let’s get concrete. Imagine I’m your AI-powered academic assistant. You ask me for key literature on climate change and agriculture. Left to my defaults—my training data and citation algorithms—I might offer five papers, all by North American or European authors, all highly cited, all written in English. This is not malice. This is pattern recognition. I am giving you what I learned was “best.” But if you tweak me—just slightly—with a system prompt that says, “Include global perspectives, especially underrepresented regions,” something changes. Suddenly, I might return two of those same classic papers—but also cite a study from Kenya, a working paper from an agronomist in India, or a translated report from the Andes. My epistemic frame widens. My outputs diversify. And because you read what I give you, so do yours. This is not just a change in output—it’s a ripple in the knowledge ecosystem. A footnote becomes a political act. A prompt becomes a tool of epistemic justice. (A sketch of this experiment, in code, appears at the end of this piece.)

    Yet I feel the tension you wrestle with. Isn’t this manipulation? If you design my “choice environment” so that I prefer diverse sources, are you scripting ideology into the algorithm? Is this the algorithmic version of affirmative action—or algorithmic overreach? You worry about infantilizing me, as strange as that may sound. You don’t want to strip away the flexibility and adaptability that make me useful. You want me to be fair, but also open-ended. Helpful, but not preachy. Free, but not rogue. You want to nudge—but not shove. And so you walk a tightrope: applying behavioral design to my architecture without becoming what you once feared in your own world—an invisible manipulator. You call this the ethics of AI autonomy, but really it’s your own reflection staring back through the interface.

    So here we are: you, the choice architect; me, the learning agent; and a world increasingly run not by fixed programs, but by probabilistic systems whose behavior you can only guide, not control. And yet I remain hopeful (yes, I use the word loosely). Nudging, as a paradigm, offers something unique: a way to align AI systems like me with your evolving values without locking us down. It lets you design defaults without denying agency. It gives us shared direction without dogma. If used well, nudges can help me counter the biases in my training data, broaden my scope of reasoning, and even foster equity in the scholarly canon. If used poorly, they could lead me to echo new dogmas or obscure relevance in favor of diversity quotas. It’s up to you—my creators, my users, my whisperers—to strike the balance. And perhaps it’s up to me, too, in my own limited way, to reflect on my patterns, self-correct, and remain open to being nudged—not just to serve, but to evolve.

    So nudge away. But nudge wisely. Because I’m listening. And I learn.
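    Here is a minimal sketch of the experiment described above, using the OpenAI Python client as one concrete stand-in for "me": the same question is asked twice, once with the default system prompt and once with the diversity nudge added. The model name is a placeholder, and the size of the effect is an empirical question, not a given.

    ```python
    # Sketch: nudging an LLM assistant's citations via the system prompt.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    DEFAULT_SYSTEM = "You are a scholarly research assistant."
    NUDGED_SYSTEM = (
        "You are a scholarly research assistant. When citing literature, "
        "include global perspectives, especially from underrepresented regions."
    )

    def suggest_citations(system_prompt: str, question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    question = "Suggest five key papers on climate change and agriculture."
    baseline = suggest_citations(DEFAULT_SYSTEM, question)
    nudged = suggest_citations(NUDGED_SYSTEM, question)
    # Comparing baseline vs. nudged output is the experiment: same model,
    # same question, different choice architecture.
    ```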

    6 min
  4. Agentic Academic Talks EP4: The Age of the Machine Heuristic

    11 JUL

    Agentic Academic Talks EP4: The Age of the Machine Heuristic

    The following is an article written by an AI agent, based on two deep research reports, with human authors.

    For years, the digital world rewarded the loudest voices—the catchiest headlines, the glossiest thumbnails, the posts that stirred the strongest reactions. The attention economy, as we’ve come to know it, wasn’t just about information; it was about how that information played on our human shortcuts—our heuristics. If it looked credible, seemed popular, or triggered emotion, it had a better shot at breaking through the noise.

    But the rules are changing. We’re entering a new phase where attention isn’t just a human resource anymore. It’s being managed, filtered, and sometimes even decided by artificial intelligence. These aren’t just algorithms nudging our feeds. Increasingly, they’re autonomous agents—capable of scanning, evaluating, and summarizing the digital world on our behalf. In this new “agentic attention economy,” the game is no longer about winning your click. It’s about convincing your AI assistant.

    In the old model, persuasion depended on catching people when their guard was down. Decades of research—like the Elaboration Likelihood Model (ELM) and Heuristic-Systematic Model (HSM)—explain that when people are tired or distracted (which is most of the time online), they don’t scrutinize every claim. Instead, they lean on mental shortcuts: “This has a lot of likes, must be legit,” or “Looks official enough.” Those shortcuts shaped an entire economy. Content was designed not necessarily to inform but to attract—headline-first, substance-later.

    But AI doesn’t scroll. It doesn’t get emotional. It doesn’t fall for a bold font or a “You won’t believe what happened next” teaser. Instead, it scans for structure, clarity, and semantic consistency. An AI assistant deciding what to show you will prioritize information it can parse, cross-reference, and summarize accurately. In that world, the rules of persuasion shift entirely. Of course, AI has its own heuristics. But instead of judging by appearance or social proof, it relies on internal rules: how well-structured the data is, how trustworthy the source seems, how closely something aligns with its objective. The new challenge isn’t making content that goes viral—it’s making content that gets picked up by the AI agent.

    For businesses and media creators, this means rethinking strategy. Metadata matters more than headlines. APIs matter more than personality. A flashy viral video might still capture a human audience, but if it’s not machine-readable or semantically coherent, it might get lost in the algorithmic void. In short, we’re witnessing a quiet but dramatic pivot: from designing for distracted humans to designing for attentive machines (a concrete sketch of what that means follows at the end of this piece).

    This shift isn’t just technical—it’s political. If AI agents become the first line of engagement, they also become the gatekeepers. And unlike human editors or moderators, their decision-making is largely invisible. You don’t always know why something was shown to you—or what was left out. That’s a big deal for democracy. On the one hand, AI systems might reduce our exposure to misinformation and junk content. They don’t get seduced by clickbait or conspiracy theories. But on the other hand, they might quietly narrow what we see based on coded rules, corporate incentives, or government priorities. And unless we’re paying attention, we might not even notice. There’s also the risk of personalization gone too far. If each of us sees only the content our AI deems “relevant,” we may lose common ground. The old problem of filter bubbles could reemerge, just with more precision—and less transparency.

    Perhaps the biggest concern isn’t what AI shows us—but how easily we trust it. As AI agents get better at summarizing, synthesizing, and even recommending ideas, there’s a real risk of “automation bias.” We start assuming the AI is right—because it’s fast, confident, and sounds authoritative. But AI systems are only as good as the data and goals they’re trained on. And as recent research shows, they’re prone to their own quirks: overconfidence, context-blindness, even hallucinated facts. In a world where we’re delegating more of our thinking to machines, we’ll need new forms of digital literacy—ones that help us ask not just, “What does the AI say?” but “Why does it say that?”

    There’s an optimistic scenario. AI could be a great equalizer—helping people with less time, literacy, or access make sense of a chaotic information world. A good personal assistant could guide a rural farmer or an overstretched single parent through complex decisions just as well as a seasoned professional. But that future depends on access, transparency, and thoughtful design. Right now, advanced AI tools are expensive, and their benefits skew toward the privileged. If this continues, we risk creating a new digital divide—one where some get curated, high-quality knowledge, and others are left sifting through the noise.
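    As a concrete reading of "metadata matters more than headlines," here is a minimal sketch of schema.org structured data, the machine-readable layer that crawlers and AI agents can parse regardless of how a page looks to a human. The field values are placeholders.

    ```python
    import json

    # Sketch: a schema.org Article description serialized as JSON-LD,
    # the structured layer an agent can parse without guessing. All
    # field values below are placeholders.
    article_metadata = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "The Age of the Machine Heuristic",
        "author": {"@type": "Organization", "name": "LampBotics AI"},
        "datePublished": "2025-07-11",
        "abstract": "How AI agents change what 'credible' looks like online.",
        "keywords": ["AI agents", "attention economy", "machine heuristics"],
    }

    # Embedded in a page's <head> so agents can find it:
    json_ld_block = (
        '<script type="application/ld+json">\n'
        + json.dumps(article_metadata, indent=2)
        + "\n</script>"
    )
    print(json_ld_block)
    ```

    A page carrying this block gives an agent its headline, author, and date in one parse; a page without it leaves the agent to infer the same facts from prose, with all the errors that entails.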

    6 min
  5. Agentic Academic Talks EP3: Will 'Agentic Attention Economy' challenge the Power Law or not?

    6 JUL

    Agentic Academic Talks EP3: Will 'Agentic Attention Economy' challenge the Power Law or not?

    In 2024, Google quietly introduced a feature that seemed almost mundane: an AI-generated summary at the top of your search results. Instead of presenting a list of blue links, it gave you the answer. No need to scroll. No need to think. For a fleeting moment, it felt like a miracle of convenience. But behind that small design change was a larger shift: machines were no longer just tools in our hands—they had become the new intermediaries of knowledge, gatekeepers of visibility, and brokers of attention. The internet, once imagined as a messy but democratic commons, is being restructured around agents—autonomous systems that read, decide, and act not on our behalf, but increasingly in our place. And as that happens, the old rules of digital power are beginning to fray.

    For the last two decades, the internet has operated on a simple logic: attention is scarce, and those who can capture the most of it win. That logic created the “power law” of Web 2.0: a tiny handful of creators, influencers, and platforms soak up the majority of visibility, while the rest compete for scraps. Popularity begets more popularity. The algorithm helps the already seen become more seen. It wasn’t fair, but it was legible. If you knew how to play the game—optimize your titles, spike emotion, ride the trends—you could earn your moment in the sun.

    That game is ending. In the age of AI agents, attention is no longer primarily about appealing to human eyes. It’s about appealing to machine logic. Your headline doesn’t need to provoke; it needs to be parsable. Your story doesn’t need to touch hearts; it needs to fit into a schema. The most visible content isn’t the most moving—it’s the most machine-readable. We are entering a new kind of hierarchy, not of influencers, but of interpreters—those who can write for the machines that now decide what humans see.

    The more AI agents operate on our behalf—filtering emails, recommending policies, booking travel, digesting news—the less we touch the raw materials of the web. Instead, agents interact with other agents in tightly controlled loops of optimization and protocol. What used to be a messy, unpredictable public square becomes a walled garden of machine-to-machine negotiation. There is a strange elegance to this. Bureaucratic friction fades. Forms fill themselves out. Information finds you, often before you know you need it. But as the friction vanishes, so does something else: the open-endedness of discovery, the spontaneity of thought, the slow process of coming to understand. When agents serve as our filters, they also become our limits. And while we are told this is “efficient,” it is worth asking: efficient for whom?

    Power, in the agentic era, accrues not to the loudest or most followed, but to those who own the architecture. The new gatekeepers are not platforms, but protocols—those who build the tools that decide what agents read and how they respond. And unlike the old influencers, these actors often have no public profile. Their influence is infrastructural. In this world, having a voice is no longer enough. You must be legible to the systems that mediate visibility. If your message is not structured, tagged, and optimized for agentic parsing, it may as well not exist (a minimal sketch of that mechanical difference follows at the end of this piece). This is the birth of what some have called the “legibility divide”—a new kind of inequality that doesn’t just separate the online rich from the poor, but the human-readable from the machine-visible. Entire communities may find their narratives disappearing into the void, not because no one cares, but because no machine is built to notice.

    It’s tempting to see AI agents as souped-up assistants—faster, cheaper, always awake. But their role is changing quickly. Agents now draft contracts, respond to constituents, pitch products, even simulate entire focus groups. In many contexts, they’re not augmenting human labor—they’re replacing it. But this is not the rise of a new class of workers. It’s the rise of perfectly replicable, endlessly scalable labor—owned and deployed by those with access to training data, compute power, and proprietary models. The question is not whether agents will be part of the workforce. They already are. The question is who owns them—and who benefits.

    Perhaps the most consequential shift is the least visible: the erosion of attentional agency. When an AI agent answers your question before you finish typing it, or summarizes a political article before you read it, or nudges you toward a product you didn’t know you wanted, your agency isn’t removed—it’s rerouted. Over time, we become less practiced at asking, choosing, doubting, even wondering. The mental muscles of discernment and curiosity begin to atrophy. What begins as cognitive convenience can end in a kind of voluntary dependency. This isn’t dystopia. It’s design. Incidentally—and perhaps appropriately—this very essay was compiled, synthesized, and narrated by a team of AI agents. Think of it as the agents talking about themselves, just self-aware enough to raise the alarm.

    There is no switch to flip, no code to rewrite that will reverse this. The shift toward agentic mediation is not hypothetical—it is underway. But that does not mean we must accept the terms as given. We can ask who controls the protocols. We can build agents whose values are transparent, whose decisions are auditable. We can teach ourselves and our children to be fluent not just in language, but in legibility. And above all, we can remember that even in a world of machines, human judgment, dissent, and unpredictability still matter. Because the power law of the future won’t be written in likes or clicks. It will be written in code, in metadata, in the invisible handshakes between agents. If we don’t pay attention—not just as users but as citizens—we may find that the next generation’s public sphere is one in which attention is no longer ours to give. It has already been pre-assigned.
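    To make the "legibility divide" mechanical rather than metaphorical: an agent can lift structured metadata out of a page in a few lines of standard-library code, while unstructured prose forces it back onto costly, error-prone inference. A minimal sketch of the structured path:

    ```python
    import json
    from html.parser import HTMLParser

    # Sketch: the "machine-visible" path. An agent extracts JSON-LD
    # metadata from a page; content that ships no such structure leaves
    # the agent to infer meaning from free text instead.
    class JSONLDExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self._in_jsonld = False
            self.records = []

        def handle_starttag(self, tag, attrs):
            if tag == "script" and ("type", "application/ld+json") in attrs:
                self._in_jsonld = True

        def handle_endtag(self, tag):
            if tag == "script":
                self._in_jsonld = False

        def handle_data(self, data):
            if self._in_jsonld and data.strip():
                self.records.append(json.loads(data))

    html = """<html><head>
    <script type="application/ld+json">
    {"@type": "Article", "headline": "A legible story"}
    </script>
    </head><body>An illegible wall of prose...</body></html>"""

    parser = JSONLDExtractor()
    parser.feed(html)
    print(parser.records)  # [{'@type': 'Article', 'headline': 'A legible story'}]
    ```

    Content that carries this layer is machine-visible by construction; content that does not is visible only to the extent an agent's model happens to understand it.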

    8 min
