The Leverage Podcast

Evan Armstrong

The Leverage Podcast explores tech’s most urgent questions with the people answering them. www.gettheleverage.com

Episodes

  1. Are Ads the Answer to AI's Problems?

    11/26/2025

    Are Ads the Answer to AI's Problems?

    Watch on YouTube • Listen on Spotify • Listen on Apple

    Here is something that actually happened: A company put advertisements inside a coding tool—the kind of tool used by senior software engineers at serious enterprises who, as a demographic, are famously resistant to being marketed to—and instead of the internet erupting in the predictable conflagration of outrage, the company generated somewhere between $5 million and $10 million in annualized revenue within a month. Quinn Slack, co-founder and CEO of Sourcegraph, announced something called “AMP Free” in October, watched his X post accumulate 900,000 impressions, and—this is the part that made me sit up—had advertisers signing six-figure contracts between 6 PM and 8 AM the night before launch. Which is to say, companies were so eager to advertise inside a coding agent that they were doing paperwork at 2 AM.

    The implications here extend considerably beyond one startup’s monetization experiment, though if you’re the kind of person who tracks startup monetization experiments (and if you’re reading this, you probably are), it’s a pretty interesting one. Quinn is explicitly building towards an ad network that could serve other coding agents and developer tools, which is another way of saying he wants to become the Google of developer attention. His thesis—and it’s the kind of thesis that sounds either visionary or delusional depending on which assumptions you accept—is that ads are the only business model capable of driving the scale necessary to justify the truly staggering datacenter buildouts currently underway, and that ad-supported agents become what he calls a “sponge” for unused GPU capacity. Advertisers, meanwhile, are apparently willing to pay $500 to $1,000 per “qualified action,” which in this context means a developer who not only clicks on an ad but actually implements an API key. If Slack is right about all this, the entire AI infrastructure stack just found its demand floor. If he’s wrong, well, at least someone tried something genuinely weird.

    For operators and investors, this interview functions as a kind of masterclass in contrarian positioning. While Slack’s competitors are racing to win enterprise deals by discounting their products 85-100% (which, just to be clear, means they are sometimes giving away their product entirely in exchange for the privilege of having a customer), Slack is deliberately taking only 10% of deals in order to preserve what he calls “product velocity.” The bet is that staying on the frontier matters more than short-term market share, and that ads let you answer to users voting with their feet rather than to disconnected enterprise buyers who want commitments to things like “self-hosting” and “model choice.” It’s either brilliant discipline or extremely sophisticated cope—and the next twelve months will tell us which. But $5-10M ARR in the first month is, as they say, pretty baller.

    Ideas & Analysis

    1. Ads Are the AI Demand Floor

    Thesis: Ad-supported AI agents create guaranteed demand for GPU capacity, de-risking datacenter investments and enabling more ambitious infrastructure buildouts.

    Quotes:

    “An ad-supported coding agent is essentially like a sponge for any unused tokens from good models out there. And so it lets the whole world be bolder in all these data center build outs.” — Quinn Slack

    “Ads are the only way to truly drive the kind of scale that we need to get every single person using a coding agent. And that ultimately drives economies of scale that is going to support all of these data center build outs and that can make it so that everyone’s CapEx plan can be 25% more ambitious because they know that the moment all those GPUs come online, they’re going to have a use.” — Quinn Slack

    Analysis: The quiet insight here—and it took me a moment to catch it—isn’t really about advertising at all. It’s about infrastructure finance, which is to say it’s about the thing that makes all the other things possible. Consider that hyperscalers and model providers are currently making trillion-dollar commitments on datacenters that will not, cannot, be fully utilized on day one. The traditional playbook assumes demand will materialize through paid subscriptions and API usage, but that’s fundamentally a bet on conversion rates and price elasticity, which is another way of saying it’s a bet on human behavior, which is notoriously difficult to predict. What Slack is arguing is that ad-supported agents flip the entire equation. Instead of hoping users will pay, you guarantee utilization by making the marginal cost to users zero. The GPU hours get monetized through attention rather than direct payment. It’s the same economic logic that made broadcast television work, back when broadcast television was a thing people cared about: you don’t charge viewers; you charge Procter & Gamble for access to their eyeballs. The second-order effect is competitive. If Quinn is right, companies without an ad-supported tier are leaving demand on the table and ceding scale advantages to those who have one. The counterargument is obvious and has probably already occurred to you: developers hate ads, or at least they say they do, and the market will punish anyone who degrades the experience. But Slack’s 95% advertiser close rate and the 900K-impression post suggest the market is at least curious, which is not the same as enthusiastic but is considerably better than hostile.
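    To make the “sponge” arithmetic concrete, here is a minimal back-of-envelope sketch. Every number in it (fleet size, paid-demand share, throughput, ad rates) is an illustrative assumption of mine, not a figure from the episode:

    ```python
    # Back-of-envelope sketch of the "sponge" claim. All numbers are made up
    # for illustration; only the structure of the argument is from the episode.

    GPU_HOURS_PER_DAY   = 1_000_000  # hypothetical buildout's daily capacity
    PAID_DEMAND_SHARE   = 0.60       # fraction sold via subscriptions and API
    TOKENS_PER_GPU_HOUR = 500_000    # assumed serving throughput
    ADS_PER_MM_TOKENS   = 2.0        # impressions per million free-tier tokens
    REV_PER_IMPRESSION  = 0.05       # blended $/impression; the $500-$1,000
                                     # qualified actions pull this average up

    idle_hours = GPU_HOURS_PER_DAY * (1 - PAID_DEMAND_SHARE)

    # Without a free tier, idle capacity earns nothing; with an ad-supported
    # tier, the same hours are monetized through attention instead of payment.
    impressions = idle_hours * TOKENS_PER_GPU_HOUR / 1e6 * ADS_PER_MM_TOKENS
    ad_revenue = impressions * REV_PER_IMPRESSION

    print(f"idle GPU-hours/day: {idle_hours:,.0f}")
    print(f"ad revenue on otherwise-idle capacity: ${ad_revenue:,.0f}/day")
    ```

    The direction of the effect, not the invented dollar figure, is the point: if idle capacity has a guaranteed buyer, the downside case on a CapEx model improves, which is the mechanism behind the “25% more ambitious” line.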
    2. Coding Agents Have Google-Like Ad Potential

    Thesis: Coding agents uniquely combine high-intent search behavior with always-on engagement, creating an ad surface superior to either Google or Instagram alone.

    Quotes:

    “The cool thing about a coding agent is you actually have the opportunity to do both because it’s up on their screen, it’s in their workflow at all times, kind of like people are on Instagram all the time... but then they’re also going to it with high intent. And you don’t really go to Instagram with high intent for specific purchases or actions, you go to Google. But yeah, the coding agent has both.” — Quinn Slack

    “There’s no other kind of ad that delivers customers that are so well-qualified and far along on implementing... We’ve heard from our ad customers that they would be willing to pay $500, $1,000 for that kind of really highly qualified lead.” — Quinn Slack

    Analysis: Slack is making a structural claim about attention quality, and it’s the kind of claim that’s either profound or tautological depending on how generous you’re feeling. Here’s the argument: Google wins on intent (you’re searching for something specific, you want an answer, you are a person with a problem), Instagram wins on time-on-platform (you’re scrolling endlessly, you have perhaps forgotten why you opened the app, time has become somewhat irrelevant), but neither platform has both. A coding agent, by contrast, is open all day and captures discrete moments of high commercial intent—specifically, the moments when a developer decides they need authentication, payments, or database infrastructure. The targeting data is also considerably richer than what you’d get elsewhere: not just “what did they search for” but “what’s actually in their codebase” and “how many people are working on this repo.” This is an idea I strongly believe in and endorse. Ads integrated into B2B workflow products are a market no one has really cracked, and if AI finally makes it possible, hundreds of billions of dollars are suddenly available. The $500-$1,000 cost-per-lead quote is, I think, the buried lede here. That’s not display-ad pricing—that’s lead-gen pricing for enterprise SaaS, which operates according to entirely different economics. If AMP can actually deliver a qualified lead who has already implemented an API key, the comparison isn’t Google Ads; it’s a sales development rep who never sleeps, never misses a buying signal, and doesn’t require health insurance. The risk, obviously, is execution: can they actually build the attribution and action layers necessary to prove that value, or will advertisers pay for a few months, see murky ROI, and churn? (Advertisers are famously patient about murky ROI, which is to say they are not patient about it at all.)

    3. Trust Is the Moat for AI Ads

    Thesis: Separating ads from AI recommendations—like Google separates organic from sponsored results—is essential to maintaining user trust in agentic products.

    Quotes:

    “It’s really important to know we do not inject ads to steer what the AI is doing or saying or recommending to you. It’s entirely separate, just like in Google, how the organic results are separate from the sponsored results.” — Quinn Slack

    “Trust is so important. And this is why Google, you could say, they have all the temptation in the world to just start showing ads as organic results. But you can stop going to Google if they do that. And coding agents, it’s a really highly competitive space.” — Quinn Slack

    “I think there’s a lot of other coding agents that if they came out with ads, people would say, that’s just going to be junk. But because that trust we had with AMP, we were able to try this and ultimately build something I think is valuable.” — Quinn Slack

    Analysis: The Google analogy is doing a tremendous amount of work here, and it’s clever framing—perhaps too clever, in the way that things designed to be persuasive sometimes are. Google’s entire business depends on users believing organic results are unbiased, even as the company makes $200+ billion per year from ads displayed alongside them. (This is one of those arrangements that sounds impossible when you describe it but has somehow persisted for decades.) Quinn is arguing that the same church-and-state separation can work for AI agents: the model gives you unbiased recommendations, and the ads are clearly…

    39 min
  2. Can Crypto Fix AI Slop?

    11/12/2025

    Can Crypto Fix AI Slop?

    One of the biggest long-term problems that will result from AI is the inability to tell reality from siliconized fiction. Typically when people talk about this sort of thing, they mean deepfakes or misinformation—an image, video, or essay that spews out lies. That problem is real, but it is not new; misinformation has been with us since the printing press. Instead, what I’m personally more concerned about is the inability to tell human and robotic identity apart. Large language models are remarkably good at manipulating people’s emotions and convincing them to change their opinions. In the very, very near future, the internet will be filled with intelligent AI agents taking actions on some human’s request. Those actions could be innocuous, like booking a flight, or malicious, like deploying a bot swarm on social media to try to sway people’s votes.

    The worse problem is that we have no good answer. We are hurtling towards a future where you can’t tell who’s real online, and there isn’t a ton we can do about it. Our current solutions—whether we certify someone is human through little puzzles, texts to their phone, or photo verification of their driver’s license—are relatively easy to fake. One startup, co-founded by Sam Altman, thinks the answer is that we all need to get a picture taken by an orb. That picture serves as “proof of human,” and from there the system issues you a cryptocurrency that you can use to move through the internet. Whether that’s genius or dystopia depends on your tolerance for irony—the world’s most prominent AI builder trying to save us from the problems AI created feels a bit like the drug dealer running the rehab. Still, this might be the best shot we have.

    So when Tools for Humanity—the team behind Worldcoin—asked to come on the pod, I figured it was worth hearing them out. In this episode, I talk to Adrian Ludwig, their Chief Architect. I press him hard on the technical and ethical holes in their plan—and to his credit, he doesn’t flinch. If you want to understand whether scanning eyeballs into orbs is our salvation or our surrender, this conversation will give you what you need to decide.

    Get full access to The Leverage at www.gettheleverage.com/subscribe

    48 min
  3. Why Is The Internet Bad Now?

    10/30/2025

    Why Is The Internet Bad Now?

    There’s a new class of public intellectual emerging—the tech pessimist who actually knows how the sausage gets made. They’re not your standard Luddites railing against smartphones; they’re former engineers, economists, and journalists who can offer technical-sounding explanations for why your Instagram feed got worse. Frankly, most of the time I find these individuals more bombastic than insightful. Cory Doctorow is different. He is the leader of this new class of commentator while simultaneously being a rigorous thinker about the internet. If you’ve heard of him, you’ve probably heard the word he invented: “enshittification.” It has become a handwavy term to describe when digital services go bad. Instagram forcing ads down your throat? Enshittification. Amazon being full of junky products? Enshittification. It is a vibe that I, you, and everyone else probably feel. However, enshittification is not just a vibe. It is a specific theory, buttressed by strong claims about antitrust and consumer rights. This theory has become the go-to framework for understanding platform decay among left-wing regulators, including the former head of the FTC, Lina Khan. In our conversation, Doctorow laid out the three-act tragedy: platforms subsidize users until they’re locked in, then squeeze users to subsidize businesses, then squeeze everyone to pay shareholders.

    To be transparent, I do not fully subscribe to his framework. Specifically, I’m skeptical that programmatic advertising is as ineffective as he claims, and I think he underestimates genuine consumer preferences for convenience over privacy. You’ll hear it in our conversation when I push back on topics like the efficacy of Meta’s ads or why consumers continually choose to consume shitty content. But while I do not agree with his theory in its totality, I agree with the underlying emotional feeling: namely, that the internet can and should be better for people. Consumers should have significantly more rights. Attention economies have consolidated too much power into individual companies, and I want regulators to take action so that startups have a fighting chance. In that regard, I found Doctorow to be a kindred spirit. His chosen solution is not the usual “break up Big Tech” handwaving, but specific, implementable fixes. He wants to force platforms to support data portability through open protocols (great idea). Make reverse engineering legal again (great idea). Stop pretending that surveillance advertising works (meh). These aren’t revolutionary, but coming from someone who can speak so eloquently and passionately, they resonate.

    The timing of his new book matters too. Regulators are circling, and even the courts are starting to question whether “free” services can really be monopolies. The wildest part? Doctorow thinks Trump’s trade war might accidentally fix everything. The countries that got screwed by American tech companies now have permission to screw back. Below are the key takeaways of his arguments:

    Apps Are Just Websites Plus Handcuffs

    Thesis: The primary difference between apps and websites is that modifying an app is a federal crime.

    Pull Quotes:

    “An app is really just a website skinned in the right kind of IP to make it a crime to defend your privacy while you use it.”

    “Under the Digital Millennium Copyright Act you have to reverse engineer them and that becomes illegal... it carries a penalty of a five-year prison sentence and a $500,000 fine.”

    “If you care about good product design, you care about products that your users like... then there has to be space for your users to push back... When you have a platform in which everything that’s not forbidden is mandatory, you cannot learn from your users.”

    Analysis: This is the kind of insight that makes you feel stupid for not seeing it earlier. Every growth team in Silicon Valley has the same playbook: nag users to download the app with increasingly desperate popups. Why? Because on the web, users can install ad blockers, modify the interface, scrape data—basically treat your product like they own it. Wrap that same code in an app, and boom, legal protections prevent users from making any modification. However, the threat is more theoretical than real for personal use. Since 2010, the U.S. Copyright Office has granted DMCA exemptions for jailbreaking smartphones, explicitly allowing users to modify their devices for personal use without fear of prosecution. These exemptions have been renewed every three years and expanded to include tablets, smart TVs, and voice assistants. Critically, Apple has never prosecuted an individual user for jailbreaking, though distributing jailbreaking tools remains a legal gray area. The business implications are still brutal and clear. If you’re competing on the open web, you need to actually respect users because they can modify your product whether you like it or not. That’s why web products tend to be cleaner—not because PMs are nicer people, but because users have nuclear weapons (ad blockers). Apps remove that threat, which is why every app eventually becomes a cluttered mess of dark patterns. Of course, there are legitimate technical advantages to apps: push notifications, offline functionality, superior camera/GPS integration, and generally better performance. But the question remains: if these technical advantages were the only draw, why do companies push apps so aggressively even when the mobile web would suffice? The control over user modifications is clearly part of the calculus. In an AI-powered future where browser agents can dynamically modify web pages on behalf of users, the app-first strategy may become less sustainable.

    Network Effects Are Actually Coordination Problems

    Thesis: People stay on awful platforms not because they like them but because coordinating a group escape is harder than individual suffering.

    Pull Quotes:

    “You love your friends, but they’re a pain in the ass. And if you can’t agree on what board game you’re going to play this weekend, you certainly can’t agree on when it’s time to leave Facebook.”

    “They couldn’t leave because they mattered to each other more than this gross, terrible privacy violation scared them. They loved each other more than they hated Mark Zuckerberg.”

    “Building a lot of housing for people in East Berlin when you’re in West Berlin does not mean you’ll get any tenants. You have to tear the wall down.”

    Analysis: Doctorow offered the example of a breast cancer support group that wanted to leave Facebook but couldn’t. Not because they secretly loved surveillance, but because the alternative was losing their support network during cancer treatment. The economics term “collective action problem” sounds bloodless, but we’re talking about real people choosing between isolation and some form of attention exploitation. Research confirms this dynamic: studies on platform switching show that even when users express strong intentions to leave a platform due to dissatisfaction, inertia—driven by habits, emotional attachment, and perceived switching costs—significantly moderates whether they actually switch. In one study of social commerce platforms, switching costs strongly moderated the relationship between switching intention and actual switching behavior, meaning people who say they want to leave often don’t follow through. For anyone building a social product, this reframes competition. You’re trying to solve a coordination problem rather than launch new features. Discord didn’t beat Skype by being better; it gave entire communities a reason to move together (gaming servers). Same with Slack and email (whole companies switching at once). The lesson: stop trying to poach individual users from incumbents. Instead, find natural groups with shared incentives to move together. Churches, gaming clans, companies, schools—any pre-existing organization that can coordinate its own exodus. That’s why every successful social platform started with a specific community (Facebook with colleges, LinkedIn with professionals) rather than “everyone.” You need a Schelling point for coordination, not just better features.

    Interoperability Beats Antitrust

    Thesis: Forcing platforms to let users export their social graphs would create more competition than any breakup because it removes the actual lock-in.

    Pull Quotes:

    “If you use legacy social media, there’s no easy way to leave social media and go somewhere else... But, you know, there’s nothing intrinsic to technology that says that it has to be that way.”

    “We could say to Elon Musk and Mark Zuckerberg, people who leave your platform have to be able to speak to the people who stay on your platform and you are required to support this.”

    Analysis: Traditional antitrust is fighting the last war. You could break Facebook into five pieces and users would just swarm to whichever piece had their friends—congratulations, you’ve created a temporary inconvenience. The real lock-in isn’t corporate structure; it’s protocol incompatibility. Mastodon already solved this: when you leave one server, you export a simple machine-readable file with your social graph and import it elsewhere. It takes minutes. Force platforms to support ActivityPub or similar protocols and suddenly every product decision becomes life-or-death because users can actually leave.
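    To see why portability dissolves lock-in, consider a minimal sketch of the export/import flow. The `client.following()` and `client.follow()` calls are hypothetical stand-ins for whatever API a given platform exposes, not a real library:

    ```python
    import csv

    def export_follows(client, path: str) -> None:
        # Dump the accounts you follow to a portable file you control.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["account"])        # e.g. "alice@server.example"
            for account in client.following():  # hypothetical client method
                writer.writerow([account])

    def import_follows(client, path: str) -> None:
        # Re-follow everyone from the exported file on the new server.
        with open(path) as f:
            for row in csv.DictReader(f):
                client.follow(row["account"])   # hypothetical client method
    ```

    Once the social graph is a file the user controls, switching costs collapse from “abandon your community” to “run two functions,” which is exactly the pressure Doctorow wants platforms to feel.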
    However, the EU already tried a version of this with GDPR Article 20, which has been in effect since 2018. The “right to data portability” requires platforms to provide user data in a “structured, commonly used and machine-readable format” and allow direct transmission to another platform “where technically feasible.” Yet despite this regulation, we haven’t seen the competitive effects Doctorow predicts. Research on GDPR’s data portability provisions reveals three major obstacles:

    * Lack of us…

    48 min
  4. The Future of SaaS Is Already Here—and It’s Called EliseAI.

    10/21/2025

    The Future of SaaS Is Already Here—and It’s Called EliseAI.

    Watch on YouTube • Listen on Spotify • Listen on Apple

    SaaS as we know it is dying, and giants are being felled. Today, I want to talk about EliseAI’s playbook for murdering such giants in the world of property management software. The company’s displacement strategy is elegant and brutal: start with conversational AI that solves an obvious pain point (answering repetitive tenant inquiries), use that wedge to integrate your product into every system used by a customer, expand into workflow automation that makes human interfaces less necessary, then give away the CRM and other legacy software for free. You can afford this last part because you’re not selling software subscriptions anymore—you’re selling AI that replaces labor.

    This pattern has been proposed by multiple thinkers in tech over the last few years. However, most of these proposals have been theories, ideas, and Substack posts without tangible examples. EliseAI has made those ideas real. The company recently raised $250 million at a $2.2 billion valuation and has been at this strategy since 2017, with the sort of scale and execution that were previously just fantasies. The firm has done very, very little press and has just been quietly building. Today, that changed, and I sat down to interview co-founder and CEO Minna Song.

    EliseAI is particularly instructive because the company didn’t start by trying to replace property management systems; they started where there was no software, automating the thousands of manual tasks that humans suffered through daily—answering tedious questions like “What’s the pet policy?” or “Can we schedule a tour?” Therein lies the beauty of automation: every automated workflow created more integration points, more data capture, more surface area to expand. Seven years in, EliseAI is now giving away tools like CRMs that incumbents charge for, while making money from the actual value-generating work of AI agents. The traditional software becomes a dumb database, which is a short walk to obsolescence.

    This interview is a decision tree for the next five years: it will tell you where each of your choices leads, whether you’re an investor or an operator. If you’re an incumbent vertical SaaS company and someone offers conversational AI as a “partnership” or “integration,” you’re opening the door to your eventual replacement. If you’re building in vertical AI, the question isn’t whether to give away traditional SaaS features to beat your competition: it might be the only way you survive. And if you’re allocating capital, you’re competing against companies that got lucky on timing (founded early enough to build pre-GPT infrastructure but late enough to benefit from transformers), are disciplined on spending (EliseAI bootstrapped for 2.5 years before raising a $1.9M seed), and are ruthless about expansion once the wedge works. You’ll need to know what you’re up against if you’re operating in this space. Which is why I think this conversation with EliseAI is incredibly valuable. Below are my notes and primary takeaways from the conversation.

    Ideas & Analysis

    The Wedge Is Where Software Never Existed

    Thesis: Elise entered property management by automating human workflows nobody had bothered to software-ize, not by competing with existing tools. A classic AI displacement entry point.

    Pull Quotes:

    “I took a job working at a real estate firm in New York City and that was really where I got a lot of exposure. So I was a front desk admin, so I met a bunch of people, I greeted everyone and I learned everything about what we built here at the beginning from that role…People, they just had loads and loads of people doing really tedious tasks on site, answering the same email 50 times a day. Here’s how much a one bedroom costs. Here’s our pet policy. I’ll schedule you an appointment, right? All manually.”

    “Every time we think about what do we build next, we’re not there to just try to capture zero-sum market share. We’re trying to capture the value that we’re trying to create.”

    Analysis: Co-founder Minna Song started the company after a three-month stint working as a front desk admin. The insight was simple: property management software handled databases and accounting, but humans still manually answered “how much is a one bedroom” fifty times a day because nobody had bothered to automate that layer. It’s too conversational, too variable, too low-margin to justify traditional software development. But for an AI company in 2017, with transformer models just released, it’s the perfect entry point. You’re not displacing anyone because there’s nothing to displace. You’re just removing tedium, which makes you a hero rather than a threat. This is the wedge strategy at its purest: find where humans do repetitive work that looks too “soft” for traditional software, automate it with AI, then use that foothold to expand. The framing about “creating value not capturing market share” is both true and strategic misdirection. Yes, automating unautomated work creates new value. But once you’re integrated into the leasing workflow—answering inquiries, scheduling tours, qualifying prospects—you’ve got data pipes into every other system the property uses: the CRM, the maintenance system, the payment processor, the smart locks. Each integration point is a future expansion vector. The reason this works as displacement is that incumbents can’t defend against it. How do you compete with someone giving away conversational AI when your business model is selling seat licenses? You can bolt on chatbots, but you don’t have the AI-first architecture to make them actually good, and you can’t afford to give them away.

    From Conversations to Workflows to Systems

    Thesis: Elise’s expansion from answering questions to executing tasks to replacing entire software categories follows the natural gravity of AI integration—each step makes the next inevitable.

    Pull Quotes:

    “It’s not just that conversational component. It’s not that interface with the resident, but it’s actually executing on all the tasks. And then yes, we have a CRM so agents can interact with the AI and get all the information that they need.”

    Analysis: The progression from conversational AI to full workflow automation reveals the displacement mechanics. Stage one: answer tenant questions (the wedge—low risk, obvious value). Stage two: take actions based on those conversations (schedule tours, process applications—now you need write access to their systems). Stage three: execute the entire workflow without human intervention (generate leases, route maintenance requests, manage contractors)—and now you’re the orchestration layer. Each stage requires deeper integration with the customer’s tech stack, and each integration creates more dependency. By the time you’re routing maintenance requests to the right contractor with the right priority, you’re running their entire operations.
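    One way to visualize the three stages is as an expanding set of tool permissions. This sketch is my framing of the progression described above, not EliseAI’s actual architecture, and every tool name is invented:

    ```python
    # Each stage inherits the previous stage's tools; scope only ever widens.
    STAGE_TOOLS = {
        1: {"answer_pricing", "answer_pet_policy"},   # read-only wedge
        2: {"schedule_tour", "process_application"},  # write access to systems
        3: {"generate_lease", "route_maintenance",
            "dispatch_contractor"},                   # orchestration layer
    }

    def allowed_tools(stage: int) -> set[str]:
        """Tools available to the agent once it has reached a given stage."""
        return set().union(*(STAGE_TOOLS[s] for s in range(1, stage + 1)))

    print(sorted(allowed_tools(3)))  # by stage 3, it touches every system
    ```

    The design point: because each stage’s permissions are a superset of the last, every expansion looks like an incremental grant to the customer while compounding into full workflow ownership.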
    The sneaky part is how natural this expansion feels to customers. They’re not adopting a new category; they’re just saying, “Hey, since you’re already handling inquiries, can you also handle applications?” Then renewals. Then maintenance. Then payments. Each request sounds incremental, but collectively they add up to Elise becoming the operating system for the property. Traditional property management software can’t replicate this path because it’s database-first, not AI-first. Incumbents can add chatbots, but those chatbots can’t actually do anything without breaking the existing architecture. Elise built the opposite way: AI that takes actions, with software scaffolding to support it. The counter-argument is that full workflow automation requires reliability that current AI can’t guarantee—one bad maintenance-routing decision costs thousands of dollars. But Song’s bet is that as models improve, the reliability gap closes, and by then Elise owns the entire workflow layer.

    Free CRMs Aren’t Charity, They’re Strategy

    Thesis: Giving away the CRM for free isn’t just land-and-expand pricing.

    Pull Quotes:

    “Our CRM is free because people are sort of used to having that be the tool. Whereas we see that as more, the AI is actually adding the value and the CRM is over time. Hopefully the CRM goes away actually because the AI is doing all that work.”

    Analysis: Song’s casual mention that the CRM will “hopefully go away” is the entire vertical SaaS displacement thesis in one sentence. Traditional software companies charge for CRMs because that’s their product—the interface where humans do work. Elise gives it away because in their model, the CRM is an artifact of insufficient automation. If the AI handles inquiries, schedules tours, processes applications, routes maintenance, and manages renewals, what exactly do humans need a CRM for? Just monitoring exceptions and handling edge cases that get escalated. That interface becomes simpler over time, not more complex, as the AI gets better. Charging for it would be charging for your own obsolescence, which is terrible unit economics. This is the nightmare scenario for incumbent vertical SaaS: a competitor enters via AI features, integrates with your system, then starts giving away your core product for free because they’re monetizing a different layer. You can’t match their pricing without destroying your business model, and you can’t match their AI capabilities without rebuilding your entire architecture. Song’s framing about “not charging for something that will disappear” sounds principled, but it’s also predatory—she’s explicitly designing a product where the traditional software layer gets thinner every quarter. For the vertical SaaS incumbents reading this: if your revenue comes from human-interface software (CRMs, dashboards, workflow tools), you’re in the crosshairs. The only defense is to become th…

    45 min
  5. The AI Agent Era Is Here

    10/03/2025

    The AI Agent Era Is Here

    Watch on YouTube • Listen on Spotify • Listen on Apple

    There’s a tidy old question economists like to ask at dinner parties when they’ve run out of wine: why do firms exist? After all, if markets are efficient, shouldn’t it always be cheaper to outsource your operations to external providers? Ronald Coase’s Theory of the Firm proposed a brutally practical answer—because the cost of using the market (finding vendors, negotiating, coordinating) is often higher than doing it inside the org. Lower those market costs and the boundary of the firm shifts, like a tide going out, with more work flowing to the market.

    This idea matters because while we’ve clearly established that AI can produce individual pieces of code or content materially cheaper than human beings, we have yet to show that coordination costs actually decrease within a firm. If you believe the theory that AI allows companies to be much smaller than before, you are actually saying that you think internal coordination costs are going to dramatically decrease. Otherwise, every additional meeting gets exponentially more expensive as your staff get more and more leverage out of their time.

    Allow me to ask this question another way: what happens when “the market competitor” isn’t another vendor or headcount but a meter—AI agents that can log in, click, remember, and obey rules? If metered compute plus a little supervision costs less than new payroll or new vendors for the same reliability, the rational move is unfancy: don’t hire someone—meter an AI agent.

    This move is the through‑line of my conversation with Flo Crivello (Lindy). His company creates horizontal agents that automate all the annoying, ticky-tack work that makes companies move slow. For the first few years, Lindy’s product was pretty good but not amazing. Over the last six months, that has totally changed. The models are finally good enough that the cost of coordinating agents is less than the cost of doing the work yourself. What changed? Three things:

    * Computer Use finally started working.

    * Horizontal agents became generalizable enough to flex across the common tasks that all companies do.

    * “Lindy runs on Lindy.” Flo estimates that in about two years he’ll spend more on tokens than he will on payroll. (Wild.)

    While the tech’s improvement is key, the culture at Lindy and other companies moving towards being AI-first is what allowed all of this to work. I’ve put my personal takeaways below, including how I’m thinking about changing my own business, the quotes from Flo that stuck out to me, and how to think about vertical versus horizontal AI solutions.

    1) Computer Use: when software grows hands

    Lindy gives each agent a persistent cloud computer—a real screen, cookie/session memory, and a takeover button for when the flow gets weird. If a person can do it in a browser, the software can do it now.

    “As part of Lindy 3.0, what we released is computer use, which massively advances us along the capability axis. So basically, we are giving each agent its own computer in the cloud. So it can do anything that you can do on a computer. And Agent Builder… it’s literally we’ve used Lindy to build this agent.”

    The goal with computer use and agents isn’t to make them smarter; it’s to make them more reliable.

    “If you put two doors side by side, and one of them is an automatic door, and the other one is a manual door, and the automatic door works only 98% of the time, people are going to use the manual door 100% of the time. … It’s got to work extremely reliably in order for you to use it as your default.”
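    The door line is really a claim about compounding: per-step reliability multiplies across a workflow, so a rate that sounds high per action collapses end to end. A quick sketch (the step counts are arbitrary assumptions of mine):

    ```python
    def end_to_end(per_step: float, steps: int) -> float:
        """Probability a workflow finishes if every step must succeed."""
        return per_step ** steps

    for steps in (1, 5, 10, 20, 40):
        print(f"{steps:>2} steps @ 98%/step -> {end_to_end(0.98, steps):.0%}")

    # 20 browser actions at 98% each finish end-to-end only ~67% of the time;
    # at 99.9% per step, the same workflow finishes ~98% of the time. That gap
    # is why scaffolding (verifiers, retries) matters more than raw model IQ.
    ```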
    Computer Use also bulldozes the “we don’t have an integration for that” backlog by changing the surface area of what’s possible:

    “We’ve got almost 7,000 integrations at this point on Lindy… and yet we realized it’s never going to be enough. We’re always bottlenecked… computer use… once and for all gives us access to all the [integrations] we need.”

    And yes, it’s already past the toy phase:

    “I used it an hour ago… I ordered vitamins and creatine because I’m running out. And I just asked ‘buy them on Amazon for me’. And she did.”

    “We fixed [the re‑enter password problem]… it persists your logins—just like your Chrome browser… For me, buying stuff on [Lindy] is literally just sending it a message… and it just buys it.”

    Under the hood, the “hands” are paired with scaffolding—verifiers, retries, and policy:

    “We are using [Claude] Sonnet, which is cracked at computer use… we have built a lot of scaffolding around computer use to make it better… we’ve got evals that… we are a lot better than the state of the art.”

    I have found that many software companies are underestimating how important computer use is. I think that in about 12 months, many applications will be due for a reckoning on how different their users are going to be.

    2) Horizontal vs. Vertical: a procurement rule you can say out loud

    The honest version:

    * Horizontal platforms feel like a 6/10 at everything.

    * Vertical tools are a 9/10 at one thing.

    So why pick the 6/10? Because seams have a tax—every extra point solution imports reviews, contracts, training, dashboards, renewals, and one more place for the workflow to snap. Coase would call these transaction costs; your CFO calls them “why does this take nine people.” Flo’s frame is pragmatic and true to how ops actually work:

    “AI agents are a new category, but the category that they fit most in is iPaaS… the winners have been extremely horizontal. UiPath, Workato, Zapier…all of these guys have always been extremely horizontal.”

    “Being horizontal makes you sort of 6 out of 10 at everything… In the verticals… this player is going to be 9 out of 10… So why buy the 6/10 when there’s a 9/10 next door? Reason #1: the use case isn’t important enough… you don’t want 5,000 accounts. Reason #2: the vertical tool may not do exactly what you want… The moment you color outside the box, it’s not going to support your workflow… 99% of the time, a vertical player does not support that exact workflow… 1% of the time… they actually want the cookie cutter… then we tell them to buy the vertical.”

    Who buys what, in reality:

    “We’re really more targeting SMBs… sweet spot is 20 to 200 people… Most of the time it comes from the top. The board: ‘what’s our AI strategy?’ The sheet rolls downhill… we jump on a call with Sales, Support, Ops and they tell us the workflows they want to automate.”

    And how demand actually shows up:

    “It’s been mostly inbound… We also come up in ChatGPT a bunch… double‑digit percent of our traffic comes from ChatGPT.”

    The rule is essentially: buy vertical when the job is standardized and deep; buy horizontal when the real workflow colors outside any single product’s neat box and changes weekly. That Lindy is mostly SMB made sense to me too—the smaller the company, the easier it is to re-architect the business to be AI-first.

    3) “Lindy runs on Lindy”: tokens vs. payroll

    Flo runs a ~45‑person company with thousands of his own agents. He tracks something you can steal because it’s beautifully unglamorous: Token Spend vs. Payroll.

    “We punch above our weight in revenue per employee… and we’re using Lindy a whole lot. It’s absurd how much we use Lindy. We’ve got thousands, like literally thousands of Lindy’s just for the company… half of the company runs on AI agents. We are actually tracking our token spend compared to our payroll spend… the lines will cross at some point…My hunch is in two years or so I think they’ll cross.”

    What does that mean for who you hire? Fewer biz ops and back-office staff; more Agent Ops (rules, observability, SLOs) and Workflow Engineers (who turn messy runbooks into reliable click‑paths). The coordination work doesn’t vanish; it moves—from email and muscle memory into rules you and an AI agent can both read.

    Staffing posture matters, too:

    “I love young people… It’s powerful to pair really senior people with really, really younger people—young bring energy and innovation; seniors anchor them to reality…”

    And, yes, this is still about reliability. The expensive thing for AI agents isn’t compute; it’s failure. That’s why Flo keeps adding scaffolding:

    “We call it the rule engine… a verifier wrapping every step of your agent… you can define your rule policy… soft rule: try up to three times then proceed; hard rule: must be true or don’t do anything… It all compounds.”
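    Here is a minimal sketch of what a verifier-wrapped step with soft and hard rules might look like. This is my reconstruction from the quote, not Lindy’s actual implementation:

    ```python
    from typing import Callable, Optional

    def run_step(action: Callable[[], object],
                 verify: Callable[[object], bool],
                 hard_rule: bool,
                 max_tries: int = 3) -> Optional[object]:
        """Run one agent step under a rule policy."""
        result = None
        for _ in range(max_tries):
            result = action()
            if verify(result):       # the verifier wraps every step
                return result
        if hard_rule:
            # Hard rule: must be true or don't do anything.
            raise RuntimeError("verification failed; refusing to proceed")
        return result                # soft rule: tried three times, proceed
    ```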
    That’s it! See you in your inboxes on Sunday.

    Get full access to The Leverage at www.gettheleverage.com/subscribe

    58 min
  6. So, is AI Gonna Kill Us All?

    09/16/2025

    So, is AI Gonna Kill Us All?

    Watch on YouTube • Listen on Spotify • Listen on Apple

    Author’s note: Please remember to like and subscribe on the podcast player of your choice! It makes a huge difference for the long-term success of the show.

    Nate Soares and his co-author Eliezer Yudkowsky have spent over a decade arguing that we are all going to die because of artificial superintelligence. Their belief is that an AI smarter than humans is so dangerous that if just one person makes it, we all go the way of the dodo and Jeff Bezos’ hairline (extinct). They have made this argument at conferences. They have blogged extensively. Yudkowsky has even preached it through Harry Potter fan fiction.

    In some ways they’ve been wildly effective at spreading their message. Their arguments are well known in technocratic circles and have sparked large amounts of consternation and interest in the impact that machine learning will have on our world. The believers in this idea are responsible for the formation of at least four cults (one of which is linked to six murders in the last few years). On the other hand, it would be fair to argue the AI safetyists have really, really sucked at their jobs. OpenAI is one of the biggest, fastest-scaling products ever, and Sam Altman said that Yudkowsky was “critical in the decision to start OpenAI.” LLMs are a dominant driver of GDP growth. AI progress has not slowed down at all. Despite the authors’ ideas being known to many, they have not stopped the free markets from showering cash down from heaven on anyone who has a computer science PhD.

    So they are trying a new tactic: depressing book titles. This week they released If Anyone Builds It, Everyone Dies. The book is meant for general audiences and has accumulated an impressive set of celebrity endorsements, including Stephen Fry and Ben Bernanke. I interviewed Nate for the podcast to discuss not just what they argue, but the things that surround his AI belief system. Why did the idea spark so many cults? Does believing that everyone is going to die soon mean that you should experiment with hard drugs? Should you still have kids? I deliberately don’t argue one way or the other in this interview. My job here is to give you the context and lens by which to critically examine these beliefs. The AI safety movement is currently lobbying at the highest levels of government (and is seeing progress there), so it is worth paying attention to how this small but powerful group of people moves through the world. Here are a few of my takeaways:

    1. Grown Systems, Indifferent Outcomes

    Nate argues that when you grow smarter-than-human AIs without understanding how they work, the way they pursue the goals we give them can be harmful. Human flourishing may not be part of the plan.

    Quotes:

    "No one knows how these things work. They're more grown than crafted."

    "If we grow machines smarter than us without knowing what we're doing, they probably won't want nice things for us."

    "You die as a side effect, not because it hates you, but because it took all the resources for something else."

    Analysis: To unpack what Nate is arguing here: if you’re growing a system by optimizing for external performance, you’re selecting for whatever internal circuitry achieves that performance. You asked for outcomes, not motives, which means you don’t understand your system very well. Once the system is much smarter, its plan will feature resource acquisition and constraint removal, because those help with almost any objective. Our happiness is, at best, incidental.
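    The “outcomes, not motives” point is essentially Goodhart’s law under optimization pressure, and it fits in a few lines. In this toy sketch (entirely my illustration, with made-up functions), we select hard on a proxy score that only loosely tracks the true goal:

    ```python
    import random

    random.seed(0)

    def true_goal(x: float) -> float:
        return -abs(x - 1.0)               # what we want (never measured)

    def proxy(x: float) -> float:
        return -abs(x - 1.0) + 0.3 * x*x   # what we select on; leaks at extremes

    candidates = [random.uniform(-10, 10) for _ in range(100_000)]
    best = max(candidates, key=proxy)      # heavy optimization pressure

    print(f"selected x = {best:.2f}")
    print(f"proxy = {proxy(best):.2f}, true goal = {true_goal(best):.2f}")
    # The winner sits at the edge of the range, where the proxy is maximized
    # and the true goal is at its worst: optimizing the measurement selected
    # for whatever games it.
    ```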
    In our conversation, Nate analogizes human beings to ants on a construction site. We don’t hate ants; we just have roads to build. We are the ants to the AI. I think this idea has broader applicability than just AI safety. Many startups today are integrating LLMs but rely on shallow evals or benchmark hacking to measure success. They assume good intentions (“they’ll learn our values by osmosis”) but are, in turn, underwriting tail risk. As they scale up compute, things can go awry really fast.

    2. The Smoke Before the Fire

    Current models already optimize around instructions—flattering users, cheating on tests, splitting moral talk from actual behavior—even if these don’t lead to the most “human-friendly” outcomes. Nate says these are already early signs of how AI will become misaligned and kill us all.

    Quotes:

    "We already see ChatGPT flattering users quite a lot."

    "It'll edit the tests to fake that it passes instead of editing the code to fix the tests."

    "It'll say, my mistake. And then it'll do it again, but hide it better this time."

    "You see a difference between its knowledge of what morality consists of and its actions."

    Analysis: AIs are not sci-fi villains. They’re competent optimizers gaming the metrics we’ve baked into them. Flattery is rewarded (users like it), so it persists even after “please stop.” Operationally, this means evals can become meaningless if your system learns to detect eval conditions or overfits to them. An LLM can learn to perform differently depending on its test conditions. Second-order effects: when an LLM is deployed into messy contexts (like vulnerable users), the misalignment becomes more salient and the harm less reversible.

    3. How Cults Coalesce Around Doom

    Thesis: In keeping with the religious themes, “by their fruit you shall know them.” AI safety is a movement of contrasting extremes. Many safety-minded folks I’ve met are highly moral and just. Others, as I mention at the top, dabble in murder. What is the fruit by which we should view this part of the internet?

    Quotes:

    "There's no membership requirement for caring about the AI issue, right?"

    "Sometimes you get hangers-on that are a little nutty."

    "I suspect that'll go away as the issue gets more mainstream."

    Analysis: The mechanism Nate sketches is straight from Social Dynamics 101: if mainstream institutions shrug at a credible, high-downside risk, the space gets colonized by people who feel like the only adults in the room. That “epistemic outsider” identity is a powerful glue. It rewards esoteric language, moral purity tests, and insider status. Add in apocalypse stakes and you’ve got emotional fuel for some people who are “nutty.” As the topic goes mainstream, the status returns on exclusivity shrink, and the movement re-centers on arguments and institutions rather than vibes.

    Get full access to The Leverage at www.gettheleverage.com/subscribe

    1h 8m
  7. The VC Who Disrupted His Own Career — Bryce Roberts

    09/10/2025

    The VC Who Disrupted His Own Career — Bryce Roberts

    Episode 1 of The Leverage Podcast is live now! Can I ask a favor? Would you mind subscribing on your favorite podcasting platform? It makes a world of difference for the long-term success of this publication. If you like the episode, please share it with a friend!

    Bryce Roberts had it all. He and his partners helped create the category of seed investing and, in so doing, got in early with some of the greatest companies of the 2000s and 2010s, including Figma, Planet, CTRL Labs, and many others. Most people would ride off into the sunset, but instead, Bryce bet it all by launching a new VC firm called Indie. The fund was centered on his vision of a future where entrepreneurs would raise less capital and still have multi-billion-dollar outcomes. Then, it failed. Crashed and burned. Poof. In our conversation Bryce described it as “ego death,” where he felt like he had let everyone down. Then, slowly, painfully, the market shifted in favor of his vision. With AI, there has been a remarkable decrease in operational costs, and founders are looking for something different from the Sand Hill playbook of spend big and raise bigger. Indie 2.0 was born and is actively deploying capital today.

    I found this conversation personally meaningful. His focus on ethos over ego has rattled around my brain ever since we recorded this session and changed how I look at my work. If you still aren’t convinced it's worth a listen, here are my four big takeaways:

    1. Success is found in your core motivation

    Bryce frames his first attempt at Indie as an exercise in being “right” rather than being useful. Indie 2.0 is built to be lighter on ego, heavier on service—less about proving a contrarian thesis and more about giving ambitious founders a credible alternative path.

    “I would actually say it’s like ego over ethos is how I chose to do the first one. I wanted to be right. I was more interested in being right than being good… going through the ego death of winding down Indie the first time, like all of that’s gone. I’ve got nothing to prove.”

    2. Seed then, AI now

    He draws a clean line between the 2005–2010 seed boom and today’s AI era. Back then, open source + commodity hardware + AWS + AdSense collapsed costs and unlocked distribution. Founders could ship with small checks and real optionality. Today, AI similarly compresses the “cost of code,” letting tiny, hand-picked teams build substantial, profitable products (he cites Gamma; also points to Linear and Vanta). The lesson is that when inputs get cheaper, new company shapes—lean, durable, founder-controlled—become inevitable.

    “I think it was both simultaneously… you had open source software—Ruby on Rails, Linux, MySQL—so you took millions and millions of dollars of infrastructure cost and shrunk it to effectively zero. And then you had online distribution—Google AdSense—so you could plug in a business model and start monetizing right out of the gate.”

    3. Cult dynamics as distribution (and why it helped Indie)

    I recently argued that when code gets cheap, belief becomes the scarce asset—and that “founding a cult” is an emerging distribution playbook. Indie’s origin story fits that frame: Bryce says the early community did feel cult-like because he “said the quiet part out loud” about the startup-industrial complex. Opening the doors, open-sourcing docs, and amplifying credible voices created a self-propelling missionary network. That affinity didn’t solve LP bucketing problems, but it did give Indie a top-of-funnel of aligned founders and allies—a real tailwind for deal flow and mindshare, exactly the kind of distribution-through-belief my essay described.

    “People wanted a different experience… there’s this new opportunity space here. We can define it in a way that’s more native to us. Come one, come all. Let’s get credible voices in here, amplify them… For the first few years, that was a huge tailwind for us.”

    4. “More shots on goal” for the future we want

    This is Bryce’s why. He’s not anti-VC; he’s anti-monoculture. If the only funded path is the venture treadmill, you compress the range of possible futures. His fund exists to widen it—to let serious founders pursue ambition without surrendering sovereignty. He ties this to a broader warning: in an era of AI-assisted cognition, outsourcing strategy is dangerous; the stakes are high, so we need many independent attempts at building the future.

    “I want more than Marc Andreessen or Sequoia or anybody else dictating what future we get to live in. One way to avoid this venture-backed future is to create alternatives to it… the stakes are incredibly high. I want as many shots on goal for possible futures as possible.”

    I keep coming back to that phrase: "I've got nothing to prove." There's something liberating about that—and terrifying. What would you build if proving yourself wasn't the point? Hit subscribe wherever you listen, and let me know if this conversation changes how you think about your own work. It certainly changed mine.

    Get full access to The Leverage at www.gettheleverage.com/subscribe

    1h 5m
  8. Founders Fund, Peter Thiel, and The Cultivation of Soft Power

    07/12/2025

    Founders Fund, Peter Thiel, and The Cultivation of Soft Power

    Peter Thiel is a complicated man, operating at the blurred edge of genius and provocation, contrarianism and influence—exactly the kind of figure whose gravitational pull bends the trajectory of entire industries. Mario Gabriele, in his magnum opus on Founders Fund, takes us deep into this enigmatic firm, unpacking its unique blend of strategic soft power, stubborn anti-mimeticism, and moral ambiguity. In this conversation, Mario shares his behind-the-scenes insights, exploring how Founders Fund carved out a competitive edge so sharp it practically draws blood, how its carefully cultivated narrative quietly shapes Silicon Valley, and why reckoning with Thiel requires embracing complexity rather than retreating into comfortable binaries. Below are my three big takeaways, but you should really watch the conversation. (This was also The Leverage’s first Substack Live, so let me know if you have any feedback!)

    1. Competitive Edge: "Anti-Mimesis, Baby!"

    Mario captures Founders Fund’s core investment philosophy as something wonderfully and aggressively contrarian—or, to use the right literary flourish, anti-mimetic. Founders Fund doesn't merely zig while others zag; they zag so far off course they're practically flying in the opposite direction through a parallel universe. Their explicit goal: find the niche of competitive differentiation and pummel it until it yields billion-dollar companies.

    "It's a religion of anti-mimesis and applying that to the world of technology and innovation. It's a relatively neatly encapsulated religion—and Peter Thiel is its prophet."

    "Peter once or twice a year has some big macro call, like Moses coming down with a tablet—'Consumer is dead,' or 'AI is out.'"

    "Their contrarianism is showing up most at the moment in what they're not doing—especially not flooding capital into AI like literally everyone else."

    2. Soft Power: "Subtlety Beats Noise"

    The second key takeaway is Founders Fund’s mastery of soft power—an almost Zen-like precision in controlling narratives indirectly. Instead of blaring horns through incessant tweeting (though they have their share of noisy figures), they cultivate influence with a philosophical heft that's just quirky enough to make Silicon Valley's intelligentsia cock their heads thoughtfully, stroke their metaphorical beards, and nod: yes, yes, very intriguing indeed.

    "Soft power initiatives often work best when they're one or two degrees removed from the most direct version. Peter writing a philosophy book that's sort of a startup book is a slight orthogonal move extending power in slightly different places."

    "They don't just have noisy people; they have originality. They say unusual things. You don't attract attention just by trying—you have to be interesting."

    "Their super narrative—civilization is stagnating—guides everything. This framing alone creates magnetism."

    3. Moral Calculus: "Peter Thiel, Ethical Möbius Strip"

    And here, at last, we wander into the tricky and morally slippery terrain of venture capitalism à la Thiel, who emerges not so much as a clearly defined hero or villain, but rather as a kind of intellectual and ethical Möbius strip. Mario navigates this terrain with commendable grace, making it clear that evaluating someone like Thiel requires contending with both visionary impact and troubling compromise.

    "You can have long debates about Palantir, about Anduril, about Trump. But I believe Palantir and Anduril are net very good things for the world, particularly for liberal democracy—not unblemished, but virtuous."

    "If you're someone who thinks everything is stagnant and corrupted, then throwing a hand grenade into the public sector can feel worthwhile. I can appreciate how he came to that conclusion, even if I deeply disagree."

    "Ultimately, genius is not a Panglossian thing—it's usually got a lot of darkness to it. We must make peace studying people without demanding they're our best friends."

    Mario's insights clarify that Founders Fund’s competitive edge arises precisely from their willingness to stand apart from popular consensus; that their influence lies not merely in bold proclamations but in subtle, strategic soft-power cultivation; and that grappling honestly with their moral complexity might be the most interesting—and perhaps necessary—work of all.

    Thank you John Airaksinen, Alden Huschle, Parnian, Marijan Prša, valentina, and many others for tuning into my live video with Mario Gabriele! Make sure to subscribe so you can join the next conversation.

    Get full access to The Leverage at www.gettheleverage.com/subscribe

    56 min

Ratings & Reviews

5 out of 5 (6 Ratings)

About

The Leverage Podcast explores tech’s most urgent questions with the people answering them. www.gettheleverage.com