Gabriel Weinberg's Substack

Gabriel Weinberg

Tech, policy, and business insights from the DuckDuckGo founder and co-author of Traction and Super Thinking. gabrielweinberg.com

Episodes

  1. JUL 8

    The debate over a potential economic bump from AI points to a much larger economic opportunity.

    The decade surrounding the Internet boom saw a marked increase in U.S. total factor productivity (TFP) growth (the economic efficiency gains beyond just adding more workers or capital), similar to the higher levels seen before 1973. From 2005 onward, however, it declined back to doldrums levels, which brings us to the present. Now, people are starting to debate a similar potential AI productivity bump unfolding over the next decade.

    However, this debate raises an even bigger question: How do we achieve sustained higher productivity growth forever, not just in occasional blips and bumps? I believe this is the most important economic question because higher productivity growth is the key factor that drives long-term higher standards of living. Here’s a concise, understandable explanation from the Bureau of Labor Statistics as to why:

    How can we achieve a higher standard of living? One way might simply be to work more, trading some free time for more income. Although working more will increase how much we can produce and purchase, are we better off? Not necessarily. Only if we increase our efficiency—by producing more goods and services without increasing the number of hours we work—can we be sure to increase our standard of living.

    To continuously produce more without increasing the number of hours worked per person, we need to continually develop better tools. To continually develop better tools, we need access to increasingly better science and technology. To get access to increasingly better science and technology, we need sustained higher investment in basic research. Think of how worker productivity increased in construction with the introduction of power tools and heavy machinery, or in offices with the introduction of computers and the Internet. We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. (The Great Race)

    That is, to ensure higher productivity growth, we must continually invest sufficiently in the next set of technologies that will generate and sustain these higher productivity levels. That requires increased investment in basic research. But why can’t private industry do it all?

    The private sector…is excellent at taking established scientific breakthroughs and turning them into products over a few years. That’s because there is a clear profit motive in doing so. However, it is not as great at coming up with those scientific breakthroughs in the first place or commercializing them on much longer timescales like decades, where the profit motive is significantly reduced. This activity still generally involves some government-funded research in the early stages. (The Great Race)

    Yes, this includes AI too. A good post on this by Mark Riedl, titled Visualizing the Influence of Federal Funding on the AI Boom, takes seven key papers in AI, including Attention Is All You Need (2017), and then traces which of their references explicitly acknowledge federal funding:

    * ~18% of papers referenced by these 7 industry papers have acknowledged US federal funding.
    * ~24% of papers referenced have US university authors.
    * ~20% of papers referenced are industry lab-authored.
    * ~42% of papers referenced do not have any industry authors.

    The AI boom did not happen in an industry vacuum. As with all research, it was an accumulation of knowledge, much of which was generated in university settings. There is a growing narrative that academia isn’t important to AI anymore and that US federal funding has no role in the AI boom. It’s more correct to say that the AI boom could not have happened without US federal funding.

    The same was true with the Internet boom. The same is also true in healthcare, for example, tracing federal funding through the most transformational drugs. And yet, as I previously explored, science funding was already way too low before recent cuts.
    Quite simply, since the 1960s we haven’t invested sufficiently in basic research to create sustained higher productivity. But we could change that. And, as I've also previously detailed, this should be a no-brainer since, if done right, it literally pays for itself by expanding the economy, generating higher tax revenues, and ultimately lessening the debt-to-GDP ratio.

    So while economists debate whether AI will boost productivity by 0.5% or 1.5% for the next decade, we should be asking: How can we better invest in basic research today to ensure we’re still growing faster in 2050? Short-term productivity bumps are a boon. But if we want sustained productivity gains, we need sustained productivity investment.

    Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    6 min
  2. JUN 26

    States should be allowed to regulate AI because realistically Congress won't

    A provision in the “Big Beautiful Bill” (the massive federal spending and policy package currently being negotiated) aims to stop all state-level AI regulation for the next ten years.

    Now, I’m not anti-AI. My company (DuckDuckGo) offers our own private chatbot service at Duck.ai, we now generate millions of anonymous AI-assisted answers daily for our private search engine, and we are working on more AI features in our browser. We’re investing heavily in AI because, to achieve our aim of protecting people online as comprehensively as possible, we must offer a compelling alternative to the most commonly used untrusted online workflows, most notably searching, browsing, and now chatting.

    At the same time, I believe the AI backlash is real and growing, which is why I’m thinking and writing about it, and why we’re designing all our AI features to be useful, private, and optional. And the backlash is real for good reason. AI poses a wide range of risks, including massive job displacement, extensive privacy concerns, and, at the extreme, existential risks.

    That’s why this “pause” is particularly dangerous. I’m not taking a position here on which risks should be regulated, when, or how. More on that in future posts. But I am saying that AI will require at least some well-crafted regulation to address some of its risks over the next ten years, and yet Congress has proven incapable of taking action. The states, on the other hand, do take action.

    Look no further than privacy law as a close parallel. It’s 2025, and the United States still lacks a comprehensive federal privacy law. The International Association of Privacy Professionals (IAPP) now tracks 144 countries with such laws (as of Jan 2025). The U.S. is a clear outlier: the most populous countries without a comprehensive national privacy law include the U.S., Pakistan, Bangladesh, Iran, and Iraq. This isn’t for lack of trying.
    Numerous bills have been proposed, and many hearings have been held, yet nothing has even come close to passing, not even after Snowden or Cambridge Analytica. Beyond privacy, Congress has proven unable to legislate effectively, and while we should work to fix that independently, we can’t wait for it.

    Meanwhile, IAPP tracks 19 states that have managed to pass general privacy laws to protect consumers to some extent, including the two most populous states, California and Texas. Despite fears that a “patchwork” of state laws would wreak havoc on innovation by going too far, they haven’t. Innovation hasn’t stalled, and neither have big-tech privacy violations. That’s because state privacy laws, while better than nothing in my opinion, don’t go nearly far enough, which is why we (DuckDuckGo) still need to develop dozens of overlapping protections to keep consumers safe online. Meta’s latest AI-chatbot leak foreshadows a bleak AI-privacy future if there are literally no regulations in sight.

    State laws also provide Congress with both a blueprint for action and further incentive to enact laws. Nothing prevents a future AI bill from overriding (preempting) state AI laws. Of course, for that to happen, Congress would need to pass general tech legislation. I would love to witness that and have been working to help make it happen, but I am also realistic about Congress’s current capacity to regulate tech.

    Finally, the current proposal would seemingly preempt the most protective provisions of existing state privacy laws. That would be a giant step backward for online privacy. We helped pioneer Global Privacy Control, an opt-out signal that is on by default in our browser and extension, which has legal effect in California and other jurisdictions.
    Senator Maria Cantwell, ranking member on the Senate Commerce Committee, notes that the bill would nullify provisions of many state privacy laws that “give consumers the right to opt-out of profiling.”

    In the last 25 years, states filled the privacy law vacuum left by Congress; let them do the same for AI. We should not silence states from protecting their citizens from dangerous new risks for a decade. And if Congress gets its act together, then great—those future bills can preempt any conflicting state provisions.

    5 min
  3. JUN 21

    Rule #2: There are always underlying assumptions.

    Every decision rests on assumptions. At minimum, you’re assuming the options you’re considering are actually all of your options. This set of options is, in turn, based on a set of assumed facts, which is, in turn, based on even more assumptions about how the world works.

    I’ve seen this pattern repeatedly: teams spend hours debating the details of Option A versus Option B, only to discover they never considered Option C—or that their entire framing of the problem was based on outdated or misaligned assumptions.

    Generally, in these situations, or for any significant decision, it’s worth taking some time to literally enumerate the relevant assumptions and why they should be relied upon in this particular situation. This process is almost always illuminating in some way. Often, it reveals that particular assumptions need further de-risking. From our book Super Thinking:

    You can de-risk anything: a policy idea, a vacation plan, a workout routine. When de-risking, you want to test assumptions quickly and easily. Take a vacation plan. Assumptions could be around cost (I can afford this vacation), satisfaction (I will enjoy this vacation), coordination (my relatives can join me on this vacation), etc. Here, de-risking is as easy as doing a few minutes of online research, reading reviews, and sending an email to your relatives.

    Or, in the context of a startup idea, also from our book:

    * My team can build our product—We have the right number and type of engineers; our engineers have the right expertise; our product can be built in a reasonable amount of time; etc.
    * People will want our product—Our product solves the problem we think it does; our product is simple enough to use; our product has the critical features needed for success; etc.
    * Our product will generate profit—We can charge more for our product than it costs to make and market it; we have good messaging to market our product; we can sell enough of our product to cover our fixed costs; etc.
    * We will be able to fend off competitors—We can protect our intellectual property; we are doing something that is difficult to copy; we can build a trusted brand; etc.
    * The market is large enough for a long-term business opportunity—There are enough people out there who will want to buy our product; the market for our product is growing rapidly; the bigger we get, the more profit we can make; etc.

    Enumerating assumptions can seem pedantic, but I've found it extremely helpful because it clarifies your argument, much like writing a blog post or explaining something to someone in real time. It helps ensure you’re drawing a logical conclusion from your assumptions.

    A great way to identify potentially shaky assumptions is to conduct a premortem. Here’s the template we use at DuckDuckGo for new project premortems:

    Premortem: Many projects fail to meet their success criteria, so take some time to be pessimistic and ask questions:

    * What are the key risks to this project, and how can they be mitigated?
    * What could slow down this project, and how can we prevent that from happening?
    * How might our actions in this project negatively affect user trust or be misunderstood?

    The goal is to uncover problems or blindspots and then decide how to address them up front, such as by starting with the most uncertain part. If a project is destined to fail, failing fast is a success.

    This principle extends beyond individual and team decisions to broader discourse. I'm constantly frustrated by political and policy debates where participants talk past each other—often because they're operating from different underlying assumptions or facts.
    If it is a good-faith debate, then there should be the opportunity to enumerate and drill down on those underlying assumptions and facts, which can then help clarify exactly where the disagreement is occurring. For example, it might turn on just one misaligned assumption or fact among many. If that’s the case, a determination can be made on what specific evidence would be compelling enough to achieve alignment.

    Rule #2—There are always underlying assumptions—is universal. You're always making assumptions. The key is examining them before they lead you astray. See other Rules.

    5 min
  4. JUN 15

    Issues with widespread bipartisan agreement that never go anywhere

    In a country currently known for deep political divisions, some issues still garner supermajority agreement, yet nothing gets done about them. To me, these issues simultaneously represent hope and despair:

    * Hope—yes, we can agree on some important things.
    * Despair—uh oh, our widespread agreement isn’t translating into actual change.

    In other words, the existence of issues with durable, supermajority agreement that nevertheless never materializes into change is a strong indicator that we need some significant structural reform. I have ideas for that, but first, let me try to convince you that these magical issues exist!

    1. Congressional Term Limits. From a 2023 Pew Research report: Term limits for members of Congress are widely popular with both Republicans and Republican-leaning independents (90%) and Democrats and Democratic leaners (86%). This isn’t new. A 2013 Gallup poll showed similar results, noting that those results were also “similar to those from 1994 to 1996 Gallup polls.”

    2. Legalizing Marijuana. From a 2023 Gallup poll, which has been polling this issue for decades: For the second straight year, majority support for legalization is found among all major subgroups, including by age, political party and ideology.

    3. Digital Privacy. From a 2023 Pew Research report: A majority of Democrats and Republicans say there should be more government regulation of how companies treat users’ personal information (78% vs. 68%). They also note that “[t]hese findings are largely on par with a 2019 Center survey that showed strong support for increased regulations across parties.”

    4. Limiting Money in Campaigns. From a 2024 American Promise poll: “A proposed constitutional amendment would allow Congress and the states to reasonably regulate and limit money in our campaigns and elections. Would you support or oppose this amendment?”

    5. Universal Background Checks. From a 2023 Fox News poll: 87% of Americans support requiring criminal background checks on all gun buyers, including 83% support from gun-owning households. This is a long-standing result; for example, this 2019 Quinnipiac poll: Voters support 93-6 percent "requiring background checks for all gun buyers." Support is 89-10 percent among Republicans and 87-12 percent among gun owners. Support for universal background checks has ranged from 88 to 97 percent in every Quinnipiac University poll since February 2013, in the wake of the Sandy Hook massacre.

    6. Negotiating Medicare Prescription Drug Prices. From a 2024 KFF poll: A large majority (85%) of voters say they support allowing the federal government to negotiate the price of some prescription drugs for people with Medicare. This includes at least three quarters of Republican (77%), independent (89%) and Democratic (92%) voters.

    7. Congressional Stock Trading Ban. From a 2023 Maryland School of Public Policy study: Overwhelming bipartisan majorities favor prohibiting stock-trading in individual companies by Members of Congress (86%; Republicans 87%, Democrats 88%, independents 81%).

    These issues aren’t anomalies. I stopped at seven, but there are many more issues like these, for instance, increasing veterans’ benefits (similarly supported by Pew/Gallup polling), and I’m sure there will be even more in the future. I recognize that some issues appear bipartisan at times, and then once a political party takes up the cause, they immediately become more partisan. Gun control, health care, and drugs, for example, have all been politicized in this manner. Yet, as noted above, supermajorities exist for specifically reasonable policies, including universal background checks, negotiating Medicare drug prices, and federal legalization of marijuana.
    Each issue has its own set of reasons for persisting stubbornly without change, typically a combination of special interests, regulatory capture, and general congressional dysfunction. I contend that, fundamentally, we, the people, need another path to change. (Incidentally, some original printed copies of the U.S. Constitution had a comma after “we.”)

    How can “we the people” change things? “We the people” leads off our Constitution, and yet the Constitution doesn’t provide a direct way for us to change it. I think it should. It would still have to be very hard to change the Constitution, but there should be a citizen path to do so, not just one that goes through elected officials.

    Sixteen states already allow for entirely citizen-initiated state-level constitutional amendments. Typically, a threshold of signatures (between 5-15% of the votes cast in a recent election) needs to be reached to get on the ballot, and then a direct vote happens where, if a threshold is met (50-60%), the amendment is ratified.

    I envision a similar process, but scaled up to the national level: something like a signature requirement in enough states triggers getting a binding resolution on a federal election ballot. Then, if a supermajority of citizens in enough states vote for it, it is automatically ratified without additional involvement from Congress or state legislatures.

    Many people have proposed similar things in the past, and I plan to examine them and put forward my thoughts on a more concrete proposal in the future. As in the past, I don’t expect anything to actually happen, but it at least helps clarify my thinking, and you never know—maybe we’re approaching a moment when something can happen.

    8 min
  5. JUN 8

    U.S. AI-labor protests could eventually resemble the French Yellow-Vest protests

    I do not yet have a well-formed opinion on the net impact of AI on job loss across different time scales. Will it be a large net negative, or will it be close to net neutral, similar to previous technology cycles? “Expert” estimates are all over the place, and while there is little job loss directly attributable to AI right now, circumstantial evidence is accumulating.

    That said, I believe it is clear that even in a net-positive scenario, many jobs will be displaced as new ones are created. For example, the World Economic Forum predicts the creation of 170 million new jobs worldwide by 2030. However, this growth is expected to be offset by the displacement of the equivalent of 8% (or 92 million) of current jobs, resulting in net growth of 7% of total employment, or 78 million jobs.

    Supposing that this is true and a similar scenario unfolds in the U.S., I still think this amount of displacement is likely to have significant negative consequences for the displaced. That is, the displaced individuals are unlikely to be the same individuals who secure the new jobs, and this will leave many with worse jobs, or with no job at all. For example, are cashiers and truck drivers going to get new, fancy AI jobs in another industry without significant help to do so? I highly doubt it.

    So, recognizing that significant job displacement is on the horizon for many industries, regardless of how the overall total nets out, it would be ideal to provide affected individuals with a softer landing than we’ve managed in the past. Unfortunately, the likely policy outcome is that we do absolutely nothing until there is a considerable backlash to AI, and even then, we probably do nothing. The backlash would have to be big enough that politicians cannot ignore it, or big enough that it is seen as a political opportunity and becomes a platform for elections.
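The WEF figures above are internally consistent; a quick sketch makes the implied arithmetic explicit (the ~1.15 billion global job base is derived from the quoted numbers, not stated anywhere in the post):

```python
# Consistency check on the WEF 2030 projections quoted above (illustrative).
created = 170e6           # projected new jobs worldwide by 2030
displaced = 92e6          # projected jobs displaced, stated as 8% of current jobs
displaced_share = 0.08

implied_base = displaced / displaced_share   # ~1.15 billion current jobs
net_new = created - displaced                # 78 million net new jobs
net_growth = net_new / implied_base          # ~0.07, matching the quoted 7%
print(int(net_new), round(net_growth, 3))
```

Note that the net 7% hides the churn that concerns the post: the 92 million displaced workers are not guaranteed any of the 170 million new roles.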
    In a recent post titled Will the AI backlash spill into the streets?, I made the case that, among the numerous anticipated AI harms to society, job displacement stands alone in its potential to significantly spill over into the streets in the form of sustained protests. I’m not saying that it is likely to happen, and I hope it doesn’t, but I also think we could increase the probability of preventing it if we had a clearer picture of what we’re trying to avoid. At the end of that post, I pondered historical analogs:

    Recent protest movements seem more one-sided politically (e.g., climate change, Occupy Wall Street, Tea Party, etc.). Mid-century protests were arguably similar (civil rights movement, Vietnam War protests, etc.), though they were sustained for much longer and ultimately swayed public opinion and accelerated change. A better parallel here, though, would be something that is clearly bipartisan from the start, more squarely on an economic issue, and resulted in swift reforms.

    This line of reasoning led me to the Yellow Vests protests in France from 2018 to 2020, which I’ve come to believe could provide a decent historical analog of what could happen if AI job displacement reaches a critical mass in the coming years. The movement is recent, directly related to economic insecurity, and created policy changes, though it brought significant disruption and violence as well. Ideally, we’d get the policy changes without the widespread disruption and violence.

    Now, I am not French, and I did not personally live through this protest movement. I do remember watching and reading coverage about it at the time, but that was the extent of my knowledge before I delved more deeply into the subject over the past week. (Below, facts are from Wikipedia unless otherwise noted.)

    What were the Yellow Vest protests about? They began as a reaction to a fuel tax increase but quickly expanded into a broader reaction against economic insecurity.
    The movement never had clear leaders, nor was it tied to a particular political party. One widely circulated list of 42 demands compiled from online surveys went viral, including many specific economic demands such as rolling back the fuel tax, implementing minimum pensions, indexing wages to inflation, and providing jobs for the unemployed. It was somewhat disjointed, as it also demanded education and immigration reforms, among others; however, the central theme was economics and jobs.

    How many people participated? Approximately 3 million unique protesters participated, which is roughly 4% of France’s population of nearly 69 million. The U.S. has a population of around 341 million, so an equivalent number of unique protesters would be approximately 14 million. That’s the same order of magnitude as the George Floyd protests. The peak protest period was Nov-Dec 2018, with the highest single-day attendance of about 287,000; the U.S. equivalent would be about 1.5 million. Protests occurred across the entire country. After the initial period, there were still continual weekend demonstrations for about a year and a half in total, until the pandemic essentially brought them to an end.

    Did the protests turn violent? Unfortunately, yes. At least a dozen people died in the protests, another five people lost their hands as a result of police grenades, and a reported twenty-three people lost their eyesight. Minor injuries numbered in the thousands for both protesters and police as a result of clashes.

    How much public support did it have? Very high. Public support for the movement reached a high of 75% in the initial phase, and then it declined over time.

    What was the government’s reaction? On December 10, 2018, about a month in, French President Macron gave a speech to the public, pledging a €100 increase in the monthly minimum wage, among other reforms. The speech was viewed live by more than 23 million people, or approximately a third of the entire population.
    The U.S. equivalent would be around 80-85 million viewers. Concessions from the government ultimately totaled about €17 billion, which, converted to USD and scaled up to the U.S. economy, would be about $150 billion.

    What was the demographic makeup of the protesters? An academic study of the protesters found that approximately 47% were first-time protesters. The median income of the protesters was about 30% less than the country’s median income. Participation cut across the political spectrum, and the researchers concluded:

    In short, this is indeed a revolt of the ‘people’…in the sense of the working class and the lower-middle class, people on modest incomes. Consequently, in several ways the gilets jaunes movement presents a different kind of challenge from the social movements of recent decades. In addition to its size, the strong presence of employees, people of modest educational qualifications and first-time demonstrators, and, above all, the diversity of their relationship to politics and their declared party preferences, have made roundabouts and tollbooths meeting places for a France that is not used to taking over public spaces and speaking out, as well as places for the exchange of ideas and the construction of collectives in forms rarely seen in previous mobilizations.

    Why did they wear yellow vests? As noted, the movement originated as a response to a proposed fuel tax increase. Independently, a separate French law requires motorists to keep a yellow vest in their car to wear in case of an emergency, so many motorists had them readily available. A petition against the tax went viral, and then some associated viral videos called to “block all roads” and included the idea of using the yellow vests.

    What are some parallels to AI?
    First, while the backlash to AI is just getting started, as I noted in my previous post, the ingredients are there for a potential future movement that could match a similarly broad-based revolt transcending political parties, if significant negative job impacts accumulate over several years:

    * Cuts span industries, so outrage lands on both parties.
    * Every income bracket—from cashiers to coders—takes a hit.
    * Sudden, deep job cuts risk recession and years of high unemployment.

    To be clear, we’re not there yet, and we may never get there. Job losses may not materialize. Or they may unfold over a much longer period of time that doesn’t lend itself to banding together across industries. For example, current AI labor organizing remains limited to specific sectors like entertainment and dockworking. But if AI-driven job displacement accelerates across multiple industries simultaneously—affecting many millions of workers over the next 3-5 years—the conditions could ripen for a Yellow Vest-style eruption.

    Second, as the Yellow Vest movement showed, such a movement doesn’t actually have to be sparked by a sharp increase in job losses. Instead, if there is enough downward pressure on wages, unrest resulting from that pressure can build until a critical mass is reached. In other words, an eruption could still happen even if AI diffusion takes many years to touch many industries, as long as the impacts and resentment are sustained.

    Third, if a critical mass is reached, that’s essentially a powder keg waiting to explode, which means any event could be the proximate cause that sparks it. Therefore, like the Yellow Vests movement, I could see a similar AI movement happening in a decentralized fashion—one that begins with online viral calls to action and then spills out into the streets.

    What’s different?
    One difference is that the Yellow Vest movement was primarily rooted in the lower income brackets, whereas AI has the potential to draw in affected people from across the income brackets, as noted above. Another difference is that the Yellow Vest spark occurred in response to an immediate economic pain from the fuel tax increase. It’s not clear exactly what the equivalent would be for AI, though I suspect if job displacement is vast enough, some match to light the powder keg will

    14 min
  6. MAY 31

    How science funding literally pays for itself

    Previously, I gave an overview of eleven justifications for why public science funding is way too low: (1) longevity—living longer; (2) defense—wars of the future; (3) returns—pays for itself; (4) prosperity—long-term driver of productivity growth; (5) innovation—better everyday products; (6) resilience—insurance for future calamities; (7) jobs—creates some now, and then better jobs in the future; (8) frontier—sci-fi is cool; (9) sovereignty—reduce single points of failure in the economy; (10) environment—new tech needed for climate change and energy efficiency; and (11) power—maintaining reserve currency, among other things.

    The returns justification—that science funding can literally pay for itself—may not seem like a critical rationale because it isn’t directly about science, but it’s the one that should end the debate. That’s because if many of the other justifications are valid, then paying for itself makes increasing science funding a no-brainer, as it removes the downside (long-term cost). And crucially, research funding is the only policy with this pay-for-itself property that can scale to hundreds of billions in investment.

    But how exactly can a significant government expenditure actually pay for itself? That’s a bold claim that deserves a little unpacking.

    Unpacking how research funding can literally pay for itself. It works by growing the economy so that, over time, the government collects more tax revenues than the initial expenditure. This is easier said than done, because the federal government currently collects only about 17% of GDP in federal revenue, so the growth must be substantial. Funding basic research, however, has been shown in many studies to achieve the growth rates necessary (more on that later).
    Here’s roughly how it works: Let’s say we invest $500B more in federal science funding this year, which ultimately grows the economy by $1.5T (3 times the investment) each year, once discoveries are fully commercialized in the economy. Suppose the federal government takes in 17% of that increase in GDP, or about $250B a year in extra federal revenues. Then, after roughly 15 years in this toy example, these increased federal revenues will more than pay for the initial $500B, even considering the time value of money (discount rate). Of course, real-world models are more complicated, though those are roughly first-order accurate numbers for the U.S.

    In the previous post I referenced above, I cited this IMF model (if you want to dig into it, search for “innovation policy mix,” with the underlying math in Online Annex 2.5). Here are their conclusions (note where it says “pay for themselves” near the end):

    [T]he implied fiscal multiplier—the increase in output per dollar of fiscal cost—is 3 to 4 over the long term for the most effective tools (Online Annex 2.5). This implies that increasing fiscal support for R&D by 0.5 percentage point of GDP (or about 50 percent of the current level in OECD economies) through a combination of public research funding, grants to firms, and tax credits could raise GDP by up to 2 percent. The GDP impact reflects the complementarity between public and private research. The innovation policy mix also lowers the public-debt-to-GDP ratio by about 0.5 percentage point over an eight-year horizon, as the initial increase in debt from higher fiscal spending is gradually offset by higher GDP and revenue (Online Annex 2.5). However, while innovation policies can pay for themselves in the long term, countries with limited fiscal space may need to raise revenue or reprioritize other spending to finance the short-term costs of those policies (see Chapter 1).
In other words, increasing research funding at the margin today is expected to lower the national debt tomorrow by growing the economy, such that tax revenues eventually accumulate enough to start paying down debt. Of course, this assumes an advanced economy (like the U.S.) implementing a comprehensive and well-crafted policy mix (more on that in future posts). Research funding is the only scalable pay-for-itself policy. No other government expenditure category is like this in that it can arguably be raised to the order of $500B and realistically be expected to pay for itself and start reducing the long-term public debt-to-GDP ratio within a couple of decades (acknowledging we’d hit diminishing returns at some point). For example, infrastructure spending doesn’t pay for itself; according to the Congressional Budget Office (CBO), it is expected to reduce net costs by more like one-third (under deficit-neutral financing) or one-fourth (under debt-financing), not return multiple times the expenditure. Universal early childhood education has strong societal returns, but from a fiscal perspective, it takes much longer to break even, if at all, because most of the fiscal benefits occur as people grow up and earn more in middle age, forty to fifty years down the line. Targeted preventive health measures, like childhood vaccines, are also found to pay for themselves, but they don’t cost much from the federal government’s perspective, so they can’t take that much extra investment. In other words, increasing federal basic research funding is the highest return on investment (ROI) at-scale budgetary expenditure we have available. What are the returns for research funding? No one entirely knows, of course, because the returns change based on the particular funding apparatus, and the marginal return is ultimately different from the average; that is, you get diminishing returns at some point. 
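The toy break-even arithmetic above can be sketched in a few lines of Python. The $500B outlay, the $250B/year in eventual extra revenue, and a 3% real discount rate come from the example; the 10-year commercialization lag before revenues arrive is my own illustrative assumption, not a figure from the post.

```python
# Discounted break-even for the toy example above (all figures in $B).
# Parameter choices beyond the post's $500B / $250B / 3% are assumptions.
def breakeven_year(outlay=500.0, annual_revenue=250.0, lag_years=10,
                   discount_rate=0.03, horizon=50):
    """First year in which cumulative discounted extra federal revenue
    exceeds the upfront outlay, or None if never within the horizon."""
    cumulative = 0.0
    for year in range(1, horizon + 1):
        # No extra revenue until discoveries are commercialized.
        revenue = annual_revenue if year > lag_years else 0.0
        cumulative += revenue / (1 + discount_rate) ** year
        if cumulative >= outlay:
            return year
    return None

print(breakeven_year())  # → 13, in the ballpark of the post's "roughly 15 years"
```

Shortening the assumed lag pulls break-even in quickly (with no lag it is only a few years), which is why the commercialization delay, rather than the revenue stream itself, dominates the payback period in this sketch.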
In 2018, the CBO published this call for more research, noting “although extensive data exist on federal spending for nondefense R&D…[a] convincing synthesis of the results from the literature has proved elusive.” Since then, though, researchers have been heeding this call, including Karel Mertens at the Federal Reserve Bank of Dallas. In a 2024 paper entitled “The Returns to Government R&D: Evidence from U.S. Appropriations Shocks,” Mertens and Andrew Fieldhouse take a novel approach by examining the aftereffects of historical “shocks” in R&D funding across five federal agencies. They conclude: [T]he implied rates of return to nondefense R&D are high. The reliable estimates range from around 140 percent to 210 percent… Our estimates also suggest that federal investments in nondefense R&D are self-financing from the perspective of the federal budget, at least in the long run. Assuming a return of 171 percent, a $1 long-run increase in government R&D capital would improve the budget as long as the additional tax revenue raised per dollar of additional GDP is at least 9 cents (δ/ρ = 0.16/1.71 = 0.09), which is substantially below the historical ratio of federal tax revenues to GDP. Note that 171% means $1 in yields $2.71 out, which is close to the IMF assumption above. They also have this summary blog post about their paper that gives a bit more color on the methodology and conclusions: We find that shocks to nondefense R&D appropriations lead to significant increases in various measures of productivity and scientific innovation, but only with a delay—consistent with implementation lags and a gradual diffusion of new knowledge… After about eight years, productivity starts to significantly and steadily increase. It continues rising and remains persistently elevated for at least 15 years after the increase in R&D appropriations. Put differently, greater nondefense government R&D appears to spur gains in long-term productivity, thus increasing living standards. What are the implications? 
The implications align with my thesis that science funding was already way too low before we started recently going further in the wrong direction. Mertens and Fieldhouse agree: In terms of policy implications, our finding of large returns to government R&D implies substantial underinvestment of public funds in nondefense R&D… I want to start developing a “prosperity platform” for a set of policies that will collectively maximize our future prosperity. I currently believe that dramatically increasing nondefense federal funding in basic research is #1 on that list. We need to reverse the trend and greatly increase this investment in our future: Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    10 min
  7. MAY 24

    Will the AI backlash spill into the streets?

AI backlash is rising—I see it every day in DuckDuckGo user feedback. That’s why our AI features (Search Assist and Duck.ai) are private, useful, and optional. Concern alone won’t flood the streets—but if AI wipes out paychecks fast, and Washington stalls, more sustained protests and strikes could follow. An April 2025 Pew Research report found widespread AI concern across various topics—jobs, privacy, inaccurate information, loss of human connection, the environment, and bias—but I think jobs can potentially lead to a categorically different societal reaction. Subscribe for free to receive new posts. Why job losses could spark next-level backlash

* Cuts span industries, so outrage lands on both parties.

* Every income bracket—from cashiers to coders—takes a hit.

* Sudden, deep job cuts risk recession and years of high unemployment.

Privacy, inaccurate information, loss of human connection, and environmental harms are, of course, very real. Yet, each mirrors existing debates—data protection, misinformation, social-media outrage, and climate change—that have stirred headlines for years without sustained street action or sweeping federal reforms (our still-missing U.S. privacy law, for example). To be fair, climate activism has produced some huge U.S. marches—for example, 400,000 in NYC in 2014, 200,000 in D.C. in 2017, and 250,000 in NYC again in 2019—but these were single-day spikes rather than sustained efforts. Bias is a little different. America did flood the streets after George Floyd in a more sustained manner, for many months, but major reforms stalled out once marches faded and partisan lines re-hardened. Job-loss protests have the potential to run even broader and longer because they could directly hit wallets across the partisan divide, for years on end. The extent of the backlash depends on the size and duration of the economic disruption. 
As I showed in this Gallup-poll post, even most Republicans who believe tariffs will pay off say they’d tolerate at most one year of economic pain for those benefits. Patience for AI-driven job loss is likely just as thin, if not more so. If AI keeps unemployment high, backlash lasts until it recovers or Washington intervenes. Two historical moments that resemble this pattern: When automated textile frames wiped out skilled jobs in the U.K. in the early 1800s, the Luddite riots turned violent enough (including killing a factory owner) that Parliament dispatched roughly 12,000 troops to restore order, which concluded with over a dozen executions. By contrast, in early-1960s America, still shaking off a recession and high unemployment, there was widespread fear that automation might be at least partially to blame and that it would cause high unemployment to persist. Ultimately, it spawned a presidential commission, but unemployment returned to normal relatively quickly, so the future everyone feared never materialized. There remain many open questions, though, which I hope to explore more in future posts. For example… What is the net job impact? Some jobs will clearly be displaced, which is already happening. But new ones will also be created. Will this be a large net negative, or will it be close to net neutral, similar to previous technology cycles? The public and “experts” are currently split on this question. In our current survey, 64% of the public thinks AI will lead to fewer jobs over the next 20 years. Far fewer experts surveyed say the same (39%). How do we help the displaced? Even if job displacement is closer to net neutral, the displaced people aren’t likely to be the same people who get the new jobs. What, concretely, are we going to do for them? Historically, we haven’t done much, for example, for people who lost U.S. manufacturing jobs. We can do better this time. 
Interestingly, the recommendations from the 1960s-era presidential commission included income guarantees, relocation assistance, federal unemployment benefits, education subsidies, and government jobs. Who pays for new government programs to help the displaced? If intervention happens, who foots the bill—general taxpayers or the AI “winners” best positioned to do so? If AI diffuses slowly, does aid get perpetually kicked down the road? We’ve already seen several strikes in Hollywood, an auto-worker strike (concerning, in part, increased automation), and a dockworkers strike (with similar concerns). But these were scattered enough in time that they haven’t yet coalesced into a larger movement, similar to other scattered past protests. If future effects take many years to unfold across industries, then a critical mass may never form to create a true reform moment. What will be the nature of the economic disruption? If millions of jobs are displaced, there will be some economic disruption, but it could look very different from the past depending on whether AI produces significant growth and productivity benefits, and in what timeframe. For example, it seems possible (though I have no idea right now with what likelihood) that unemployment spikes, but its GDP effects are offset by AI growth tailwinds that prevent a recession. Do protests even matter? I need to dig in more, but the short answer seems to be yes, in a few ways. They seem to make the people who attend them more politically motivated, at least for that issue. As a result, if enough people go to them, then they can swing elections. On average, a wave of liberal protesting in a congressional district can increase a Democratic candidate’s vote share by 2% and reduce a GOP candidate’s share by 6%. A wave of conservative protests, like those by the Tea Party in 2010, will on average reduce the Democratic vote share by 2% and increase the Republican share by 6%. But, can they actually change policy directly? 
Here, it seems pretty mixed, like in the examples mentioned above. The potential seems there, though, if they are large and sustained enough. What’s the best historical parallel(s)? Recent protest movements seem more one-sided politically (e.g., climate change, Occupy Wall Street, Tea Party, etc.). Mid-century protests were arguably similar (civil rights movement, Vietnam War protests, etc.), though they were sustained for much longer and ultimately swayed public opinion and accelerated change. A better parallel here, though, would be something that is clearly bipartisan from the start, more squarely on an economic issue, and resulted in swift reforms. Protests after the 1911 Triangle Factory Fire that sparked rapid labor reforms are a candidate. If others come to mind, or if you have thoughts on these other questions, please let me know. Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    9 min
  8. MAY 17

    Americans’ tolerance for lengthy economic disruption due to tariffs is seemingly very low.

Putting aside whether you or I believe in the potential benefits of widespread tariffs, I found this recent Gallup report fascinating regarding how Americans perceive their tolerance for the “short-term pain, long-term gain” tariff narrative. The chart below from Gallup shows that just 26% of U.S. adults would accept any disruption for more than a year. That breaks down to one-third of Republicans, 28% of independents, and 15% of Democrats willing to endure a multiyear economic disruption to realize the possible benefits of tariffs. Subscribe for free to receive new posts. Republicans are the most interesting group in this poll, since they believe most in possible tariff benefits. In particular, it finds that “in the long run,” 77% of Republicans think tariffs “will bring more money into the U.S. than it ends up paying in tariffs to other countries” and that 85% think it is very or somewhat likely that tariffs will lead to more U.S. manufacturing jobs. However, that pairs with 82%—about an equal share of Republicans—who also think it is very or somewhat likely that tariffs will lead to “you paying more for products you buy.” This expectation of higher prices is reflected similarly in other recent polls, like this AP-NORC poll. So, ~80% of Republicans believe in the short-term pain, long-term gain narrative, yet only ~33% are willing to entertain that short-term pain for longer than a year. Why the huge gap? I don’t know, but here are a couple of non-mutually exclusive theories: * You’re probably more willing to endure short-term pain in a good economy. However, Republican confidence in the overall economy is low. For example, another recent Gallup poll finds that about half (48%) of Republicans/Republican leaners think today’s economy is either slowing down (25%), already in a recession (15%), or already in a depression (8%). This data rhymes with another recent Harris poll. 
* Most expect the short-term pain from higher tariffs to hit them personally in higher prices, but they expect the benefits (to the extent any are expected) to accrue mainly to others (given that most people don’t expect to be doing new manufacturing jobs themselves). This asymmetry seems like a potential pitfall for sacrifice-framed policies, unless the cost/benefit is more obviously positive at the individual level. Another interesting question is when the clock starts on the economic disruption for people. The on-the-ground disruption from tariffs, such as higher prices and shortages, hasn’t yet fully manifested. However, gyrations in the stock market have already been occurring for a couple of months. Does that mean we’re already two months into the economic disruption period, or, given the stock market’s recent recovery, is there a kind of reset in people’s minds until it either goes back down or until inflation and shortages are clearer? In any case, if the disruption isn’t cleared up within a year from now, then it is likely to have significant effects on the 2026 midterm elections. Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    4 min
  9. MAY 10

    Science funding was already way too low.

Cutting federal research funding is extremely short-sighted, but the previous funding levels were also short-sighted. I think those previous levels were off by something like 3x. There are so many compelling and synergistic justifications as to why that it can be overwhelming to reason (and write!) about. So, in this post, I’m going to list eleven justifications at a high level, and plan to explore more nuance in the future. Subscribe for free to receive new posts. Justifications for increasing federal research funding 1. Longevity Do you want your family members to live over 100? I assume so, but U.S. life expectancy is about 80 today. Where do you think the biomedical breakthroughs we need to increase it will come from? For example, Sickle Cell disease is now being cured in some patients via CRISPR gene editing. NIH grants were critical to this work, and that’s true for most transformational drugs. These types of cures can’t come fast enough; increasing research funding is the way to get them even quicker. 2. Defense From my essay, The Great Race: One or more key future technologies, such as artificial superintelligence, quantum computing, humanoid robots, space tech, etc., will likely afford the leader a significant military advantage. What if China becomes years ahead of us in all these critical technologies? Let it sink in that this is what we were to China just a few decades ago. Falling behind on R&D today risks losing real wars tomorrow, and we’re already well on the path to falling behind China in particular. China has aggressively grown its R&D expenditures, 17× from 2000 to 2020, and is now nearing U.S. aggregate levels, while U.S. federal R&D has crawled. We need to invest enough to keep ahead of China, and that’s more than we do today. Empirically, non-governmental funding isn’t keeping pace alone. 3. Returns Science funding is one of, if not the best, dollar-for-dollar investments a country can make. 
I will do a deeper dive on this, though check out this Science|Business report that seems to have done a relatively thorough meta-analysis on academic studies looking into this question, concluding: The many economic measurements of returns on investment to publicly funded R&I [Research & Innovation] vary wildly in range, but seem to cluster at around a 20% annual [social] return on investment. That’s a higher rate of return than the stock market, and from the federal government’s perspective, it’s a significantly higher return relative to other federal investments available at scale. For example, returns on universal Pre-K are estimated to be more in the 10% range (I happen to think we should do that too, but science funding is even more critical). If you believe the government should invest some of its money in the best investments for the country it has available, then science funding is a contender for #1. We should keep putting money into this investment until we reach diminishing returns, which we are nowhere near today. In fact, the returns are so high that, under reasonable assumptions (for example, real Treasury rate ~3 %, about one-fifth of each extra dollar of GDP showing up as federal revenue, five-year gestation period, etc.), in the long run this investment will actually decrease our debt-to-GDP ratio (via an increased tax base). From the IMF’s April 2024 IMF Fiscal Monitor that ran such a scenario (emphasis added): The innovation policy mix also lowers the public-debt-to-GDP ratio by about 0.5 percentage point over an eight-year horizon, as the initial increase in debt from higher fiscal spending is gradually offset by higher GDP and revenue (Online Annex 2.5) Science funding isn’t an expense, it’s an investment with high returns! 4. Prosperity Funding science pays for itself because scientific breakthroughs are the hidden force that grows our economy. In fact, without new technology, our economic prosperity is fundamentally limited. 
To see that, suppose no breakthroughs occur from this moment onward; we get no new technology based on no new science. Once we max out the hours we can work, the education people will seek, and the efficiency with existing technology, then what? We’d be literally stuck. Fundamentally, if you don’t have new tools, new technology, new scientific breakthroughs, you stagnate. From my essay: Think of how worker productivity increased in construction with the introduction of power tools and heavy machinery or in offices with the introduction of computers and the Internet. We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. This productivity increase means increasing output per hour worked (by definition), and therefore GDP (assuming hours are constant). If the U.S. had grown just one percent faster over the past fifty years, our average income (real GDP per capita) would be ~66% higher today. Research-driven productivity improvement is one of the few policy levers big enough to recreate that missed windfall, and more of it is better. 5. Innovation Technological breakthroughs also mean better everyday products, so we get a higher standard of living for the same dollar output because we’re getting better, more innovative stuff to consume. Think of better household appliances, or in the future, self-driving cars vs. regular cars. Such innovations reduce household chores and costs, making our leisure time more enjoyable and giving us more of it. By increasing research funding, we’re buying future life satisfaction, and again, the more of that, the better. 6. Resilience Will we have the science to respond to unexpected calamities, like new pandemics, asteroids, resistant antibiotics, etc.? Robust pathogen surveillance, AI-designed antibiotics, planetary-defense tech, and the like are insurance policies we aren’t buying enough of today. 
For example, we spend about $200M on antibiotic resistance, which already costs us $4.6 billion annually, let alone what that would balloon to if we had a severe, uncontrollable outbreak. 7. Jobs As I noted above, research funding is a great investment long-term. It’s also a jobs engine in the short term. For example, this report from the nonprofit United for Medical Research found, with regard to NIH funding in particular (emphasis added): As NIH funding is awarded to researchers in individual states, that funding supports employment and the purchase of research-related goods, services and materials. The income generated from these operational expenditures, along with that from capital asset expenditures (e.g., building, equipment, machinery, sophisticated software) cycles through the economy to produce new economic activity. In 2022, that funding supported an average of 2,300 jobs and $353 million in new economic activity per state, or $2.30 of economic activity for each dollar of NIH research funding. More generally, research funding is a jobs win in three ways: First, you fund great jobs directly, today. Second, those jobs also fund supporting jobs today, as in the example. Third, the actual research creates better future jobs that will utilize the new technology being developed. Can we ever upgrade our jobs too much? 8. Frontier Are we really going to cede the physical frontier, like Mars and the Moon, and the virtual frontier, like AI and the metaverse, to other countries? I hope not. We will need to spend more to win these races. What happened to the American frontier spirit? 9. Sovereignty Maybe you don’t care about the physical or virtual frontier, fine. But what about ensuring critical technologies for our current infrastructure can be made in the U.S., like those that go into making semiconductors and energy? 
If we don’t stay at the forefront of these essential components, we risk being held hostage by other countries and at least partially losing our economic sovereignty, which has arguably already happened. From a 2023 report by the U.S. International Trade Commission: Around 92 percent of the world’s most advanced chip manufacturing capacity is located in Taiwan. Any disruptions to Taiwan semiconductor manufacturing—whether caused by pandemics, natural disasters such as typhoons or earthquakes, power or water shortages, factory shutdowns, or international conflict—would potentially have large impacts on global semiconductor supply. 10. Environment Similarly, we need new research to manage climate change effectively without throttling growth, such as in cheap energy storage and scaling carbon removal technologies. From the IPCC 2023 report: Carbon dioxide removal (CDR) will be necessary to achieve net negative CO2 emissions. [CDR Fact Sheet] Or, if you are more generally concerned about using up the Earth’s resources or other environmental impacts, it’s the same story. Increasing research funding means limiting more ecological damage by finding ways to be more energy efficient and finding less damaging energy pathways. 11. Power Technological leadership buys global leverage. If you are technologically dominant, built on the back of research funding, everyone wants to trade with you for your superior technology and invest with you to get those returns mentioned above. Put differently: the more the world needs the next generation of U.S. chips, biomedicine, and clean-tech, the more it requires dollars to trade for them. If we lose that leadership, however, in the worst case, we also lose all the soft power that comes with it, including being the world’s reserve currency. 
According to this 2024 IMF bulletin, this has already been happening for the last 25 years: Again, from my essay: We believe our ability to control inflation and interest rates is inadequate now—most other countries have much less control. Given our dominance, what we do has an enormous knock-on effect on their currencies. If the yuan becomes the dominant currency, we will experience similar knock-

    15 min
  10. MAY 3

    Rule 1: Reality is always more complicated.

    I’ve always been drawn to characters with “rules.” I’ve been noodling some rules over the past few years myself, on making good decisions. Subscribe for free to receive new posts. Rule 1: Reality is always more complicated. I prefer the more general framing, but this is also phrased as “the map is not the territory” because every map is an imperfect representation of what it represents. For example, Apple Maps won’t tell me where the current potholes are, nor does it have all the trees placed correctly (at least not yet!). All maps have some fidelity limit — Apple will never get an accurate blade-of-grass count in every yard. Maps can also distort reality in various ways. These omissions and misrepresentations are often okay if your decisions don’t turn on them. But if they do turn on them, then you need a better map (or description, model, etc.). So many fallacies and bad decisions stem from not heeding this rule, which is why it is Rule #1. Paramount among those is the narrative fallacy, which I believe is the actual root cause of a lot of our societal problems: People listen to stories (narratives)—from politicians, influencers, etc.—and if they sound plausible, then some percentage of people will think that they are true, at least for some time. But just because something is conceivable doesn’t mean that it is true, or even likely true, and more to the point, a story is just another imperfect map. Reality is always more complicated. Suppose you are faced with (as most modern societal debates are) a complex, dynamic system (for example, faced with fixing part of the economy), and you try to change it based on an overly simplistic model. In that case, you’ll very likely not get what you want. You’re also pretty much guaranteed to create unintended consequences that your basic model does not predict. 
Many cultural debates are similar, boiling down to again using some overly simplistic model (for example, a binary categorization of race, gender, political ideology, etc.), and then grappling with the numerous edge cases that struggle to fit neatly into that model. This failure state isn’t limited to political or social issues, though. Developers often do this with refactors, aiming for “clean code,” only to painstakingly add back all the edge cases they removed over time as their code grapples with reality. In medicine, complex diagnoses fall through the cracks, sometimes for decades, because they aren’t labeled and categorized yet, as the official labels and categories haven’t yet caught up with the more complicated reality. In physics, there’s “a joke about a physicist who said he could predict the winner of any race provided it involved spherical horses moving through a vacuum.” For every assumption you have, you have to decide how far down the rabbit hole you want to go with it; that is, how complex you want to make your model of reality. That’s because reality is always more complicated than your description of it. In a vacuum, a bowling ball and a feather fall at the same rate; in air, they don’t because of the added complexity of air resistance. So, for any decision-making or problem-solving you’re doing, consider whether your current underlying assumptions (descriptions of reality) are reasonable for the situation at hand, or if you need to pursue better assumptions (more complex descriptions) first. This is often not as straightforward as it seems (more is always better, right?) because making things more complex has real costs: sometimes an actual financial cost, but always an opportunity cost and usually a communication cost too, as more complexity is more difficult to explain. That’s why we default to simple narratives in the first place, as they minimize these costs. 
They are so easy to communicate—some people make them up on the spot—and AI makes this even easier. See for yourself. Go to a chatbot and ask for a narrative that explains why anything you pick is a good or bad idea. If you have two opposing narratives like this that predict real-world outcomes, ultimately, reality will show that things line up with one more than the other, which at least starts to expose the truth. But that is backward-looking and interminably frustrating. So what can you do instead? First, you can heed Rule 1 and get appropriate models for the situation. As the world has become increasingly complex, successfully understanding and interacting with its dynamic systems requires increasingly sophisticated descriptions of reality, especially to generate specific outcomes. Simplistic models generally won’t cut it. Yet complicated models are hard to understand and easy to mess up. So, I think the best you can do is to ground your assumptions as well as you can in trusted data and repeatedly run experiments in the real world to fine-tune them, with as tight a feedback loop as possible. See other Rules. Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    6 min
  11. APR 27

    Is runaway AI coming in years or decades?

I’ve been trying to heed the call to prepare for the powerful AI that is coming. But is it coming soon, like in 2027 (see AI 2027), or significantly later, like in 2045 (see AI as Normal Technology)? Either way, we should prepare, but knowing whether it is coming in years or decades would make a huge difference in that preparation. Subscribe for free to receive new posts. The pieces referenced each make compelling cases for the shorter or longer timeframes (another good one on the longer side is The case for multi-decade AI timelines). These are all lengthy pieces, so I thought it would be helpful for me (and maybe you, too!) to consider the core assumptions that cause them to differ so much. I think it boils down to just two: * Autonomous AI coders can advance AI much faster via recursive self-improvement. AI 2027’s timelines forecast details the data points they extrapolate to predict that such recursive self-improvement will likely occur very soon, predicting about a 25x speedup in AI development pacing as a result. By contrast, AI as Normal Technology remains skeptical that this is even possible: Perhaps recursive self-improvement in methods is possible, resulting in unbounded speedups in methods. But note that AI development already relies heavily on AI. It is more likely that we will continue to see a gradual increase in the role of automation in AI development than a singular, discontinuous moment when recursive self-improvement is achieved. I’m skeptical of the idea that getting fully automated AI systems to improve themselves is impossible. Still, I’m also not close enough to the cutting edge of this development to know when this might occur, or how much speedup we could expect from it when it does. Many potential bottlenecks have been flagged, such as a lack of inherent AI creativity, internal coordination issues between thousands of AIs, connecting the AIs to the whole stack of resources needed, compute scarcity, etc. 
These bottlenecks are explained away in the aggressive timelines by the more intelligent and capable AIs operating on a much faster feedback loop: to the extent these bottlenecks are significant, the AIs would nevertheless find ways to address and break through them relatively quickly, in months, not years. For example, they could make software more efficient (bypassing compute scarcity), iterate their way out of coordination and creativity bottlenecks, and so on. I get that, but one bottleneck seems harder to overcome: the potential for widespread societal backlash, which relates to the second assumption.

* Advanced AI can control enough of the physical world to transform the global economy fast. AI as Normal Technology contends that “the speed of diffusion [of AI through the global economy] is inherently limited by the speed at which not only individuals, but also organizations and institutions, can adapt to technology.” They point out that “AI diffusion lags decades behind innovation,” such as in medical and legal contexts, and that “there are already extremely strong safety-related speed limits in highly consequential tasks [like self-driving cars, nuclear, etc.]. These limits are often enforced through regulation, such as the FDA’s supervision of medical devices, as well as newer legislation such as the EU AI Act, which puts strict requirements on high-risk AI.” By contrast, AI 2027 paints a picture of bypassing all of that slowness and regulation by granting AI companies special physical zones to operate independently:

“Both the US and China announce new Special Economic Zones (SEZs) for AIs to accommodate rapid buildup of a robot economy without the usual red tape. The design of the new robots proceeds at superhuman speed. The bottleneck is physical: equipment needs to be purchased and assembled, machines and robots need to be produced and transported. The US builds about one million cars per month. If you bought 10% of the car factories and converted them to robot factories, you might be able to make 100,000 robots per month. OpenBrain [their OpenAI equivalent in the forecasted scenario], now valued at $10 trillion, begins this process. Production of various kinds of new robots (general-purpose humanoids, autonomous vehicles, specialized assembly line equipment) [is] projected to reach a million units a month by mid-year [2028].”

I think the AI backlash is real and will only intensify from here, especially as jobs are displaced at societally significant levels, “dark factories” (a.k.a. lights-out manufacturing) become more widely deployed, and humanoid robots become more visible. This backlash will create political pressure (and corresponding opportunities for politicians and political parties and movements) to slow things down.

On the opposite side is the arms race with China (and potentially others) to use AI in military applications. Superhuman AI will create a significant military advantage if the gap between countries in getting access to such runaway AI is significant. I’m honestly not sure how this nets out (backlash slowdown vs. military speedup), and some of the military side may happen in secret for some time, similar to the secrecy of the Manhattan Project. However, it will ultimately be hard to hide, because truly transforming the military and the economy requires making many physical objects (vs. the small number of nuclear bombs that came out of the Manhattan Project).

So, what happens next with these two assumptions will largely determine the timeline. That leaves me thinking we still have to take the shorter timeframes seriously and accelerate societal preparations, to the extent that is even possible.

Avengers: Age of Ultron (2015)