Gabriel Weinberg's Substack

Gabriel Weinberg

Tech, policy, and business insights from the DuckDuckGo founder and co-author of Traction and Super Thinking. gabrielweinberg.com

  1. 4 DAYS AGO

    The key to increasing standard of living is increasing labor productivity

    Standard of living doesn’t have a strictly agreed-upon definition, but for the sake of anchoring on something, let’s use “the level of income, comforts, and services available to an individual, community, or society” (Wikipedia). Gross Domestic Product (GDP) per capita, that is, the average economic output per person in a country, is often used as a proxy metric to compare the standard of living across countries. Of course, this proxy metric, being solely about money, doesn’t directly capture non-monetary aspects of standard of living associated with quality of life or well-being. However, most of these non-monetary aspects are tightly correlated with GDP per capita, rendering it a reasonable proxy. Our World in Data features numerous plots of such measures against GDP per capita. Here are a few of the ones people tend to care about most: These measures are clearly tightly correlated with GDP per capita, as are common aggregate measures such as the UN’s Human Development Index, which combines lifespan, education levels, and GDP per capita. These tight correlations are somewhat intuitive because higher GDP per capita by definition means more money to buy things, including more healthcare, education, leisure time, and luxuries, which one would expect to be correlated with healthspan, life satisfaction, and other measures of quality of life and well-being. Nevertheless, at some level of GDP per capita, you reach diminishing returns for a given measure, and we would then expect the correlation to cease for that measure. For example, here is access to clean (“improved”) water sources, which maxes out at middle incomes once it reaches 100%, since you can’t go higher than 100% on this measure. However, we haven’t seen that yet for the most important measures like life expectancy, the poverty line, and self-reported life satisfaction. 
All of those can go higher still, and are expected to do so with further increases to GDP per capita, certainly for lower-GDP-per-capita countries (climbing up the existing curve) but also for the U.S. (at or near the frontier). In other words, with enough broad-based increases in income, many are lifted out of poverty, the middle class is more able to afford much of the current luxury and leisure of the rich, and the rich get access to whatever emerges from new cutting-edge (and expensive) science and technology. We should continue to watch and ensure these correlations remain tight. But while they do, I think it is safe to say that we would expect increases in standard of living to be tightly correlated with increasing GDP per capita. While there are other necessary conditions like maintaining rule of law, broadly giving people more money to buy better healthcare, education, and upgraded leisure time should increase standard of living. That part is pretty intuitive. What’s not intuitive is how to do so. You can’t just print money, because that results in inflation. It has to be increases in real income, that is, income after inflation. So, how do you do that? If you’re a country where a large percentage of the working-age population doesn’t currently have a job, the easiest way is to find those people jobs. Unfortunately, that won’t work for the U.S. anymore, since nearly everyone who wants a job has one. It worked for a while through the 1960s, ’70s, and ’80s as ever-greater percentages of women entered the workforce, but then plateaued in the 1990s. You could try to get people with jobs to work more hours (and therefore make more money per person), but that also doesn’t work for the U.S., since we already work a lot relative to other frontier countries, and as people get more money they seem to want to work less, not more. For example, in the U.S. we’re working far fewer hours per worker than we did in 1950, let alone 1870. 
This makes intuitive sense, since quality of life and well-being can’t reach the highest levels if you’re working all of the time. That leaves upgrading the jobs people already have, in the form of higher income for the same amount of hours worked. And this means, by definition, increasing labor productivity, which is the amount of goods and services produced per hour of labor. To pay people more without triggering inflation, they also have to produce more output. That’s the counterintuitive piece, and it is also our biggest opportunity for higher GDP per capita, and therefore higher standard of living. OK, but how do you increase labor productivity? I’m glad you asked. There are three primary ways, but only one has unbounded upside. Can you guess what it is? First, you can educate your workforce more, providing them with, on average, better skills to produce higher-quality output per hour worked, a.k.a. investment in human capital. The U.S. is currently going in the wrong direction on this front when you look at the percentage of recent high-school graduates enrolled in “tertiary” education (which includes vocational programs). Even if we had continued to make steady progress through the 2010s and 2020s, we would be headed towards diminishing returns on this front. While it will surely be good to increase this further to get those gains—and there is more you can do than just tertiary education, such as on-the-job training—as we saw earlier with access to clean water, there is effectively a max-out point for education in terms of its effect on GDP per capita. Think of a point in the future where everyone who is willing and able has a college degree, or even a graduate degree. Second, you can buy your workforce more tools, equipment, and facilities to do their job more efficiently, a.k.a. investment in physical capital. This isn’t inventing new technology, just spending more money to get workers access to the best existing technology. 
Again, you clearly reach diminishing returns here too, that is, another max-out point, as you buy everyone the best tech. Think of the point where everyone has a MacBook Pro with dual Studio Displays—or whatever the equivalent is in their job—to maximize their productivity. Third, and the only way that doesn’t have a max-out point, is to invent new technology that enables workers to do more per hour. These are better tools than the existing tools on the market. Think of upgrading to the latest software version with updated features that make you a bit more productive. Or, more broad-based: Think of how worker productivity increased in construction with the introduction of power tools and heavy machinery, or in offices with the introduction of computers and the Internet. We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. (The Great Race) AI is likely one of these leaps, but by investing much more in basic research we can make labor productivity growth more continuous, instead of the bumpy road it has recently been on. These leaps don’t come out of nowhere. They require decades of investment in research, and that investment requires a decent level of government funding at the earliest stages. This was the case for AI, as it was for the Internet, and as it is for life-saving drugs. This is actually good news, since it means we have a lever to pull to increase labor productivity that we’re not currently fully pulling: increase federal investment in basic research. The level we’ve ended up at today is somewhat arbitrary, an output of a political process that wasn’t focused on increasing standard of living. In any case, I estimate at the bottom of this post that we’re off by about 3X. 
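The accounting behind this argument can be made concrete in a few lines. This is an illustrative sketch with made-up numbers, not the author’s data: GDP per capita factors into output per hour (labor productivity), hours per worker, and the employed share of the population, so with the last two roughly flat (the U.S. situation described above), GDP per capita can only grow as fast as labor productivity.

```python
# Illustrative decomposition (made-up numbers, not real U.S. statistics):
# GDP per capita = (output / hour) * (hours / worker) * (workers / population)

productivity = 80.0      # output per hour worked, in dollars
hours_per_worker = 1800  # annual hours per employed person
employment_rate = 0.48   # employed share of the total population

gdp_per_capita = productivity * hours_per_worker * employment_rate
print(f"GDP per capita: ${gdp_per_capita:,.0f}")  # $69,120

# With employment and hours flat, a 2% productivity gain
# translates one-for-one into a 2% gain in GDP per capita:
gdp_after = (productivity * 1.02) * hours_per_worker * employment_rate
print(f"Growth: {gdp_after / gdp_per_capita - 1:.1%}")  # 2.0%
```

The same identity shows why the first two levers max out: employment rates and hours per worker are bounded, while output per hour is not.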
If you want another view on this topic, here is a good post from the International Monetary Fund (IMF): [I]mprovements in living standards must come from growth in TFP [Total Factor Productivity] over the long run. This is because living standards are measured as income per person—so an economy cannot raise them simply by adding more and more people to its workforce. Meanwhile, economists have amassed lots of evidence that investments in capital have diminishing returns. This leaves TFP advancement as the only possible source of sustained growth in income per person, as Robert Solow, the late Nobel laureate, first showed in a 1957 paper. TFP growth is also the answer to those who say that continued economic growth will one day exhaust our planet’s finite resources. When TFP improves, it allows us to maintain or increase living standards while conserving resources, including natural resources such as the climate and our biosphere. Or, as Paul Krugman put it even more succinctly in his 1990 book The Age of Diminished Expectations: Productivity isn’t everything, but, in the long run, it is almost everything. A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker. —Paul Krugman Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    10 min
  2. 20 JUL

    Most chatbot users miss this key setting, so we moved it up front

    The magic of chatbots is that they make it seem like you’re chatting with a real person. But the default personality of this “person” isn’t one I particularly enjoy talking to, and in many cases I find downright annoying. Based on feedback from duck.ai users—who rely on our service for private access to popular chatbots—I know I’m not alone. What people want in a chatbot’s personality varies widely: I cringe at extra exclamation points and emojis, while others love them. I also find the default output too verbose, whereas some appreciate the added exposition. Of course, I could tell the chatbot every time to keep its replies short and emoji-free, but pasting that constantly is enough friction that I rarely bother. OpenAI and Anthropic do offer customization options in their settings, yet those options are buried and feature intimidating blank text boxes, such that I highly suspect most people never touch them. Recently, we’ve been considering this issue in the context of duck.ai. I’m sure what we’ll do here will continue to evolve as we get feedback, but to get started we’ve just introduced a much easier-to-find customization dialog. Not only does it make the responses feel better, it can make the actual content significantly better as well. As you can see in the video, it provides customization guidance through drop-downs and fields, including options to customize: * The tone of responses * The length of responses * Whether the chatbot should ask clarifying questions * The role of the chatbot (for example, teacher) * Your role (for example, student) * The nickname of the chatbot * Your nickname All fields are optional, and you can also add additional info if desired, as well as inspect what the instructions will look like in aggregate. If you select role(s), then there are detailed instructions that get created specifically for those. 
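duck.ai’s actual implementation isn’t public, but as a rough mental model, the optional fields above might compose into system-prompt instructions along these lines. All field names and wording here are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of composing customization fields into
# system-prompt instructions. Field names and instruction wording are
# invented for illustration; this is not duck.ai's actual code.

def build_instructions(options: dict) -> str:
    parts = []
    if tone := options.get("tone"):
        parts.append(f"Respond in a {tone} tone.")
    if length := options.get("length"):
        parts.append(f"Keep responses {length}.")
    if options.get("ask_clarifying_questions"):
        parts.append("Ask clarifying questions when the request is ambiguous.")
    if role := options.get("chatbot_role"):
        parts.append(f"Act as a {role}.")
    if user_role := options.get("user_role"):
        parts.append(f"The user is a {user_role}.")
    # All fields are optional; with no options set, no extra text is added.
    return " ".join(parts)

print(build_instructions({
    "tone": "concise and friendly",
    "length": "short",
    "ask_clarifying_questions": True,
}))
```

In a scheme like this, the generated text would simply be appended to whatever default system prompt the service sends at the start of each conversation, which is why the settings keep applying until you change them.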
Here’s an example using the ‘Tech support specialist’ role, which asks you clarifying questions to drill down faster to a solution vs. the more generic (and lengthier) default response. Customized response: Generic response: All of this works through the “system prompt.” In an excellent post titled AI Horseless Carriages, Pete Koomen explains system prompts: LLM providers like OpenAI and Anthropic have adopted a convention to help make prompt writing easier: they split the prompt into two components: a System Prompt and a User Prompt, so named because in many API applications the app developers write the System Prompt and the user writes the User Prompt. The System Prompt explains to the model how to accomplish a particular set of tasks, and is re-used over and over again. The User Prompt describes a specific task to be done. When you set the duck.ai customization options, the instructions that are created are appended to the default system prompt, which is repeated (in the background) when you start a new conversation. That is, the instructions will apply to the current conversation as well as subsequent ones, until you change them again. Like everything we do at DuckDuckGo, these system prompt tweaks are also private. They are stored locally on your device only, along with your recent chats (if you choose to save them). When we ultimately add an optional ability to sync settings and chats across devices, it will be part of our end-to-end encrypted sync service, which DuckDuckGo cannot decrypt. And Duck.ai itself anonymizes chats to all model providers, doesn’t store chats itself, and ensures your chats aren’t used for AI training. More at the Duck.ai Privacy Policy. Our approach to AI is to make features that are useful, private, and optional. We believe these new duck.ai customization options tick all three boxes, but please try them out and let us know what you think. As always, please feel free to leave comments here. 
However, the best method for sharing feedback about duck.ai is to do so directly through the product, as it will then be shared with the entire team automatically. 

    5 min
  3. 8 JUL

    The debate over a potential economic bump from AI points to a much larger economic opportunity

    The decade surrounding the Internet boom saw a marked increase in U.S. total factor productivity (TFP) growth—the economic efficiency gains beyond just adding more workers or capital—similar to higher levels seen before 1973. However, it then declined again from 2005 onward back to doldrums levels, which brings us to the present. Now, people are starting to debate a similar potential AI productivity bump unfolding over the next decade. However, this debate raises an even bigger question: How do we achieve sustained higher productivity growth forever, not just in occasional blips and bumps? I believe this is the most important economic question because higher productivity growth is the key factor that drives long-term higher standards of living. Here’s a concise, understandable explanation from the Bureau of Labor Statistics as to why: How can we achieve a higher standard of living? One way might simply be to work more, trading some free time for more income. Although working more will increase how much we can produce and purchase, are we better off? Not necessarily. Only if we increase our efficiency—by producing more goods and services without increasing the number of hours we work—can we be sure to increase our standard of living. To continuously produce more without increasing the number of hours worked per person, we need to continually develop better tools. To continually develop better tools, we need access to increasingly better science and technology. To get access to increasingly better science and technology, we need sustained higher investment in basic research. Think of how worker productivity increased in construction with the introduction of power tools and heavy machinery or in offices with the introduction of computers and the Internet. We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. 
(The Great Race) That is, to ensure higher productivity growth, we must continually invest sufficiently in the next set of technologies that will generate and sustain these higher productivity levels. That requires increased investment in basic research. But why can’t private industry do it all? The private sector…is excellent at taking established scientific breakthroughs and turning them into products over a few years. That’s because there is a clear profit motive in doing so. However, it is not as great at coming up with those scientific breakthroughs in the first place or commercializing them on much longer timescales like decades, where the profit motive is significantly reduced. This activity still generally involves some government-funded research in the early stages. (The Great Race) Yes, this includes AI too. A good post on this by Mark Riedl titled Visualizing the Influence of Federal Funding on the AI Boom takes seven key papers in AI, including Attention is All You Need (2017), and then traces which of their references explicitly acknowledge federal funding. ~18% of papers referenced by these 7 industry papers have acknowledged US federal funding. ~24% of papers referenced have US university authors. ~20% of papers referenced are industry lab-authored. ~42% of papers referenced do not have any industry authors. The AI boom did not happen in an industry vacuum. As with all research, it was an accumulation of knowledge, much of which [was] generated in university settings. There is a growing narrative that academia isn’t important to AI anymore and US federal funding has no role in the AI boom. It’s more correct to say that the AI boom could not have happened without US federal funding. The same was true with the Internet boom. The same is also true in healthcare, for example, tracing federal funding through the most transformational drugs. And yet, as I previously explored, science funding was already way too low before recent cuts. 
Quite simply, since the 1960s, we haven’t invested sufficiently in basic research to create sustained higher productivity. But we could change that. And, as I've also previously detailed, this should be a no-brainer since, if done right, it literally pays for itself by expanding the economy, generating higher tax revenues, and ultimately lessening the debt-to-GDP ratio. So while economists debate whether AI will boost productivity by 0.5% or 1.5% for the next decade, we should be asking: How can we better invest in basic research today to ensure we’re still growing faster in 2050? Short-term productivity bumps are a boon. But, if we want sustained productivity gains, we need sustained productivity investment. 

    6 min
  4. 26 JUN

    States should be allowed to regulate AI because realistically Congress won't

    A provision in the “Big Beautiful Bill” (the massive federal spending and policy package currently being negotiated) aims to stop all state-level AI regulation for the next ten years. Now, I’m not anti-AI. My company (DuckDuckGo) offers our own private chatbot service at Duck.ai, we now generate millions of anonymous AI-assisted answers daily for our private search engine, and we are working on more AI features in our browser. We’re investing heavily in AI because, to achieve our aim of protecting people online as comprehensively as possible, we must offer a compelling alternative to the most commonly used untrusted online workflows, most notably searching, browsing, and now chatting. At the same time, I believe the AI backlash is real and growing, which is why I’m thinking and writing about it, and why we’re designing all our AI features to be useful, private, and optional. And, the backlash is real for good reason. AI poses a wide range of risks, including massive job displacement, extensive privacy concerns, and, at the extreme, existential risks. That’s why this “pause” is particularly dangerous. I’m not taking a position here on which risks should be regulated, when, or how. More on that in future posts. But I am saying that AI will require at least some well-crafted regulation to address some of its risks over the next ten years, and yet Congress has proven incapable of taking action. The states, on the other hand, do take action. Look no further than privacy law as a close parallel. It’s 2025, and the United States still lacks a comprehensive federal privacy law. The International Association of Privacy Professionals (IAPP) now tracks 144 countries with such laws (as of Jan 2025). The U.S. is a clear outlier: The most populous countries without a comprehensive national privacy law include the U.S., Pakistan, Bangladesh, Iran, and Iraq. This isn’t for lack of trying. 
Numerous bills have been proposed, and many hearings have been held, yet nothing has even come close to passing, not even after Snowden or Cambridge Analytica. And it’s not just privacy: Congress has proven broadly unable to legislate effectively, and while we should work to fix that independently, we can’t wait for it. Meanwhile, IAPP tracks 19 states that have managed to pass general privacy laws to protect consumers to some extent, including the two most populous states, California and Texas. Despite fears that a “patchwork” of state laws would wreak havoc on innovation by going too far, those laws haven’t. Innovation hasn’t stalled, and neither have big-tech privacy violations. That’s because state privacy laws, while better than nothing, in my opinion, don’t nearly go far enough, which is why we (DuckDuckGo) still need to develop dozens of overlapping protections to keep consumers safe online. Meta’s latest AI-chatbot leak foreshadows a bleak AI-privacy future if there are literally no regulations in sight. State laws also provide Congress with both a blueprint for action and further incentive to enact laws. Nothing prevents a future federal AI bill from overriding (preempting) state AI laws. Of course, for that to happen, Congress would need to pass general tech legislation. I would love to witness that and have been working to help make it happen, but I am also realistic about Congress’s current capacity to regulate tech. Finally, the current proposal would seemingly preempt the most protective provisions of existing state privacy laws. That would be a giant step backward for online privacy. We helped pioneer Global Privacy Control, an opt-out signal that is on by default in our browser and extension, which has legal effect in California and other jurisdictions. 
Senator Maria Cantwell, ranking member on the Senate Commerce Committee, notes that the bill would nullify provisions of many state privacy laws that “give consumers the right to opt-out of profiling.” In the last 25 years, states filled the privacy law vacuum left by Congress; let them do the same for AI. We should not silence states from protecting their citizens from dangerous new risks for a decade. And, if Congress gets its act together, then great—those future bills can preempt any conflicting state provisions. 

    5 min
  5. 21 JUN

    Rule #2: There are always underlying assumptions.

    Every decision rests on assumptions. At minimum, you’re assuming the options you’re considering are actually all of your options. This set of options is, in turn, based on a set of assumed facts, which is, in turn, based on even more assumptions about how the world works. I’ve seen this pattern repeatedly: teams spend hours debating the details of Option A versus Option B, only to discover they never considered Option C—or that their entire framing of the problem was based on outdated or misaligned assumptions. Generally, in these situations, or for any significant decision, it’s worth taking some time to literally enumerate the relevant assumptions and why they should be relied upon in this particular situation. This process of enumerating assumptions is almost always illuminating in some way. Often, it reveals that particular assumptions need further de-risking. From our book Super Thinking: You can de-risk anything: a policy idea, a vacation plan, a workout routine. When de-risking, you want to test assumptions quickly and easily. Take a vacation plan. Assumptions could be around cost (I can afford this vacation), satisfaction (I will enjoy this vacation), coordination (my relatives can join me on this vacation), etc. Here, de-risking is as easy as doing a few minutes of online research, reading reviews, and sending an email to your relatives. Or, in the context of a startup idea, also from our book: * My team can build our product—We have the right number and type of engineers; our engineers have the right expertise; our product can be built in a reasonable amount of time; etc. * People will want our product—Our product solves the problem we think it does; our product is simple enough to use; our product has the critical features needed for success; etc. 
* Our product will generate profit—We can charge more for our product than it costs to make and market it; we have good messaging to market our product; we can sell enough of our product to cover our fixed costs; etc. * We will be able to fend off competitors—We can protect our intellectual property; we are doing something that is difficult to copy; we can build a trusted brand; etc. * The market is large enough for a long-term business opportunity—There are enough people out there who will want to buy our product; the market for our product is growing rapidly; the bigger we get, the more profit we can make; etc. Enumerating assumptions can seem pedantic, but I've found it extremely helpful because it clarifies your argument, much like writing a blog post or explaining something to someone in real time. It helps ensure you’re drawing a logical conclusion from your assumptions. A great way to help identify potentially shaky assumptions is to conduct a premortem. Here’s the template we use at DuckDuckGo for new project premortems: Premortem Many projects fail to meet their success criteria, so take some time to be pessimistic and ask questions: * What are the key risks to this project and how can they be mitigated? * What could slow down this project and how can we prevent that from happening? * How might our actions in this project negatively affect user trust or be misunderstood? The goal is to uncover problems or blind spots and then decide how to address them up front, such as by starting with the most uncertain part. If a project is destined to fail, failing fast is a success. This principle extends beyond individual and team decisions to broader discourse. I'm constantly frustrated by political and policy debates where participants talk past each other—often because they're operating from different underlying assumptions or facts. 
If it is a good faith debate, then there should be the opportunity to enumerate and drill down on those underlying assumptions and facts, which can then help clarify exactly where the disagreement is occurring. For example, it might turn on just one misaligned assumption or fact among many. If that’s the case, a determination can be made on what specific evidence would be compelling enough to achieve alignment. Rule #2—There are always underlying assumptions—is universal. You're always making assumptions. The key is examining them before they lead you astray. See other Rules. 

    5 min
  6. 15 JUN

    Issues with widespread bipartisan agreement that never go anywhere

    In a country currently known for deep political divisions, some issues still garner supermajority agreement, yet nothing gets done about them. To me, these issues simultaneously represent hope and despair: * Hope—yes, we can agree on some important things. * Despair—uh oh, our widespread agreement isn’t translating into actual change. In other words, the existence of issues with durable, supermajority agreement, which nevertheless never materializes into change, is a strong indicator that we need some significant structural reform. I have ideas for that, but first, let me try to convince you that these magical issues exist! 1. Congressional Term Limits From a 2023 Pew Research report: Term limits for members of Congress are widely popular with both Republicans and Republican-leaning independents (90%) and Democrats and Democratic leaners (86%). This isn’t new. A 2013 Gallup poll showed similar results, noting that these results were also “similar to those from 1994 to 1996 Gallup polls.” 2. Legalizing Marijuana From a 2023 Gallup poll, which has been polling this issue for decades: For the second straight year, majority support for legalization is found among all major subgroups, including by age, political party and ideology. 3. Digital Privacy From a 2023 Pew Research report: A majority of Democrats and Republicans say there should be more government regulation for how companies treat users’ personal information (78% vs. 68%). They also note that “[t]hese findings are largely on par with a 2019 Center survey that showed strong support for increased regulations across parties.” 4. Limiting Money in Campaigns From a 2024 American Promise poll: “A proposed constitutional amendment would allow Congress and the states to reasonably regulate and limit money in our campaigns and elections. Would you support or oppose this amendment?” 5. 
Universal Background Checks From a 2023 Fox News poll: 87% of Americans support requiring criminal background checks on all gun buyers, including 83% support from gun-owning households. This is a long-standing result; for example, this 2019 Quinnipiac poll: Voters support 93-6 percent "requiring background checks for all gun buyers." Support is 89-10 percent among Republicans and 87-12 percent among gun owners. Support for universal background checks has ranged from 88 to 97 percent in every Quinnipiac University poll since February 2013, in the wake of the Sandy Hook massacre. 6. Negotiating Medicare Prescription Drug Prices From a 2024 KFF poll: A large majority (85%) of voters say they support allowing the federal government to negotiate the price of some prescription drugs for people with Medicare. This includes at least three quarters of Republican (77%), independent (89%) and Democratic (92%) voters. 7. Congressional Stock Trading Ban From a 2023 Maryland School of Public Policy study: Overwhelming bipartisan majorities favor prohibiting stock-trading in individual companies by Members of Congress (86%, Republicans 87%, Democrats 88%, independents 81%) These issues aren’t anomalies. I stopped at seven issues, but there are many more like these, for instance, increasing veterans’ benefits (similarly supported by Pew/Gallup polling), and I’m sure there will be even more in the future. I recognize that some issues appear bipartisan at times, and then once a political party takes up the cause, they immediately become more partisan. Gun control, health care, and drugs, for example, have all been politicized in this manner. Yet, as noted above, supermajorities exist for specifically reasonable policies, including universal background checks, negotiating Medicare drug prices, and federal legalization of marijuana. 
Each issue has its own set of reasons for persisting stubbornly without change, typically a combination of special interests, regulatory capture, and general congressional dysfunction. I contend that, fundamentally, we, the people, need another path to change. (Incidentally, some original printed copies of the U.S. Constitution had a comma after we.) How can “we the people” change things? “We the people” leads off our Constitution, and yet it doesn’t provide a direct way for us to change it. I think it should. It would still have to be very hard to change the Constitution, but I think there should be a citizen path to do so, not just one that goes through elected officials. Sixteen states already allow for entirely citizen-initiated state-level constitutional amendments. Typically, a threshold of signatures (between 5% and 15% of the votes cast in a recent election) needs to be achieved to get on the ballot, and then a direct vote happens where, if a threshold is reached (50-60%), the amendment is ratified. I envision a similar process, but scaled up to the national level: a signature requirement met in enough states triggers getting a binding resolution on a federal election ballot. Then, if a supermajority of citizens in enough states vote for it, it is automatically ratified without additional involvement from Congress or state legislatures. Many people have proposed similar things in the past, and I plan to examine them and put forward a more concrete proposal in the future. As in the past, I don’t expect anything actually to happen, but it at least helps clarify my thinking, and you never know—maybe we’re approaching a moment when something can happen. 

    8 min
  7. 8 JUN

    U.S. AI-labor protests could eventually resemble the French Yellow-Vest protests

I do not yet have a well-formed opinion on the net impact of AI on job loss across different time scales. Will it be a large net negative, or will it be close to net neutral, similar to previous technology cycles? “Expert” estimates are all over the place, and while there is little job loss directly attributable to AI right now, circumstantial evidence is accumulating. That said, I believe it is clear that even in a net-positive scenario, many jobs will be displaced as new ones are created. For example, the World Economic Forum predicts the creation of 170 million new jobs worldwide by 2030. However, this growth is expected to be offset by the displacement of the equivalent of 8% (or 92 million) of current jobs, resulting in net growth of 7% of total employment, or 78 million jobs.

Supposing this is true and a similar scenario unfolds in the U.S., I still think this amount of displacement is likely to have significant negative consequences for the displaced. That is, the displaced individuals are unlikely to be the same individuals who secure the newly created jobs, and this will leave many with worse jobs or no job at all. For example, are cashiers and truck drivers going to get new, fancy AI jobs in another industry without significant help to do so? I highly doubt it. So, recognizing that significant job displacement is on the horizon for many industries, regardless of how the overall total nets out, it would be ideal to provide affected individuals with a softer landing than we’ve managed in the past. Unfortunately, the likely policy outcome is that we do absolutely nothing until there is a considerable backlash to AI, and even then, we probably still do nothing. The backlash would have to be big enough that politicians cannot ignore it, or it would have to be seen as a political opportunity, such that it becomes a platform for elections.
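As a sanity check, the arithmetic behind those WEF figures can be sketched in a few lines. The only derived quantity is the implied global job base, backed out from the stated 8% displacement share; all figures are in millions of jobs:

```python
# Sanity check of the WEF 2030 projection arithmetic (figures in millions of jobs).
new_jobs = 170                        # projected new jobs created by 2030
displaced = 92                        # projected jobs displaced, stated as 8% of current jobs
current_jobs = displaced / 0.08       # implied current global job base (1,150M)
net_growth = new_jobs - displaced     # net change in total employment

print(net_growth)                               # 78 (million), matching the WEF figure
print(round(net_growth / current_jobs * 100))   # 7 (% net growth of total employment)
```

The three headline numbers (170M created, 92M displaced, 78M net at ~7% growth) are mutually consistent.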
In a recent post titled Will the AI backlash spill into the streets?, I made the case that, among the numerous anticipated AI harms to society, job displacement stands alone in its potential to spill over into the streets in the form of sustained protests. I’m not saying it is likely to happen, and I hope it doesn’t, but I also think we could increase the probability of preventing it if we had a clearer picture of what we’re trying to avoid. At the end of that post, I pondered historical analogs: Recent protest movements seem more one-sided politically (e.g., climate change, Occupy Wall Street, Tea Party). Mid-century protests were arguably similar (the civil rights movement, Vietnam War protests), though they were sustained for much longer and ultimately swayed public opinion and accelerated change. A better parallel would be something that was clearly bipartisan from the start, more squarely about an economic issue, and resulted in swift reforms.

This line of reasoning led me to the Yellow Vest protests in France from 2018 to 2020, which I’ve come to believe could provide a decent historical analog of what could happen if AI job displacement reaches a critical mass in the coming years. The movement is recent, directly related to economic insecurity, and produced policy changes, though it brought significant disruption and violence as well. Ideally, we’d get the policy changes without the widespread disruption and violence. Now, I am not French, and I did not personally live through this protest movement. I remember watching and reading coverage at the time, but that was the extent of my knowledge before I delved more deeply into the subject over the past week. (Below, facts are from Wikipedia unless otherwise noted.)

What were the Yellow Vest protests about?

They began as a reaction to a fuel tax increase, but quickly expanded into a broader reaction against economic insecurity.
The movement never had clear leaders, nor was it tied to a particular political party. One widely circulated list of 42 demands compiled from online surveys went viral, including many specific economic demands such as rolling back the fuel tax, implementing minimum pensions, indexing wages to inflation, and providing jobs for the unemployed. It was somewhat disjointed, as it also demanded education and immigration reforms, among others; however, the central theme was economics and jobs.

How many people participated?

Approximately 3 million unique protesters participated, which is roughly 4% of France’s population of nearly 69 million. The U.S. has a population of around 341 million, so an equivalent number of unique protesters would be approximately 14 million. That’s the same order of magnitude as the George Floyd protests. The peak protest period was November-December 2018, with the highest single-day attendance of about 287,000; the U.S. equivalent would be about 1.5 million. Protests occurred across the entire country. After the initial period, weekend demonstrations continued for about a year and a half in total, until the pandemic essentially brought them to an end.

Did the protests turn violent?

Unfortunately, yes. At least a dozen people died in the protests, with another five losing their hands as a result of police grenades, and a reported twenty-three losing their eyesight. Minor injuries numbered in the thousands for both protesters and police as a result of clashes.

How much public support did it have?

Very high. Public support for the movement reached a high of 75% in the initial phase, and then declined over time.

What was the government’s reaction?

On December 10, 2018, about a month in, French President Macron gave a speech to the public, pledging a €100 increase in the monthly minimum wage, among other reforms. The speech was viewed live by more than 23 million people, or approximately a third of the entire population.
The U.S. equivalent would be around 80-85 million viewers. Concessions from the government ultimately totaled about €17 billion, which, converted to USD and scaled up to the U.S. economy, would be about $150 billion.

What was the demographic makeup of the protesters?

An academic study of the protesters found that approximately 47% were first-time protesters. The median income of the protesters was about 30% below the country’s median income. Participation cut across the political spectrum, and the researchers concluded: In short, this is indeed a revolt of the ‘people’…in the sense of the working class and the lower-middle class, people on modest incomes. Consequently, in several ways the gilets jaunes movement presents a different kind of challenge from the social movements of recent decades. In addition to its size, the strong presence of employees, people of modest educational qualifications and first-time demonstrators, and, above all, the diversity of their relationship to politics and their declared party preferences, have made roundabouts and tollbooths meeting places for a France that is not used to taking over public spaces and speaking out, as well as places for the exchange of ideas and the construction of collectives in forms rarely seen in previous mobilizations.

Why did they wear yellow vests?

As noted, the movement originated as a response to a proposed fuel tax increase. Separately, French law requires motorists to keep a yellow vest in their car to wear in case of an emergency, so many motorists had them readily available. A petition against the tax went viral, and then some associated viral videos called to “block all roads” and included the idea of wearing the yellow vests.

What are some parallels to AI?
First, while the backlash to AI is just getting started, as I noted in my previous post, the ingredients are there for a broad-based revolt that transcends political parties, if significant negative job impacts accumulate over several years:

* Cuts span industries, so outrage lands on both parties.
* Every income bracket—from cashiers to coders—takes a hit.
* Sudden, deep job cuts risk recession and years of high unemployment.

To be clear, we’re not there yet, and may never get there. Job losses may not materialize. Or they may unfold over a much longer period that doesn’t lend itself to banding together across industries. For example, current AI labor organizing remains limited to specific sectors like entertainment and dock work. But if AI-driven job displacement accelerates across multiple industries simultaneously—affecting many millions of workers over the next 3-5 years—the conditions could ripen for a Yellow Vest-style eruption.

Second, as the Yellow Vest movement showed, such an eruption doesn’t actually have to be sparked by a sharp increase in job losses. Instead, if there is enough downward pressure on wages, unrest from that pressure can build until a critical mass is reached. In other words, an eruption could still happen even if AI diffusion takes many years to touch many industries, as long as the impacts and resentment are sustained.

Third, if a critical mass is reached, that’s essentially a powder keg waiting to explode, which means any event could be the proximate cause that sparks it. Therefore, like the Yellow Vest movement, I could see a similar AI movement happening in a decentralized fashion. That is, one that begins with online viral calls to action, which then spill out into the streets.

What’s different?
One difference is that the Yellow Vest movement was primarily rooted in the lower income brackets, whereas AI has the potential to draw in affected people from across the income brackets, as noted above. Another difference is that the Yellow Vest spark was an immediate economic pain: the fuel tax increase. It’s not clear exactly what the equivalent would be for AI, though I suspect that if job displacement is vast enough, some match to light the powder keg will be found.
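The population-scaling arithmetic used throughout the Yellow Vest comparison is simple proportional scaling. A minimal sketch, using the approximate population figures from the post:

```python
# Scale French protest figures to U.S. population equivalents (proportional scaling).
FR_POP = 69_000_000    # France, approximate population
US_POP = 341_000_000   # United States, approximate population

def scale_to_us(french_figure: float) -> float:
    """Scale a French headcount to its U.S. population equivalent."""
    return french_figure * US_POP / FR_POP

print(round(scale_to_us(3_000_000) / 1e6, 1))   # 14.8 (million unique protesters)
print(round(scale_to_us(287_000) / 1e6, 1))     # 1.4 (million peak single-day attendance)
```

The same ratio underlies the other equivalences in the post (speech viewership, concession totals scaled instead by relative economy size).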

    14 min
  8. 31 MAY

    How science funding literally pays for itself

Previously, I gave an overview of eleven justifications for why public science funding is way too low: (1) longevity—living longer, (2) defense—wars of the future, (3) returns—pays for itself, (4) prosperity—long-term driver of productivity growth, (5) innovation—better everyday products, (6) resilience—insurance against future calamities, (7) jobs—creates some now, and better jobs in the future, (8) frontier—sci-fi is cool, (9) sovereignty—reduce single points of failure in the economy, (10) environment—new tech needed for climate change and energy efficiency, and (11) power—maintaining reserve currency, among other things.

The returns justification—that science funding can literally pay for itself—may not seem like a critical rationale because it isn’t directly about science itself, but it’s the one that should end the debate. That’s because if many of the other justifications are valid, then paying for itself makes increasing science funding a no-brainer, as it removes the downside (long-term cost). And crucially, research funding is the only policy with this pay-for-itself property that can scale to hundreds of billions in investment. But how exactly can a significant government expenditure actually pay for itself? That’s a bold claim that deserves a little unpacking.

Unpacking how research funding can literally pay for itself.

It works by growing the economy so that, over time, the government collects more in tax revenue than the initial expenditure. This is easier said than done, because the federal government currently collects only about 17% of GDP in federal revenue, so the growth must be substantial. Funding basic research, however, has been shown in many studies to achieve the growth rates necessary (more on that later).
Here’s roughly how it works. Let’s say we invest $500B more in federal science funding this year, which ultimately grows the economy by $1.5T a year (3 times the investment) once discoveries are fully commercialized. Suppose the federal government takes in 17% of that increase in GDP, or about $250B a year in extra federal revenue. Then, after roughly 15 years in this toy example, the increased federal revenues will more than pay for the initial $500B, even considering the time value of money (the discount rate). Of course, real-world models are more complicated, though those are roughly first-order accurate numbers for the U.S.

In the previous post I referenced above, I cited this IMF model (if you want to dig into it, search for “innovation policy mix,” with the underlying math in Online Annex 2.5). Here are their conclusions (note where it says “pay for themselves” near the end):

[T]he implied fiscal multiplier—the increase in output per dollar of fiscal cost—is 3 to 4 over the long term for the most effective tools (Online Annex 2.5). This implies that increasing fiscal support for R&D by 0.5 percentage point of GDP (or about 50 percent of the current level in OECD economies) through a combination of public research funding, grants to firms, and tax credits could raise GDP by up to 2 percent. The GDP impact reflects the complementarity between public and private research. The innovation policy mix also lowers the public-debt-to-GDP ratio by about 0.5 percentage point over an eight-year horizon, as the initial increase in debt from higher fiscal spending is gradually offset by higher GDP and revenue (Online Annex 2.5). However, while innovation policies can pay for themselves in the long term, countries with limited fiscal space may need to raise revenue or reprioritize other spending to finance the short-term costs of those policies (see Chapter 1).
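The toy payback arithmetic above can be made concrete with a small discounted-cash-flow sketch. The commercialization lag (8 years, echoing the roughly eight-year productivity delay reported in the R&D literature) and the 7% discount rate are illustrative assumptions of mine, not figures from the post:

```python
def payback_year(investment=500.0, annual_revenue=250.0,
                 lag_years=8, discount_rate=0.07):
    """Return the year in which cumulative discounted extra tax revenue
    first exceeds the up-front outlay (all dollar figures in $B)."""
    cumulative, year = 0.0, 0
    while cumulative < investment:
        year += 1
        # No extra revenue until discoveries are commercialized (assumed lag).
        benefit = annual_revenue if year > lag_years else 0.0
        cumulative += benefit / (1 + discount_rate) ** year
    return year

print(payback_year())  # 13 — the same ballpark as the ~15 years in the toy example
```

Varying the lag and discount rate moves the crossover by a few years in either direction, which is why the toy example says "roughly 15 years" rather than an exact figure.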
In other words, increasing research funding at the margin today is expected to lower the national debt tomorrow by growing the economy, such that tax revenues eventually accumulate enough to start paying down debt. Of course, this assumes an advanced economy (like the U.S.) implementing a comprehensive and well-crafted policy mix (more on that in future posts).

Research funding is the only scalable pay-for-itself policy.

No other government expenditure category can arguably be raised on the order of $500B and realistically be expected to pay for itself and start reducing the long-term public debt-to-GDP ratio within a couple of decades (acknowledging we’d hit diminishing returns at some point). For example, infrastructure spending doesn’t pay for itself; according to the Congressional Budget Office (CBO), it is expected to offset only about one-third of its cost (under deficit-neutral financing) or one-fourth (under debt financing), not return a multiple of the expenditure. Universal early childhood education has strong societal returns, but from a fiscal perspective it takes much longer to break even, if it ever does, because most of the fiscal benefits occur as people grow up and earn more in middle age, forty to fifty years down the line. Targeted preventive health measures, like childhood vaccines, are also found to pay for themselves, but they don’t cost much from the federal government’s perspective, so they can’t absorb that much extra investment. In other words, increasing federal basic research funding is the highest return on investment (ROI) at-scale budgetary expenditure we have available.

What are the returns for research funding?

No one knows precisely, of course, because the returns change based on the particular funding apparatus, and the marginal return is ultimately different from the average; that is, you get diminishing returns at some point.
In 2018, the CBO published this call for more research, noting that “although extensive data exist on federal spending for nondefense R&D…[a] convincing synthesis of the results from the literature has proved elusive.” Since then, researchers have been heeding this call, including Karel Mertens at the Federal Reserve Bank of Dallas. In a 2024 paper entitled “The Returns to Government R&D: Evidence from U.S.,” Mertens and Andrew Fieldhouse take a novel approach by examining the aftereffects of historical “shocks” in R&D funding across five federal agencies. They conclude:

[T]he implied rates of return to nondefense R&D are high. The reliable estimates range from around 140 percent to 210 percent… Our estimates also suggest that federal investments in nondefense R&D are self-financing from the perspective of the federal budget, at least in the long run. Assuming a return of 171 percent, a $1 long-run increase in government R&D capital would improve the budget as long as the additional tax revenue raised per dollar of additional GDP is at least 9 cents (δ/ρ = 0.16/1.71 = 0.09), which is substantially below the historical ratio of federal tax revenues to GDP.

Note that a 171% return means $1 in yields $2.71 out, which is close to the IMF assumption above. They also have a summary blog post about their paper that gives a bit more color on the methodology and conclusions:

We find that shocks to nondefense R&D appropriations lead to significant increases in various measures of productivity and scientific innovation, but only with a delay—consistent with implementation lags and a gradual diffusion of new knowledge… After about eight years, productivity starts to significantly and steadily increase. It continues rising and remains persistently elevated for at least 15 years after the increase in R&D appropriations.

Put differently, greater nondefense government R&D appears to spur gains in long-term productivity, thus increasing living standards.

What are the implications?
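The self-financing condition in the Mertens and Fieldhouse quote is easy to verify. With a depreciation rate δ of 0.16 and a return ρ of 1.71 (both from the quote), the breakeven tax take per dollar of extra GDP is δ/ρ, about 9 cents, well below the roughly 17% federal-revenue-to-GDP ratio mentioned earlier:

```python
delta = 0.16   # depreciation rate of government R&D capital (from the paper)
rho = 1.71     # estimated rate of return (171%, from the paper)

breakeven_tax_share = delta / rho    # minimum tax revenue needed per $ of extra GDP
historical_tax_share = 0.17          # approximate federal revenue as a share of GDP

print(round(breakeven_tax_share, 2))               # 0.09 -> about 9 cents per dollar
print(breakeven_tax_share < historical_tax_share)  # True: self-financing condition holds
```

Since the actual federal tax take is nearly double the breakeven share, the budget improves even if the true return sits at the low end of the 140-210 percent range.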
The implications align with my thesis that science funding was already way too low before we recently started going further in the wrong direction. Mertens and Fieldhouse agree:

In terms of policy implications, our finding of large returns to government R&D implies substantial underinvestment of public funds in nondefense R&D…

I want to start developing a “prosperity platform”: a set of policies that will collectively maximize our future prosperity. I currently believe that dramatically increasing nondefense federal funding of basic research is #1 on that list. We need to reverse the trend and greatly increase this investment in our future.

    10 min
