Gabriel Weinberg's Blog

Gabriel Weinberg

Tech, policy, and business insights from the DuckDuckGo founder and co-author of Traction and Super Thinking. gabrielweinberg.com

  1. 1D AGO

    Progress isn't automatic

Everyone living today has lived in a world where science and technology, globally, have progressed at a relatively high rate compared to earlier times in human history. For most of human history, a random individual could expect to use roughly the same technology in one decade that they did the previous decade. That’s obviously not the case today. In fact, most of us alive today have little to no personal experience with such a degree of technological stagnation. That’s a good thing, because long-term technological stagnation puts an upper bound on possible increases in our collective standard of living. From an earlier post:

[W]ithout new technology, our economic prosperity is fundamentally limited. To see that, suppose no breakthroughs occur from this moment onward; we get no new technology based on no new science. Once we max out the hours we can work, the education people will seek, and the efficiency with existing technology, then what? We’d be literally stuck. Fundamentally, if you don’t have new tools, new technology, new scientific breakthroughs, you stagnate.

That is, standard of living is fundamentally a function of labor productivity. To improve your standard of living, you need to make more money so you can buy better things, like housing, healthcare, leisure, etc. Once you get the best education you can, and maximize your hours, you are then limited in how much you can make based on how much you can produce, your output. How do you increase your output? Through better technology. At an economy-wide level, therefore, if we’re not introducing new technology, we will eventually hit a maximum output level we cannot push beyond.

This is a counterintuitive and profound conclusion that I think gets overlooked because we take technological progression for granted. Science and technology don’t just progress on their own. There were many periods in history where they essentially completely stagnated in parts of the world.
That’s because it takes considerable effort, education, organization, and money to advance science and technology. Without enough of any one of those ingredients, it doesn’t happen. And, if technological progression can go slower, perhaps it could also go faster, by better tuning the level of effort, education, organization, and money. For example, I’ve been arguing in this blog that the political debate now around science funding has an incredible amount of status quo bias embedded in it. I believe reducing funding will certainly slow us down, but I also believe science funding was already way too low, perhaps 3X below optimal levels.

Put another way, I think a primary goal of government and society should be to increase our collective standard of living. You simply can’t do that long-term without technological progression.

A couple of quick critiques I may tackle more in-depth in the future. Some people are worried that we’re just producing more stuff for the sake of producing more stuff, and that’s not really increasing the standard of living. First, with technological progression, the stuff gets both better and cheaper, and that is meaningful; for example, take medicines. Better medicines mean better health spans, and cheaper medicines mean more access to medicine. Second, people buy things, for the most part, of their own free will, and maybe people do want more stuff, and that’s not a bad thing in and of itself, as long as we can control for the negative externalities. Third, controlling for those negative externalities, like combating climate change effectively, actually requires new science and technology.

Another common critique is that technology causes problems, for example, privacy problems. As someone who started a privacy-focused company, I’ve held this concern for decades and continue to hold it. But we shouldn’t throw the baby out with the bathwater.
We need to do a more effective job regulating technology without slowing down its progression. Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    5 min
  2. AUG 23

    Musings on evergreen content

    FYI: This post is a bit meta—about writing/blogging itself—so it may not be your cup of tea. I’ve been having some cognitive dissonance about blogging. On the one hand, I don’t believe in doing things primarily for legacy purposes, since in the long arc of history, hardly anything is likely to be remembered or matter that much, and I won’t be here regardless. On the other hand, I also don’t like spending a lot of my writing time on crafting for the ephemeral—like a social media post—because it seems that same writing time could be spent on developing something more evergreen. I stopped blogging a decade ago following that same logic, focusing my writing time on books instead, which are arguably more evergreen than blogging. But, I’m obviously back here blogging again, and with that context, here are some dissonant thoughts I’m struggling with: Is the chasing of more evergreen content just a disguised form of chasing legacy? I think not because long-term legacy is about after you’re dead, and I’m not looking for something that will last that long. I’m more looking to avoid the fate of most content that has a half-life of one day, such that my writing can have more of an impact in my lifetime. That is, it’s more about maximizing the amount of impact per unit time of writing than any long-term remembrance. Is there more value in more ephemeral content than I previously thought? I’m coming to believe yes, which is why I’ve started blogging again, despite most blog posts still having that short half-life I’m trying to avoid. Specifically, I think there can be cumulative value in more ephemeral content when it: * Builds to something like a movement of people behind a thematic idea that can spring into action collectively at some point in the future, which is also why I started this up again on an email-first (push) platform. 
* Helps craft a more persuasive or resonant argument, given feedback from smaller posts, such as how comedians build up their comedy specials through lots of trial and error.

This last piece reminds me of Groundhog Day (the movie), where he keeps reliving his day to achieve the perfect day, much like you can keep refining your argument until it perfectly resonates. In any case, it’s hard to achieve occasional evergreen content if you don’t have an audience to seed it with and if you don’t have a fantastic set of editors to help craft it (which hardly anyone does except in a professional context). That is, putting out more ephemeral content can be seen as part of the process of putting out quality evergreen content, both in terms of increasing its quality (from continuous feedback) and in terms of increasing its reach (from continuous audience building).

Should I spend as much time editing these posts as I do? Probably not, given that it is very rare for one of these posts to go viral / become evergreen. The problem is, I like editing. However, trying to stick roughly to a posting frequency and using formats like this one (Q/A headings) really helps me avoid my over-editing tendencies.

What is the relationship between blogging frequency and evergreen probability? There’s no doubt that some blog posts are evergreen in that people refer back to them years after they were written (assuming they are still accessible). Does the probability of becoming evergreen have any relationship to the frequency of posting? You can make compelling arguments for both sides:

* If you post more, you have more chances to go viral, and most people in a viral situation don’t know your other posts anyway, so the frequency isn’t seemingly inhibiting any particular post from going viral.
* If you post less, you will likely spend more time crafting each post, increasing each post’s quality, and thus increasing its chances of virality, which I think (though I am not sure) is a necessary condition of evergreenness.

My current sense is that if you post daily, then you are unlikely to be creating evergreen content in those posts. Still, you can nevertheless have a significant impact (and perhaps more) by being top of mind in a faster-growing audience and influencing the collective conversation through that larger audience more frequently. That’s because there does seem to be a direct relationship between posting frequency and audience growth. However, posting daily is a full-time job in and of itself, and one I can’t personally do (since I already have a full-time job) and one I don’t want to do (since I don’t like being held to a schedule and also like editing/crafting sentences too much).

So, yes, I do think there is a relationship between frequency and evergreenness, and there is probably some sweet spot in the middle between weekly and monthly that maximizes your evergreen chances. You need to be top of mind enough to retain and build an audience (including through recommendations), you need enough posting to get thorough feedback to improve quality, and you need enough time with each post to get it to a decent quality in the first place. The full-timers also have other options, like daily musings paired with more edited weekly or monthly posts.

But, isn’t there a conflict between maximizing audience size and maximizing evergreen probability? Yes, I think there is. If you want to maximize audience size, the optimal post frequency is at least daily, vastly increasing the surface area with which your audience can grow, relative to a weekly posting schedule (or even less). But, that frequency, as previously stated, is not the optimal frequency for optimizing the probability of producing evergreen content. So, you have a tradeoff—audience size vs.
evergreen probability. And it is a deeper tradeoff than just frequency, since I also think the kind of content that best grows audience size is much shallower than the kind of content that is more likely to go evergreen. As noted, you can relax this tradeoff with more time input, which I don’t have. So, for right now, acknowledging this tradeoff, I think I’m going to stick to a few, deeper posts a month, maybe edited a bit less though. I’d rather build a tighter audience that wants to engage more deeply in ideas that can last than a larger audience that wants to consume shallower content that is more ephemeral. I hope you agree and I could also use more feedback! Thanks for reading.

    7 min
  3. AUG 10

    Rule #3: Every decision involves trade-offs

    One of the universal rules of decision-making is that every decision involves trade-offs. By definition, when making a decision, you are trading off one option for another. But other, less obvious trade-offs are also lurking beneath the surface. For example, whatever you choose now will send you down a path that will shape your future decisions, a mental model known as path dependence. That is, your future path is both influenced and limited by your past decisions. If you enroll your child in a language immersion school, you significantly increase the chances that they will move to an area of the world where that language is spoken, decades from now. Maybe you're OK with that possible path, but you should at least be aware of it. In a real sense, you aren’t just trading the immediate outcomes of one option for another, but one future path for another. From that perspective, you can consider path-related dimensions, such as how the various options vary in terms of future optionality or reversibility. It’s not that more optionality or reversibility is always better, as often they come at a cost, but just that these less-obvious trade-offs should be considered. A related, less-obvious trade-off involves opportunity cost. If you pick from the options in front of you, what other opportunities are you forgoing? By explicitly asking this question, more options might reveal themselves. And, you should also always ask specifically what opportunities you are forgoing by deliberating on the decision. Sometimes waiting too long means you miss out on the best option; other times, putting off the decision means more options will emerge. Again, one side isn’t always better, since every situation is different. But the opportunity costs, including from waiting, should be explored. Another common but less-obvious trade-off concerns the different types of errors you can make in a decision. 
From our book Super Thinking: [C]onsider a mammogram, a medical test used in the diagnosis of breast cancer. You might think a test like this has two possible results: positive or negative. But really a mammogram has four possible outcomes…the two possible outcomes you immediately think of are when the test is right, the true positive and the true negative; the other two outcomes occur when the test is wrong, the false positive and the false negative. These error models occur well beyond statistics, in any system where judgments are made. Your email spam filter is a good example. Recently our spam filters flagged an email with photos of our new niece as spam (false positive). And actual spam messages still occasionally make it through our spam filters (false negatives). Because making each type of error has consequences, systems need to be designed with these consequences in mind. That is, you have to make decisions on the trade-off between the different types of error, recognizing that some errors are inevitable. For instance, the U.S. legal system is supposed to require proof beyond a reasonable doubt for criminal convictions. This is a conscious trade-off favoring false negatives (letting criminals go free) over false positives (wrongly convicting people of crimes). To uncover other less obvious trade-offs, you can brainstorm and explicitly list out (for example, in a spreadsheet) the more subtle dimensions on which options differ. Think of a comparison shopping page that compares numerous features and benefits. The obvious ones — such as cost — may immediately come to mind, but others may take time to surface, like how choosing one option might impact your personal quality of life in the future. The point is not to overthink decisions, but to be conscious about inherent trade-offs, especially the less obvious yet consequential ones. 
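The four outcomes described above can be made concrete in code. Below is a minimal toy sketch of a spam-filter-style classifier; the scores, messages, and thresholds are all made up for illustration, not any real filter's logic:

```python
# Toy illustration of the four outcomes of a binary judgment (spam filter).
# Lowering the threshold catches more spam (fewer false negatives) at the
# cost of flagging more legitimate mail (more false positives).

# (spam_score, actually_spam) pairs -- hypothetical data.
messages = [(0.9, True), (0.6, True), (0.4, False), (0.2, False), (0.7, False)]

def outcomes(threshold: float) -> dict:
    """Count the four possible outcomes at a given flagging threshold."""
    counts = {"true_pos": 0, "true_neg": 0, "false_pos": 0, "false_neg": 0}
    for score, is_spam in messages:
        flagged = score > threshold
        if flagged and is_spam:
            counts["true_pos"] += 1
        elif flagged and not is_spam:
            counts["false_pos"] += 1   # niece photos land in the spam folder
        elif not flagged and is_spam:
            counts["false_neg"] += 1   # actual spam reaches the inbox
        else:
            counts["true_neg"] += 1
    return counts

# A strict threshold trades false positives for false negatives, and vice versa.
print(outcomes(0.8))  # few flags: more false negatives slip through
print(outcomes(0.3))  # many flags: more false positives get caught up
```

Sliding the threshold is exactly the conscious trade-off the legal-system example makes: "beyond a reasonable doubt" is a very high threshold, deliberately accepting more false negatives to minimize false positives.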
Just as I think you should take the time to write out assumptions explicitly, I also believe you should do the same for trade-offs. See other Rules.

The Transporter (2002)

Thanks for reading!

    4 min
  4. JUL 31

    9 ways DuckDuckGo's Search Assist Differs from Google’s AI Overviews

At DuckDuckGo, our approach to AI is to only make AI features that are useful, private, and optional. If you don’t want AI, that’s cool with us. We have settings to turn off all of our AI features, and even a new setting to help you avoid AI-generated images in our search results. At the same time, we know a lot of people do want to use AI if it is actually useful and private (myself included). Our private chat service at duck.ai has the highest satisfaction ratings we’ve seen in a new feature, and Search Assist, our equivalent of Google’s AI Overviews, is currently our highest-rated search feature. Our goal with Search Assist is to improve search results, not to push AI. We’ve been continually evolving it in response to feedback, seeking better UX, and here’s how we’re thinking about that UX right now, relative to Google’s AI Overviews:

* You can turn Search Assist off or turn it up—your preference.

* When it does show, Search Assist keeps vertical space to a minimum so you can still easily get to other search results.

* The initial Search Assist summary is intentionally short, usually two brief sentences. This brevity keeps hallucinations to a minimum since less text means less surface area to make things up. You also get the complete thought without having to click anything. However, you can still click for a fuller explanation. This is a subtle but important distinction: clicking more on Google is getting more of the same, longer summary; clicking more on DuckDuckGo is getting a new, completely independent generation.

* You can use the Assist button to either generate an answer on demand if one isn’t showing automatically, or collapse an answer that is showing to zero vertical space.

* When we don’t think a Search Assist answer is better than the other results, we don’t show it on top. Instead, we’ll show it in the middle, on the bottom, or not at all. This flexibility enables a more fine-tuned UX.
* All source links are always visible, not hidden behind any clicks or separated from the answer. We’ve also been keeping sources to a minimum (usually two) to both increase answer quality (since LLMs can get confused with a lot of overlapping information) and increase source engagement.

* Our thumbs up/down is also visible by default, not hidden behind a click. This anonymous feedback is extremely valuable to us as a primary signal to help us find ways to improve.

* To generate these answers, we have a separate search crawling bot for Search Assist answers called DuckAssistBot that respects robots.txt directives. By separating DuckAssistBot from our normal DuckDuckBot, and unlike Google, we allow publishers to opt out of just our Search Assist feature.

* Like all of our search results, Search Assist is anonymous. We crawl sites and generate answers on your behalf, not exposing your personal information in the process or storing it ourselves.

I’m sure our Search Assist UX will evolve further from here as we’re actively working on it every day. For example, we’re working now on making it easier to enter a follow-up question in-line, which allows you to more easily stay in context when entering your question. That is to say, the above is not set in stone and the answers for these queries will surely change over time, but I hope this post helps illustrate how we’re approaching Search Assist to be consistent with our overall approach to AI: to be useful, private, and optional. Feedback is welcomed! Thanks for reading!

    5 min
  5. JUL 26

    The key to increasing standard of living is increasing labor productivity

Standard of living doesn’t have a strictly agreed-upon definition, but for the sake of anchoring on something, let’s use “the level of income, comforts, and services available to an individual, community, or society” (Wikipedia). Gross Domestic Product (GDP) per capita, that is, the average economic output per person in a country, is often used as a proxy metric to compare the standard of living across countries.

Of course, this proxy metric, being solely about money, doesn’t directly capture non-monetary aspects of standard of living associated with quality of life or well-being. However, most of these non-monetary aspects are tightly correlated with GDP per capita, rendering it a reasonable proxy. Our World in Data features numerous plots of such measures against GDP per capita. Here are a few of the ones people tend to care about most:

These measures are clearly tightly correlated to GDP per capita, as are common aggregate measures such as the UN’s Human Development Index, which combines lifespan, education levels, and GDP per capita. These tight correlations are somewhat intuitive because GDP per capita by definition means more money to buy things, and that includes buying more healthcare, education, leisure time, and luxuries, which one would expect to be correlated to healthspan, life satisfaction, and other measures of quality of life and well-being.

Nevertheless, at some level of GDP per capita, you reach diminishing returns for a given measure, and we would then expect the correlation to cease for that measure. For example, here is access to clean (“improved”) water sources, which maxes out at middle incomes once it reaches 100%, since you can’t go higher than 100% on this measure. However, we haven’t seen that yet for the most important measures like life expectancy, the poverty line, and self-reported life satisfaction.
All of those can go higher still, and are expected to do so with further increases to GDP per capita, certainly for lower GDP-per-capita countries (climbing up the existing curve) but also for the U.S. (at or near the frontier). In other words, with enough broad-based increases in income, many are lifted out of poverty, the middle class is more able to afford much of the current luxury and leisure of the rich, and the rich get access to whatever emerges from new cutting-edge (and expensive) science and technology.

We should continue to watch and ensure these correlations remain tight. But as long as they remain tight, I think it is safe to say that we would expect increases in standard of living to be tightly correlated with increasing GDP per capita. While there are other necessary conditions, like maintaining rule of law, broadly giving people more money to buy better healthcare, education, and upgraded leisure time should increase standard of living. That part is pretty intuitive.

What’s not intuitive is how to do so. You can’t just print money, because that results in inflation. It has to be increases in real income, that is, income after inflation. So, how do you do that? If you’re a country where a large percentage of the working-age population doesn’t currently have a job, the easiest way is to find those people jobs. Unfortunately, that won’t work for the U.S. anymore, since most everyone who wants a job has a job. It worked for a while through the 1960s, 70s, and 80s as ever greater percentages of women entered the workforce, but then plateaued in the 1990s.

You could try to get people with jobs to work more hours (and therefore make more money per person), but that also doesn’t work for the U.S., since we already work a lot relative to other frontier countries, and as people get more money they seem to want to work less, not more. For example, in the U.S. we’re working far fewer hours per worker than we did in 1950, let alone 1870.
This makes intuitive sense since quality of life and well-being can’t get to the highest levels if you’re working all of the time. That leaves upgrading the jobs people already have in the form of higher income for the same amount of hours worked. And this means, by definition, increasing labor productivity, which is the amount of goods and services produced per hour of labor. To pay people more without triggering inflation, they also have to produce more output. That’s the counterintuitive piece and also it is our biggest opportunity for higher GDP per capita, and therefore higher standard of living. OK, but how do you increase labor productivity? I’m glad you asked. There are three primary ways, but only one has unbounded upside. Can you guess what it is? First, you can educate your workforce more, providing them with, on average, better skills to produce higher quality output per hour worked, a.k.a. investment in human capital. The U.S. is currently going in the wrong direction on this front when you look at the % of recent high-school graduates enrolled in “tertiary” education (which includes vocational programs). If we had continued to make steady progress through the 2010s and 2020s, we would be headed towards diminishing returns on this front. While it will surely be good to increase this further to get those gains—and there is more you can do than just tertiary education such as on-the-job training—like we saw earlier with access to clean water, there is effectively a max out point for education in terms of its effect on GDP per capita. Think of a point in the future where everyone who is willing and able has a college degree, or even a graduate degree. Second, you can buy your workforce more tools, equipment, and facilities to do their job more efficiently, a.k.a. investment in physical capital. This isn’t inventing new technology, just spending more money to get workers access to the best existing technology. 
Again, you clearly reach diminishing returns here too, that is, another max out point, as you buy everyone the best tech. Think of the point where everyone has a MacBook Pro with dual Studio Displays—or whatever the equivalent is in their job—to maximize their productivity. Third, and the only way that doesn’t have a max out point, is to invent new technology that enables workers to do more per hour. These are better tools than the existing tools on the market. Think of upgrading to the latest software version with updated features that make you a bit more productive. Or, more broad-based: Think of how worker productivity increased in construction with the introduction of power tools and heavy machinery or in offices with the introduction of computers and the Internet. We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. (The Great Race) AI is likely one of these leaps, but by investing much more in basic research we can make higher labor productivity growth more continuous instead of the bumpy road it has recently been on. These leaps don’t come out of nowhere. They require decades of investment in research, and that investment requires a decent level of government investment at the earliest stages. This was the case for AI, as it was for the Internet, and as it is for life-saving drugs. This is actually good news, since it means we have a lever to pull to increase labor productivity that we’re not currently fully pulling: increase federal investment in basic research. The level we’ve ended at today is somewhat arbitrary, an output of a political process that wasn’t focused on increasing standard of living. In any case, I estimate at the bottom of this post that we’re off by about 3X. 
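The three levers discussed above fit the standard accounting identity: GDP per capita = (output per hour) × (hours per worker) × (workers per capita). Here is a minimal sketch with made-up numbers, just to show why productivity is the lever that keeps working once employment and hours plateau:

```python
# Toy decomposition of GDP per capita; all figures are hypothetical.

def gdp_per_capita(output_per_hour: float, hours_per_worker: float,
                   employment_rate: float) -> float:
    """GDP per capita = output/hour x hours/worker x workers/population."""
    return output_per_hour * hours_per_worker * employment_rate

baseline = gdp_per_capita(output_per_hour=50.0,    # $ of output per hour worked
                          hours_per_worker=1800,   # annual hours per worker
                          employment_rate=0.62)    # workers / population

# With employment maxed out and hours trending down, only the first
# factor -- labor productivity -- is left to raise:
more_productive = gdp_per_capita(55.0, 1800, 0.62)  # +10% output per hour

print(f"baseline:          {baseline:,.0f}")
print(f"productivity +10%: {more_productive:,.0f}")
```

Education and capital investment raise the first factor too, but with the max-out points described above; new technology is the only input to it without a ceiling.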
If you want another view on this topic, here is a good post from the International Monetary Fund (IMF):

[I]mprovements in living standards must come from growth in TFP [Total Factor Productivity] over the long run. This is because living standards are measured as income per person—so an economy cannot raise them simply by adding more and more people to its workforce. Meanwhile, economists have amassed lots of evidence that investments in capital have diminishing returns. This leaves TFP advancement as the only possible source of sustained growth in income per person, as Robert Solow, the late Nobel laureate, first showed in a 1957 paper. TFP growth is also the answer to those who say that continued economic growth will one day exhaust our planet’s finite resources. When TFP improves, it allows us to maintain or increase living standards while conserving resources, including natural resources such as the climate and our biosphere.

Or, as Paul Krugman put it even more succinctly in his 1990 book The Age of Diminished Expectations:

Productivity isn’t everything, but, in the long run, it is almost everything. A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker. —Paul Krugman

Thanks for reading!

    10 min
  6. JUL 20

    Most chatbot users miss this key setting, so we moved it up front

The magic of chatbots is that they make it seem like you’re chatting with a real person. But the default personality of this “person” isn’t one I particularly enjoy talking to, and in many cases I find downright annoying. Based on feedback from duck.ai users—who rely on our service for private access to popular chatbots—I know I’m not alone. What people want in a chatbot’s personality varies widely: I cringe at extra exclamation points and emojis, while others love them. I also find the default output too verbose, whereas some appreciate the added exposition. Of course, I could tell the chatbot every time to keep its replies short and emoji-free, but pasting that constantly is enough friction that I rarely bother. OpenAI and Anthropic do offer customization options in their settings, yet those options are buried and feature intimidating blank text boxes, such that I highly suspect most people never touch them.

Recently, we’ve been considering this issue in the context of duck.ai. I’m sure what we’ll do here will continue to evolve as we get feedback, but to get started we’ve just introduced a much easier-to-find customization dialog. Not only does it make the responses feel better, it can make the actual content significantly better as well. As you can see in the video, it provides customization guidance through drop-downs and fields, including options to customize:

* The tone of responses
* The length of responses
* Whether the chatbot should ask clarifying questions
* The role of the chatbot (for example, teacher)
* Your role (for example, student)
* The nickname of the chatbot
* Your nickname

All fields are optional, and you can also add additional info if desired, as well as inspect what the instructions will look like in aggregate. If you select role(s), then there are detailed instructions that get created specifically for those.
Here’s an example using the ‘Tech support specialist’ role, which asks you clarifying questions to drill down faster to a solution vs. the more generic (and lengthier) default response. Customized response: Generic response: All of this works through the “system prompt.” In an excellent post titled AI Horseless Carriages, Pete Koomen explains system prompts: LLM providers like OpenAI and Anthropic have adopted a convention to help make prompt writing easier: they split the prompt into two components: a System Prompt and a User Prompt, so named because in many API applications the app developers write the System Prompt and the user writes the User Prompt. The System Prompt explains to the model how to accomplish a particular set of tasks, and is re-used over and over again. The User Prompt describes a specific task to be done. When you set the duck.ai customization options, the instructions that are created are appended to the default system prompt, which is repeated (in the background) when you start a new conversation. That is, the instructions will apply to the current conversation as well as subsequent ones, until you change them again. Like everything we do at DuckDuckGo, these system prompt tweaks are also private. They are stored locally on your device only, along with your recent chats (if you choose to save them). When we ultimately add an optional ability to sync settings and chats across devices, it will be part of our end-to-end encrypted sync service, which DuckDuckGo cannot decrypt. And Duck.ai itself anonymizes chats to all model providers, doesn’t store chats itself, and ensures your chats aren’t used for AI training. More at the Duck.ai Privacy Policy. Our approach to AI is to make features that are useful, private, and optional. We believe these new duck.ai customization options tick all three boxes, but please try them out and let us know what you think. As always, please feel free to leave comments here. 
However, the best method for sharing feedback about duck.ai is to do so directly through the product, as it will then be shared with the entire team automatically. Thanks for reading. Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    5 min
  7. JUL 8

    The debate over a potential economic bump from AI points to a much larger economic opportunity.

    The decade surrounding the Internet boom saw a marked increase in U.S. total factor productivity (TFP) growth—the economic efficiency gains beyond just adding more workers or capital—similar to higher levels seen before 1973. However, it then declined again from 2005 onward back to doldrums levels, which brings us to the present.

    Now, people are starting to debate a similar potential AI productivity bump unfolding over the next decade. However, this debate raises an even bigger question: How do we achieve sustained higher productivity growth forever, not just in occasional blips and bumps? I believe this is the most important economic question because higher productivity growth is the key factor that drives long-term higher standards of living. Here’s a concise, understandable explanation from the Bureau of Labor Statistics as to why:

    How can we achieve a higher standard of living? One way might simply be to work more, trading some free time for more income. Although working more will increase how much we can produce and purchase, are we better off? Not necessarily. Only if we increase our efficiency—by producing more goods and services without increasing the number of hours we work—can we be sure to increase our standard of living.

    To continuously produce more without increasing the number of hours worked per person, we need to continually develop better tools. To continually develop better tools, we need access to increasingly better science and technology. To get access to increasingly better science and technology, we need sustained higher investment in basic research.

    Think of how worker productivity increased in construction with the introduction of power tools and heavy machinery or in offices with the introduction of computers and the Internet. We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. 
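    A quick back-of-the-envelope calculation shows why sustained growth rates matter so much more than one-off bumps. The rates below are round numbers chosen for illustration, not official TFP figures:

    ```python
    # Illustration: how small differences in annual productivity growth
    # compound over decades. Rates are illustrative, not measured TFP.

    def compound(rate, years):
        """Output per hour worked after `years` of constant annual growth."""
        return (1 + rate) ** years

    years = 50
    low, high = 0.01, 0.02  # 1% vs. 2% sustained annual growth

    # After 50 years, 1% growth multiplies output per hour by ~1.64x,
    # while 2% growth multiplies it by ~2.69x.
    print(round(compound(low, years), 2))   # → 1.64
    print(round(compound(high, years), 2))  # → 2.69
    ```

    An extra percentage point of sustained growth roughly adds another two-thirds of today's entire output per hour by mid-century, which is why the question of how to sustain the higher rate dwarfs any single decade's bump.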
    (The Great Race)

    That is, to ensure higher productivity growth, we must continually invest sufficiently in the next set of technologies that will generate and sustain these higher productivity levels. That requires increased investment in basic research. But why can’t private industry do it all?

    The private sector…is excellent at taking established scientific breakthroughs and turning them into products over a few years. That’s because there is a clear profit motive in doing so. However, it is not as great at coming up with those scientific breakthroughs in the first place or commercializing them on much longer timescales like decades, where the profit motive is significantly reduced. This activity still generally involves some government-funded research in the early stages. (The Great Race)

    Yes, this includes AI too. A good post on this by Mark Riedl titled Visualizing the Influence of Federal Funding on the AI Boom takes seven key papers in AI, including Attention is All You Need (2017), and then traces which of their references explicitly acknowledge federal funding:

    * ~18% of papers referenced by these 7 industry papers have acknowledged US federal funding.
    * ~24% of papers referenced have US university authors.
    * ~20% of papers referenced are industry lab-authored.
    * ~42% of papers referenced do not have any industry authors.

    The AI boom did not happen in an industry vacuum. As with all research, it was an accumulation of knowledge, much of which was generated in university settings. There is a growing narrative that academia isn’t important to AI anymore and that US federal funding has no role in the AI boom. It’s more correct to say that the AI boom could not have happened without US federal funding. The same was true with the Internet boom. The same is also true in healthcare, for example, tracing federal funding through the most transformational drugs. And yet, as I previously explored, science funding was already way too low before recent cuts. 
    Quite simply, since the 1960s, we haven’t invested sufficiently in basic research to create sustained higher productivity. But we could change that. And, as I've also previously detailed, this should be a no-brainer since, if done right, it literally pays for itself by expanding the economy, generating higher tax revenues, and ultimately lessening the debt-to-GDP ratio.

    So while economists debate whether AI will boost productivity by 0.5% or 1.5% for the next decade, we should be asking: How can we better invest in basic research today to ensure we’re still growing faster in 2050? Short-term productivity bumps are a boon. But if we want sustained productivity gains, we need sustained productivity investment.

    6 min
  8. JUN 26

    States should be allowed to regulate AI because realistically Congress won't

    A provision in the “Big Beautiful Bill” (the massive federal spending and policy package currently being negotiated) aims to stop all state-level AI regulation for the next ten years.

    Now, I’m not anti-AI. My company (DuckDuckGo) offers our own private chatbot service at Duck.ai, we now generate millions of anonymous AI-assisted answers daily for our private search engine, and we are working on more AI features in our browser. We’re investing heavily in AI because, to achieve our aim of protecting people online as comprehensively as possible, we must offer a compelling alternative to the most commonly used untrusted online workflows, most notably searching, browsing, and now chatting.

    At the same time, I believe the AI backlash is real and growing, which is why I’m thinking and writing about it, and why we’re designing all our AI features to be useful, private, and optional. And, the backlash is real for good reason. AI poses a wide range of risks, including massive job displacement, extensive privacy concerns, and, at the extreme, existential risks.

    That’s why this “pause” is particularly dangerous. I’m not taking a position here on which risks should be regulated, when, or how. More on that in future posts. But I am saying that AI will require at least some well-crafted regulation to address some of its risks over the next ten years, and yet Congress has proven incapable of taking action.

    The states, on the other hand, do take action. Look no further than privacy law as a close parallel. It’s 2025, and the United States still lacks a comprehensive federal privacy law. The International Association of Privacy Professionals (IAPP) now tracks 144 countries with such laws (as of Jan 2025). The U.S. is a clear outlier: the most populous countries without a comprehensive national privacy law include the U.S., Pakistan, Bangladesh, Iran, and Iraq. This isn’t for lack of trying. 
    Numerous bills have been proposed, and many hearings have been held, yet nothing has even come close to passing, not even after Snowden or Cambridge Analytica. Unrelated to privacy, Congress has proven unable to legislate effectively, and while we should work to fix that independently, we can’t wait for it.

    Meanwhile, IAPP tracks 19 states that have managed to pass general privacy laws to protect consumers to some extent, including the two most populous states, California and Texas. Despite fears that a “patchwork” of state laws would wreak havoc on innovation by going too far, they haven’t. Innovation hasn’t stalled, and neither have big-tech privacy violations. That’s because state privacy laws, while better than nothing, in my opinion don’t nearly go far enough, which is why we (DuckDuckGo) still need to develop dozens of overlapping protections to keep consumers safe online. Meta’s latest AI-chatbot leak foreshadows a bleak AI-privacy future with literally no regulations in sight.

    State laws also provide Congress with both a blueprint for action and further incentive to enact laws. Nothing prevents a future AI bill from overriding (preempting) state AI laws. Of course, for that to happen, Congress would need to pass general tech legislation. I would love to witness that and have been working to help make it happen, but I am also realistic about Congress’s current capacity to regulate tech.

    Finally, the current proposal would seemingly preempt the most protective provisions of existing state privacy laws. That would be a giant step backward for online privacy. We helped pioneer Global Privacy Control, an opt-out signal that is on by default in our browser and extension, which has legal effect in California and other jurisdictions. 
    Senator Maria Cantwell, ranking member on the Senate Commerce Committee, notes that the bill would nullify provisions of many state privacy laws that “give consumers the right to opt-out of profiling.”

    In the last 25 years, states filled the privacy law vacuum left by Congress; let them do the same for AI. We should not silence states from protecting their citizens from dangerous new risks for a decade. And, if Congress gets its act together, then great—those future bills can preempt any conflicting state provisions.

    5 min
