Gabriel Weinberg's Blog

Gabriel Weinberg

Tech, policy, and business insights from the DuckDuckGo founder and co-author of Traction and Super Thinking. gabrielweinberg.com

  1. 2 days ago

    A U.S.-China tech tie is a big win for China because of its population advantage

    China’s population is declining, but UN projections show it will remain at least twice the size of the U.S. for decades.

    So what? Population size matters because economy size (GDP) matters, and GDP = population × output per person. For example:

    * 100 million people × $50,000 per year = a $5 trillion economy
    * 1 billion people × $50,000 per year = a $50 trillion economy

    Then why hasn’t China’s economy already dwarfed America’s? It’s because China’s output per person is still much lower than America’s. China has about 4× the people but about ¼ the output per person. 4 × ¼ = 1. This means that by any standard GDP measure, such as market exchange rates or Purchasing Power Parity, the two economies are in the same ballpark today.

    OK, but why is China’s economic output per person so much lower than America’s? A primary reason is that large swaths of its workforce aren’t yet at the technological frontier. About 23% of Chinese workers are in agriculture vs. about 1½% in the U.S. However, if China continues to educate its population, mechanize its workforce, and diffuse technology across it, that gap will continue to narrow and per-worker output will continue to climb. Only a decade ago, over 30% of China’s workforce was in agriculture, and per-person output has grown much faster than in the U.S. for decades.

    Technology is the driving force enabling China to catch up with the U.S. in economic output per person. As long as China diffuses increasingly sophisticated technology through its workforce significantly faster than the U.S. does, it will keep raising output per person relative to the U.S., growing its economy faster. Diffusion is not automatic; it depends on continued private-sector dynamism and sound policy. It isn’t guaranteed, but it is certainly plausible, if not likely.

    Put another way, a U.S.-China tech tie is a big win for China because of its population advantage. China doesn’t need to surpass us technologically; it just needs to implement what already exists across its massive workforce. Matching us is enough for its economy to dwarf ours. If per-person output were equal today, China’s economy would be over 4× America’s because China’s population is over 4× the U.S.’s. That exact 4× outcome is unlikely given China’s declining population and the time it takes to diffuse technology, but 2 to 3× is not out of the question. China doesn’t even need to match our per-person output: its population will be over 3× ours for decades, so reaching ⅔ would still give it an economy twice our size, since 3 × ⅔ = 2.

    Some may recall similar predictions about Japan in the 1980s that never materialized. But China is fundamentally different: Japan’s population peaked at less than ½ the U.S.’s, while China’s is over 4× ours. Japan’s workforce had already reached the technological frontier when it stalled out, while China is still far behind, with massive room to catch up.

    And what does China win exactly? China wins a much bigger economy. With an economy a multiple of the U.S.’s, it’s much easier to outspend us on defense and R&D, since budgets are typically set as a share of GDP. Once China’s economy is double or triple ours, trying to keep up would strain our economy and risk the classic guns-over-butter trap. (This is the same trap that contributed to the Soviet Union’s collapse: too much of its economy steered toward military ends.) Alliances could help offset raw population scale, but only if we coordinate science, supply chains, and procurement, which we have not achieved at the needed scale.
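    To make the arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The population and output-per-person numbers are rounded, illustrative stand-ins (not official statistics), chosen only to mirror the rough ratios described in the post.

```python
# Illustrative, rounded numbers -- not official statistics.
def gdp(population, output_per_person):
    return population * output_per_person

us_population = 340e6            # ~340 million
china_population = 1.4e9         # ~1.4 billion, a bit over 4x the U.S.
us_output_per_person = 80_000    # illustrative GDP per capita, in dollars

# Today: ~4x the people at ~1/4 the output per person -> roughly same-size economies.
china_today = gdp(china_population, us_output_per_person / 4)
print(china_today / gdp(us_population, us_output_per_person))        # ~1.0

# The post's scenario: reaching ~2/3 of U.S. output per person with 3-4x the
# population already yields an economy a multiple of ours.
china_catch_up = gdp(china_population, us_output_per_person * 2 / 3)
print(china_catch_up / gdp(us_population, us_output_per_person))     # ~2.7

# Defense and R&D budgets are typically set as a share of GDP, so the same
# budget share applied to a 2-3x economy means 2-3x the spending.
defense_share = 0.03  # 3% of GDP, illustrative
print(china_catch_up * defense_share / (gdp(us_population, us_output_per_person) * defense_share))
```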
    What if China then starts vastly outspending us on science and technology and becomes many years ahead of us in future critical technologies, such as artificial superintelligence, energy, quantum computing, humanoid robots, and space technology? That’s what the U.S. was to China just a few decades ago, and China runs five-year plans that prioritize science and technology.

    What can we do about it? Our current per-person output advantage is not sustainable unless we regain technological dominance. By dominance, I don’t mean a few months ahead, like today’s AI cycles. I mean many years ahead in developing, diffusing, and commercializing frontier science and technology.

    My takeaway: we need to recognize how quickly we are losing our privileged position to China. If its economy doubles or triples ours, it can outspend us to lock in technological and military dominance. That may not happen, but we shouldn’t bet on it. Instead, we should materially increase effective research funding and focus on our own technology diffusion plans to upgrade our jobs and raise our living standards.

    What about AI automation? The net job effect of AI automation is hotly debated, but no outcome changes this calculus. If employment levels remain about the same, then the status quo population advantage remains. If net jobs drop dramatically due to an AI-dominated economy, staying ahead in AI systems becomes even more important. So, either way, doing more effective research and development is critical.

    This should be the most important and bipartisan political issue. Research and technology diffusion isn’t everything, but it is the cornerstone of future prosperity. If we don’t get it right, we definitely lose, and we’re currently not getting it right.

    Thanks for reading. Subscribe for free to receive new posts or get the audio feed. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    7 min
  2. September 6

    AI surveillance should be banned while there is still time.

    All the same privacy harms present with online tracking are also present with AI, but worse.

    While chatbot conversations resemble longer search queries, chatbot privacy harms have the potential to be significantly worse because the inference potential is dramatically greater. Longer input invites more personal information to be provided, and people are starting to bare their souls to chatbots. The conversational format can make it feel like you’re talking to a friend, a professional, or even a therapist. While search queries reveal interests and personal problems, AI conversations take that specificity to another level and, in addition, reveal thought processes and communication styles, creating a much more comprehensive profile of your personality.

    This richer personal information can be more thoroughly exploited for manipulation, both commercially and ideologically, for example, through behavioral chatbot advertising and models designed (or themselves manipulated through SEO or hidden system prompts) to nudge you towards a political position or product. Chatbots have already been found to be more persuasive than humans and have caused people to go into delusional spirals as a result. I suspect we’re just scratching the surface, since they can become significantly more attuned to your particular persuasive triggers through chatbot memory features, where they train and fine-tune based on your past conversations, making the influence much more subtle. Instead of an annoying and obvious ad following you around everywhere, you can have a seemingly convincing argument, tailored to your personal style, with an improperly sourced “fact” that you’re unlikely to fact-check, or a subtle product recommendation you’re likely to heed.

    That is, all the privacy debates surrounding Google search results from the past two decades apply one-for-one to AI chats, but to an even greater degree. That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible.

    But unfortunately, such protected chats are not yet standard practice, and privacy mishaps are mounting quickly. Grok leaked hundreds of thousands of chatbot conversations that users thought were private. Perplexity’s AI agent was shown to be vulnerable to hackers who could slurp up your personal information. OpenAI is openly talking about its vision for a “super assistant” that tracks everything you do and say (including offline). And Anthropic is going to start training on your chatbot conversations by default (previously the default was off). I collected these from just the past few weeks!

    It would therefore be ideal if Congress could act quickly to ensure that protected chats become the rule rather than the exception. And yet, I’m not holding my breath, because it’s 2025 and the U.S. still doesn’t have a general online privacy law, let alone privacy enshrined in the Constitution as a fundamental right, as it should be. However, there does appear to be an opening right now for AI-specific federal legislation, despite the misguided attempts to ban state AI legislation.

    Time is running out because every day that passes further entrenches bad privacy practices. Congress must move before history completely repeats itself and everything that happened with online tracking happens again with AI tracking. AI surveillance should be banned while there is still time.
No matter what happens, though, we will still be here, offering protected services, including optional AI services, to consumers who want to reap the productivity benefits of online tools without the privacy harms. Thanks for reading! Subscribe for free to get new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    4 min
  3. August 30

    Progress isn't automatic

    Everyone living today has lived in a world where science and technology, globally, have progressed at a relatively high rate compared to earlier times in human history. For most of human history, a random individual could expect to use roughly the same technology in one decade that they did the previous decade. That’s obviously not the case today. In fact, most of us alive today have little to no personal experience with such a degree of technological stagnation.

    That’s a good thing, because long-term technological stagnation puts an upper bound on possible increases in our collective standard of living. From an earlier post:

    “[W]ithout new technology, our economic prosperity is fundamentally limited. To see that, suppose no breakthroughs occur from this moment onward; we get no new technology based on no new science. Once we max out the hours we can work, the education people will seek, and the efficiency with existing technology, then what? We’d be literally stuck. Fundamentally, if you don’t have new tools, new technology, new scientific breakthroughs, you stagnate.”

    That is, standard of living is fundamentally a function of labor productivity. To improve your standard of living, you need to make more money so you can buy better things, like housing, healthcare, leisure, etc. Once you get the best education you can and maximize your hours, you are then limited in how much you can make based on how much you can produce: your output. How do you increase your output? Through better technology. At an economy-wide level, therefore, if we’re not introducing new technology, we will eventually hit a maximum output level we cannot push beyond. This is a counterintuitive and profound conclusion that I think gets overlooked because we take technological progression for granted.

    Science and technology don’t just progress on their own. There were many periods in history where they essentially completely stagnated in parts of the world. That’s because it takes considerable effort, education, organization, and money to advance science and technology. Without enough of any one of those ingredients, it doesn’t happen. And, if technological progression can go slower, perhaps it could also go faster, by better attuning the level of effort, education, organization, and money. For example, I’ve been arguing in this blog that the political debate now around science funding has an incredible amount of status quo bias embedded in it. I believe reducing funding will certainly slow us down, but I also believe science funding was already way too low, perhaps 3X below optimal levels.

    Put another way, I think a primary goal of government and society should be to increase our collective standard of living. You simply can’t do that long-term without technological progression.

    A couple of quick critiques I may tackle more in-depth in the future. Some people are worried that we’re just producing more stuff for the sake of producing more stuff, and that’s not really increasing the standard of living. First, with technological progression, the stuff gets both better and cheaper, and that is meaningful; take medicines, for example. Better medicines mean better health spans, and cheaper medicines mean more access to medicine. Second, people buy things, for the most part, of their own free will, and maybe people do want more stuff, and that’s not a bad thing in and of itself, as long as we can control for the negative externalities.
    Third, controlling for those negative externalities, like combating climate change effectively, actually requires new science and technology.

    Another common critique is that technology causes problems, for example, privacy problems. As someone who started a privacy-focused company, I’ve been holding this position for decades and continue to do so. But we shouldn’t throw the baby out with the bathwater. We need to do a more effective job regulating technology without slowing down its progression.

    Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    5 min
  4. August 23

    Musings on evergreen content

    FYI: This post is a bit meta—about writing/blogging itself—so it may not be your cup of tea.

    I’ve been having some cognitive dissonance about blogging. On the one hand, I don’t believe in doing things primarily for legacy purposes, since in the long arc of history, hardly anything is likely to be remembered or matter that much, and I won’t be here regardless. On the other hand, I also don’t like spending a lot of my writing time on crafting for the ephemeral—like a social media post—because it seems that same writing time could be spent on developing something more evergreen. I stopped blogging a decade ago following that same logic, focusing my writing time on books instead, which are arguably more evergreen than blogging. But I’m obviously back here blogging again, and with that context, here are some dissonant thoughts I’m struggling with:

    Is the chasing of more evergreen content just a disguised form of chasing legacy? I think not, because long-term legacy is about after you’re dead, and I’m not looking for something that will last that long. I’m more looking to avoid the fate of most content that has a half-life of one day, such that my writing can have more of an impact in my lifetime. That is, it’s more about maximizing the amount of impact per unit time of writing than any long-term remembrance.

    Is there more value in more ephemeral content than I previously thought? I’m coming to believe yes, which is why I’ve started blogging again, despite most blog posts still having that short half-life I’m trying to avoid. Specifically, I think there can be cumulative value in more ephemeral content when it:

    * Builds to something like a movement of people behind a thematic idea that can spring into action collectively at some point in the future, which is also why I started this up again on an email-first (push) platform.
    * Helps craft a more persuasive or resonant argument, given feedback from smaller posts, such as how comedians build up their comedy specials through lots of trial and error.

    This last piece reminds me of Groundhog Day (the movie), where he keeps revising his day to achieve the perfect day, much like you can keep refining your argument until it perfectly resonates. In any case, it’s hard to achieve occasional evergreen content if you don’t have an audience to seed it with and if you don’t have a fantastic set of editors to help craft it (which hardly anyone does except in a professional context). That is, putting out more ephemeral content can be seen as part of the process of putting out quality evergreen content, both in terms of increasing its quality (from continuous feedback) and in terms of increasing its reach (from continuous audience building).

    Should I spend as much time editing these posts as I do? Probably not, given that it is very rare for one of these posts to go viral / become evergreen. The problem is, I like editing. However, trying to stick roughly to a posting frequency and using formats like this one (Q/A headings) really helps me avoid my over-editing tendencies.

    What is the relationship between blogging frequency and evergreen probability? There’s no doubt that some blog posts are evergreen in that people refer back to them years after they were written (assuming they are still accessible). Does the probability of becoming evergreen have any relationship to the frequency of posting?
    You can make compelling arguments for both sides:

    * If you post more, you have more chances to go viral, and most people in a viral situation don’t know your other posts anyway, so the frequency isn’t seemingly inhibiting any particular post from going viral.
    * If you post less, you will likely spend more time crafting each post, increasing each post’s quality, and thus increasing its chances of virality, which I think (though I am not sure) is a necessary condition of evergreenness.

    My current sense is that if you post daily, then you are unlikely to be creating evergreen content in those posts. Still, you can nevertheless have a significant impact (and perhaps more) by being top of mind in a faster-growing audience and influencing the collective conversation through that larger audience more frequently. That’s because there does seem to be a direct relationship between posting frequency and audience growth. However, posting daily is a full-time job in and of itself, and one I can’t personally do (since I already have a full-time job) and one I don’t want to do (since I don’t like being held to a schedule and also like editing/crafting sentences too much).

    So, yes, I do think there is a relationship between frequency and evergreenness, and there is probably some sweet spot in the middle, between weekly and monthly, that maximizes your evergreen chances. You need to be top of mind enough to retain and build an audience (including through recommendations), you need enough posting to get thorough feedback to improve quality, and you need enough time with each post to get it to a decent quality in the first place. The full-timers also have other options, like daily musings paired with more edited weekly or monthly posts.

    But isn’t there a conflict between maximizing audience size and maximizing evergreen probability? Yes, I think there is. If you want to maximize audience size, the optimal post frequency is at least daily, vastly increasing the surface area with which your audience can grow, relative to a weekly posting schedule (or even less). But that frequency, as previously stated, is not the optimal frequency for maximizing the probability of producing evergreen content. So, you have a tradeoff—audience size vs. evergreen probability. And it is a deeper tradeoff than just frequency, since I also think the kind of content that best grows audience size is much shallower than the kind of content that is more likely to go evergreen. As noted, you can relax this tradeoff with more time input, which I don’t have.

    So, for right now, acknowledging this tradeoff, I think I’m going to stick to a few deeper posts a month, maybe edited a bit less though. I’d rather build a tighter audience that wants to engage more deeply in ideas that can last than a larger audience that wants to consume shallower content that is more ephemeral. I hope you agree, and I could also use more feedback!

    Thanks for reading. Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    7 min
  5. August 10

    Rule #3: Every decision involves trade-offs

    One of the universal rules of decision-making is that every decision involves trade-offs. By definition, when making a decision, you are trading off one option for another. But other, less obvious trade-offs are also lurking beneath the surface.

    For example, whatever you choose now will send you down a path that will shape your future decisions, a mental model known as path dependence. That is, your future path is both influenced and limited by your past decisions. If you enroll your child in a language immersion school, you significantly increase the chances that they will move to an area of the world where that language is spoken, decades from now. Maybe you're OK with that possible path, but you should at least be aware of it. In a real sense, you aren’t just trading the immediate outcomes of one option for another, but one future path for another. From that perspective, you can consider path-related dimensions, such as how the various options vary in terms of future optionality or reversibility. It’s not that more optionality or reversibility is always better, as they often come at a cost, but just that these less obvious trade-offs should be considered.

    A related, less obvious trade-off involves opportunity cost. If you pick from the options in front of you, what other opportunities are you forgoing? By explicitly asking this question, more options might reveal themselves. And you should also always ask specifically what opportunities you are forgoing by deliberating on the decision. Sometimes waiting too long means you miss out on the best option; other times, putting off the decision means more options will emerge. Again, one side isn’t always better, since every situation is different. But the opportunity costs, including from waiting, should be explored.

    Another common but less obvious trade-off concerns the different types of errors you can make in a decision. From our book Super Thinking:

    “[C]onsider a mammogram, a medical test used in the diagnosis of breast cancer. You might think a test like this has two possible results: positive or negative. But really a mammogram has four possible outcomes…the two possible outcomes you immediately think of are when the test is right, the true positive and the true negative; the other two outcomes occur when the test is wrong, the false positive and the false negative.”

    These error models occur well beyond statistics, in any system where judgments are made. Your email spam filter is a good example. Recently our spam filters flagged an email with photos of our new niece as spam (a false positive). And actual spam messages still occasionally make it through our spam filters (false negatives). Because making each type of error has consequences, systems need to be designed with these consequences in mind. That is, you have to make decisions on the trade-off between the different types of error, recognizing that some errors are inevitable. For instance, the U.S. legal system is supposed to require proof beyond a reasonable doubt for criminal convictions. This is a conscious trade-off favoring false negatives (letting criminals go free) over false positives (wrongly convicting people of crimes).
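    To make the spam-filter example concrete, here is a small, self-contained Python sketch; the scores and messages are made up for illustration. Moving the filter’s threshold trades one type of error for the other; no setting eliminates both.

```python
# Hypothetical spam scores from some classifier, paired with ground truth.
messages = [
    (0.95, True), (0.80, True), (0.60, True), (0.40, True),     # actually spam
    (0.70, False), (0.30, False), (0.20, False), (0.05, False), # actually legitimate (e.g., niece photos)
]

def error_counts(threshold):
    """Flag anything scoring at or above the threshold as spam; count both error types."""
    false_positives = sum(1 for score, is_spam in messages if score >= threshold and not is_spam)
    false_negatives = sum(1 for score, is_spam in messages if score < threshold and is_spam)
    return false_positives, false_negatives

for threshold in (0.25, 0.50, 0.75):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")

# A low threshold flags more real mail (false positives); a high threshold lets
# more spam through (false negatives). Choosing the threshold is choosing which
# error's consequences you'd rather live with -- the same logic as "beyond a
# reasonable doubt" favoring false negatives over false positives.
```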
    To uncover other less obvious trade-offs, you can brainstorm and explicitly list out (for example, in a spreadsheet) the more subtle dimensions on which options differ. Think of a comparison shopping page that compares numerous features and benefits. The obvious ones — such as cost — may immediately come to mind, but others may take time to surface, like how choosing one option might impact your personal quality of life in the future.

    The point is not to overthink decisions, but to be conscious about inherent trade-offs, especially the less obvious yet consequential ones. Just as I think you should take the time to write out assumptions explicitly, I also believe you should do the same for trade-offs. See other Rules.

    (Image: The Transporter, 2002)

    Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    4 min
  6. July 31

    9 ways DuckDuckGo’s Search Assist differs from Google’s AI Overviews

    At DuckDuckGo, our approach to AI is to only make AI features that are useful, private, and optional. If you don’t want AI, that’s cool with us. We have settings to turn off all of our AI features, and even a new setting to help you avoid AI-generated images in our search results. At the same time, we know a lot of people do want to use AI if it is actually useful and private (myself included). Our private chat service at duck.ai has the highest satisfaction ratings we’ve seen in a new feature, and Search Assist, our equivalent of Google’s AI Overviews, is currently our highest-rated search feature.

    Our goal with Search Assist is to improve search results, not to push AI. We’ve been continually evolving it in response to feedback, seeking better UX, and here’s how we’re thinking about that UX right now, relative to Google’s AI Overviews:

    * You can turn Search Assist off or turn it up—your preference.
    * When it does show, Search Assist keeps vertical space to a minimum so you can still easily get to other search results.
    * The initial Search Assist summary is intentionally short, usually two brief sentences. This brevity keeps hallucinations to a minimum, since less text means less surface area to make things up. You also get the complete thought without having to click anything. However, you can still click for a fuller explanation. This is a subtle but important distinction: clicking more on Google gets you more of the same, longer summary; clicking more on DuckDuckGo gets you a new, completely independent generation.
    * You can use the Assist button to either generate an answer on demand if one isn’t showing automatically, or collapse an answer that is showing to zero vertical space.
    * When we don’t think a Search Assist answer is better than the other results, we don’t show it on top. Instead, we’ll show it in the middle, on the bottom, or not at all. This flexibility enables a more fine-tuned UX.
    * All source links are always visible, not hidden behind any clicks or separated from the answer. We’ve also been keeping sources to a minimum (usually two) to both increase answer quality (since LLMs can get confused with a lot of overlapping information) and increase source engagement.
    * Our thumbs up/down is also visible by default, not hidden behind a click. This anonymous feedback is extremely valuable to us as a primary signal to help us find ways to improve.
    * To generate these answers, we have a separate search crawling bot for Search Assist answers called DuckAssistBot that respects robots.txt directives. By separating DuckAssistBot from our normal DuckDuckBot, and unlike Google, we allow publishers to opt out of just our Search Assist feature (see the illustrative sketch after this list).
    * Like all of our search results, Search Assist is anonymous. We crawl sites and generate answers on your behalf, not exposing your personal information in the process or storing it ourselves.
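    For illustration, here is a small sketch, using Python’s standard robots.txt parser, of how a publisher could opt out of Search Assist crawling (DuckAssistBot) while still allowing regular search indexing (DuckDuckBot). The robots.txt contents and the example.com URL are hypothetical stand-ins illustrating standard robots.txt semantics; consult DuckDuckGo’s official crawler documentation for authoritative guidance.

```python
from urllib import robotparser

# Hypothetical publisher robots.txt: block only the Search Assist crawler,
# while still allowing normal DuckDuckGo search indexing.
ROBOTS_TXT = """\
User-agent: DuckAssistBot
Disallow: /

User-agent: DuckDuckBot
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("DuckAssistBot", "https://example.com/article"))  # False: no Search Assist answers
print(parser.can_fetch("DuckDuckBot", "https://example.com/article"))    # True: still in regular results
```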
    I’m sure our Search Assist UX will evolve further from here, as we’re actively working on it every day. For example, we’re working now on making it easier to enter a follow-up question in-line, which allows you to more easily stay in context when entering your question. That is to say, the above is not set in stone and the answers for these queries will surely change over time, but I hope this post helps illustrate how we’re approaching Search Assist consistently with our overall approach to AI: useful, private, and optional. Feedback is welcomed!

    Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    5 min
  7. July 26

    The key to increasing standard of living is increasing labor productivity

    Standard of living doesn’t have a strictly agreed-upon definition, but for the sake of anchoring on something, let’s use “the level of income, comforts, and services available to an individual, community, or society” (Wikipedia). Gross Domestic Product (GDP) per capita, that is, the average economic output per person in a country, is often used as a proxy metric to compare the standard of living across countries. Of course, this proxy metric, being solely about money, doesn’t directly capture non-monetary aspects of standard of living associated with quality of life or well-being. However, most of these non-monetary aspects are tightly correlated with GDP per capita, rendering it a reasonable proxy.

    Our World in Data features numerous plots of such measures against GDP per capita, including the ones people tend to care about most. These measures are clearly tightly correlated with GDP per capita, as are common aggregate measures such as the UN’s Human Development Index, which combines lifespan, education levels, and GDP per capita. These tight correlations are somewhat intuitive, because higher GDP per capita by definition means more money to buy things, and that includes buying more healthcare, education, leisure time, and luxuries, which one would expect to be correlated with healthspan, life satisfaction, and other measures of quality of life and well-being.

    Nevertheless, at some level of GDP per capita, you reach diminishing returns for a given measure, and we would then expect the correlation to cease for that measure. For example, access to clean (“improved”) water sources maxes out at medium incomes, once you reach 100%, since you can’t go higher than 100% on that measure. However, we haven’t seen that yet for the most important measures like life expectancy, the poverty line, and self-reported life satisfaction. All of those can go higher still, and are expected to do so with further increases in GDP per capita, certainly for lower-GDP-per-capita countries (climbing up the existing curve) but also for the U.S. (at or near the frontier). In other words, with enough broad-based increases in income, many are lifted out of poverty, the middle class is more able to afford much of the current luxury and leisure of the rich, and the rich get access to whatever emerges from new cutting-edge (and expensive) science and technology.

    We should continue to watch and ensure these correlations remain tight. But as long as they remain tight, I think it is safe to say that we would expect increases in standard of living to be tightly correlated with increases in GDP per capita. While there are other necessary conditions, like maintaining the rule of law, broadly giving people more money to buy better healthcare, education, and upgraded leisure time should increase standard of living.

    That part is pretty intuitive. What’s not intuitive is how to do so. You can’t just print money, because that results in inflation. It has to be increases in real income, that is, income after inflation. So, how do you do that?

    If you’re a country where a large percentage of the working-able population doesn’t currently have a job, the easiest way is to find those people jobs. Unfortunately, that won’t work for the U.S. anymore, since most everyone who wants a job has a job. It worked for a while through the 1960s, 70s, and 80s as ever-greater percentages of women entered the workforce, but that plateaued in the 1990s.
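    A simple way to see how these levers fit together is the standard decomposition of GDP per capita into output per hour, hours per worker, and workers per capita. Below is a minimal sketch with illustrative (not official) numbers; the point is that the employment and hours factors are bounded, while output per hour can keep compounding.

```python
# Illustrative numbers only.
output_per_hour = 75.0     # dollars of output per hour worked (labor productivity)
hours_per_worker = 1_800   # hours worked per worker per year
employment_rate = 0.48     # workers as a share of the total population

gdp_per_capita = output_per_hour * hours_per_worker * employment_rate
print(f"GDP per capita: ${gdp_per_capita:,.0f}")  # ~$64,800 with these inputs

# Employment and hours give one-time, bounded gains (you can't exceed full
# employment, and people don't want to work ever more hours). Output per hour
# can keep growing, and it compounds:
growth = 0.02  # 2% annual labor-productivity growth, illustrative
print(f"After 20 years: ${gdp_per_capita * (1 + growth) ** 20:,.0f}")  # ~$96,000
```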
    You could try to get people with jobs to work more hours (and therefore make more money per person), but that also doesn’t work for the U.S., since we already work a lot relative to other frontier countries, and as people get more money they seem to want to work less, not more. For example, in the U.S. we work far fewer hours per worker than we did in 1950, let alone 1870. This makes intuitive sense, since quality of life and well-being can’t reach the highest levels if you’re working all of the time.

    That leaves upgrading the jobs people already have, in the form of higher income for the same amount of hours worked. And this means, by definition, increasing labor productivity, which is the amount of goods and services produced per hour of labor. To pay people more without triggering inflation, they also have to produce more output. That’s the counterintuitive piece, and it is also our biggest opportunity for higher GDP per capita, and therefore higher standard of living.

    OK, but how do you increase labor productivity? I’m glad you asked. There are three primary ways, but only one has unbounded upside. Can you guess which it is?

    First, you can educate your workforce more, providing them with, on average, better skills to produce higher-quality output per hour worked, a.k.a. investment in human capital. The U.S. is currently going in the wrong direction on this front when you look at the percentage of recent high-school graduates enrolled in “tertiary” education (which includes vocational programs). If we had continued to make steady progress through the 2010s and 2020s, we would be headed towards diminishing returns on this front. While it will surely be good to increase this further to get those gains—and there is more you can do than just tertiary education, such as on-the-job training—like we saw earlier with access to clean water, there is effectively a max-out point for education in terms of its effect on GDP per capita. Think of a point in the future where everyone who is willing and able has a college degree, or even a graduate degree.

    Second, you can buy your workforce more tools, equipment, and facilities to do their jobs more efficiently, a.k.a. investment in physical capital. This isn’t inventing new technology, just spending more money to get workers access to the best existing technology. Again, you clearly reach diminishing returns here too, that is, another max-out point, as you buy everyone the best tech. Think of the point where everyone has a MacBook Pro with dual Studio Displays—or whatever the equivalent is in their job—to maximize their productivity.

    Third, and the only way that doesn’t have a max-out point, is to invent new technology that enables workers to do more per hour. These are better tools than the existing tools on the market. Think of upgrading to the latest software version with updated features that make you a bit more productive. Or, more broad-based: think of how worker productivity increased in construction with the introduction of power tools and heavy machinery, or in offices with the introduction of computers and the Internet.

    We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. (Image: The Great Race) AI is likely one of these leaps, but by investing much more in basic research we can make higher labor-productivity growth more continuous instead of the bumpy road it has recently been on. These leaps don’t come out of nowhere.
    They require decades of investment in research, and that investment requires a decent level of government investment at the earliest stages. This was the case for AI, as it was for the Internet, and as it is for life-saving drugs. This is actually good news, since it means we have a lever to pull to increase labor productivity that we’re not currently fully pulling: increase federal investment in basic research. The level we’ve ended up at today is somewhat arbitrary, an output of a political process that wasn’t focused on increasing standard of living. In any case, I estimate at the bottom of this post that we’re off by about 3X.

    If you want another view on this topic, here is a good post from the International Monetary Fund (IMF):

    “[I]mprovements in living standards must come from growth in TFP [Total Factor Productivity] over the long run. This is because living standards are measured as income per person—so an economy cannot raise them simply by adding more and more people to its workforce. Meanwhile, economists have amassed lots of evidence that investments in capital have diminishing returns. This leaves TFP advancement as the only possible source of sustained growth in income per person, as Robert Solow, the late Nobel laureate, first showed in a 1957 paper. TFP growth is also the answer to those who say that continued economic growth will one day exhaust our planet’s finite resources. When TFP improves, it allows us to maintain or increase living standards while conserving resources, including natural resources such as the climate and our biosphere.”

    Or, as Paul Krugman put it even more succinctly in his 1990 book The Age of Diminished Expectations: “Productivity isn’t everything, but, in the long run, it is almost everything. A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker.”

    Thanks for reading! Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    10 min
  8. July 20

    Most chatbot users miss this key setting, so we moved it up front

    The magic of chatbots is that they make it seem like you’re chatting with a real person. But the default personality of this “person” isn’t one I particularly enjoy talking to, and in many cases I find downright annoying. Based on feedback from duck.ai users—who rely on our service for private access to popular chatbots—I know I’m not alone.

    What people want in a chatbot’s personality varies widely: I cringe at extra exclamation points and emojis, while others love them. I also find the default output too verbose, whereas some appreciate the added exposition. Of course, I could tell the chatbot every time to keep its replies short and emoji-free, but pasting that constantly is enough friction that I rarely bother. OpenAI and Anthropic do offer customization options in their settings, yet those options are buried and feature intimidating blank text boxes, such that I highly suspect most people never touch them.

    Recently, we’ve been considering this issue in the context of duck.ai. I’m sure what we do here will continue to evolve as we get feedback, but to get started we’ve just introduced a much easier-to-find customization dialog. Not only does it make the responses feel better, it can make the actual content significantly better as well. As you can see in the video, it provides customization guidance through drop-downs and fields, including options to customize:

    * The tone of responses
    * The length of responses
    * Whether the chatbot should ask clarifying questions
    * The role of the chatbot (for example, teacher)
    * Your role (for example, student)
    * The nickname of the chatbot
    * Your nickname

    All fields are optional, and you can also add additional info if desired, as well as inspect what the instructions will look like in aggregate. If you select role(s), detailed instructions get created specifically for those. Here’s an example using the ‘Tech support specialist’ role, which asks you clarifying questions to drill down faster to a solution vs. the more generic (and lengthier) default response.

    Customized response:

    Generic response:

    All of this works through the “system prompt.” In an excellent post titled AI Horseless Carriages, Pete Koomen explains system prompts:

    “LLM providers like OpenAI and Anthropic have adopted a convention to help make prompt writing easier: they split the prompt into two components: a System Prompt and a User Prompt, so named because in many API applications the app developers write the System Prompt and the user writes the User Prompt. The System Prompt explains to the model how to accomplish a particular set of tasks, and is re-used over and over again. The User Prompt describes a specific task to be done.”

    When you set the duck.ai customization options, the instructions that are created are appended to the default system prompt, which is repeated (in the background) when you start a new conversation. That is, the instructions will apply to the current conversation as well as subsequent ones, until you change them again.
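    To illustrate the mechanics described above, here is a rough sketch of how customization instructions can be appended to a default system prompt before each new conversation. The prompt wording, setting names, and message format below are my own illustrative stand-ins in the common chat-message style, not duck.ai’s actual prompts or code.

```python
# Hypothetical sketch of the system-prompt mechanism; not duck.ai's actual implementation.
DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."

def build_system_prompt(settings):
    """Append the user's customization choices to the default system prompt."""
    extras = []
    if tone := settings.get("tone"):
        extras.append(f"Respond in a {tone} tone.")
    if length := settings.get("length"):
        extras.append(f"Keep responses {length}.")
    if role := settings.get("assistant_role"):
        extras.append(f"Act as a {role}.")
    if user_role := settings.get("user_role"):
        extras.append(f"The user is a {user_role}.")
    return " ".join([DEFAULT_SYSTEM_PROMPT] + extras)

# Chosen once in the customization dialog, then re-sent (in the background)
# at the start of every new conversation until changed.
settings = {"tone": "direct, emoji-free", "length": "short", "assistant_role": "tech support specialist"}

messages = [
    {"role": "system", "content": build_system_prompt(settings)},
    {"role": "user", "content": "My laptop won't connect to Wi-Fi."},
]
print(messages[0]["content"])
```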
    Like everything we do at DuckDuckGo, these system prompt tweaks are also private. They are stored locally on your device only, along with your recent chats (if you choose to save them). When we ultimately add an optional ability to sync settings and chats across devices, it will be part of our end-to-end encrypted sync service, which DuckDuckGo cannot decrypt. And Duck.ai itself anonymizes chats to all model providers, doesn’t store chats itself, and ensures your chats aren’t used for AI training. More at the Duck.ai Privacy Policy.

    Our approach to AI is to make features that are useful, private, and optional. We believe these new duck.ai customization options tick all three boxes, but please try them out and let us know what you think. As always, please feel free to leave comments here. However, the best method for sharing feedback about duck.ai is to do so directly through the product, as it will then be shared with the entire team automatically.

    Thanks for reading. Subscribe for free to receive new posts or get the podcast. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit gabrielweinberg.com

    5 min
