Gabriel Weinberg's Blog

Gabriel Weinberg

Tech, policy, and business insights from the DuckDuckGo founder and co-author of Traction and Super Thinking. gabrielweinberg.com

  1. 6 DAYS AGO

    Simulating likely 2026 World Cup matchups (for all matches)

I've been using Cursor for coding for some time, but I finally gave Claude Code a try for this short side project: simulating the 2026 World Cup bracket to predict likely matchups for all matches, which is useful when considering which matches to potentially go to.

Methodology:

* Start with the official World Cup tournament schedule (including yet-to-be-played playoff matches)
* Blend Elo rankings with FIFA rankings (50/50)
* Use the Elo formulas to probabilistically predict winners (assuming no draws, even in the group stage)
* Run one million individual simulations of the full tournament (it reaches diminishing returns around 50K, but hey, why not!)
* Run again with a home field advantage boost (+180 Elo) for the U.S., Canada, and Mexico based on prior World Cup outcomes
* Count up who participated in each match (a rough code sketch of this loop appears after the tables below)

Some interesting findings (at least to me as a U.S. fan) are below, followed by a rundown for every match (in reverse order).

Big Disclaimer 1: The above is of course a gross simplification of the actual tournament. For example, it doesn't take into account team matchup histories, game models, etc. I do think, however, it is useful enough for the designed purpose of generally predicting likely match participants.

Big Disclaimer 2: I did a lot of output validation, so I think the results are largely accurate (to the extent they can be, given Big Disclaimer 1). However, I didn't write or review every line of code, so it is likely there are still some bugs in there. If you think you see anything that seems off, let me know and I'll try to track it down (and update anything if necessary).

Aside on Claude Code: Like many others, I found this process both productive and frustrating. It was definitely faster than I could have done it alone, but Claude kept forgetting basic context and was way overconfident in the accuracy of the results. That is, many rounds of validation at every stage of output were absolutely necessary despite Claude saying things were good. I couldn't trust its word at all.

HA+ = with home field advantage (anytime this comes into play there is a + next to the team name)
HA- = without home field advantage

Most Likely Finals
# HA+ HA- Matchup
1 12.4% 13.3% Argentina v Spain
2 5.4% 5.6% Argentina v France
3 4.8% 6.4% England v Spain
4 3.3% 3.4% Brazil v Spain
5 2.8% 3.2% Portugal v Spain
6 2.5% 2.7% France v Spain
7 2.4% 0.2% Argentina v United States
8 2.3% 3.0% England v France
9 1.9% 0.7% Mexico v Spain
10 1.8% 1.9% Argentina v Netherlands
11 1.7% 1.7% Brazil v France
12 1.6% 0.2% Spain v United States
13 1.4% 1.7% Colombia v Spain
14 1.3% 1.5% France v Portugal
15 1.2% 1.4% Argentina v England
16 1.1% 1.2% Netherlands v Spain
17 1.1% 1.2% Spain v Uruguay
18 1.0% 0.1% England v United States
19 1.0% 0.4% France v Mexico
20 0.9% 1.0% Argentina v Germany

Championship Probabilities
# HA+ HA- FIFA ELO Blend Team
1 29.2% 30.6% 1 1 1 Spain
2 19.0% 19.2% 2 2 2 Argentina
3 11.9% 12.2% 3 3 3 France
4 7.4% 9.1% 4 4 4 England
5 4.7% 4.6% 5 6 5 Brazil
6 4.7% 0.3% 15 31 23 United States+
7 3.7% 4.1% 6 7 6 Portugal
8 3.4% 3.4% 7 8 7 Netherlands
9 2.2% 2.5% 13 5 8 Colombia
10 1.7% 1.8% 10 12 10 Germany
11 1.5% 1.8% 11 10 9 Croatia
12 1.5% 0.7% 16 20 19 Mexico
13 1.1% 1.3% 9 21 11 Belgium
14 0.9% 0.9% 8 24 13 Morocco
15 0.9% 1.0% 18 13 14 Switzerland
16 0.9% 0.9% 14 17 16 Senegal
17 0.8% 0.9% 17 14 15 Uruguay
18 0.8% 0.8% 23 9 17 Ecuador
19 0.7% 0.7% 12 19 12 Italy
20 0.7% 0.7% 19 16 18 Japan

Here's a visualization of the above made by a reader (thanks!)
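For anyone curious what the core of the methodology above looks like in code, here is a minimal sketch of the Elo win-probability formula, the home-field boost, and the simulate-and-count loop. The ratings, the example matchups, and the bracket handling are simplified placeholders for illustration; this is not the actual code behind the numbers in this post.

```python
import random

# Illustrative blended ratings only (placeholders, not the post's actual inputs).
# The real methodology blends Elo ratings 50/50 with FIFA-ranking-based ratings
# and walks the full official schedule from the group stage onward.
blended_rating = {
    "Spain": 2150,
    "Argentina": 2120,
    "France": 2070,
    "United States": 1820,
}

HOSTS = {"United States", "Canada", "Mexico"}
HOME_BOOST = 180  # Elo boost applied to the hosts in the HA+ runs


def win_prob(rating_a, rating_b):
    """Standard Elo expected score: probability that team A beats team B (no draws)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))


def play_match(team_a, team_b, home_advantage=False):
    """Probabilistically pick a winner for a single match."""
    ra = blended_rating[team_a] + (HOME_BOOST if home_advantage and team_a in HOSTS else 0)
    rb = blended_rating[team_b] + (HOME_BOOST if home_advantage and team_b in HOSTS else 0)
    return team_a if random.random() < win_prob(ra, rb) else team_b


def simulate_final(n_sims=100_000, home_advantage=False):
    """Toy stand-in for the full bracket: simulate two semi-finals and count
    how often each final pairing occurs across many runs."""
    counts = {}
    for _ in range(n_sims):
        a = play_match("Spain", "United States", home_advantage)
        b = play_match("Argentina", "France", home_advantage)
        pairing = " v ".join(sorted((a, b)))
        counts[pairing] = counts.get(pairing, 0) + 1
    return {k: v / n_sims for k, v in sorted(counts.items(), key=lambda kv: -kv[1])}


if __name__ == "__main__":
    print(simulate_final(home_advantage=True))
```

In the real run, a play_match-style step like this would be applied to all 104 scheduled matches, one million times over, with participation tallied per match.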
United States: Tournament Progression
HA+ HA- Round
97.9% 88.8% Round of 32
69.1% 39.5% Round of 16
42.7% 14.4% Quarter-Finals
18.7% 3.8% Semi-Finals
10.2% 1.1% Final
4.7% 0.3% Champion

United States: Most Likely Opponents by Round

Round of 32:
# HA+ HA- Opponent
1 12.7% 16.1% Iran
2 12.6% 5.0% Switzerland
3 11.8% 13.7% Canada
4 9.3% 3.8% Italy
5 5.6% 3.3% Ukraine
6 5.5% 7.0% Egypt
7 5.4% 15.6% France
8 5.3% 6.8% Belgium
9 5.3% 4.3% Norway
10 5.1% 3.1% Japan
11 4.8% 2.7% Wales
12 4.7% 2.8% Poland
13 3.3% 4.2% Senegal
14 2.3% 1.4% Netherlands
15 1.4% 4.5% Portugal

Round of 16:
# HA+ HA- Opponent
1 28.7% 20.4% Belgium
2 17.8% 24.6% Argentina
3 9.4% 6.9% Algeria
4 8.3% 6.3% Iran
5 6.8% 4.9% Norway
6 6.7% 5.2% Austria
7 6.0% 7.8% Uruguay
8 3.9% 2.8% Senegal
9 2.1% 1.8% Egypt
10 2.1% 2.9% Spain
11 1.8% 4.4% Germany
12 1.3% 1.8% Korea Republic
13 1.1% 0.8% France
14 0.9% 2.2% Ecuador
15 0.6% 0.9% Canada

Quarter-Finals:
# HA+ HA- Opponent
1 45.4% 35.5% Spain
2 9.5% 10.2% Portugal
3 9.4% 8.8% Colombia
4 7.4% 6.1% Croatia
5 5.4% 4.3% England
6 3.5% 5.8% Argentina
7 3.1% 1.2% Canada
8 2.5% 2.6% Uruguay
9 2.5% 2.1% Austria
10 1.8% 3.9% Switzerland
11 1.7% 2.9% Italy
12 1.4% 3.8% Netherlands
13 1.1% 1.2% Panama
14 0.9% 0.9% Algeria
15 0.6% 1.7% Morocco

Semi-Finals:
# HA+ HA- Opponent
1 24.9% 18.4% France
2 11.2% 8.2% Netherlands
3 8.4% 8.8% Brazil
4 7.2% 5.8% Germany
5 5.8% 10.5% England
6 5.1% 3.3% Mexico
7 4.9% 4.0% Morocco
8 3.6% 3.5% Ecuador
9 3.4% 2.1% Switzerland
10 3.3% 9.4% Spain
11 3.1% 3.6% Senegal
12 2.9% 2.1% Denmark
13 2.8% 2.9% Japan
14 2.6% 1.8% Korea Republic
15 1.7% 1.0% Italy

Final:
# HA+ HA- Opponent
1 23.1% 18.8% Argentina
2 15.4% 20.4% Spain
3 9.7% 10.7% England
4 7.9% 9.9% France
5 6.9% 5.8% Brazil
6 5.7% 5.3% Portugal
7 3.6% 1.4% Mexico
8 3.1% 2.7% Colombia
9 3.0% 3.5% Netherlands
10 2.1% 2.2% Croatia
11 2.0% 2.0% Germany
12 1.8% 1.6% Uruguay
13 1.7% 1.5% Senegal
14 1.6% 2.1% Belgium
15 1.5% 1.6% Ecuador

M104 - Final (Winners of M101 & M102) Jul 19, New York/New Jersey (MetLife Stadium)

# HA+ HA- Teams
1 12.4% 13.3% Argentina v Spain
2 5.4% 5.6% Argentina v France
3 4.8% 6.4% England v Spain
4 3.3% 3.4% Brazil v Spain
5 2.8% 3.2% Portugal v Spain
6 2.5% 2.7% France v Spain
7 2.4% 0.2% Argentina v United States+
8 2.3% 3.0% England v France
9 1.9% 0.7% Mexico v Spain
10 1.8% 1.9% Argentina v Netherlands
11 1.7% 1.7% Brazil v France
12 1.6% 0.2% Spain v United States+
13 1.4% 1.7% Colombia v Spain
14 1.3% 1.5% France v Portugal
15 1.2% 1.4% Argentina v England
16 1.1% 1.2% Netherlands v Spain
17 1.1% 1.2% Spain v Uruguay
18 1.0% 0.1% England v United States+
19 1.0% 0.4% France v Mexico
20 0.9% 1.0% Argentina v Germany

# HA+ HA- Team
1 41.0% 43.2% Spain
2 32.4% 32.9% Argentina
3 21.1% 21.8% France
4 14.8% 18.3% England
5 10.8% 10.6% Brazil
6 10.2% 1.1% United States+
7 9.0% 9.8% Portugal
8 7.9% 8.2% Netherlands
9 5.8% 6.5% Colombia
10 5.5% 2.6% Mexico
11 4.7% 4.9% Germany
12 4.2% 5.1% Croatia
13 3.3% 3.9% Belgium
14 3.0% 3.2% Senegal
15 2.9% 3.1% Uruguay
16 2.9% 3.3% Switzerland
17 2.9% 3.0% Morocco
18 2.7% 2.8% Ecuador
19 2.3% 2.4% Japan
20 2.2% 2.4% Italy

M103 - Third Place (Losers of M101 & M102) Jul 18, Miami (Hard Rock Stadium)

# HA+ HA- Teams
1 2.0% 2.2% Argentina v France
2 1.9% 0.7% France v Mexico
3 1.8% 1.8% Brazil v France
4 1.7% 2.1% England v France
5 1.5% 1.7% Argentina v Spain
6 1.2% 0.4% Mexico v Spain
7 1.2% 1.6% England v Spain
8 1.2% 1.3% Argentina v Netherlands
9 1.1% 1.2% Brazil v Spain
10 1.1% 1.0% Brazil v Netherlands
11 1.1% 1.2% France v Portugal
12 1.0% 0.4% Mexico v Netherlands
13 1.0% 1.0% France v Senegal
14 0.9% 1.2% England v Netherlands
15 0.8% 0.2% Argentina v United States+
16 0.8% 0.8% Argentina v Germany
17 0.8% 0.9% Portugal v Spain
18 0.7% 0.7% France v Germany
19 0.7% 0.7% France v Netherlands
20 0.7% 0.7% Ecuador v France

# HA+ HA- Team
1 19.4% 19.4% France
2 13.2% 14.0% Argentina
3 12.8% 12.6% Brazil
4 12.7% 13.4% Spain
5 11.7% 5.8% Mexico
6 11.7% 14.5% England
7 11.6% 11.5% Netherlands
8 8.7% 8.8% Germany
9 8.5% 2.7% United States+
10 7.9% 9.1% Portugal
11 6.4% 6.6% Morocco
12 6.1% 6.3% Senegal
13 6.0% 7.1% Colombia
14 6.0% 6.1% Ecuador
15 5.7% 6.1% Switzerland
16 5.4% 6.8% Croatia
17 5.0% 5.2% Japan
18 4.6% 5.6% Belgium
19 4.5% 2.7% Canada
20 4.4% 4.8% Uruguay

M102 - Semi-Final (W99 v W100) Jul 15, Atlanta (Mercedes-Benz Stadium)

# HA+ HA- Teams
1 8.9% 11.7% Argentina v England
2 7.7% 7.7% Argentina v Brazil
3 6.7% 2.2% Argentina v Mexico
4 2.7% 3.8% England v Portugal
5 2.7% 2.7% Argentina v France
6 2.6% 2.7% Argentina v Senegal
7 2.3% 2.4% Argentina v Ecuador
8 2.3% 2.4% Brazil v Portugal
9 1.9% 0.7% Mexico v Portugal
10 1.8% 1.9% Argentina v Japan
11 1.8% 2.0% Argentina v Germany
12 1.6% 1.7% Argentina v Netherlands
13 1.5% 2.1% Argentina v Croatia
14 1.4% 1.6% Argentina v Morocco
15 1.2% 1.8% Colombia v England
16 1.2% 1.5% England v Spain
17 1.1% 1.5% England v Uruguay
18 1.1% 1.2% Argentina v

    3 min
  2. JAN 7

    As AI displaces jobs, the US government should create new jobs building affordable housing

We have a housing shortage in the U.S., and it is arguably a major cause of long-term unrest about the economy.

Putting aside whether AI will eliminate jobs on net, it will certainly displace a lot of them. And the displaced people are unlikely to be the same people who will secure the higher-tech jobs that get created. For example, are most displaced truck drivers going to get jobs in new industries that require a lot of education?

Put these two problems together and maybe there is a solution hiding in plain sight: create millions of new jobs in housing. Someone has to build all the affordable homes we need, so why not subsidize jobs and training for those displaced by AI? These jobs will arguably offer an easier onramp and are sorely needed now (and likely for the next couple of decades as we chip away at this housing shortage). Granted, labor may not be the primary bottleneck in the housing shortage, but it is certainly a factor and one that is seemingly being overlooked.

There are many bills in Congress aimed at increasing housing supply through new financing and relaxed regulatory frameworks. A program like this would help complete the package. None of this has been happening via market forces alone, so the government would need to create a new program at a large scale, like the Works Progress Administration (WPA) during the Great Depression, but this time squarely focused on affordable housing (and otherwise narrowly tailored to avoid inefficiencies).

There are a lot of ways such a program could work (or not work), including ways to maximize the long-term public benefit (and minimize its long-term public cost), but this post is just about floating the high-level idea. So there you have it. I'll leave you, though, with a few more specific thought starters:

* Every state could benefit since every state has affordable housing issues. Programs become more politically viable when more states benefit from them.
* Such a program could be narrowly tailored: squarely focused on affordable housing (as mentioned above), but also keeping the jobs time-limited (the whole program could be time-limited and tied to overall housing stock), and keeping the wages slightly below local market rates (to complement rather than compete with private construction).
* It could also be restricted to just those displaced by AI, but that doesn't seem like the right approach to me. The AI job-market impact timeline is unclear, but we can nevertheless start the affordable-housing jobs program we need today, which can also serve as a partial backstop for AI-job fallout tomorrow. It seems fine to me if some workers who join aren't directly displaced by AI, since the program still creates net new jobs we will need anyway, and to some extent jobs within an education band are fungible.
* We will surely need other programs as well to help displaced workers specifically (for example, increased unemployment benefits).

    4 min
  3. DEC 13, 2025

    Some surprising things about DuckDuckGo you probably don't know

* We have hundreds of easter-egg logos (featuring our friendly mascot Dax Brown) that surface when you make certain queries on our search engine. Our subreddit is trying to catch 'em all. They've certainly caught a lot, currently 504, but we keep adding more so it's a moving target. The total as of this post is 594. I'm the one personally adding them in my spare time just for fun, and I recently did a Duck Tales episode (our new podcast) with more details on the process. This incarnation of specialty logos is relatively new, so if you are a long-term user and haven't noticed them, that's probably why (aside from, of course, that you'd have to search one of these queries and notice the subtle change in logo). And, no promises, but I am taking requests.

* There is a rumor continuously circulating that we're owned by Google, which of course couldn't be farther from the truth. I was actually a witness in the U.S. v. Google trial for the DOJ. I think this rumor started because Google used to own the domain duck.com and was pointing it at Google search for several years. After my public and private complaining for those same years, in 2018 we finally convinced Google to give us the duck.com domain, which we now use for our email protection service, but the rumor still persists.

* We've been blocked in China since 2014, and are on-and-off blocked in several other countries too, like Indonesia and India, because we don't censor search results.

* We've been an independent company since our founding in 2008 and have been working on our own search indexes for as many years. For over fifteen years now (that whole time) we've been doing our own knowledge graph index (like answers from Wikipedia), over ten years for local and other instant-answer indexes (like businesses), and in the past few years we've been ramping up our wider web index to support our Search Assist and Duck.ai features. DuckDuckGo began with me crawling the web in my basement, and in the early days, the FBI actually showed up at my front door since I had crawled one of their honeypots.

* The plurality of our search traffic now comes from our own browsers. Yes, we have our own browsers with our search engine built in, along with a ton of other protections. How do they compare to other popular browsers and extensions, you ask? We made a comparison page so you can see the differences. Our mobile browsers on iOS & Android launched back in 2018 (wow, that's seven years ago), and our desktop browsers on Mac and Windows in 2022/23. Our iOS browser market share continues to climb and we're now #3 in the U.S. (behind Safari and Chrome) and #4 on Android (behind Chrome, Samsung, and Firefox). People appreciate all the protections and the front-and-center (now customizable) fire button that quickly clears tabs and data in an (also customizable) animation of fire.

* About 13% of U.S. adults self-report as a "current user" of DuckDuckGo. That's way more than most people think. Our search market share is lower since all of those users don't use us on all of their devices, especially on Android, where Google makes it especially hard. Once you realize that, it is less surprising that we have the highest search market share on Mac at about 4% in the U.S., followed by iOS at about 3%. I'm talking about the U.S. here since about 44% of our searches are from the U.S., and no other country is in the double digits, but rounding out the top ten countries are Germany, the United Kingdom, France, Canada, India, the Netherlands, Indonesia, Australia, and Japan.

* Our approach to AI differs from most other companies trying to shove it down your throat in that we are dedicated to making all AI features private, useful, and optional. If you like AI, we offer private AI search answers at duckduckgo.com and private chat at duck.ai, which are built into our browsers. If you don't like or don't want AI, that's cool with us too. You can easily turn all of these features off. In fact, we made a noai.duckduckgo.com search domain that automatically sets those settings for you, including a recent setting we added that allows you to hide many AI-generated images within image search. Another related thing you might find surprising is that search traffic has continued to grow steadily even since the rise of ChatGPT (with Duck.ai traffic growing even faster).

* If you didn't know we have a browser, you probably also don't know we have a DuckDuckGo Subscription (launched last year) that includes our VPN, more advanced AI models in Duck.ai, and, in the U.S., Personal Information Removal and Identity Theft Restoration. It's now available in 30 countries with a similar VPN footprint, and our VPN is run by us (see latest security audit and free trials).

* Speaking of lots of countries, our team has been completely distributed from the beginning, now at over 300 people across about 30 countries, with less than half in the U.S. And we're still hiring. We have a unique work culture that, among other things, avoids standing meetings on Wednesdays and Thursdays. We get the whole company together for a week once a year.

* We played a critical role in the Global Privacy Control standard and the creation of search preference menus. I have a graduate degree in Technology and Public Policy, so we've done more of this kind of thing than one might expect, even going so far as to draft our own Do Not Track legislation before we got GPC going. We also donate yearly to like-minded organizations (here's our 2025 announcement), with our cumulative donations now at over $8 million. Check our donations page for details going back to 2011. We can do this since we've been profitable for about that long, and more recently we have even started investing in related startups as well.

If this hodge-podge of stuff makes you think of anything, please let me know. I'm not only taking requests for easter-egg logo ideas, but also for stuff to write about.

    8 min
  4. NOV 29, 2025

    What GLP-1 drug price is cost neutral to Medicare?

As GLP-1s are studied more, their benefit profile is expanding rapidly. Acknowledging that many questions remain, a recent journal article titled The expanding benefits of GLP-1 medicines puts it like this:

GLP-1 medicines, initially developed for blood glucose and weight control, improve outcomes in people with cardiovascular, kidney, liver, arthritis, and sleep apnea disorders, actions mediated in part through anti-inflammatory and metabolic pathways, with some benefits partly independent of the degree of weight loss achieved.

Many millions of Americans would benefit from taking these drugs, but limited insurance coverage and high out-of-pocket costs limit their use. However, if the price were low enough to match their cost savings, then wider coverage could be justified. What price would that need to be?

How can a drug be cost neutral (pay for itself)?

If a drug reduces future care expenditures by more than it costs, then it pays for itself (is cost neutral). Modeling this out can get complicated, especially for drugs whose benefits accrue over many years. That's because you need to at least consider how those cost savings unfold, as well as people who stop taking the drug (adherence rate).

What about GLP-1s?

The Congressional Budget Office (CBO) looked into this question in detail in 2024, using these approximate assumptions:

* 9-year time horizon (2026-2034)
* 35% adherence (continuation) in the first year, ramping up to 50% by year 9
* 80% yearly continuation rate after the first year of continuous use
* Available to Medicare patients who are classified as obese or overweight with at least one weight-related comorbidity
* $5,600/year cost (implying about a $625/month price if you assume a 75% reimbursement)
* Savings from reduced care of $50/year in 2026, reaching $650/year in 2034

CBO concludes in their report that, under these assumptions, expanding GLP-1 coverage would be very costly to the Federal government.

Doesn't Medicare prescribe GLP-1s now?

Yes, but not for obesity writ large, which about doubles the qualified population. From the CBO report:

In 2026, in CBO's estimation, 29 million beneficiaries would qualify for coverage under the illustrative policy. About half of that group, or 16 million people, would have access to those medications under current law for indications such as diabetes, cardiovascular coverage, and other indications approved by the FDA in the interim.

Still, CBO only expects a small percentage of eligible patients to use the drugs, due to activation and adherence. In the final year of their model (2034) they predict "about 1.6 million (or 14 percent) of the newly eligible beneficiaries would use an AOM [anti-obesity medication]."

What break-even price does the CBO report imply?

CBO doesn't calculate a break-even price. They just say they expect $50 in average savings in year 1, rising to $650 in year 9, implying a 9% offset rate overall. If we assume a progression of increasing yearly savings to match these assumptions, you get cumulative savings of about $4,000, or about $445 per year. If you assume on average the government picks up 75% of the bill, that implies a break-even drug price of about $50/month. (The arithmetic is spelled out in a short sketch at the end of this post.)

What has changed since 2024 that would modify this CBO estimate?

* Time Horizon. The CBO time horizon of 9 years is too low. They acknowledge that "from 2035 to 2044…the savings from improved health would be larger than they would be from 2026 to 2034". So, let's add 10 years (for a total of 19), and stipulate that the last ten years average $800 in savings, rising from the year 9 savings of $650. That implies an increased average savings per year of about 1.4x.
* Emerging Benefits. The CBO only accounted for weight-loss benefits, using comparisons to bariatric surgery and other weight-loss evidence, noting that "CBO is not aware of any direct evidence showing that treatment of obesity with GLP-1-based products reduces spending on other medical services." However, the other emerging benefits reduce conditions that are very costly to Medicare, like kidney, heart, and sleep apnea complications (e.g., dialysis, heart surgery, CPAP). I think we can speculatively call this a 2x multiplier.

So, then what break-even price does that imply today?

$50/month (CBO original estimate) x 1.4 (for increased time horizon) x 2 (for increased benefits) ≈ $140/month. That is, at $140/month, we would expect the Medicare costs to roughly equal the cost savings and net out to 0 (be cost neutral). That's still well below the recently negotiated prices starting in 2027 (for example, Ozempic at $274).

Why are you thinking about this again?

I'm seeing the expanding benefit profile and thinking we have to find a way to get these benefits to more people, as a way to generally increase our average standard of living (in this case by greatly increasing health-span/quality of life). The best way I can see to get the benefits to the most people is if it were government subsidized/provided. But obviously health care costs are a major barrier to that method, and so framing expanded benefits as cost neutral seems most politically viable.

What if the price were $100/month?

At $100/month, it would be a no-brainer (assuming the above math is correct) to make available to qualified Medicare patients (say, using at least the CBO obesity criteria) since it would then be clearly making the government money. Additionally, at that price, I think you could start expanding it well beyond Medicare in waves, monitoring outcomes and cost savings. For example, you could start with programs where, like Medicare, the government bears both the costs and reaps the savings, such as for the military and other federal workers. Then you could expand to Medicaid / disability (with cross-state subsidies). Ultimately there could be justification to subsidize a subset of the public at large, for example people aged 55+ who will be on Medicare within the next ten years, such that the savings will be realized by the federal government and the whole program could still be cost neutral.

OK great, but how do you get GLP-1s at $100/month?

This may be a half-baked idea, but one approach is to offer the market a yearly contract for expanded Medicare, and whoever shows up first gets it (to be renegotiated yearly). I don't think this is that crazy because the manufacturing cost is estimated to be a small fraction of the list price, and the UK previously had negotiated pricing in this ballpark. The volumes would be huge, and as more companies enter the market, I imagine eventually one of them would take the offer.
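For anyone who wants to check the arithmetic, here is the back-of-the-envelope calculation with the post's numbers plugged in. The cumulative-savings figure and the 1.4x/2x multipliers come straight from the text above, and the 75% government share is the same assumption used there.

```python
# Back-of-the-envelope break-even math, plugging in the numbers from the post.

cumulative_savings = 4000      # ~cumulative per-patient savings over the 9-year CBO window
years = 9
avg_savings_per_year = cumulative_savings / years           # ~$445/year

gov_share = 0.75               # assumed share of the drug price the government pays
breakeven_monthly = avg_savings_per_year / gov_share / 12   # price at which savings = cost
print(round(breakeven_monthly))                             # ~49, i.e. the ~$50/month figure

horizon_multiplier = 1.4       # post's adjustment for extending the horizon to 19 years
benefits_multiplier = 2.0      # post's speculative adjustment for emerging benefits
print(round(breakeven_monthly * horizon_multiplier * benefits_multiplier))  # ~138, i.e. ~$140/month
```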

    9 min
  5. NOV 22, 2025

    One approach to a heavily curated information diet

Disclaimer: This approach works for me. It may not work for you, but maybe it gives you some ideas.

I find the signal-to-noise ratio on social media and news sites/apps too low for me to have a consistently good experience on them. So, I developed an alternative, heavily curated approach to an information diet that I'm laying out here in the hope that people will give me suggestions over time to improve it. It involves four main inputs:

* RSS, skewed towards "most upvoted" feeds. I use Reeder because it has a polished user interface, formats the feed in an aggregated, chronological timeline across devices, and has native reddit and filtering support, but there are many other RSS readers too. I subscribe to around 25 feeds and 25 subreddits through Reeder. To increase the signal to noise, I try to find "most upvoted" feeds where possible. For example, for subreddits, I usually use the top posts for the week, which you can get for any subreddit like this: https://www.reddit.com/r/economics/top.rss?t=week (just replace 'economics' with the subreddit of your choice). Doing so will get you on the order of five top posts per day, but you can also change 'week' to 'day' to increase that number to about twenty, or to 'month' to decrease it to about one, which I do for some feeds. (A small script version of this pattern appears at the end of this post.) To find generally interesting subreddits, I looked through the top few hundred subreddits, and then I also added some niche subreddits for specific interests I have. Below is part of my reddit list (alphabetical). You can see I have some really large subreddits (technology, science, todayilearned) mixed in with more niche ones (singularity, truereddit), as well as communities (rootsofprogress, slatestarcodex) and hobbies (phillyunion, usmnt). Getting about twenty-five across a range of your interests makes a good base feed. Many publications still have RSS feeds if you search for the publication name + RSS. If they don't, it's likely RSS.app or Feedspot has made one you can use instead. There is usually support through one of these methods for sub-section publication feeds, for example the tech section. Here are some other examples of non-reddit "most upvoted" feeds that might be more widely appealing:
  * Hacker News RSS - for example, I added the 300- and 600-point feeds, meaning you get notified when a story hits 300 or 600 points (you can pick any number)
  * NYT RSS - they have most emailed/shared/viewed
  * Techmeme RSS - curated by the Techmeme team
  * LessWrong RSS - they have a curated feed
Then I also just consume the main RSS feeds of some really high-signal publications like Ars Technica (full articles come through for subscribers), The Information, Quanta, etc. Even with all this curation, the signal to noise for me isn't that great. I mostly skim through the timeline, but I do end up getting a bunch of interesting articles this way every day. I also use the filtering feature of Reeder to filter out some keywords with a really low hit rate.

* Podcasts. I subscribe to about 20 podcasts via Overcast. I like the Overcast "Voice Boost (clear, consistent volume)" and "Smart Speed (shorter silences)" features, as well as the ability to set a custom playback speed for each podcast. The signal-to-noise ratio is better here than the RSS feeds, but I still don't listen to every episode, and for the ones I do, I often skip around. I like having a queue to listen to in the car and at the gym. I find new podcast discovery pretty hard. I've looked through the Overcast top podcasts lists in all the different categories, and tried lots of them, but not many stick for me.

* Email newsletters. I subscribe to about the same number (20-25) of email newsletters, some daily but most weekly or less. Signal/noise is less than podcasts, but greater than the RSS feeds. I'd guess my hit rate is about 20% in terms of reading them through, vs. maybe 50% for listening through podcasts and 5% for reading through the full RSS amalgamation. About half of the email newsletters I subscribe to are through Substack and half are direct from websites/organizations.

* People sending me links. I really appreciate when people send me curated links, which happens less than I'd like, but I can't complain because the signal to noise here is the highest, with a hit ratio of maybe 80%. I try to encourage it by saying thank you and responding when I have thoughts.

With those four inputs, I feel decently covered, but sometimes I do wonder what I'm missing out on and occasionally relapse back to going directly to a news or social media app and skimming the front page. This method of course depends on having a good list of feeds, podcasts, and newsletters. But in general, I'm personally happier with this approach, though of course your mileage may vary. If you're doing something similar and have any ideas on process tweaks or specific recommendations for feeds, podcasts, or newsletters, I'd love to hear them.
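If you would rather script the "most upvoted" subreddit pattern described above than use a reader app, a few lines of Python with the feedparser library are enough. The subreddit list and user-agent string here are just examples; the URL pattern is the one from the post.

```python
# Minimal sketch: pull the weekly top posts for a few subreddits via their RSS feeds.
# Requires: pip install feedparser
import feedparser

subreddits = ["economics", "technology", "science"]  # example list; substitute your own

for sub in subreddits:
    # Same URL pattern as in the post; change t=week to t=day or t=month
    # to get more or fewer posts per day.
    url = f"https://www.reddit.com/r/{sub}/top.rss?t=week"
    # Reddit sometimes rejects default client user agents, so pass a custom one.
    feed = feedparser.parse(url, agent="my-rss-reader/0.1")
    print(f"\n== r/{sub} (top this week) ==")
    for entry in feed.entries[:5]:
        print(f"- {entry.title}\n  {entry.link}")
```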

    6 min
  6. NOV 15, 2025

    China has a major working-age population advantage through at least 2075

In response to "A U.S.-China tech tie is a big win for China because of its population advantage," I received feedback along the lines of: shouldn't we be looking at China's working-age population and not their overall population? I was trying to keep it simple in that post, but yes, we should, and when we do, we find, unfortunately, that China's population advantage still persists. Here's the data (the ratio arithmetic is spelled out in a short sketch at the end of this post):

Currently, China's working-age population is over 4 times the U.S.'s. According to Our World in Data, China's working-age population is 983 million to the U.S.'s 223 million, or 4.4x.

In 2050, despite being in rapid decline, China's working-age population is still projected to be over 3 times the U.S.'s. The projections put China's 2050 working-age population at 745 million to the U.S.'s 232 million, or 3.2x.

In 2075, noting projections are more speculative, China's working-age population is still projected to be about double the U.S.'s. The projections put China's 2075 working-age population at 468 million to the U.S.'s 235 million, or 2.0x.

Noah Smith recently delved into this rather deeply in his post "China's demographics will be fine through mid-century," noting:

China's economic might is not going to go "poof" and disappear from population aging; in fact, as I'll explain, it probably won't suffer significant problems from aging until the second half of this century.

And even in the second half, you can't count on their demographic decline then either, both because even by 2075 their working-age population is still projected to be double the U.S.'s under current conditions, and because those conditions are unlikely to hold. As Noah also notes:

Meanwhile, there's an even greater danger that China's leaders will panic over the country's demographics and do something very rash…All in all, the narrative that demographics will tip the balance of economic and geopolitical power away from China in the next few decades seems overblown and unrealistic.

OK, why does this demographic stuff matter again? Check out my earlier article for details, but here's a summary.

[A] U.S.-China tech tie is a big win for China because of its population advantage. China doesn't need to surpass us technologically; it just needs to implement what already exists across its massive workforce. Matching us is enough for its economy to dwarf ours. If per person output were equal today, China's economy would be over 4× America's because China's population is over 4× the U.S.'s. That exact 4× outcome is unlikely given China's declining population and the time it takes to diffuse technology, but 2 to 3× is not out of the question. China doesn't even need to match our per-person output: their population will be over 3× ours for decades, so reaching ⅔ would still give them an economy twice our size since 3 × ⅔ = 2.

…With an economy a multiple of the U.S.'s, it's much easier to outspend us on defense and R&D, since budgets are typically set as a share of GDP.

…What if China then starts vastly outspending us on science and technology and becomes many years ahead of us in future critical technologies, such as artificial superintelligence, energy, quantum computing, humanoid robots, and space technology? That's what the U.S. was to China just a few decades ago, and China runs five-year plans that prioritize science and technology.

…Our current per person output advantage is not sustainable unless we regain technological dominance.

…[W]e should materially increase effective research funding and focus on our own technology diffusion plans to upgrade our jobs and raise our living standards.
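The ratios quoted above are just the Our World in Data working-age population figures divided through; here they are spelled out (figures in millions, taken from the post), along with the post's ⅔-output thought experiment.

```python
# Working-age population figures (millions) as quoted in the post (Our World in Data).
working_age = {
    "current": {"China": 983, "US": 223},
    "2050":    {"China": 745, "US": 232},
    "2075":    {"China": 468, "US": 235},
}

for period, pops in working_age.items():
    ratio = pops["China"] / pops["US"]
    print(f"{period}: China/US working-age ratio = {ratio:.1f}x")  # 4.4x, 3.2x, 2.0x

# The GDP thought experiment from the post: with ~3x the working-age population,
# even 2/3 of U.S. per-person output implies an economy ~2x the U.S.'s.
print(3 * (2 / 3))  # 2.0
```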

    4 min
  7. NOV 1, 2025

    Total Factor Productivity needs a rebrand (and if you don't know what that is, you probably should).

If you don't know about Total Factor Productivity (TFP), you probably should. It's an economic concept that is arguably the most important driver of long-term economic prosperity. An International Monetary Fund (IMF) primer on TFP explains it like this (emphasis added):

It's a measure of an economy's ability to generate income from inputs—to do more with less…If an economy increases its total income without using more inputs…it is said to enjoy higher TFP [Total Factor Productivity]. TFP is an important macroeconomic statistic [because] improvements in living standards must come from growth in TFP over the long run. This is because living standards are measured as income per person—so an economy cannot raise them simply by adding more and more people to its workforce. Meanwhile, economists have amassed lots of evidence that investments in capital have diminishing returns. This leaves TFP advancement as the only possible source of sustained growth in income per person, as Robert Solow, the late Nobel laureate, first showed in a 1957 paper.

So, it's important. Critically important to long-term progress. To learn more about TFP, check out the full IMF primer referenced above and then this post I wrote about TFP titled "The key to increasing standard of living is increasing labor productivity," which also has more links embedded in it. It explains how the only sustainable way to increase TFP is "to invent new technology that enables workers to do more per hour." And this is why I'm always going on and on about increasing research funding. (For the textbook growth-accounting definition of TFP, see the short formula sketch at the end of this post.)

Let's assume for a second that most people want more prosperity and that long-term prosperity does indeed primarily flow through Total Factor Productivity. Then why aren't we talking about TFP a lot more? Why isn't Total Factor Productivity front and center in our political agendas?

I think there are a host of reasons for that, including those I outlined in the paradox of progress. But another even simpler reason has to be that Total Factor Productivity is a terrible, inscrutable name, at least from the perspective of selling the concept to the mainstream public. None of its three words is great. It starts with "total," which isn't as off-putting as the other words, but doesn't add much, especially as the first word, let alone the fact that economists quibble that it isn't an actual total. "Factor" seems like a math word and doesn't add much either. And then you have "productivity," which is confusing to most people because it has an unrelated colloquial meaning, and from a political perspective it also codes as job-cutting, which is inherently unappealing.

Now, lots of economics jargon has similar problems, case in point "Gross Domestic Product" (GDP). Given GDP hasn't been rebranded, I doubt TFP will be either. That said, I think anyone trying to communicate this concept to the public shouldn't take the TFP name or acronym as a given, but should try to use something more appealing and inherently understandable. I'm looking to switch to something else but am not sure exactly what. My thinking so far has led me to work in the words "prosperity" or "innovation" directly, like:

* Prosperity Driver
* Prosperity Component
* Innovation Multiplier

Do you have any other suggestions?
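For readers who want the textbook definition behind the IMF description, TFP is the residual in a standard growth-accounting decomposition (this formulation is background I'm adding, not something spelled out in the post):

```latex
% Cobb-Douglas growth accounting (Solow, 1957): output Y from capital K,
% labor L, and TFP A, with capital share \alpha.
Y = A\,K^{\alpha}L^{1-\alpha}
\quad\Longrightarrow\quad
\underbrace{\frac{\dot A}{A}}_{\text{TFP growth}}
  = \frac{\dot Y}{Y} - \alpha\,\frac{\dot K}{K} - (1-\alpha)\,\frac{\dot L}{L}
```

In words: whatever income growth is not explained by adding capital or labor shows up in that residual, which is why sustained growth in income per person has to come through it.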

    4 min
