Gabriel Weinberg's Blog

Gabriel Weinberg

Tech, policy, and business insights from the DuckDuckGo founder and co-author of Traction and Super Thinking. gabrielweinberg.com

  1. 4 hours ago

    What banning AI surveillance should look like, at a minimum

    I previously called on Congress to ban AI surveillance because of its heightened potential to easily manipulate people, both for commercial and ideological ends. Essentially, we need an AI privacy law. Yet Congress has stalled on general privacy legislation for decades, even in moments of broad public focus on privacy, like after the Snowden revelations and the Cambridge Analytica scandal. So, instead of calling for another general privacy bill that would encompass AI, I believe we should focus on an AI-specific privacy bill.

    Many of the privacy frameworks floated over the years for general privacy regulation could essentially be repurposed to apply more narrowly to AI. For example, one approach is to enumerate broad consumer AI rights, such as rights of access, correction, deletion, portability, notice, transparency, opt-out, human review, etc., with clear processes to exercise those rights. Another approach is to create legally binding duties of care and/or loyalty on organizations that hold AI data, requiring them to protect consumers' interests regarding this data, such as duties to minimize it, avoid foreseeable harm, prohibit secondary use absent consent or necessity, etc. There are more approaches out there, and they are not mutually exclusive. While I have personal thoughts on some of them, my overriding goal is to get something, anything useful passed, and so I remain framework-agnostic.

    However, I believe that within whatever framework Congress adopts, certain fundamentals are non-negotiable:

    * Ban a set of clearly harmful practices. Start with what (I hope are) universal agreement items, like identity theft, deceptive impersonation, unauthorized deepfakes, etc. The key is explicitly defining this as a category so that we can debate politically harder cases like personalized pricing and predictive policing (both of which I think should also be banned).

    * Practices near the ban threshold should face higher scrutiny. For example, if we can't manage to outright ban using AI to assist in law enforcement decisions, at the very least this type of use should always be subject to human review, reasonable auditing procedures, etc. Using AI for consequential decisions, like loan approvals, or for processing sensitive data, like health information, should at least be in this category. And many practices within this category, especially with regard to consumer AI, should be explicitly opt-in.

    * Make everything else transparent and optional. Outside the bright-line bans and practices subject to higher scrutiny, any other AI profiling must be transparent and at least come with the ability to opt out, with only highly limited exceptions where opt-outs would defeat the purpose, like for legal compliance. Consumers also need meaningful transparency, including prominent disclosures that indicate clearly when you are interacting with an AI system. That means not just generic data collection notices or disclosures folded into existing privacy policies, but plain-language explanations shown (or spoken) prominently at the time of processing, which detail what AI systems are inferring and deciding.

    * States must maintain authority to strengthen, not undermine, federal minimums. I wrote a whole post about why, with the gist being that AI is changing rapidly, the federal government doesn't react to these changes quickly enough, and states have shown they will act, both in AI and privacy.

    Finally, these protections won't stifle progress.
    Some oppose any AI regulation because they believe it will hinder AI adoption or innovation. In terms of innovation, privacy makes a good analogy: despite fears that a "patchwork" of state privacy laws would wreak havoc on innovation by going too far, that havoc hasn't materialized. Innovation hasn't stalled, and neither have Big Tech privacy violations. In terms of adoption, the backlash against AI is real and rising, and smart regulation can help build the trust necessary for sustained AI adoption, not hinder it. We can get the productivity benefits of AI without the privacy harms.

    5 min
  2. 5 days ago

    On reddit, roughly 500 views = 1 click

    A couple weeks ago I wrote a post titled "AI surveillance should be banned while there is still time." Someone submitted it to Hacker News, where it got over 600 upvotes, so I decided to submit it myself to reddit (on /r/technology), where it got over 1,100 upvotes. Because I submitted it, I was able to get "Post Insights" (pictured above, left), which indicated the post got 175,000 views. Similarly, Substack reports "Traffic sources" (pictured above, right) and shows 310 views came from reddit.

    This roughly 500:1 views-to-clicks ratio is consistent with others I've gathered across several different posts and subreddits, so I don't think it is particularly anomalous. Reddit views count impressions (when posts appear in feeds), making this ratio also comparable to other platforms. The bottom line is that lots of views on social doesn't equate to lots of clicks, and certainly not lots of email subscribers, where there's another roughly 100:1 drop-off from clicks to email subscribers.

    My takeaways:

    * Social ≠ list growth. Social posts don't build email lists: social post views convert to new email subscribers at perhaps 1 in 50,000 (500 × 100) at best.

    * Optimize the headline. If you do chase social views, nail the headline, since that's where 99% of the value lives given almost nobody clicks through. For example, you could expose your brand name or logo, or just raise awareness for a crisp point or concept you can fit in a headline.

    A 0.2% click-through rate is common for ads; I expected higher for a top organic post on a popular subreddit, but this data suggests otherwise. Of course, your mileage may vary, but I thought it would nevertheless be helpful to put out a real data point I found interesting.
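    To make the arithmetic concrete, here is a minimal sketch of the funnel. The views and clicks are the figures reported above; the 100:1 clicks-to-subscribers figure is the post's rough estimate, not measured platform data:

    ```python
    # Funnel from the post: reddit views -> clicks -> email subscribers.
    views = 175_000          # reddit "Post Insights" views (impressions)
    clicks = 310             # Substack visits attributed to reddit

    views_per_click = views / clicks
    print(f"views per click: {views_per_click:.0f}")   # ~565, i.e. roughly 500:1

    # Assumed drop-off from clicks to subscribers (the rough 100:1 above).
    clicks_per_subscriber = 100
    views_per_subscriber = views_per_click * clicks_per_subscriber
    print(f"views per subscriber: {views_per_subscriber:,.0f}")  # ~56,000
    ```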

    3 min
  3. Sep 13

    A U.S.-China tech tie is a big win for China because of its population advantage

    China's population is declining, but UN projections show it will remain at least twice the size of the U.S. population for decades. So what? Population size matters because economy size (GDP) matters, and GDP = population × output per person. For example:

    * 100 million people × $50,000 per year = a $5 trillion economy
    * 1 billion people × $50,000 per year = a $50 trillion economy

    Then why hasn't China's economy already dwarfed America's? It's because China's output per person is still much lower than America's. China has about 4× the people but about ¼ the output per person, and 4 × ¼ = 1. This means that by any standard GDP measure, such as market exchange rates or purchasing power parity, the two economies are in the same ballpark today.

    OK, but why is China's economic output per person so much lower than America's? A primary reason is that large swaths of its workforce aren't yet at the technological frontier. About 23% of Chinese workers are in agriculture vs. about 1½% in the U.S. However, if China continues to educate its population, mechanize its workforce, and diffuse technology across it, that gap will continue to narrow and per-worker output will continue to climb. Only a decade ago, over 30% of China's workforce was in agriculture, and China's per-person output has grown much faster than America's for decades.

    Technology is the driving force enabling China to catch up with the U.S. in economic output per person. As long as China diffuses increasingly sophisticated technology through its workforce significantly faster than the U.S. does, it will keep raising its output per person relative to the U.S., growing its economy faster. Diffusion is not automatic; it depends on continued private-sector dynamism and sound policy. It isn't guaranteed, but it is certainly plausible, if not likely.

    Put another way, a U.S.-China tech tie is a big win for China because of its population advantage. China doesn't need to surpass us technologically; it just needs to implement what already exists across its massive workforce. Matching us is enough for its economy to dwarf ours. If per-person output were equal today, China's economy would be over 4× America's because China's population is over 4× the U.S. population. That exact 4× outcome is unlikely given China's declining population and the time it takes to diffuse technology, but 2 to 3× is not out of the question. China doesn't even need to match our per-person output: its population will be over 3× ours for decades, so reaching ⅔ of our per-person output would still give it an economy twice our size, since 3 × ⅔ = 2.
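    Here is a minimal sketch of those scenarios in code. The population and output figures are illustrative round numbers I'm assuming (U.S. GDP per capita is in the $80,000 ballpark); the relationships are the point, not the exact levels:

    ```python
    # GDP = population x output per person, per the post's formula.
    def gdp(population, output_per_person):
        return population * output_per_person

    US_POP = 340e6                     # ~340 million people (round number)
    US_OUTPUT = 80_000                 # assumed ~$80k output per person

    us = gdp(US_POP, US_OUTPUT)

    # Today: ~4x the people at ~1/4 the output per person -> rough parity.
    china_today = gdp(4 * US_POP, US_OUTPUT / 4)
    print(china_today / us)            # 1.0

    # The post's long-run scenario: ~3x our population at 2/3 our output.
    china_later = gdp(3 * US_POP, US_OUTPUT * 2 / 3)
    print(china_later / us)            # 2.0 -> an economy twice ours
    ```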
    Some may recall similar predictions about Japan in the 1980s that never materialized. But China is fundamentally different: Japan's population peaked at less than ½ the U.S. population, while China's is over 4× ours. Japan's workforce had already reached the technological frontier when it stalled out, while China is still far behind, with massive room to catch up.

    And what does China win exactly? China wins a much bigger economy. With an economy a multiple of the U.S. economy, it's much easier to outspend us on defense and R&D, since budgets are typically set as a share of GDP. Once China's economy is double or triple ours, trying to keep up would strain our economy and risk the classic guns-over-butter trap. (This is the same trap that contributed to the Soviet Union's collapse: too much of its economy was steered toward military ends.) Alliances could help offset raw population scale, but only if we coordinate science, supply chains, and procurement, which we have not achieved at the needed scale.

    What if China then starts vastly outspending us on science and technology and becomes many years ahead of us in future critical technologies, such as artificial superintelligence, energy, quantum computing, humanoid robots, and space technology? That's what the U.S. was to China just a few decades ago, and China runs five-year plans that prioritize science and technology.

    What can we do about it? Our current per-person output advantage is not sustainable unless we regain technological dominance. By dominance, I don't mean a few months ahead, like today's AI cycles. I mean many years ahead in developing, diffusing, and commercializing frontier science and technology.

    My takeaway: we need to recognize how quickly we are losing our privileged position to China. If its economy doubles or triples ours, it can outspend us to lock in technological and military dominance. That may not happen, but we shouldn't bet on it. Instead, we should materially increase effective research funding and focus on our own technology diffusion plans to upgrade our jobs and raise our living standards.

    What about AI automation? The net job effect of AI automation is hotly debated, but no outcome changes this calculus. If employment levels remain about the same, then the status quo population advantage remains. If net jobs drop dramatically due to an AI-dominated economy, staying ahead in AI systems becomes even more important. So, either way, doing more effective research and development is critical.

    This should be the most important and bipartisan political issue. Research and technology diffusion isn't everything, but it is the cornerstone of future prosperity. If we don't get it right, we definitely lose, and we're currently not getting it right.

    7 min
  4. Sep 6

    AI surveillance should be banned while there is still time.

    All the privacy harms present with online tracking are also present with AI, but worse. While chatbot conversations resemble longer search queries, chatbot privacy harms have the potential to be significantly worse because the inference potential is dramatically greater. Longer input invites more personal information, and people are starting to bare their souls to chatbots. The conversational format can make it feel like you're talking to a friend, a professional, or even a therapist. While search queries reveal interests and personal problems, AI conversations take that specificity to another level and, in addition, reveal thought processes and communication styles, creating a much more comprehensive profile of your personality.

    This richer personal information can be more thoroughly exploited for manipulation, both commercially and ideologically, for example, through behavioral chatbot advertising and models designed (or themselves manipulated through SEO or hidden system prompts) to nudge you towards a political position or product. Chatbots have already been found to be more persuasive than humans and have caused people to go into delusional spirals as a result. I suspect we're just scratching the surface, since they can become significantly more attuned to your particular persuasive triggers through chatbot memory features, where they train and fine-tune based on your past conversations, making the influence much more subtle. Instead of an annoying and obvious ad following you around everywhere, you can have a seemingly convincing argument, tailored to your personal style, with an improperly sourced "fact" that you're unlikely to fact-check, or a subtle product recommendation you're likely to heed. That is, all the privacy debates surrounding Google search results from the past two decades apply one-for-one to AI chats, but to an even greater degree.

    That's why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we're demonstrating that privacy-respecting AI services are feasible. But unfortunately, such protected chats are not yet standard practice, and privacy mishaps are mounting quickly. Grok leaked hundreds of thousands of chatbot conversations that users thought were private. Perplexity's AI agent was shown to be vulnerable to hackers who could slurp up your personal information. OpenAI is openly talking about its vision for a "super assistant" that tracks everything you do and say (including offline). And Anthropic is going to start training on your chatbot conversations by default (previously the default was off). I collected these from just the past few weeks!

    It would therefore be ideal if Congress could act quickly to ensure that protected chats become the rule rather than the exception. And yet, I'm not holding my breath, because it's 2025 and the U.S. still doesn't have a general online privacy law, let alone privacy enshrined in the Constitution as a fundamental right, as it should be. However, there does appear to be an opening right now for AI-specific federal legislation, despite the misguided attempts to ban state AI legislation. Time is running out because every day that passes further entrenches bad privacy practices. Congress must move before history completely repeats itself and everything that happened with online tracking happens again with AI tracking. AI surveillance should be banned while there is still time.
    No matter what happens, though, we will still be here, offering protected services, including optional AI services, to consumers who want to reap the productivity benefits of online tools without the privacy harms.

    4 min
  5. Aug 30

    Progress isn't automatic

    Everyone living today has lived in a world where science and technology, globally, have progressed at a relatively high rate compared to earlier times in human history. For most of human history, a random individual could expect to use roughly the same technology in one decade that they did the previous decade. That's obviously not the case today. In fact, most of us alive today have little to no personal experience with that degree of technological stagnation.

    That's a good thing, because long-term technological stagnation puts an upper bound on possible increases in our collective standard of living. From an earlier post:

    [W]ithout new technology, our economic prosperity is fundamentally limited. To see that, suppose no breakthroughs occur from this moment onward; we get no new technology based on no new science. Once we max out the hours we can work, the education people will seek, and the efficiency with existing technology, then what? We'd be literally stuck. Fundamentally, if you don't have new tools, new technology, new scientific breakthroughs, you stagnate.

    That is, standard of living is fundamentally a function of labor productivity. To improve your standard of living, you need to make more money so you can buy better things, like housing, healthcare, leisure, etc. Once you get the best education you can and maximize your hours, you are then limited in how much you can make by how much you can produce: your output. How do you increase your output? Through better technology. At an economy-wide level, therefore, if we're not introducing new technology, we will eventually hit a maximum output level we cannot push beyond. This is a counterintuitive and profound conclusion that I think gets overlooked because we take technological progression for granted.

    Science and technology don't just progress on their own. There were many periods in history where they essentially completely stagnated in parts of the world. That's because it takes considerable effort, education, organization, and money to advance science and technology. Without enough of any one of those ingredients, it doesn't happen. And, if technological progression can go slower, perhaps it could also go faster, by better attuning the level of effort, education, organization, and money. For example, I've been arguing in this blog that the political debate now around science funding has an incredible amount of status quo bias embedded in it. I believe reducing funding will certainly slow us down, but I also believe science funding was already way too low, perhaps 3× below optimal levels.

    Put another way, I think a primary goal of government and society should be to increase our collective standard of living. You simply can't do that long-term without technological progression.

    Here are a couple of quick critiques that I may tackle more in depth in the future. Some people are worried that we're just producing more stuff for the sake of producing more stuff, and that that's not really increasing the standard of living. First, with technological progression, the stuff gets both better and cheaper, and that is meaningful; take medicines, for example. Better medicines mean better health spans, and cheaper medicines mean more access to medicine. Second, people buy things, for the most part, of their own free will, and maybe people do want more stuff, and that's not a bad thing in and of itself, as long as we can control for the negative externalities.
    Third, controlling for those negative externalities, like combating climate change effectively, actually requires new science and technology.

    Another common critique is that technology causes problems, for example, privacy problems. As someone who started a privacy-focused company, I've held that view for decades and continue to hold it. But we shouldn't throw the baby out with the bathwater. We need to do a more effective job regulating technology without slowing down its progression.

    5 min
  6. Aug 23

    Musings on evergreen content

    FYI: This post is a bit meta, about writing/blogging itself, so it may not be your cup of tea.

    I've been having some cognitive dissonance about blogging. On the one hand, I don't believe in doing things primarily for legacy purposes, since in the long arc of history, hardly anything is likely to be remembered or matter that much, and I won't be here regardless. On the other hand, I also don't like spending a lot of my writing time crafting the ephemeral, like a social media post, because it seems that same writing time could be spent on developing something more evergreen. I stopped blogging a decade ago following that same logic, focusing my writing time on books instead, which are arguably more evergreen than blogging. But I'm obviously back here blogging again, and with that context, here are some dissonant thoughts I'm struggling with:

    Is the chasing of more evergreen content just a disguised form of chasing legacy? I think not, because long-term legacy is about after you're dead, and I'm not looking for something that will last that long. I'm more looking to avoid the fate of most content, which has a half-life of one day, so that my writing can have more of an impact in my lifetime. That is, it's more about maximizing the amount of impact per unit time of writing than any long-term remembrance.

    Is there more value in more ephemeral content than I previously thought? I'm coming to believe yes, which is why I've started blogging again, despite most blog posts still having that short half-life I'm trying to avoid. Specifically, I think there can be cumulative value in more ephemeral content when it:

    * Builds to something like a movement of people behind a thematic idea that can spring into action collectively at some point in the future, which is also why I started this up again on an email-first (push) platform.

    * Helps craft a more persuasive or resonant argument, given feedback from smaller posts, much like comedians build up their comedy specials through lots of trial and error.

    This last piece reminds me of Groundhog Day (the movie), where the main character keeps revising his day until he achieves the perfect day, much like you can keep refining your argument until it perfectly resonates. In any case, it's hard to produce occasional evergreen content if you don't have an audience to seed it with and if you don't have a fantastic set of editors to help craft it (which hardly anyone does outside a professional context). That is, putting out more ephemeral content can be seen as part of the process of putting out quality evergreen content, both in terms of increasing its quality (from continuous feedback) and in terms of increasing its reach (from continuous audience building).

    Should I spend as much time editing these posts as I do? Probably not, given that it is very rare for one of these posts to go viral or become evergreen. The problem is, I like editing. However, trying to stick roughly to a posting frequency and using formats like this one (Q/A headings) really helps me avoid my over-editing tendencies.

    What is the relationship between blogging frequency and evergreen probability? There's no doubt that some blog posts are evergreen, in that people refer back to them years after they were written (assuming they are still accessible). Does the probability of becoming evergreen have any relationship to the frequency of posting?
    You can make compelling arguments for both sides:

    * If you post more, you have more chances to go viral, and most people in a viral situation don't know your other posts anyway, so the frequency isn't seemingly inhibiting any particular post from going viral.

    * If you post less, you will likely spend more time crafting each post, increasing each post's quality, and thus increasing its chances of virality, which I think (though I am not sure) is a necessary condition of evergreenness.

    My current sense is that if you post daily, then you are unlikely to be creating evergreen content in those posts. Still, you can nevertheless have a significant impact (and perhaps more) by being top of mind in a faster-growing audience and influencing the collective conversation through that larger audience more frequently. That's because there does seem to be a direct relationship between posting frequency and audience growth. However, posting daily is a full-time job in and of itself, and one I can't personally do (since I already have a full-time job) and one I don't want to do (since I don't like being held to a schedule and also like editing/crafting sentences too much).

    So, yes, I do think there is a relationship between frequency and evergreenness, and there is probably some sweet spot between weekly and monthly posting that maximizes your evergreen chances. You need to be top of mind enough to retain and build an audience (including through recommendations), you need enough posting to get thorough feedback to improve quality, and you need enough time with each post to get it to a decent quality in the first place. Full-timers also have other options, like daily musings paired with more edited weekly or monthly posts.

    But isn't there a conflict between maximizing audience size and maximizing evergreen probability? Yes, I think there is. If you want to maximize audience size, the optimal post frequency is at least daily, vastly increasing the surface area with which your audience can grow, relative to a weekly (or less frequent) posting schedule. But that frequency, as previously stated, is not the optimal frequency for maximizing the probability of producing evergreen content. So, you have a tradeoff: audience size vs. evergreen probability. And it is a deeper tradeoff than just frequency, since I also think the kind of content that best grows audience size is much shallower than the kind of content that is more likely to become evergreen. As noted, you can relax this tradeoff with more time input, which I don't have.

    So, for right now, acknowledging this tradeoff, I think I'm going to stick to a few deeper posts a month, maybe edited a bit less, though. I'd rather build a tighter audience that wants to engage more deeply in ideas that can last than a larger audience that wants to consume shallower content that is more ephemeral. I hope you agree, and I could also use more feedback!

    7 min
  7. Aug 10

    Rule #3: Every decision involves trade-offs

    One of the universal rules of decision-making is that every decision involves trade-offs. By definition, when making a decision, you are trading off one option for another. But other, less obvious trade-offs are also lurking beneath the surface.

    For example, whatever you choose now will send you down a path that will shape your future decisions, a mental model known as path dependence. That is, your future path is both influenced and limited by your past decisions. If you enroll your child in a language immersion school, you significantly increase the chances that they will move to an area of the world where that language is spoken, decades from now. Maybe you're OK with that possible path, but you should at least be aware of it. In a real sense, you aren't just trading the immediate outcomes of one option for another, but one future path for another. From that perspective, you can consider path-related dimensions, such as how the various options vary in terms of future optionality or reversibility. It's not that more optionality or reversibility is always better, as they often come at a cost, but just that these less obvious trade-offs should be considered.

    A related, less obvious trade-off involves opportunity cost. If you pick from the options in front of you, what other opportunities are you forgoing? By explicitly asking this question, more options might reveal themselves. And you should also always ask specifically what opportunities you are forgoing by deliberating on the decision. Sometimes waiting too long means you miss out on the best option; other times, putting off the decision means more options will emerge. Again, one side isn't always better, since every situation is different. But the opportunity costs, including from waiting, should be explored.

    Another common but less obvious trade-off concerns the different types of errors you can make in a decision. From our book Super Thinking:

    [C]onsider a mammogram, a medical test used in the diagnosis of breast cancer. You might think a test like this has two possible results: positive or negative. But really a mammogram has four possible outcomes…the two possible outcomes you immediately think of are when the test is right, the true positive and the true negative; the other two outcomes occur when the test is wrong, the false positive and the false negative.

    These error types occur well beyond statistics, in any system where judgments are made. Your email spam filter is a good example. Recently our spam filters flagged an email with photos of our new niece as spam (a false positive). And actual spam messages still occasionally make it through our spam filters (false negatives). Because making each type of error has consequences, systems need to be designed with those consequences in mind. That is, you have to make decisions on the trade-off between the different types of error, recognizing that some errors are inevitable. For instance, the U.S. legal system is supposed to require proof beyond a reasonable doubt for criminal convictions. This is a conscious trade-off favoring false negatives (letting criminals go free) over false positives (wrongly convicting people of crimes).
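    As a minimal sketch of those four outcomes, using the spam-filter example above (the function and names here are my own illustration, not from the book):

    ```python
    # The four outcomes of any yes/no judgment: the system's call
    # ("flagged") crossed with reality ("is_spam").
    def outcome(flagged: bool, is_spam: bool) -> str:
        if flagged and is_spam:
            return "true positive"    # spam correctly caught
        if flagged and not is_spam:
            return "false positive"   # niece photos sent to spam
        if not flagged and is_spam:
            return "false negative"   # spam reaching the inbox
        return "true negative"        # normal mail left alone

    for flagged in (True, False):
        for is_spam in (True, False):
            print(f"flagged={flagged!s:<5} is_spam={is_spam!s:<5} -> "
                  f"{outcome(flagged, is_spam)}")
    ```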
    To uncover other less obvious trade-offs, you can brainstorm and explicitly list out (for example, in a spreadsheet) the more subtle dimensions on which options differ. Think of a comparison shopping page that compares numerous features and benefits. The obvious ones, such as cost, may immediately come to mind, but others may take time to surface, like how choosing one option might impact your personal quality of life in the future.

    The point is not to overthink decisions, but to be conscious of inherent trade-offs, especially the less obvious yet consequential ones. Just as I think you should take the time to write out assumptions explicitly, I also believe you should do the same for trade-offs. See other Rules.

    [Image: The Transporter (2002)]

    4 min
  8. Jul 31

    9 Ways DuckDuckGo's Search Assist Differs from Google's AI Overviews

    At DuckDuckGo, our approach to AI is to only make AI features that are useful, private, and optional. If you don't want AI, that's cool with us. We have settings to turn off all of our AI features, and even a new setting to help you avoid AI-generated images in our search results. At the same time, we know a lot of people do want to use AI if it is actually useful and private (myself included). Our private chat service at duck.ai has the highest satisfaction ratings we've seen in a new feature, and Search Assist, our equivalent of Google's AI Overviews, is currently our highest-rated search feature.

    Our goal with Search Assist is to improve search results, not to push AI. We've been continually evolving it in response to feedback, seeking better UX, and here's how we're thinking about that UX right now, relative to Google's AI Overviews:

    * You can turn Search Assist off or turn it up, whichever you prefer.

    * When it does show, Search Assist keeps vertical space to a minimum so you can still easily get to other search results.

    * The initial Search Assist summary is intentionally short, usually two brief sentences. This brevity keeps hallucinations to a minimum, since less text means less surface area to make things up. You also get the complete thought without having to click anything. However, you can still click for a fuller explanation. This is a subtle but important distinction: clicking more on Google gets you more of the same, longer summary; clicking more on DuckDuckGo gets you a new, completely independent generation.

    * You can use the Assist button to either generate an answer on demand if one isn't showing automatically, or collapse an answer that is showing to zero vertical space.

    * When we don't think a Search Assist answer is better than the other results, we don't show it on top. Instead, we'll show it in the middle, on the bottom, or not at all. This flexibility enables a more fine-tuned UX.

    * All source links are always visible, not hidden behind any clicks or separated from the answer. We've also been keeping sources to a minimum (usually two) to both increase answer quality (since LLMs can get confused by a lot of overlapping information) and increase source engagement.

    * Our thumbs up/down is also visible by default, not hidden behind a click. This anonymous feedback is extremely valuable to us as a primary signal to help us find ways to improve.

    * To generate these answers, we have a separate search crawling bot for Search Assist answers called DuckAssistBot that respects robots.txt directives. By separating DuckAssistBot from our normal DuckDuckBot, and unlike Google, we allow publishers to opt out of just our Search Assist feature (see the robots.txt sketch after this list).

    * Like all of our search results, Search Assist is anonymous. We crawl sites and generate answers on your behalf, without exposing your personal information in the process or storing it ourselves.
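    For publishers, that opt-out might look like the following minimal robots.txt sketch. The bot names are the ones given above; the exact directives a site uses will depend on its needs and on each crawler's documented behavior:

    ```
    # Hypothetical robots.txt: opt out of Search Assist crawling
    # while still allowing normal search indexing.
    User-agent: DuckAssistBot
    Disallow: /

    User-agent: DuckDuckBot
    Allow: /
    ```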
    I'm sure our Search Assist UX will evolve further from here, as we're actively working on it every day. For example, we're working now on making it easier to enter a follow-up question in-line, which allows you to more easily stay in context when entering your question. That is to say, the above is not set in stone, and our answers to these UX questions will surely change over time, but I hope this post helps illustrate how we're approaching Search Assist consistently with our overall approach to AI: useful, private, and optional. Feedback is welcomed!

    5 min
