Hi there, welcome back to Untangled. It’s written by me, Charley Johnson, and supported by members like you. This week I’m sharing my conversation with Miranda Bogen (Director, AI Governance Lab, Center for Democracy & Technology) about what happens when your AI assistant becomes an advertiser.

As always, please send me feedback on today’s post by replying to this email. I read and respond to every note. Don’t forget to sign up for The Untangled Collective — it’s my free community for tech & society leaders navigating technological change and changing systems, and the next event is coming up!

🏡 Untangled HQ

🔦 NEW: I’m teaming up with Aarn Wennekers (complexity expert and author of Super Cool & Hyper Critical) to launch Stewarding Complexity, a private, confidential gathering space for boards, executive teams, and organizational leaders to step outside formal governance structures, speak candidly with peers, and practice making sense of complexity — together. If that’s you, join us!

🚨 Not New, But Important: Every organization I speak with is facing the same two questions: How do we build strategy for uncertainty, and what should we actually do about AI? My course, Systems Change for Tech & Society Leaders, provides a structured approach to navigating both, helping leaders move beyond linear problem-solving and into systems thinking that engages emergence, power, and the relational foundations of change. Sign up for Cohort 6 today!

Because why not: here’s a free diagnostic framework I use in the course to help you assess how your organization understands and uses technology across its strategy, programs, and operations.

🖇️ Some Links

How Certain Is It?

I’ve written a lot about why embracing uncertainty matters. Chatbots do the opposite: they collapse uncertainty into confident-sounding responses, packaging blind confidence as a feature. But what if we designed these tools differently? What would it take to preserve uncertainty rather than erase it? A new paper tackles this challenge, arguing we need to protect the messier, harder-to-quantify forms of uncertainty that professionals navigate through conversation and intuition. The paper’s proposed fix? Create systems where professionals collectively shape how different forms of uncertainty get expressed and worked through.

Blackbox Gets Subpoenaed

Job applicants are suing Eightfold AI, claiming its hiring-screening software should follow Fair Credit Reporting Act requirements, which would give candidates the right to see what data is collected and to dispute inaccuracies. Eightfold scores job applicants 1–5 using a database of over a billion professional profiles. Sound familiar? It’s essentially what credit agencies do: create dossiers, assign numeric scores, and determine eligibility. The lawsuit argues: if it works like a credit agency, it should be regulated like one. As David Seligman of Towards Justice put it: “There is no A.I. exemption to our laws. Far too often, the business model of these companies is to roll out these new technologies, to wrap them in fancy new language, and ultimately to just violate people’s rights.”

Threatening Probabilities

Every time a chatbot threatens or blackmails someone, my inbox fills with “proof” of sentience. But a new paper shows these behaviors aren’t anomalies; they’re just extreme versions of normal human interaction: price negotiation, power dynamics, ultimatums. Our surprise comes from assuming chatbots should only reproduce socially sanctioned behavior, not the full spectrum of how humans actually act.
Threats and blackmail don’t signal consciousness. They signal that the model is drawing from the complete statistical distribution of human behavior, including the parts we don’t like to acknowledge. It’s probabilities all the way down, even when they’re uncomfortable ones.

🧶 When Your AI Assistant Becomes an Advertiser

OpenAI just announced it will start testing ads in ChatGPT’s free tier. The press release was carefully worded, reassuring users that “ads will not change ChatGPT answers” and that “your chats are not shared with advertisers.” But as Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology, pointed out in a recent conversation, these statements are misleading and miss the entire point. What’s coming is a fundamental shift in who these systems serve, and what that means for people, privacy, and inequality.

To understand why this matters, we need to look at three things: how AI changes advertising signals, what “privacy” really means in this context, and why this could be harder to detect than anything we’ve seen before.

The Signal Problem

The question is: what happens when your AI assistant becomes an advertiser? Answering that question, according to Miranda, starts by recognizing that advertising is all about high-fidelity signals of intent: data that accurately predicts what you want to buy or do. When an ad interrupts your experience on Facebook, it’s hoping that you’ll care; that perhaps something you clicked a while back will still be relevant. That’s not a great signal. Search offers a better signal: you’re typically using Google because you want something. But ChatGPT is different. You’re not just searching for information. You’re often thinking out loud, revealing what matters to you, what you’re struggling with, what you’re planning or hoping for. Each conversational turn reveals deeper context about your intent, creating rich data for advertisers.

Now, OpenAI wants those signals but, if you read the press materials, it’s clearly concerned about losing users. For example, the company bends over backwards to say that your chats won’t be “shared with advertisers.” According to Miranda, this is technically accurate but completely misleading. The platform doesn’t need to send advertisers a list of your conversations. That’s the whole point of advertising infrastructure: OpenAI will target ads on behalf of advertisers, shielding your specific data while making the connection happen anyway.

The press release also promises you can “turn off personalization” and “clear the data used for ads.” But there are multiple layers of personalization happening simultaneously (e.g., raw chat logs, explicit memory stored about you), and it’s unclear which of these OpenAI is referring to. Plus, even if you did turn off all personalization and erased all memory in the system, the amount of information a chatbot has about you in a specific context window offers plenty of signal for advertisers.

The Relationship Problem

On Facebook or Google, it’s clear you’re dealing with an advertiser. Your intent is your own. The experience is transactional. But as Miranda argues, when your AI assistant or AI co-worker starts subtly suggesting new products or services, something fundamentally different is happening. It’s closer to influencer marketing, where paid recommendations come wrapped in the veneer of authentic social connection. But an influencer’s audience typically knows they’re being paid to sponsor a product. With an AI assistant, the lines start to blur.
It has been helping you draft emails, think through career decisions, process relationship struggles. You’ve built relational trust with it over months, so when it suggests a therapist, lawyer, or contractor, you might perceive it as trusted advice without knowing, of course, which providers paid to be in the pool the AI draws from. The persuasion is invisible, wrapped in the same helpful tone the AI uses for everything else.

The Visibility Problem

Personalized ads and privacy harms are a big, albeit old, problem. These tools will of course propagate discrimination, exploit people at vulnerable moments, reinforce stereotypes and biases, and shape what opportunities people see (and don’t!). But this evolution of the advertising model brings something new: these harms will be even harder to identify.

Why? Because these systems are being built to connect with each other. AI agents will call other tools, connect with your bank and service providers, and exchange information across an ecosystem of interconnected systems. Money and incentives will flow through this network in ways that are nearly impossible to track. As Miranda put it: “Even just tracking where any of this is happening, where exchanges of money and incentives are happening behind the scenes and where that might be shaping people’s experiences will just be even more challenging to keep up with over time.”

If your inner monologue so far is “this all sounds very bad,” well, I get it. But we didn’t end the conversation without imagining alternative business models and policy solutions. Listen to the end for these, and hear what Miranda would do to shift power back to users if she were advising our next (fingers crossed!) President four years from today.

👉 Before you go: 3 ways I can help

* Advising: I help clients develop AI strategies that serve their future vision, craft policies that honor their values amid hard tradeoffs, and translate those ideas into lived organizational practice.
* Courses & Trainings: Everything you and your team need to cut through the tech hype and implement strategies that catalyze true systems change.
* 1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com