Untangled

Charley Johnson

Untangled is a podcast about technology, people, and power. untangled.substack.com

  1. 3 MAY

    The World They're Building Toward

    Hi there,

    This week I’m sharing a conversation I had with Bo Young Lee, CEO of AI4All, about Silicon Valley imaginaries, rational refusal, and the futures we haven’t been offered. As always, please send me feedback on today’s post by replying to this email. I read and respond to every note. On to the show!

    Untangled HQ

    * Wednesday, May 5: I’m hosting a workshop on how to trace what must stay human when implementing AI responsibly. It will double as a preview of my new course on stewarding AI.
    * Thursday, May 6: As part of The Facilitators’ Workshop, Kate and I are hosting a workshop on how to turn stuck meetings into breakthrough moments.
    * Tuesday, May 12: Aarn and I are hosting a workshop on the discipline of holding tension: how to name tension without personalizing it, slow the moment without stalling the meeting, and protect the disagreement that actually matters. Join us!

    Deep Dive: The World They’re Building Toward

    Start with the bunkers. In the last several years, a number of Silicon Valley’s most powerful technologists have been quietly building survival infrastructure. Bunkers in New Zealand. Fortified compounds in remote locations. Escape hatches from the civilization their products are shaping. Bo Young Lee noticed this before most people were talking about it, and she asked the obvious question: if these are the imaginaries — the foundational visions of the future — animating the people building our most consequential technologies, what does that tell us about the products they’re building? And how does their imaginary constrain our imagination?

    An imaginary is not a fantasy. It’s the operative picture of the future that structures present decisions — the unstated assumptions about where the world is going that determine what problems are worth solving, what risks are worth taking, and what populations are worth designing for. Imaginaries are embedded. They show up in product decisions, in hiring, in what gets funded and what gets ignored.

    Bo argues that the dominant Silicon Valley imaginary is, at its core, a story about inevitability and survival. Civilization is fragile. Disruption is coming. The question isn’t whether things collapse but who gets to build what comes next. If that’s the picture of the future you’re working from — even unconsciously — you’re not going to prioritize safety, privacy, or good governance in the present. Those things just get in the way!

    As Bo explains, the products that follow are predictable. Why design for women when women don’t figure prominently in survival scenarios? Why prioritize people with disabilities when they’re among the first casualties of disaster-oriented futures? Why hold yourself accountable to the communities your technology harms when they’re not in the imaginary? This isn’t hyperbole. Bo is describing a logical coherence between worldview and product — a through-line from the bunker to the algorithm that becomes visible once you start looking for it.

    Take the supposed ‘AI gender gap.’ The narrative goes something like this: women are underrepresented in AI adoption because they lack confidence, access, or awareness. All we need to close the gap is a li’l education, outreach, and encouragement! Bo argues instead that women’s skepticism about AI is rational. Not because women don’t understand the technology, but because they understand it clearly enough to recognize that it wasn’t built for them, doesn’t work as well for them, and in specific contexts actively harms them.

    And the skepticism has a basis: women face systematically harsher professional consequences than men for identical workplace errors — a well-documented asymmetry researchers call the “tighter world” phenomenon. Women are more likely to be fired for mistakes and less likely to find subsequent employment. When a high-error-rate tool like generative AI enters that context, the risks land differently. Men’s mistakes get absorbed as the cost of experimentation. Women’s mistakes land on a narrower margin. A woman who understands this and proceeds with caution is doing the math. Calling that a confidence problem is its own kind of imaginary!

    The “AI for good” movement is similarly trapped by the Silicon Valley imaginary, but it doesn’t see through it in the same way. As Bo argues, the AI for good world has largely accepted the imaginaries it inherited. Its animating question is how to reduce harm within the existing AI paradigm — how to make the technology that’s been built safer, fairer, less biased.

    For example, Bo describes a philanthropy that funded three separate organizations — with seven-figure grants each — to build AI agents that would coach and tutor low-income, first-generation college students. The goal was equity. But research shows that when you train LLMs to eliminate overt racism, the covert bias doesn’t disappear — it actually increases. Show the same model two pieces of writing, one in standard English and one in African American Vernacular English (AAVE), and the LLM will rate the AAVE writer as less intelligent and less educated. A coaching agent built on that model, deployed to help first-generation students — many of whom communicate in AAVE — may well steer those students toward easier majors and less rigorous courses, without anyone noticing and without anyone intending it. This example starts from a present-tense imagination of what AI is and what it’s for, and works forward from there.

    To free ourselves from these constraints, we have to separate refusal of this AI from refusal of AI altogether. When we do, we can ask the more generative question that rarely gets asked: what futures do we actually want — and what would it take to build toward them?

    Bo’s organization offers one path forward. AI4All trains the next generation of AI practitioners from underrepresented communities, asking them from the beginning to identify social problems they want to address and to work backward to the role AI might play. Changing the imaginaries requires changing who builds the technology and who gets to define what it’s for. A more diverse AI workforce is an epistemic necessity: different people imagining different futures produce genuinely different technology. We were not given these imaginaries. We don’t have to keep them.

    Tools for Weavers

    My conversation with Bo inspired me to distill a number of the articles I’ve written about imagination, building alternative AI futures, and mapping backwards from the future — and turn them into a tool! Your strategy documents already contain a picture of the future. You probably haven’t named it. It’s embedded in your metrics, your hiring plans, your roadmaps — quietly nudging you toward a particular kind of future without anyone actively choosing it. Imagining Otherwise is a practice for naming that picture — and then building a different one. Backcasting, futures in the plural, and the question most teams skip: what are we willing to stop? Working canvas included. The last page will make sense when you get there.

    “Remember to imagine and craft the worlds you cannot live without, just as you dismantle the ones you cannot live within.” — Ruha Benjamin

    Work With Me

    Here are 3 ways I can help:

    * Advising: I can help you navigate uncertainty, make sense of AI, and steward change in your system.
    * Organizational Training: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change. (For either Stewarding AI or Systems Change for Tech & Society Leaders)
    * 1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    47 min
  2. 28 MAR

    Your data isn't exhaust. It's a belonging.

    Hi there,

    Welcome back to Untangled. It’s written by me, Charley Johnson, and supported by members like you. Help me make it better? This week I’m sharing a conversation I had with Beth Rudden — founder of Bast AI, former chief data officer for a $34 billion division at IBM, and someone building a genuinely different vision of what AI could be.

    🏡 Untangled HQ

    Coming Up

    * Stewarding Complexity: Our next session is about finding and using the agency you actually have — even inside institutions that weren’t designed for it.
    * Untangled Collective: Your expense approval workflow is making decisions. So is your classification system, your algorithm, and your org chart. This session gives you a map of all of it — and shows you where to actually push.
    * Stewarding AI: How to Build Responsible Principles, Workflows, and Practices will take place July 3, 10, 17, and 24. It will open to the waitlist tomorrow. Enrollment is capped — join the waitlist if you want dibs on signing up.

    🧶 Deep Dive: Your data isn’t exhaust. It’s a belonging.

    Even the tech CEOs with the most to lose from the narrative bubble popping are quietly conceding that the scaling law was never actually a law. We’ll eventually let go of the equally silly notion that intelligence — or AGI, or whatever we’re calling it this quarter — is simply an emergent property of scale. Probably around the same time we admit that attaching sensors to people’s extremities was not the path to ‘embodied intelligence.’ Anyway! In the meantime, the story props up the technology. And the technology keeps doing what it does — making up false information, encoding historical biases as neutral truth, and generating a mix of sloppy and genuinely useful outputs. Because we’ve anointed a few tech CEOs as our AI-narrators-in-chief, they get to decide what the data represents and what it means. Knowledge! Intelligence! Truth!

    Beth is building an alternative system that allows meaning to form the old-fashioned way: through interactions between people and systems. Her critique starts with a claim about data that sounds simple but isn’t: decontextualized data doesn’t contain meaning. It carries patterns and associations. This distinction is fundamentally about whose meaning and knowledge grounds the AI system.

    This might sound academic, but it matters a great deal. Take health care as an example — as Beth notes, seventy percent of patients don’t fully understand their outpatient procedures. A caregiver asks, “Why is my husband acting weird after his accident?” The clinical record says “behavioral dysregulation.” The gap between those two descriptions is where comprehension lives — and it’s invisible to any system that treats both as equivalent tokens.

    When patients and caregivers interact with clinical information, they generate something that doesn’t exist anywhere else: a record of how humans actually try to understand medical knowledge, where they get stuck, what vocabulary they use, and what they’re really asking beneath the surface question. Beth calls this interaction data, and it’s where meaning lives. From this you can start to build an ontology — a formal map of what exists within a domain and how concepts relate to each other. Here are the concepts in this field, here is how they connect, here is where each piece of knowledge sits relative to everything else. Without something to understand against, AI systems simply produce statistical appropriation rather than understanding. They pattern-match from frequency with no principled sense of how the patterns relate. The ontology is what offers the system ground truth.

    This isn’t an approach without challenges. Every organization contains multiple competing ontologies. The C-suite has one map of how knowledge is organized. Frontline workers have another. These disagreements aren’t accidental — they reflect different positions in the power structure, different relationships to risk. When you formalize an ontology, you’re making a political choice about whose map becomes the standard. But I’d much rather make an intentional choice about what knowledge matters than no choice at all — and you can navigate this complexity by triangulating across perspectives representing different positionalities.

    Beth has long described data as an artifact of human experience — carrying the fingerprints of its making, the lineage of decisions. But during a recent museum visit in Vancouver, a curator explained how her institution approaches Indigenous collections: these aren’t artifacts in our care. As Beth explains, they’re belongings. Artifacts can be extracted, cataloged, and owned. Belongings require consent and an ongoing relationship with their communities of origin. Data isn’t an artifact of human experience. Data is a belonging.

    The current AI economy is built on the opposite assumption — harvesting people’s data without consent, using poorly compensated annotators, treating the exhaust of human experience as raw material. I couldn’t agree more with the alternative vision Beth is articulating: people whose data contributes to AI systems get compensated. They choose whether to monetize their experiences. The lineage and provenance aren’t overhead. They’re the infrastructure. That’s a long way from where we are. But I left the conversation feeling hopeful knowing someone is building toward it.

    🙏 Share & Earn

    Help me build this community of people thinking differently about technology and earn free rewards (e.g., 1:1 coaching sessions, even free entry into one of my courses). Just share your personal link far and wide.

    💫 Work With Me

    Here are 4 ways I can help:

    * Facilitation: I can help facilitate your team through complex and fraught dynamics, so that they can achieve their purpose.
    * Advising: I can help you navigate uncertainty, make sense of AI, and facilitate change in your system.
    * Organizational Training: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change. (For either Stewarding AI or Systems Change for Tech & Society Leaders)
    * 1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    55 min
  3. 21 MAR

    The Age of Algorithmic Deference.

    Hi there,

    Welcome back to Untangled. It’s written by me, Charley Johnson, and supported by members like you. Help me make it better? This week, I’m sharing a conversation I had with Hilke Schellmann — Emmy Award-winning investigative journalist, NYU professor, and author of The Algorithm — about her recent reporting on AI in hospitals. If you read my newsletter applying the STEWARD framework to AI in health care, you know her work was the spine of that piece. This conversation builds off of that, and goes a li’l deeper. On to the show!

    🏡 Untangled HQ

    This Week

    * WEAVER: I opened enrollment for Cohort 7 of Systems Change for Tech & Society Leaders. You can get 40% off through March 27 with the promo code UNTANGLED40.
    * Community: Kate and I hosted “Navigating Challenging Personalities at Work.” Join The Facilitators’ Workshop if you don’t want to miss the next event.
    * Help me, help you: I launched a short survey to help me improve Untangled. Complete it and get a free email course. (Most participants are completing it in under 2 minutes.)

    Coming Up

    * STEWARD: Next week I’m presenting my STEWARD framework to the Technology Association of Grantmakers Inclusion By Design Leadership Cohort. Be the first to hear when Stewarding AI launches.
    * Untangled Collective: Power is everywhere. In the org chart, yes — but also in the intake form nobody questions, the metric everyone optimizes for, and the meeting that always ends the same way. Learn how to map it and identify what you can actually do about it.

    🧶 Deep Dive: The Age of Algorithmic Deference

    In my conversation with Hilke Schellmann, we opened with the story that anchors her piece: Adam Hart, a nurse at St. Rose Dominican Hospital in Nevada, at the bedside of a patient flagged by a sepsis alert. An algorithm generated an order to administer intravenous fluids. Hart noticed a dialysis catheter and knew fluids would harm her. His charge nurse told him to comply. He refused. A physician overheard, stepped in, and ordered dopamine instead — raising her blood pressure without adding fluid volume. The patient was fine. Nobody in that room had ill intent. In fact, the system worked as it was designed — and that’s the problem.

    What stayed with me from this part of the conversation was Hilke’s reflection that Hart’s refusal took genuine courage. Because it did! The charge nurse treated the algorithm as legitimate and neutral, and the alert became a verdict. Hart had years of experience and judgment underpinning his conviction — but what about nurses earlier in their careers, less confident in their own judgment?

    Then there’s Melissa Beebe and the BioButton at UC Davis — a wearable chest sensor that tracked vital signs continuously and generated alerts Beebe found vague, way too frequent, and hard to act on. Beebe asked to understand why the device was producing the outputs it was. She was a union rep with seventeen years of experience asking a completely reasonable question. But because we live in a culture obsessed with innovation — and not one obsessed with patient outcomes — she was labeled as resistant to technology. Hilke and I talked about what she was actually raising and why it wasn’t heard — and about what happens when it isn’t. Tools arrive with press releases and fanfare, get piloted for a year, and quietly get shelved. Nobody shares what went wrong. And, as a result, the next health system starts from scratch.

    Mount Sinai offered a different picture. They brought AI development in-house, stopped trusting vendor promises, and found that the real work shifted from algorithm selection to trust, adoption, and workflow fit. Their most successful tool — a wound-care prediction model — came from a bedside nurse who identified the problem, helped build the solution, and trained her own colleagues. The catch: this only works if you have deep pockets and in-house expertise. Smaller and rural hospitals don’t. As Hilke argued, a two-tier system is developing, and the most vulnerable patients are on the wrong side of it.

    We went back to Hart’s story to pull on something implicit throughout: the hospital system never trained staff on what these systems actually are and what they aren’t. Which led us into the question of what must remain human. Knowing a patient’s baseline. Reading the room. Catching the slurred speech that doesn’t show up in the labs or on the monitor. These tools don’t have access to that data.

    Workflow was the final thread. In most of the cases Hilke documented, the AI was simply added to an existing practice rather than prompting a redesign. Nobody asked what should happen when the alert is wrong, who has the authority to override it, or what a legitimate override even looks like. Those questions need to be answered before deployment — not discovered afterward.

    We closed with what Hilke would change about how AI is being implemented in work contexts. Her answer: stop treating stakeholder participation as an afterthought. Start treating it as a design requirement.

    🖇️ Some Links

    The myth of the crowd: People are now betting real money on who gets voted off Survivor — a show that was filmed months ago and exists entirely on a hard drive somewhere. The New York Times reports this is creating obvious incentives for “insider” information, which is a very polite way of saying: someone who knows a producer is about to become very wealthy. Whether that counts as market manipulation apparently depends on your definition of “market,” “manipulation,” and possibly “reality.” (More on prediction markets)

    Growth over kids: Meta knew. That’s the thing that should make you put down whatever you’re holding. Internal documents — surfaced during New Mexico’s lawsuit — show that Meta’s own people repeatedly flagged that Instagram’s recommendation and contact systems were steering teenagers toward predatory accounts and enabling serious harm. They documented it. They had meetings about it. And then they ran the numbers on what stronger safety defaults would cost in growth and engagement. They chose growth and engagement over the safety of young people — and they always will.

    Pro-worker AI: A new paper sorts technological change into five categories, only one of which — “new task-creating” — is unambiguously good for workers. The other four range from “fine, probably” to “you’re being replaced by a script.” The authors note that pro-worker AI is chronically underinvested in, which will surprise no one who has noticed that “we built a tool that makes humans more capable and irreplaceable” does not slap the same way AGI hype does. (More on AI & labor.)

    📧 Learn With Me

    My email courses break big, messy topics into small, digestible, actionable steps and practices — each one comes with practical tools and frameworks I’ve created that you can apply immediately. (Or just complete the short survey and get one for free!)

    💫 Work With Me

    Here are 4 ways I can help:

    * Facilitation: I can help facilitate your team through complex and fraught dynamics, so that they can achieve their purpose.
    * Advising: I can help you navigate uncertainty, make sense of AI, and facilitate change in your system.
    * Organizational Training: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change. (For either Stewarding AI or Systems Change for Tech & Society Leaders)
    * 1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    40 min
  4. 7 FEB

    What If We Regulated Chatbots Like Any Other Product?

    Hi there,

    Welcome back to Untangled. It’s written by me, Charley Johnson, and valued by members like you. Today, I’m sharing my conversation with Ben Winters, Director of AI and Privacy at the Consumer Federation of America, about The People First Chatbot Bill—model legislation for regulating chatbots that’s been endorsed by over 70 organizations. As always, please send me feedback on today’s post by replying to this email. I read and respond to every note.

    🔦 Untangled HQ

    The Untangled Collective held its third community event earlier this week. On Tuesday, I’m launching another community with Aarn Wennekers: Stewarding Complexity. This one is for boards, CEOs, and organizational leaders who need to step outside formal governance structures and practice making sense of complexity in real time—together. Join us?

    🧶 Chatbots Don’t “Just Happen.” Companies Make Choices.

    Tech companies have successfully made chatbots seem like mystical, uncontrollable entities while simultaneously claiming they can be trusted without regulation. Yet, as Ben points out, every aspect of a chatbot—from training data to interface design to what responses get blocked—represents a series of choices by companies. When those choices foreseeably lead to harm, companies should be held accountable.

    In our conversation, Ben and I dug into the key provisions of the Bill, including:

    * Product liability: The bill leverages centuries of product liability law to hold companies accountable for design choices, rather than treating chatbots as neutral tools.
    * Data minimization over consent: Instead of relying on checkbox fatigue, the bill prohibits using personal data from outside chatbot interactions.
    * Private right of action: Harmed individuals can sue directly, rather than relying on overwhelmed state attorneys general.

    We also discussed how lessons from failed social media regulation informed this Bill—why content-neutral design matters, how consent-based models cement the status quo, and what it takes to overcome platform lobbying that claims regulation will “kill innovation.”

    But more than any specific recommendation, the Bill serves as a reminder of the kind of world we could live in. It articulates an alternative future we could inhabit. And here’s the good news: we know how to get there, and state legislators are increasingly receptive. As civil society organizations look for policies to push—and as states face pushback from companies saying regulation will stifle innovation, or that chatbots are too complex, or that China will win—I hope they pick up a copy of The People First Chatbot Bill. It’s a lot simpler than the mystique that surrounds these bots: we just need to treat them like the products they actually are.

    👉 Before you go: 3 ways I can help

    * Advising: I help clients develop AI strategies that serve their future vision, craft policies that honor their values amid hard tradeoffs, and translate those ideas into lived organizational practice.
    * Courses & Trainings: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change.
    * 1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    41 min
  5. 24 JAN

    When Your AI Assistant Becomes an Advertiser

    Hi there,

    Welcome back to Untangled. It’s written by me, Charley Johnson, and supported by members like you. This week I’m sharing my conversation with Miranda Bogen (Director, AI Governance Lab, Center for Democracy & Technology) about what happens when your AI assistant becomes an advertiser. As always, please send me feedback on today’s post by replying to this email. I read and respond to every note. Don’t forget to sign up for The Untangled Collective — it’s my free community for tech & society leaders navigating technological change and changing systems, and the next event is coming up!

    🏡 Untangled HQ

    🔦 NEW: I’m teaming up with Aarn Wennekers (complexity expert and author of Super Cool & Hyper Critical) to launch Stewarding Complexity, a private, confidential gathering space for boards, executive teams, and organizational leaders to step outside formal governance structures, speak candidly with peers, and practice making sense of complexity — together. If that’s you, join us!

    🚨 Not New, But Important: Every organization I speak with is facing the same two questions: How do we build strategy for uncertainty—and what should we actually do about AI? My course, Systems Change for Tech & Society Leaders, provides a structured approach to navigating both, helping leaders move beyond linear problem-solving and into systems thinking that engages emergence, power, and the relational foundations of change. Sign up for Cohort 6 today! Because why not: here’s a free diagnostic framework I use in the course to help you assess how your organization understands and uses technology across its strategy, programs, and operations.

    🖇️ Some Links

    How Certain Is It? I’ve written a lot about why embracing uncertainty matters. Chatbots do the opposite—they collapse uncertainty into confident-sounding responses, packaging blind confidence as a feature. But what if we designed these tools differently? What would it take to preserve uncertainty rather than erase it? A new paper tackles this challenge, arguing we need to protect the messier, harder-to-quantify forms of uncertainty that professionals navigate through conversation and intuition. Their proposed fix? Create systems where professionals collectively shape how different forms of uncertainty get expressed and worked through.

    Blackbox Gets Subpoenaed: Job applicants are suing Eightfold AI, claiming its hiring screening software should follow Fair Credit Reporting Act requirements—giving candidates the right to see what data is collected and dispute inaccuracies. Eightfold scores job applicants 1–5 using a database of over a billion professional profiles. Sound familiar? It’s essentially what credit agencies do: create dossiers, assign numeric scores, and determine eligibility. The lawsuit argues: if it works like a credit agency, it should be regulated like one. As David Seligman of Towards Justice put it: “There is no A.I. exemption to our laws. Far too often, the business model of these companies is to roll out these new technologies, to wrap them in fancy new language, and ultimately to just violate peoples’ rights.”

    Threatening Probabilities: Every time a chatbot threatens or blackmails someone, my inbox fills with “proof” of sentience. But a new paper shows these behaviors aren’t anomalies—they’re just extreme versions of normal human interaction: price negotiation, power dynamics, ultimatums. Our surprise comes from assuming chatbots should only reproduce socially sanctioned behavior, not the full spectrum of how humans actually act. Threats and blackmail don’t signal consciousness. They signal the model is drawing from the complete statistical distribution of human behavior—including the parts we don’t like to acknowledge. It’s probabilities all the way down, even when they’re uncomfortable ones.

    🧶 When Your AI Assistant Becomes an Advertiser

    OpenAI just announced it will start testing ads in ChatGPT’s free tier. The press release was carefully worded—reassuring users that “ads will not change ChatGPT answers” and that “your chats are not shared with advertisers.” But as Miranda pointed out in our conversation, these statements are misleading and miss the entire point. What’s coming is a fundamental shift in who these systems serve—and what that means for people, privacy, and inequality. To understand why this matters, we need to look at three things: how AI changes advertising signals, what “privacy” really means in this context, and why this could be harder to detect than anything we’ve seen before.

    The Signal Problem

    The question is: what happens when your AI assistant becomes an advertiser? Answering that question, according to Miranda, starts with recognizing that advertising is all about high-fidelity signals of intent—data that accurately predicts what you want to buy or do. When an ad interrupts your experience on Facebook, it’s hoping that you’ll care; that perhaps something you clicked a while back will still be relevant. That’s not a great signal. Search offers a better signal. You’re typically using Google because you want something. But ChatGPT is different. You’re not just searching for information. You’re often thinking out loud, revealing what matters to you, what you’re struggling with, what you’re planning or hoping for. Each conversational turn reveals deeper context about your intent—creating rich data for advertisers.

    Now, OpenAI wants those signals but, if you read the press materials, it’s clearly concerned about losing users. For example, it bends over backwards to say that your chats won’t be “shared with advertisers.” According to Miranda, this is technically accurate but completely misleading. The platform doesn’t need to send advertisers a list of your conversations. That’s the whole point of advertising infrastructure—OpenAI will target ads on behalf of advertisers, shielding your specific data while making the connection happen anyway.

    The press release also promises you can “turn off personalization” and “clear the data used for ads.” But there are multiple layers of personalization happening simultaneously (e.g., raw chat logs, explicit memory stored about you), and it’s unclear what exactly OpenAI is referring to. Plus, even if you did turn off all personalization and erase all memory in the system, the amount of information a chatbot has about you in a specific context window offers plenty of signal for advertisers.

    The Relationship Problem

    On Facebook or Google, it’s clear you’re dealing with an advertiser. Your intent is your own. The experience is transactional. But as Miranda argues, when your AI assistant or AI co-worker starts subtly suggesting new products or services, something fundamentally different is happening. It’s closer to influencer marketing, where paid recommendations come wrapped in the veneer of authentic social connection. But an influencer’s audience typically knows they’re being paid to sponsor a product. With an AI assistant, the lines start to blur. It has been helping you draft emails, think through career decisions, process relationship struggles. You’ve built relational trust with it over months, so when it suggests a therapist, lawyer, or contractor, you might perceive it as trusted advice—without knowing, of course, which providers paid to be in the pool the AI draws from. The persuasion is invisible, wrapped in the same helpful tone the AI uses for everything else.

    The Visibility Problem

    Personalized ads and privacy harms are a big, albeit old, problem. These tools will of course propagate discrimination, exploit people at vulnerable moments, reinforce stereotypes and biases, and shape what opportunities people see (and don’t!). But this evolution of the advertising model brings something new: these harms will be even harder to identify. Why? Because these systems are being built to connect with each other. AI agents will call other tools, connect with your bank and service providers, and exchange information across an ecosystem of interconnected systems. There will be money and incentives flowing through this network in ways that are nearly impossible to track. As Miranda put it: “Even just tracking where any of this is happening, where exchanges of money and incentives are happening behind the scenes and where that might be shaping people’s experiences will just be even more challenging to keep up with over time.”

    If your inner monologue so far is “this all sounds very bad,” well, I get it. But we didn’t end the conversation without imagining alternative business models and policy solutions. Listen to the end for those, and hear what Miranda would do to shift power back to users if she were advising our next (fingers crossed!) President four years from today.

    👉 Before you go: 3 ways I can help

    * Advising: I help clients develop AI strategies that serve their future vision, craft policies that honor their values amid hard tradeoffs, and translate those ideas into lived organizational practice.
    * Courses & Trainings: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change.
    * 1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    36 min
  6. 17 JAN

    What Happens When Your Coworkers Are AI Agents

    Hi there,

    Welcome back to Untangled. It’s written by me, Charley Johnson, and supported by members like you. This week, I’m sharing my conversation with Evan Ratliff, journalist and host of the thought-provoking podcast Shell Game. As always, please send me feedback on today’s post by replying to this email. I read and respond to every note. On to the show!

    🔦 Untangled HQ

    I launched The Facilitators’ Workshop, a community of practice for leaders who want to perfect the craft of facilitating groups through conflict and ambiguity—so they can actually achieve their purpose. Our event on January 23, “From Conflict to Clarity & Connection,” will give you a structured process for diagramming conflict—a way to slow down, make invisible dynamics visible, and understand what’s actually happening before deciding what to do next. I’m spinning up a lot of new things that I’m excited to tell you about. The best way to stay up to date on upcoming events and workshops is by joining The Untangled Collective.

    In season 1 of Shell Game, Evan cloned his voice, hitched it to an AI agent, and then put it in conversation with scammers and spammers, a therapist, work colleagues, and even his friends and family. You can listen to that conversation here. In season 2, Evan explores what it’s like to run a company with AI agents as employees — a real company building a real product, with users and interest from venture capitalists.

    This is the future that Silicon Valley is actively trying to bring into existence. Sam Altman recently shared that some of his fellow tech CEOs are literally betting on when the first one-person, billion-dollar company will appear. Now, all the hype would make you believe that we should welcome this future with open arms. Productivity will skyrocket. Time will feel abundant. Work will become frictionless and maximally efficient. That’s the story, anyway. You won’t be surprised to find that the gap between the hype and reality is, uh, massive. Evan and I talk about that gap, but Shell Game helps us see around the corner to what it might actually feel like to work with AI agents. It’s a story about:

    * What’s lost when an organizational culture becomes sycophantic.
    * What it’s like when your colleague regularly makes stuff up, commits it to memory, and then repeats it later as if it’s real.
    * Why words like ‘agent’ and ‘agentic’ belie the reality that these large language models don’t really do anything on their own.
    * The costs and complexities of anthropomorphizing agents, and how we’re voluntarily tricking ourselves.
    * What humans are uniquely good at, and what it means for automation and the evolution of work.
    * What Silicon Valley misunderstands about the world it’s creating, and what’s at stake in confusing fluency and judgment.

    Shell Game is smart, thought-provoking, and really funny. I can’t recommend it enough. I hope you enjoy my human-to-human conversation with Evan Ratliff.

    🧶 Want to go deeper?

    If you finished our conversation thinking, “Okay… I need to think about this more,” let me help:

    * Flattery as a Feature: Rethinking ‘AI Sycophancy’
    * There’s no such thing as ‘fully autonomous’ agents
    * It’s okay to not know the answer
    * AI isn’t ‘hallucinating.’ We are.

    That’s it for now,
    Charley

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    41 min
  7. 9 NOV

    The Universe Called. It Says Your Theory of Change Is Cute.

    If you’ve sensed a shift in Untangled of late, you’re not wrong. I’m writing a lot more about ‘complex systems.’ To name a few topics:

    * What even is a ‘complex system,’ and how do you know if you’re in one?
    * How to act interdependently and do the next right thing in a complex system.
    * Why if/then theories of change that assume causality are bonkers — and how to map backward from the future.
    * How do you act amid uncertainty — if you truly don’t know how your system will respond to your intervention, what do you do?
    * How should we think about goals in an uncertain world?
    * Here’s a fun diagnostic tool I developed to help you assess how your organization thinks, acts, and learns under complexity.

    I am obsessed with complex systems because the world is uncertain and unpredictable — and yet all of our strategies pretend otherwise. We crave certainty, so we build plans that presume causality, control, and predictability. We know in our gut that the systems we’re trying to change won’t sit still for our long-term plans, yet our instinct to cling to control amid uncertainty is too strong to resist. And honestly, in 2025, this shouldn’t be a hard sell. Politics, climate change, and AI are laughing at your five-year strategy decks.

    Complexity thinking helps us see this clearly — that systems are dynamic, nonlinear, and adaptive — but it, too, has blind spots. First, it lacks a theory of technology. The closest we get is Brian Arthur’s brilliant book, The Nature of Technology: What It Is and How It Evolves, which explains how technologies co-evolve with economic systems. (Give it a read, or check out the write-up in Technically Social.) But Arthur was focused on markets, not on social systems — not on how technology is entangled with people and power. That’s where my course comes in. I’m trying to offer frameworks and practices for creating change across difference, amid uncertainty, in tech-mediated environments — approaches that honor both complexity and the mutual shaping of people, power, and technology. (And yes, Cohort 5 of Systems Change for Tech & Society Leaders starts November 19.)

    Second, complexity is hard to talk about simply and make practical. (That’s why my Playbook turned into a 200-page monstrosity!) Every time I use the words “complex” or “system,” I can feel the distance between me and whoever I’m talking to widen. I’ve been searching for thinkers who bridge that gap — who write about systems with both clarity and depth — and recently came across the brilliant work of Aarn Wennekers, who writes the great newsletter Super Cool & Hyper Critical. (Subscribe if you haven’t yet!)

    After reading his essay, Systems Thinking Isn’t Enough Anymore, I reached out and invited him onto the podcast. I’m thrilled to share that conversation — one that digs into the mindsets and muscles leaders need to navigate uncertainty and constant change, the need to collapse old distinctions between strategy and operations, and what it really means to act when the ground beneath us keeps shifting.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    48 min
  8. 2 NOV

    "Autonomy or Empire"- Rethinking What AI Is For

    This week, I spoke with Harry Law, Editorial Lead at the Cosmos Institute and a researcher at the University of Cambridge, about AI and autonomy. Harry wrote a terrific essay on how generative AI might serve human autonomy rather than the empires Big Tech is intent on building. In our conversation, we explore:

    * What the Cosmos Institute is — and how it’s challenging the binary, deterministic thinking that dominates tech.
    * The difference between “democratic” and “authoritarian” technologies — and why it depends less on the tools themselves than on the political, cultural, and economic systems they’re embedded in.
    * The gap between agency (Silicon Valley’s favorite word) and autonomy, and why that difference matters.
    * How generative AI can collapse curiosity — closing the reflective space between question and answer — and what it might mean to design it instead for wonder, inquiry, and self-understanding.
    * Why removing friction and optimizing for efficiency often strips away learning, growth, and self-actualization.
    * The need for more “philosophy builders” — technologists designing systems that expand our capacity to think, choose, and act for ourselves.
    * Harry’s provocative idea of personalized AIs grounded in our own values and second-order preferences — a radically different vision from today’s “personalization” built for engagement.

    The conversation around generative AI has gone stale. Everyone is interpreting it through their own frames of meaning — their own logics, values, incentives, and worldviews — yet we still talk about “AI” as if it’s a single, coherent, inevitable thing. It’s not. My conversation with Harry is an attempt to move beyond the binary — to imagine alternative pathways for technology that place human autonomy, curiosity, and moral imagination at the center.

    If you’re done just imagining alternative futures and want to do the hard, strategic work of changing the system you’re in — and setting it (and you!) on a fundamentally new path — sign up for Cohort 5 of my course, Systems Change for Tech & Society Leaders. It kicks off in three weeks and there are still a few spots available. https://www.charley-johnson.com/sociotechnicalsystemschange

    Before you go: 3 ways I can help

    * Systems Change for Tech & Society Leaders: Everything you need to cut through the tech-hype and implement strategies that catalyze true systems change.
    * Need 1:1 help aligning technology with your vision of the future? Apply for advising & executive coaching here.
    * Organizational Support: Your organizational playbook for navigating uncertainty and making sense of AI — what’s real, what’s noise, and how it should (or shouldn’t) shape your system.

    P.S. If you have a question about this post (or anything related to tech & systems change), reply to this email and let me know!

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com

    1hr 4min

