What does it actually mean to govern your content in the age of AI, and who’s really in control? In this episode, Sarah O’Keefe sits down with Patrick Bosek, CEO of Heretto, to unpack why the quality, accuracy, and structure of your content may be the most critical factors in what your users experience on the other side of an AI model.

Patrick Bosek: In today’s world, you don’t have 100% control. There are a couple of different places where this needs to be broken up. One is the end user: what they physically get and what control they have versus what control you have. Then, there’s what control you have over how the AI model is going to behave based on your information and your inputs. Whether that model is public, like a user accessing your documentation through Claude Desktop, or private, like a user accessing your documentation through your app or website, the governance piece comes down to what control you have immediately before the model. And that breaks down into a couple of things: completeness, accuracy, and structure of the content.

Related links:

AI and content: Avoiding disaster
AI and accountability
Structured content: a backbone for AI success
Heretto

Questions for Sarah and Patrick? Register for the Ask Me Anything session on April 8th at 11 am Eastern.

LinkedIn:

Sarah O’Keefe
Patrick Bosek

Transcript:

This is a machine-generated transcript with edits.

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. I’m here today with Patrick Bosek, who is the CEO of Heretto. Hey, Patrick.

Patrick Bosek: Hey, Sarah. Long time no chat.

SO: That is, I guess, for certain values of “long time.” We decided today that we wanted to talk about AI and governance, except I promptly tried to come up with a synonym for governance because I’m afraid that when I say that particular word, our audience just walks off. So, okay, Patrick, what is governance?

PB: Well, first of all, thanks for having me on, and second of all, I’m excited about this one because based on our little bit of chat before the show, it sounds like we’re actually gonna have some things to argue about this time around.

SO: I would never.

PB: Well, usually we tend to agree, right? I think that we’re generally pretty on the same page about stuff. So I’m excited. I’m pumped. Okay, so governance. Obviously it has a ton of different meanings to different people, but the way I want to talk about it today, since it was my suggestion, is the governance of content, specifically as the input to AI systems. So you can think about it as the process of controlling for quality, accuracy, the things that matter in the actual content and information before it gets into the AI system.
So it’s kind of the upstream quality, totality, structure, all of that checking and assurance ahead of whatever your experience is going to be downstream, of which the most contemporary and most interesting is AI.

SO: Okay, so this is making sure that it is not garbage in so as to avoid garbage out.

PB: Yeah, I would say that’s a fair statement.

SO: Yeah. Okay. And can we use AI to do governance of the content we’re producing?

PB: Well, that’s actually a very interesting question. And I think the short answer is: somewhat, right now. So before I fully answer that, I want to put a little disclaimer in here. The stuff with AI is changing so quickly that we should date-stamp this episode.

SO: It is March 19th, 2026. And it’s nine-ish Eastern time.

PB: Yeah, we are recording this on March 19th, 2026. Okay, so now that people know when it is that we’re talking about this, I feel a little bit safer in answering. So there are aspects of governance you can do with AI today, for sure. And there are new capabilities coming online all the time. I actually think, broadly speaking, the thing that’s going to be most challenging about governance is making sure that the pieces that can’t be done with AI continue to get done, because as the human part of the loop becomes smaller and smaller, it becomes easier and easier for the human to just click accept: the AI gets it right, the automation works, that kind of thing. I’ll use an AI coding analogy, because that’s what I spend a lot of my AI time on. I use Claude CLI; that’s my primary method of vibe coding, or whatever you want to call it. And I even find myself just clicking accept sometimes. But I’m still forcing myself to read the code. I had it write a shell script yesterday, and I was almost about to run it, and I thought, this is a shell script. I should not do that. I should definitely read what’s going on inside of this shell script. But it gets to a point where you start to trust it.

SO: Yeah.

PB: And as we start to inject AI into the governance layer, so we build skills that check certain parts of our information architecture, or they act as linters if we’re in docs as code, or whatever it might be, there’s going to be a form of trust that gets built up. And because we tend to think of these agents as human, which they’re not, we tend to ascribe a human form of trust to them, like when you have a coworker who does the right thing all the time, you tend to just let them work. And I think that’s the challenge on the human side of governance. So that’s a really long way of saying: you can build tools and skills and patterns and things like that in AI that will help with governance. But fundamentally, it’s my belief that for the type of documentation or content that you and I work on, and I think most of our audience works on, which has to be right, has to be accurate, has to conform to standards, et cetera, it’s product documentation, it’s critical information, I still think that every single word needs to be read and considered by a human being. So that’s a really long answer to that question.
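To picture the kind of “linter” skill Patrick mentions for docs as code, here is a minimal sketch in Python. The directory layout, the required metadata keys, and the single-H1 rule are all hypothetical choices for illustration, not any particular tool’s implementation; a real pipeline would encode your own information architecture rules.

```python
"""Minimal docs-as-code lint check (illustrative sketch).

Hypothetical assumptions: docs live in ./docs as Markdown files,
each file starts with a YAML frontmatter block containing 'title'
and 'audience' keys, and each file has exactly one H1 heading.
"""
from pathlib import Path
import re
import sys

REQUIRED_KEYS = {"title", "audience"}  # hypothetical metadata rules


def lint_file(path: Path) -> list[str]:
    errors = []
    text = path.read_text(encoding="utf-8")

    # Check for a frontmatter block delimited by '---' lines.
    match = re.match(r"\A---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        errors.append("missing frontmatter block")
    else:
        # Naive key scan; a real linter would use a YAML parser.
        keys = {line.split(":", 1)[0].strip()
                for line in match.group(1).splitlines() if ":" in line}
        for key in sorted(REQUIRED_KEYS - keys):
            errors.append(f"missing required metadata key: {key}")

    # Require exactly one top-level heading.
    h1_count = len(re.findall(r"^# ", text, re.MULTILINE))
    if h1_count != 1:
        errors.append(f"expected exactly one H1, found {h1_count}")

    return errors


def main() -> int:
    failed = False
    for path in sorted(Path("docs").rglob("*.md")):
        for err in lint_file(path):
            failed = True
            print(f"{path}: {err}")
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```

The point is less these particular rules than that the check is deterministic and runs before anything reaches a model, so the reviewer’s trust attaches to the rules rather than to any single run’s output.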
SO: Right, and then fundamentally, if the AI is right half the time, then I’m going to read everything pretty carefully, knowing that 50% is wrong and I need to fix it. The problem, I think, is when it gets to be 90% correct, you just sort of glaze over because you’re looking for that last 10%, right? So it’s the difference between doing a developmental edit, where you’re going deep into the words, rearranging everything and fundamentally changing everything, versus doing a final proofread. It is far more difficult to read 100 pages and find one typo than it is to read 100 pages that are just trash, where you say: start over, rearrange this, reformat everything, we’re not even worried about the typos yet because this is just fundamentally wrong. And so, to your point, as it gets closer and closer, you start to believe in the output that it’s generating, which then means almost certainly that one typo, which in your example could be a shell script gone rogue, could be really, really problematic.

PB: Yeah. And that’s going to be the challenge of our times in a lot of ways. I think there’s still going to be some aspect of origination that’s going to be necessary for quite some time, even with automated drafting and pipelines like that coming online, because in certain places those work really, really well, but in other places they don’t work very well yet. It’s going to be the process of becoming orchestrators, in a way, where we’re not rubber stamps, where we’re really, truly adding value and actually defending against the challenges that are going to come up with the automation that we build.

SO: Fundamentally, I saw a reference to this this morning, and somebody said you can write essentially an extractor that’s going to generate your release notes, right? So there are code updates, and you just automate the generation of release notes. Now, I personally am not so sure that you actually need AI for this. Given properly commented code, you could just generate the release notes, right? But setting aside that particular small argument: you can automate the generation of release notes because release notes are essentially the delta between version 1.0 and version 1.0.1, and here are the changes. It’s a change log. What that means, though, is that the changes were captured in the code. They’re in the code; the logic or the information is already there. What we’re doing is extracting it and reformatting it into something that a human can look at on a single page and say, okay, I understand what the changes are, how these apply to me as the user of the software, and whether or not I should upgrade. That’s different from: we’re going to introduce a new feature into this code, and I need to write about why this feature is interesting and relevant to you. The question to me is, where is new information being introduced into the system? Where is that information encoded? And then once it’s encoded, we can extract it, process it, and do things with it.
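To make Sarah’s point concrete: if the delta between two versions is already encoded in the repository, a plain script can draft release notes with no AI involved. The tag names and the “type: description” commit-message convention below are assumptions for the sake of illustration.

```python
"""Draft release notes from information already in the repo
(illustrative sketch; assumes git tags v1.0 and v1.0.1 exist and
commit subjects follow a 'type: description' convention)."""
import subprocess
from collections import defaultdict


def commit_subjects(old: str, new: str) -> list[str]:
    # List commit subject lines between the two version tags.
    out = subprocess.run(
        ["git", "log", "--format=%s", f"{old}..{new}"],
        capture_output=True, text=True, check=True,
    )
    return [s for s in out.stdout.splitlines() if s.strip()]


def draft_release_notes(old: str, new: str) -> str:
    sections = defaultdict(list)
    for subject in commit_subjects(old, new):
        # 'fix: handle empty input' -> section 'fix'
        kind, _, desc = subject.partition(":")
        if desc:
            sections[kind.strip()].append(desc.strip())
        else:
            sections["other"].append(subject.strip())

    lines = [f"Release notes: {old} to {new}", ""]
    for kind in sorted(sections):
        lines.append(f"{kind.capitalize()}:")
        lines.extend(f"  - {desc}" for desc in sections[kind])
        lines.append("")
    return "\n".join(lines)


if __name__ == "__main__":
    print(draft_release_notes("v1.0", "v1.0.1"))
```

Whether a human or a model then polishes the wording, the new information itself was introduced upstream, in the commits, which is exactly the distinction Sarah is drawing.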