Content Operations

Scriptorium - The Content Strategy Experts

The Content Operations podcast from Scriptorium delivers industry-leading insights for scalable, global, AI-optimized content.

  1. MAR 30

    Who controls your content? AI and content governance

What does it actually mean to govern your content in the age of AI, and who’s really in control? In this episode, Sarah O’Keefe sits down with Patrick Bosek, CEO of Heretto, to unpack why the quality, accuracy, and structure of your content may be the most critical factors in what your users experience on the other side of an AI model. Patrick Bosek: In today’s world, you don’t have 100% control. There are a couple of different places where this needs to be broken up. One is the end user: what they physically get and what control they have versus what control you have. Then, there’s what control you have of how the AI model is going to behave based on your information and your inputs. Whether that model is public, like a user accessing your documentation through Claude Desktop, or private, like a user accessing your documentation through your app or website, the governance piece comes down to what control you have immediately before the model. And that breaks down into a couple of things: completeness, accuracy, and structure of the content. Related links: AI and content: Avoiding disaster AI and accountability Structured content: a backbone for AI success Heretto Questions for Sarah and Patrick? Register for the Ask Me Anything session on April 8th at 11 am Eastern. LinkedIn: Sarah O’Keefe Patrick Bosek Transcript: This is a machine-generated transcript with edits. Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. I’m here today with Patrick Bosek, who is the CEO of Heretto. Hey Patrick. Patrick Bosek: Hey, Sarah. Long time no chat. SO: That is, I guess, for certain values of long time. We decided today that we wanted to talk about AI and governance, except I promptly tried to come up with a synonym for governance because I’m afraid that when I say that particular word, our audience just walks off. So, okay, Patrick, what is governance? PB: Well, so first of all, thanks for having me on, and second of all, I’m excited about this one because based on our little bit of chat before the show, it sounds like we’re actually gonna have some things to argue about this time around. SO: I would never. PB: Well, usually we tend to agree, right? I think that we’re generally pretty on the same page about stuff. So I’m excited. I’m pumped. Okay, so governance. I mean, obviously it has a ton of different meanings to different people, but the way that I want to talk about it today, because it was my suggestion, is related to the governance of content, specifically in the way of the inputs to AI systems. So you can think about the process of controlling for quality, accuracy, the things that matter in the actual content and information before it gets into the AI system. So it’s kind of the upstream quality, totality, structure, all of that checking and assurance ahead of whatever your experience is going to be downstream, of which the most contemporary and most interesting is AI. SO: Okay, so this is making sure that it is not garbage in so as to avoid garbage out. PB: Yeah, I would say that’s a fair statement. SO: Yeah. Okay. And can we use AI to do governance of the content we’re producing? 
PB: Well, that’s actually a very interesting question. And I think the short answer, right now, is: somewhat. But before I fully answer that, I want to put a little disclaimer in here. The stuff with AI is changing so quickly that we should date-stamp this episode. SO: It is March 19th, 2026. And it’s nine-ish Eastern time. PB: Yeah, we are recording this on March 19th, 2026. Okay, so now that people know when it is that we’re talking about this, I feel a little bit safer in answering. So there are aspects of governance you can do with AI today, for sure. And there are new capabilities coming online all the time. I actually think, broadly speaking, that the most challenging thing about governance is going to be the pieces that can’t be done with AI, and continuing to actually do them. As the human part of the loop becomes smaller and smaller, it becomes easier and easier for the human to just click accept, because the AI gets it right, the automation works, that kind of thing. And I’ll use an AI coding analogy, because that’s what I spend a lot of time with AI on. So I use Claude CLI. That’s my primary method of vibe coding, or whatever you want to say. And I even find myself just clicking accept sometimes. But I’m still forcing myself to go in and read the code. I had it write a shell script yesterday, and I was almost about to run it, and I was like, this is a shell script. I should not do that. I should definitely read what’s going on inside of this shell script. But it gets to a point where you start to trust it. SO: Yeah. PB: And as we start to inject AI into the governance layer, we build skills that check certain parts of our information architecture, or they kind of act as linters if we’re in docs as code, or whatever it might be. There’s going to be a form of trust that gets built up. 
And because we tend to think of these agents as human, which they’re not, we tend to ascribe a human form of trust to them, you know, like when you have a coworker that does the right thing all the time, you tend to just let them work. And I think that’s kind of the challenge in the human side of governance. So that’s a really long way of saying: you can build tools and skills and patterns and things like that in AI that will help with governance. But fundamentally, it’s my belief that for the type of documentation or content that you and I work on, and I think most of our audience works on, which has to be right, has to be accurate, has to conform to standards, et cetera, right? It’s product documentation. It’s critical information. I still think that every single word needs to be read and considered by a human being. So that’s a really long answer to that question. SO: Right, and fundamentally, if the AI is right half the time, then I’m going to read everything pretty carefully, knowing that 50% is wrong and I need to fix it. The problem, I think, is when it gets to be 90% correct, you just sort of glaze over because you’re looking for that last 10%, right? So it’s the difference between doing a developmental edit, where you’re going deep into the words, rearranging everything, and fundamentally changing everything, versus doing a final proofread. It is far more difficult to read 100 pages and find one typo than it is to read 100 pages that are just trash, where you say, start over, rearrange this, reformat everything; we’re not even worried about the typos yet because this is just fundamentally wrong. And so, to your point, as it gets closer and closer, you start to believe in the output that it’s generating, which then means almost certainly that one typo, which in your example could be a shell script gone rogue, could be really, really problematic. PB: Yeah. 
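Patrick’s point about checks that “act as linters” in a docs-as-code workflow can be illustrated with plain, deterministic code. The sketch below is a hypothetical governance linter, not anything Heretto ships; the rules and thresholds are invented for illustration:

```python
import re

# Hypothetical governance rules for a docs-as-code pipeline; the rule set
# and the 30-word threshold are illustrative assumptions, not from the episode.
MAX_SENTENCE_WORDS = 30

def lint_topic(text: str) -> list[str]:
    """Run deterministic governance checks on one documentation topic."""
    problems = []
    lines = text.splitlines()
    # Completeness: every topic needs a title line.
    if not lines or not lines[0].startswith("# "):
        problems.append("missing title heading")
    # Structure: flag overlong sentences that are hard to review and retrieve.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if len(sentence.split()) > MAX_SENTENCE_WORDS:
            problems.append(f"overlong sentence: {sentence[:40]}...")
    # Accuracy guardrail: leftover TODO markers mean the topic isn't done.
    if "TODO" in text:
        problems.append("unresolved TODO marker")
    return problems

print(lint_topic("# Install\nRun the installer. TODO verify on Windows."))
```

Checks like these stay deterministic, so a human reviewer only needs to judge the findings rather than re-verify the checker itself, which is exactly the trust problem the conversation describes.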
And that’s going to be the challenge of our times in a lot of ways. I think there’s still going to be some aspect of origination that’s going to be necessary for quite some time, even with automated drafting and pipelines like that coming online, because in certain places, those work really, really well, but in other places, they don’t really work very well yet. It’s going to be the process of becoming orchestrators, in a way where we’re not rubber stamps, where we’re really, truly adding value and actually defending against the challenges that are going to come up with the automation that we build. SO: Fundamentally, I saw a reference to this just this morning, where somebody said you can write essentially an extractor that’s going to generate your release notes, right? So there are code updates, and you just automate the generation of release notes. Now, I personally am not so sure that you actually need AI for this. Given properly commented code, you could just generate the release notes, right? But setting aside that particular small argument: you can automate the generation of release notes because release notes are essentially the delta between version one and version 1.01, you know, here are the changes. It’s a change log. What that means, though, is that the changes were captured in the code. They’re in the code; the logic or the information is already there. What we’re doing is extracting it and reformatting it into something that a human can look at on a single page and say, okay, I understand what the changes are, how these apply to me as the user of the software, and whether or not I should upgrade. That’s different than saying we’re going to introduce a new feature into this code and I need to write about why this feature is interesting and relevant to you. The question to me is, where is new information being introduced into the system? Where is that information encoded? 
And then once it’s encoded, we can extract it and process and do things…
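Sarah’s release-notes argument, that a change log is a deterministic extraction once the changes are captured somewhere, can be sketched without any AI at all. The sketch below assumes commit messages follow a hypothetical feat:/fix: prefix convention (an assumption for illustration, not something the episode specifies):

```python
# Minimal sketch: release notes as a deterministic extraction from commit
# messages. The "feat:"/"fix:" prefixes are an assumed convention.
SECTIONS = {"feat": "New features", "fix": "Bug fixes"}

def release_notes(version: str, commits: list[str]) -> str:
    """Group conventionally prefixed commit messages into release notes."""
    grouped: dict[str, list[str]] = {name: [] for name in SECTIONS.values()}
    for message in commits:
        prefix, _, summary = message.partition(":")
        section = SECTIONS.get(prefix.strip())
        if section:
            grouped[section].append(summary.strip())
    out = [f"Release {version}"]
    for section, items in grouped.items():
        if items:
            out.append(section + ":")
            out.extend(f"  - {item}" for item in items)
    return "\n".join(out)

print(release_notes("1.01", ["feat: export to PDF", "fix: crash on empty file"]))
```

The point of the sketch is Sarah’s: no new information is created here, the notes are a reformatting of a delta that already exists in the repository.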

    40 min
  2. MAR 23

    Good content = good AI: The fundamentals that never change

    Good content fundamentals have been the foundation of effective product content for decades, and those same principles are exactly what make content AI-ready today. In this episode, Bill Swallow and Alan Pringle explain how attending to your hierarchy of content needs is the key to AI success. Alan Pringle: Right now, AI is not going to fix bad content problems. It is going to regurgitate that bad information, giving your end users information that’s flat out wrong. If your content at the basic source level is wrong, your AI by extension is going to be wrong. And that is the unglossy, unvarnished, hard truth that is still, I don’t think, seeping in like it should across the corporate world. Bill Swallow: It really does come back to the fact that, despite the world changing on a day-to-day basis, the fundamentals have not changed. Related links: A hierarchy of content needs Technical Writing 101, 3rd edition Structured content: a backbone for AI success LinkedIn: Alan Pringle Bill Swallow Transcript: This is a machine-generated transcript with edits. Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Bill Swallow: Hi, I’m Bill Swallow. Alan Pringle: And I’m Alan Pringle. BS: And in this episode, surprise surprise, we’re going to talk about content. AP: Really? Who would have thought? 
BS: But more specifically, what good content means today. Today, everything is all about AI. There is lots of change in progress with regard to AI tooling and content delivery with AI. But have the needs for content really changed? I would say, right off the bat, that if you’re doing content right, you really don’t have to reinvent the wheel to make it AI-acceptable. AP: No. In this crazy AI-hyped world we’re in, there are some very basic foundational things that tend to get overlooked because they’re not sexy, and they’re not special and hot and whatever else, all that kind of marketing garbage that just sets me completely on edge and makes me want to say profane things in podcasts. The bottom line is, there are things that the content world, and especially our little subdomain of it, the product content world, has been doing for decades now. And I mean decades. BS: Or should have been doing. AP: Correct. There are basic tenets that have been in place for decades, and if you’re following them, you are starting down the road of success with AI. I think that to prove our point, we’re going to step back, look at some of the things that Scriptorium has talked about and written in the past, and see how they stack up. And Bill, you found one. Let’s talk about that blog post that Sarah O’Keefe wrote. What was the date on that again? BS: It was 2014, and that is when we came up with the hierarchy of content needs. And it really wasn’t so much an invention as it was just a regurgitation of what it means to create good content. So we have a pyramid of content needs. At the bottom, we have available. Is content available? Does it exist? Can someone get to it? I think that we’ve mostly solved that problem, given the wealth of information we have out on the internet. But as we know, that information is not always useful. So we go up a rung, or a layer, on that pyramid and see whether or not the content is accurate. 
And if it’s accurate, if it provides the correct information, that’s fantastic. Then we go up another level and see whether or not the content is actually appropriate. So it can be correct. It can exist. But is it appropriate? Does it meet a reader’s needs? And is it formatted in a way that works for the reader to ingest? Then we go up a step further and see whether or not the content is connected. And this is where we kind of get to the more modern aspect of content. Does it link out to correct additional resources? Is it available to people through a variety of means? And does it engage with the audience? And then finally, at the top of the pyramid, we have intelligent content. Is the content intelligent? And we’re not talking about AI here at all, but we are really talking about whether the content is fashioned in a way that it can be used intelligently across different media. AP: That it can be manipulated for different purposes. And that is quoting Sarah directly. And I think that is key here, because that is what AI does. It takes information and basically chops, slices, dices it, and provides it in a new way via a chatbot, for example. So that is that whole manipulation that Sarah is talking about. And we will post a link to the post in the show notes so you can read this in greater detail to see how well this hierarchy of content needs has stood up. And she even talks about, for example, integrating database content and how you can pull in other information, like product specifications. If you think about it from an AI lens, I think that parallels pretty closely to the idea of retrieval augmented generation, where you are pulling content from other sources and kind of weaving it in with what an AI engine is providing you. So RAG, I think, could be interpreted as another way of integrating other information into the way that AI is processing that content. BS: Right, I mean, because AI, it’s not really an audience, but it is a delivery point. 
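Alan’s description of retrieval-augmented generation, pulling content from other sources and weaving it into what the AI provides, can be reduced to a toy sketch. The corpus, the word-overlap scoring, and the prompt template below are all illustrative assumptions; production RAG systems typically use embedding-based retrieval rather than word overlap:

```python
# Toy RAG sketch: retrieve relevant source passages, then weave them into
# the prompt the model answers from. Corpus contents are invented.
corpus = [
    "The PRO-900 printer supports duplex printing.",
    "Reset the device by holding the power button for ten seconds.",
    "Firmware updates are installed from the maintenance menu.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank source passages by naive word overlap with the question."""
    words = set(question.lower().split())
    return sorted(corpus, key=lambda p: -len(words & set(p.lower().split())))[:k]

def build_prompt(question: str) -> str:
    """Ground the model's answer in retrieved source content."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this content:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I reset the device?"))
```

The quality of what comes back depends entirely on the quality of the corpus being retrieved from, which is the episode’s larger point about fundamentals.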
There are some structural needs that have to happen there. But ultimately, you’re still writing for people. You might be writing in a way that allows the AI to repurpose and refactor the information so that the audience gets exactly what they’re looking for. But it still needs to be somewhat tailored to the needs of people, because AI in itself doesn’t care what the content is, but it’s going to try to produce something for an eventual person to be able to read. AP: I think that in turn points to something else in our vast compendium of Scriptorium content, and that is a book that Sarah and I wrote, the first edition in 2000, which just kind of makes me shake my head. I know this is not a video podcast yet, but I’m shaking my head in disbelief. The book, Technical Writing 101, has three editions, published between 2000 and 2009. We will put a link in the show notes. You can still download the third edition, and by the way, it’s free. You can get a PDF or EPUB from our store, along with some more recent resources. But I flipped through that book this morning, and I was genuinely surprised at how much of the advice on how to create good product content is still true in this AI era: writing things in a modular way, being very systematic in structuring things, and even if you’re not using a structured authoring tool, using a template and making things very standardized. These are all things that, yes, make for better, consistent, standard tech-comm product content for the person reading it. But let’s pretend AI is the person, and I’m doing air quotes here, “reading” it. It is going to do a better job of understanding, and again, I’m sort of personifying here, and I know that’s sort of a no-no. 
But if you feed AI, a large language model, content that is very structured, very templatized, standardized, in bite-sized chunks, and, this is very important, labeled with metadata, it is going to do better. We do talk about metadata briefly in that book, because you need to be able to label content for different audiences. I’m thinking about someone sitting there, trying to use a product, trying to use a piece of software, talking to a chatbot. And the chatbot is going to ask them, what product are you using? What’s the model number? All of those kinds of things. And now we’re getting to this whole idea of labeling and breaking things apart so that a chatbot, just like a user of a product, can get to the right information. Let’s say somebody has a printer that’s on the highest end of the scale. They’re going to have a lot more features that apply to their model than someone who bought a more basic one. But the thing is, if your product content has not clearly labeled which features are in each of the models, the chatbot is going to spit out the wrong thing. So again, it’s this idea of breaking things up into discrete chunks and labeling them in a way where someone who wants specific information about a specific model can get it. And it doesn’t matter if it’s from a web page, from a PDF, from a printed book, God forbid in 2026, or from an AI chatbot. Those rules still apply. Those fundamental principles are still there. BS: Mm-hmm. AP: I think one of the biggest problems here is when people do not have those fundamentals already in place, right? BS: If they don’t have those fundamentals in place, they can’t get to the top of that pyramid that Sarah was talking about. And really, those fundamentals are those first three layers: content is available, content is accurate, and content is appropriate. 
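Alan’s printer example translates almost directly into code: if each content chunk carries model metadata, a chatbot can be restricted to topics that apply to the user’s product. The chunk data and model names below are made up for illustration:

```python
# Alan's printer example as code: content chunks carry model metadata so a
# chatbot only sees topics that apply to the user's product. All chunk text
# and model names here are invented.
chunks = [
    {"text": "Duplex printing setup", "models": ["PRO-900"]},
    {"text": "Loading paper", "models": ["PRO-900", "BASIC-100"]},
    {"text": "Replacing the ink cartridge", "models": ["PRO-900", "BASIC-100"]},
]

def chunks_for_model(model: str) -> list[str]:
    """Return only the content applicable to one product model, ready to be
    handed to a chatbot as grounding context."""
    return [c["text"] for c in chunks if model in c["models"]]

print(chunks_for_model("BASIC-100"))
```

A BASIC-100 owner never sees the duplex topic their printer lacks, which is the whole point of labeling: the filtering is only as good as the metadata on the source content.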
If you can actually nail those three layers of the hierarchy of content needs, you are set to then jump to connected and intelligent fairly quickly because your content is already well written, standardized, and appropriate for different audiences. AP: So we’re right back to talking about the way you put content together, your content operations, and how you…

    15 min
  3. FEB 16

    Check in on AI: The true measure of success for AI initiatives

    In this episode, Sarah O’Keefe and Alan Pringle explore how AI transforms content delivery from static documents into dynamic, consumer-driven experiences. However, the need for human-led governance is critical, and Sarah and Alan explore issues of accuracy, accountability, governance, and more. They challenge organizations to define AI success by its ability to deliver accurate, high-impact outcomes for the end user. Sarah O’Keefe: The metrics that are being used to measure the success of AI are all wrong. We should be measuring the success of various AI efforts based on, “Are people getting what they need? Are they having a successful outcome with whatever it is that they’re trying to do?” The metric we actually seem to be using is, “What percentage of your workflow is using AI? How many people can we get rid of because we’re automating everything with AI?” It’s the wrong metric. The question is, how good are the outcomes? Related links: Sarah O’Keefe: AI and content: Avoiding disaster Sarah O’Keefe: AI and accountability Alan Pringle: Structured content: a backbone for AI success Questions for Sarah and Alan? Register for our upcoming webinar, Ask Me Anything: AI in content ops. LinkedIn: Sarah O’Keefe Alan Pringle Transcript: This is a machine-generated transcript with edits. Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. 
End of introduction Alan Pringle: Hey everybody, I’m Alan Pringle, and today I’m here with Sarah O’Keefe. We want to do something I’ve kind of dreaded, to be honest: a check-in on AI in the content space. I’m very ambivalent about this topic. Even two, three years in, there’s still a lot of hype, but there have also been some good things that have emerged, and we need to talk about it fairly realistically. So, Sarah, get ready. Let’s see if I can not curse during this. I’ll try my best. Legitimately, there are some things that we need to talk about, including the challenges, because I don’t think the content world is completely ready for a lot of what’s going on right now. Sarah O’Keefe: You know that we have AI that can remove cursing from podcasts, so I feel like we’re good here. AP: Well, also, it’s a challenge to me to behave in a PG-13, more family-friendly kind of way. So I’ll do my best. SO: I have no idea what you’re talking about. AP: Yeah. So let’s start with the good, where things are right now with the positives. What is AI doing well right now? And let’s get beyond summarization. I think we can say objectively that right now, in general, AI does a very good job of summarizing existing content. But I think it’s doing a lot more beyond that, and we should touch on those things instead. SO: The first thing that I would point to is summarization, but specifically the use case of a chatbot or a large language model, an LLM, so now we’re talking about Claude, Gemini, ChatGPT, and all the rest of them, which has the ability to provide an end user with a way of accessing information, an information access point that is different from what we had previously. In the olden days, you had a book, and you had to sort of flip it open and look at a table of contents or maybe an index and navigate to a page. Fine. 
Then along comes online content, and you can do full-text search, or you can go into an internet search, right? You type into the search bar, you get a bunch of results, you click, and you sort of, no, that’s not quite it. You modify your search string, you search again, and you navigate your way to where you’re trying to go. With the interactivity of the, you know, ChatGPT class of tools, what happens is that I ask it a question and it gives me an answer. And then I say, that’s not quite what I wanted, and I can zero in on exactly what I’m looking for and tell it: actually, make this easier. Or, I don’t understand the words you’re using. Use simpler language. Give me more. Give me less. Give me a summary. Use this as a source. Do not use that as a source. It’s a new way to access information. People love it. There is something psychologically helpful about a conversational search. Now, there are obviously huge issues with this, particularly around people, you know, using chatbots as their therapists, which introduces all sorts of horrifying, horrifying ethical issues. AP: Personifying them as a person on the other end. Right. SO: But in the big picture, used well, it allows you to get to the information you’re looking for and get at it in the way that you want. AP: There’s a control issue here. I don’t think the content consumer has ever had this level of control. SO: Yeah, and as a content consumer, that speaks to me. That is helpful. We’re seeing increasing use of, I would say, guardrails. So, not just slamming out the AI with a bunch of stuff, but rather putting some guardrails around it, and there are various kinds of technologies that you can employ there. That has been very helpful. And then the third thing I would point to is that when we talk about generative AI and generating content, there’s a lot you can do in that sort of low-fidelity bucket. 
And what I mean here is, I need an image for a presentation, but the background is the wrong color, so I can just swap it out. Now, I can do that with Photoshop. Well, some people can do that with Photoshop. AP: Well, I was about to say, I don’t think you or I should be saying we can do the Photoshop, because we kind of can’t. SO: Right, and that’s exactly it. So it’s lowered the bar, right? Because I can tell the AI to swap out the background, and it will. It applies a mid-level Photoshop capability to this image, and now I have the image that I need with a dark background so that the white text shows up in my presentation, that kind of thing. AP: Right. Yeah. SO: We can do low-stakes synthetic audio for this podcast, which, for the record, we are recording with actual human beings. But let’s say that Alan curses extensively and we need to swap it out. Well, we could pretty easily generate some synthetic audio that sounds like him and that PG-ifies the original wording into something that is, you know, cleaner. It would be way funnier to just bleep it, so I don’t know why we would do this, but… AP: Correct. Well, and it may come to that. The bottom line is, what you’re talking about here is things that have very low risk. This is more fun stuff. The thought of doing some of what we’re talking about on content that describes how to use a medical device, for example? Not sure I want to go there. But for something low stakes, like some one-off presentation that you’re giving, maybe where some humor is involved, I totally think that’s an acceptable use because there’s no risk there. SO: That’s really the key point, because let’s say you’re writing content for a new medical device. Now, you probably have a version one of said medical device, and you’re doing a version two. So, okay, fine. We take the version one content and we sort of, you know, say, add color, because that’s what we added in version two, and update all this stuff automatically. 
But it then becomes very important to actually read that, look at that information, look at all the images, and make sure that everything is correct. And by the time you do that super carefully, you may have given back all the time that you saved on the back end when you basically made a copy and said, generate the new version. You have to be really careful with that, especially depending on what your stakes are in terms of regulatory or compliance stuff. You can, of course, get away with using AI, as you said, for low-stakes stuff. Now, there’s a big risk you run there, and we’re seeing this in my favorite example of low-stakes content, which is video games. The video game industry has seen huge amounts of pushback against AI-generated game content, because it’s not fun. It’s not creative. It feels flat. It’s not art, and it’s not fun to play, and so it just becomes a slog. Again, same thing. Did you use it for maybe some backgrounds here and there? Okay. Or did you use it to drive the story that you’re trying to establish, or the enemies that you’re hypothetically fighting? Then they all have a certain sameness, or, you know, you’re sort of stealthing your way around the map, and it turns out that the AI-generated enemies are really dumb, in that once they turn their backs, you can do literally anything and they won’t notice, because it was poorly designed. AP: Right, yeah. And that’s true even in the film and entertainment industry. There’s been a tremendous amount of pushback for the very same reason. I read a review recently talking about a series of clips about history, I believe on YouTube, by a fairly well-known director I will not name. SO: Mm-hmm. AP: And some of the AI is frankly not done well. One reviewer basically said that with a lot of the people, when you look at the back of these AI-generated figures, like an AI-generated King George, the back of his head looks like a melted candle. This is not what we want here. 
If you’re so focused…

    32 min
  4. JAN 26

    From black box to business tool: Making AI transparent and accountable

    As AI adoption accelerates, accountability and transparency issues are accumulating quickly. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O’Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability. Sarah O’Keefe: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do. Nathan Gilmour: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it. Related links: Sarah O’Keefe: AI and content: Avoiding disaster Sarah O’Keefe: AI and accountability Writemore AI LinkedIn: Nathan Gilmour Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey everyone. I’m Sarah O’Keefe. Welcome to another episode. 
I am here today with Nathan Gilmour, who’s the Chief Technical Officer of Writemore AI. Nathan, welcome. Nathan Gilmour: Thanks, Sarah. Happy to be here. SO: Welcome aboard. So tell us a little bit about what you’re doing over there. You’ve got a new company and a new product that’s, what, a year old? NG: Give or take, yep. SO: Yep. So what are you up to over there? Is it AI-related? NG: It is actually AI-related, but not AI-related in the traditional sense. Right now, we’ve built a tool that helps technical authoring teams convert from traditional Word or PDF formats, which make up the bulk of the technical documentation ecosystem, to structured authoring. That means they can get all of the benefits of reuse, easier publishing, and high compatibility with various content management systems, and can do it in minutes where traditional conversions could take hours. So it really helps authoring teams get their content out to the world at large in a much more efficient and regulated fashion. SO: So I pick up a corpus of 10 or 20 or 50,000 pages of stuff, and you’re going to take that, and you’re going to shove it into a magic black box, and out comes, you said, structured content, DITA? NG: Correct. SO: Out comes DITA. Okay. What does this actually … Give us the … That’s the 30,000-foot view. So what’s the parachute level view? NG: Perfect. Underneath the hood, it’s actually a very deterministic pipeline. A deterministic pipeline means that there is a lot more code supporting it. It’s not an AI inferring what it should do. There’s actual code that guides the conversion process first. So going from, let’s say, Word to DITA, there are tools within the DITA Open Toolkit that facilitate that much more mechanically, rather than trusting an AI to do it. We know that AI does struggle with structure, especially as context windows expand. It becomes more and more inaccurate. 
So if we feed these models far more mechanically created content, they become much more accurate. You’re only trusting them with the more nuanced parts of the process. So there’s a big difference between determinism and probabilism. Where determinism is the mechanical conversion of something, probabilism is allowing the AI to infer a process. That’s where we differ: our process is much more deterministic, versus allowing the AI to do everything on its own. SO: So is it fair to say that you combined the … And for deterministic, I’m going to say scripting. But is it fair to say that you combined the DITA OT scripting processing with additional AI around that to improve the results? NG: Correct. It also expedites the results so that instead of having a human do much of the semantic understanding of the document, we allow the AI to do it in a far more focused task. Machines can read faster. SO: Okay. And so for most of us, when we start talking about AI, most people think large language model and specifically ChatGPT, but that’s not what this is. This is not like a front-end go play with it as a consumer. This is a tool for authors. NG: Correct. And even further to that, it’s a partner tool for authors. It allows them to continue authoring in a format that they’re familiar with. Let’s take Microsoft Word, for example. Sometimes the shift from Word to structured authoring could be considered an enormous upheaval. Allowing authors to continue authoring in a format that they’re good at and familiar with, and then have a partner tool that expedites the conversion process to structured authoring so that they can maintain a single source of truth, makes things a little bit better, more manageable, and more reliable in the long run. So instead of having to effectively cause a riot with the authoring teams, we can empower them to continue doing what they’re good at. SO: Okay. 
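The deterministic-versus-probabilistic split Nathan describes can be sketched in a few lines: mechanical rules handle every block they recognize, and anything they cannot classify is flagged for a later, more nuanced pass rather than guessed at by a model. This is purely an illustrative sketch, not Writemore’s actual pipeline; the style names and the mapping are assumptions made up for the example.

```python
# Illustrative sketch only: a deterministic first pass that maps known
# Word paragraph styles to DITA elements. Unknown styles are flagged for
# the nuanced (AI- or human-assisted) pass instead of being guessed at.
# The style names and mapping below are assumptions for this example.

STYLE_TO_DITA = {
    "Heading 1": "title",
    "Body Text": "p",
    "Code": "codeblock",
}

def convert_block(style: str, text: str) -> tuple[str, bool]:
    """Return (output, needs_review) for one paragraph."""
    tag = STYLE_TO_DITA.get(style)
    if tag is None:
        return text, True   # no mechanical rule applies: defer to review
    return f"<{tag}>{text}</{tag}>", False

blocks = [
    ("Heading 1", "Installing the widget"),
    ("Body Text", "Unpack the box."),
    ("Margin Note", "Keep the receipt."),   # unknown style: flagged
]
converted = [convert_block(style, text) for style, text in blocks]
```

In a real pipeline this mechanical stage would be handled by DITA Open Toolkit processing rather than hand-rolled rules; the point is only that structure comes from code, not from model inference.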
So we drop the Word file in and magically DITA comes out. What if it’s not quite right? What if our AI doesn’t get it exactly right? I mean, how do I know that it’s not producing something that looks good, but is actually wrong? NG: Great question. And that’s where, prior to doing anything further, there is a review period for the human authors. So in the event that the AI does make a mistake, it is completely transparent: the output, the payload, as we describe it, comes with a full audit report. Every determination that the AI makes is traced and tracked and explained. And then further to that, the humans are even able to take that payload out and open it up in an XML editor. So at this point in time, the content is converted and ready to go into the CCMS. Prior to doing that, it can go to a subject matter expert who is familiar with structured authoring to do a final validation of the content to make sure that it is accurate. The biggest differentiator, though, is the tool never creates content. The humans need to create content because they are the subject matter experts within their field. They create the first draft. The tool takes it, converts it, but doesn’t change anything. It only works with the material as it stands. And then once that is complete, it goes back into another human-centered review so that there are audit trails and it is traceable. And there is a final touchpoint by a human prior to the final migration into their content management system. SO: So you’re saying that basically you can diff this. I mean, you can look at the before and the after and see where all the changes are coming in. NG: Correct. SO: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do. NG: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. 
But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. Where we want to come in is to bring clarity to these black boxes. Make them transparent, I guess you can say. Because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it. One of the added benefits that we have baked into the tool from a backend perspective is its ability to be completely internet-unaware. Meaning if an organization has the capital and the infrastructure to host a model, this can be plugged directly into their existing AI infrastructure and use its brain. Which, realistically, is what the language model is. It’s just a brain. So if companies have invested the time, invested the capital in order to build out this infrastructure, the Writemore tool can plug right into it and follow those preexisting information security policies. Without having to worry about something going out to the worldwide web. SO: So the implication is that I can put this inside my very large organization with very strict information security policies and not be suddenly feeding my entire intellectual property corpus to a public-facing AI. NG: That is entirely correct. SO: We are not doing that. Okay. So I want to step back a tiny bit and think about what it means, because it seems like the thing that we’re circling around is accountability, right? What does it mean to use AI and still have accountability? And so, based on your experience of what you’ve been working on and building, what are some of the things that you’ve uncovered in terms of what we should be looking for generally as we’re building out AI-based things? What should we be looking for in terms of accountability of AI? NG: The major accountability of AI is what could it look like if a business model changes? 
Let’s kind of focus on the large players in the market right now. There
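The audit-trail idea from this episode, where every determination is traced, explained, and diffable, can be sketched simply: record each transformation with its input, output, and a stated reason, plus a unified diff for reviewers. The record fields here are assumptions for illustration, not Writemore’s actual payload format.

```python
import difflib

# Illustrative sketch of an audit record: each determination stores the
# input, the output, a stated reason, and a unified diff so a reviewer
# can see exactly what changed. Field names are assumptions, not a real
# product's schema.

def record_determination(audit: list, before: str, after: str, reason: str) -> None:
    audit.append({
        "before": before,
        "after": after,
        "reason": reason,
        "diff": list(difflib.unified_diff(
            before.splitlines(), after.splitlines(), lineterm="")),
    })

audit: list = []
record_determination(
    audit,
    "Unpack the box.",
    "<p>Unpack the box.</p>",
    "wrapped body text in a <p> element",
)
```

Because nothing is discarded, the before/after pair can also be opened in an XML editor or diffed externally, which matches the “you can diff this” exchange above.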

    21 min
  5. 11/17/2025

    Futureproof your content ops for the coming knowledge collapse

    What happens when AI accelerates faster than your content can keep up? In this podcast, host Sarah O’Keefe and guest Michael Iantosca break down the current state of AI in content operations and what it means for documentation teams and executives. Together, they offer a forward-thinking look at how professionals can respond, adapt, and lead in a rapidly shifting landscape. Sarah O’Keefe: How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology? What’s the roadmap for somebody that’s trying to navigate this with people that are all-in on just getting the AI to do it? Michael Iantosca: We need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right. Related links: Scriptorium: AI and content: Avoiding disaster Scriptorium: The cost of knowledge graphs Michael Iantosca: The coming collapse of corporate knowledge: How AI is eating its own brain Michael Iantosca: The Wild West of AI Content Management and Metadata MIT report: 95% of generative AI pilots at companies are failing LinkedIn: Michael Iantosca Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. In this episode, I’m delighted to welcome Michael Iantosca to the show. Michael is the Senior Director of Content Platforms and Content Engineering at Avalara and one of the leading voices both in content ops and understanding the importance of AI and technical content. He’s had a longish career in this space. And so today we wanted to talk about AI and content. The context for this is that a few weeks ago, Michael published an article entitled The coming collapse of corporate knowledge: How AI is eating its own brain. So perhaps that gives us the theme for the show today. Michael, welcome. Michael Iantosca: Thank you. I’m very honored to be here. Thank you for the opportunity. SO: Well, I appreciate you being here. I would not describe you as anti-technology, and you’ve built out a lot of complex systems, and you’re doing a lot of interesting stuff with AI components. But you have this article out here that’s basically kind of apocalyptic. So what are your concerns with AI? What’s keeping you up at night here?  MI: That’s a loaded question, but we’ll do the best we can to address it. I’m a consummate information developer as we used to call ourselves. I just started my 45th year in the profession. I’ve been fortunate that not only have I been mentored by some of the best people in the industry over the decades, but I was very fortunate to begin with AI in the early 90s when it was called expert systems. And then through the evolution of Watson and when generative AI really hit the mainstream, those of us that had been involved for a long time were… there was no surprise, we were already pretty well-versed. What we didn’t expect was the acceleration of it at this speed. 
So what I’d like to say sometimes is that the thing changing fastest is the rate at which the rate of change is changing. And that couldn’t be more true than today. But content and knowledge is not a snapshot in time. It is a living, moving organism, ever evolving. And if you think about it, the companies behind the large language models spent a fortune on chips and systems to train those models on everything they could possibly get their hands and fingers into. And they did that originally several years ago. And the assumption, especially for critical knowledge, is that that knowledge is static. Now, they do rescan the sources on the web, but that’s no guarantee that those sources have been updated, or, you know, that the new content doesn’t conflict with or confuse the old content. How do they tell the difference between one version of IBM DB2 and its 13 different versions, and how you do different tasks across those 13 versions? And can you imagine, especially when it comes to software, where a lot of us work, the thousands and thousands of changes that are made to those programs, in the user interfaces and the functionality? MI: And unless that content is kept up to date and reconsumed, not only by the large language models but by the local vector databases on which a lot of chatbots and agentic workflows are being based, you’re basically dealing with out-of-date and incorrect content. Especially in many doc shops, the resources are just not there to keep up with that volume and frequency of change. So we have a pending crisis, in my opinion. And the last thing we need to do is reduce the knowledge workers, the people who not only create new content but also update it and deal with the technical debt, or what I think is a house of cards collapses. SO: Yeah, it’s interesting. And as you’re saying that, I’m thinking we’ve talked a lot about content debt and issues of automation. 
But for the first time, it occurs to me to think about this more in terms of pollution. It’s an ongoing battle to scrub the air, to take out all the gunk that is being introduced, and that has to happen on an ongoing basis. Plus, you have this issue that information decays, right? In the sense that when I published it a month ago, it was up to date. And then a year later, it’s wrong. Like it evolved, entropy happened, the product changed. And now there’s this delta or this gap between the way it was documented versus the way it is. And it seems like that’s what you’re talking about: that gap of not keeping up with the rate of change. MI: Mm-hmm. Yeah. I think you’re right, but it’s even more immediate than that. Now we need to remember that development cycles have greatly accelerated. When you bring AI for product development into the equation, we’re now looking at 30- and 60-day product cycles. When I started, a product cycle was five years. Now it’s a month or two. And say we start using AI to draft new content, just brand-new content, setting aside updating the old content, and we’re using AI to do that in the prototyping phase. We’re moving that further left, upfront. We know that between then and code freeze there are going to be numerous changes to the product, to the function, to the code, to the UI. It’s always been difficult to keep up with it in the first place, but now we’re compressed even more. So we now need to start looking at how AI helps us even do that piece of it, let alone what might be a corpus that is years and years old and has never had enough technical writers to keep up with all the changes. So now we have a dual problem, with new content on top of this compressed development cycle. 
SO: So the AI hype says, essentially, that we don’t need people anymore and the AI will do everything from coding the thing to documenting the thing to, I guess, buying the thing via some sort of an agentic workflow. But, I mean, you’re deeper into this than nearly anybody else. What is the promise of the AI hype, and what’s the reality of what it can actually do? MI: That’s just the question of the day. Because those of us that are working in shops that have engineering resources, I have direct engineers that work for me and an extended engineering team. So do the likes of Amazon and other sizable shops with resources. We have a lot of shops that are smaller. They don’t have access to either their own dedicated content systems engineers or even their IT team to help them. First, I want to recognize that we’ve got a continuum out there, and the commercial providers are not providing anything to help us at this point. So today you either build it yourself, and that’s happening, people are developing individual tools using AI, or the more advanced shops are looking at developing entire agentic workflows. And what we’re doing is looking at ways to accelerate that compressed timeframe for the content creators. And I want to use content creators a little more loosely, because as we move the process left, we involve our engineers, our programmers, earlier in the phase, like they used to be. By the way, they used to write big specifications in my day. Boy, I want to go into a Gregorian chant. “Oh, in my day!” you know. But they don’t do that anymore. And basically, the role of the content professional today is that of an investigative journalist. And you know what we do, right? We scrape and we claw. We test, we use, we interview. We use all of the capabilities of learning, of association, assimilation, synthesis, and of course, communication. 
And it turns out that writing is only roughly 15% of what the typical writer does in an information developer or technical documentation professional role, which is why we have a lot of different roles, by the way, that, if we’re going to replace or accelerate people with AI, have to handle all those capabilities of a

    33 min
  6. 11/03/2025

    The five stages of content debt

    Your organization’s content debt costs more than you think. In this podcast, host Sarah O’Keefe and guest Dipo Ajose-Coker unpack the five stages of content debt from denial to action. Sarah and Dipo share how to navigate each stage to position your content—and your AI—for accuracy, scalability, and global growth. The blame stage: “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “We’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll make you do things this way.” The finger-pointing begins. Tech teams blame the authors. Authors blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations say, “We’ve got to start making a change.” They’re either going to double down and continue building content debt, or they start looking for a scalable solution. — Dipo Ajose-Coker Related links: Scriptorium: Technical debt in content operations Scriptorium: AI and content: Avoiding disaster RWS: Secrets of Successful Enterprise AI Projects: What Market Leaders Know About Structured Content RWS: Maximizing Your CCMS ROI: Why Data Beats Opinion RWS: Accelerating Speed to Market: How Structured Content Drives Competitive Advantage (Medical Devices) RWS: The all-in-one guide to structured content: benefits, technology, and AI readiness LinkedIn: Dipo Ajose-Coker Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey, everyone. I’m Sarah O’Keefe, and I’m here today with Dipo Ajose-Coker. He is a Solutions Architect and Strategist at RWS, based in France. His strategy work is focused on content technology. Hey, Dipo. Dipo Ajose-Coker: Hey there, Sarah. Thanks for having me on. SO: Yeah, how are you doing? DA-C: Hanging in there. It’s a sunny, cold day, but the wind’s blowing. SO: So in this episode, we wanted to talk about moving forward with your content and how you can make improvements to it and address some of the gaps that you have in terms of development and delivery and all the rest of it. And Dipo’s come up with a way of looking at this that is a framework that I think is actually extremely helpful. So Dipo, tell us about how you look at content debt. DA-C: Okay, thanks. First of all, before I go into my little thing that I put up, what is content debt? I think it’d be great to talk about that. It’s kind of like technical debt. It refers to that future work that you keep storing up because you’ve been taking shortcuts to try and deliver on time. You’ve let quality slip. You’ve had consultants come in and out every three months, and they’ve just been putting… I mean writing consultants. SO: These consultants. DA-C: And they’ve been basically doing stuff in a rush to try and get your product out on time. And over time, those sorts of little errors, those sorts of shortcuts will build up, and you end up with missing metadata or inconsistent styles. The content is okay for now, but as you go forward, you find you’re building up a big debt of all these little fixes. And these little fixes will eventually add up and end up as a big debt to pay. 
SO: And I saw an interesting post just a couple of days ago where somebody said that tech debt or content debt, you could think of it as having principal and interest, and the interest accumulates over time. So the less work you do to pay down your content debt, the bigger and bigger and bigger it gets, right? It just keeps snowballing, and eventually you find yourself with an enormous problem. So as you were looking at this idea of content debt, you came up with a framework for looking at this that is at once shiny and new and also very familiar. So what was it? DA-C: Yeah, really familiar. I think everyone’s heard of the five stages of grief, and I thought, “Well, how about applying that to content debt?” And so I came up with the five stages of content debt. So let’s go into it. I’m not going to keep referring to the grief part of it. You can all look it up, but the first stage is denial. “Our content is fine. We just need a better search engine. We can actually put it into this shiny new content delivery platform and it’s got this type of search,” and so on and so forth. Basically, what you’re doing is ignoring the growing mess. You’re duplicating content. You’ve got outdated docs. You’re building silos, and then you’re ignoring that these silos are actually getting even further and further apart. No one wants to admit that the CMS, or whatever bespoke system you’ve put into place, is just a patchwork of workarounds. This quietly builds your content debt, and the longer denial lasts, the more expensive that cleanup is. As we said in that first bit, you want to pay off the capital of your debt as quickly as possible. Anyone with a mortgage knows that. You come into a little bit of money, pay off as much capital as you can so that you stop accruing the interest on the debt. SO: And that is where, when we talk about AI-based workflows, I feel like that is firmly situated in denial. 
Basically, “Yeah, we’ve got some issues, but the AI will fix it. The AI will make it all better.” Now, we painfully know that that’s probably not true, so we move ourselves out of denial. And then what? DA-C: There we go into anger. SO: Of course. DA-C: “Why can’t we find anything? Why does every update take two weeks?” And that was a question we used to get regularly where I used to work, at a global medical device manufacturer. We had to change one short sentence because of a spec change, and it took weeks to do that. Authors are wasting time looking for reusable content if they don’t have an efficient CCMS. Your review cycles drag on because all you’re doing is giving the entire 600-page PDF to the reviewer without highlighting what’s changed in there. Your translation costs balloon, and your project managers or leadership get angry because, “Well, we only changed one word. Can’t you just use Google Translate? It should only cost like five cents.” Compliance teams then start raising flags. And if you’re in a regulated industry, you don’t want the compliance teams on your back, and you especially don’t want to start having defects out in the field. So eventually, productivity drops and your teams feel like they’re stuck. And the cracks are now starting to show across other departments, and it’s putting a bad name on your doc team. SO: Yeah. And a lot of this, what you’ve got here, is anger that’s focused inward to a certain extent. It’s the authors that are angry at everybody. I’ve also seen this play out as management saying, “Where are our docs? We have this team, we’re spending all this money, and updates take six months.” Or people submit update requests, tickets, something, the content doesn’t get into the docs, the docs don’t get updated. There’s a six-month lag. 
Now the SOP, the standard operating procedure, is out of sync with what people are actually doing on the factory floor, which, it turns out, again, if you’re in medical devices, is extremely bad and will lead to your factory getting shut down, which is not what you want, generally. DA-C: Yeah, it’s not a good position to be in. SO: And then there’s anger. DA-C: Yeah. SO: “Why aren’t they doing their job?” And yet you’ve got this group that’s doing the best that they can within their constraints, which are, as you said, in a lot of cases, very inefficient workflows, the wrong tool sets, not a lot of support, etc. Okay, so everybody’s mad. And then what? DA-C: Everyone’s mad, and eventually, actually, this is a closed little loop, because all you then do is say, “Okay, well, we’re going to take a shortcut,” and you’ve just added to your content debt. So this stage is actually one of the most dangerous, because all you end up doing, without actually solving the problem, is adding to the debt. “Let’s take a shortcut here, let’s do this.” The next stage is now the blame stage. “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “Well, we’re going to put you into this department and we’ll get this person to manage you with this new agile process,” or, “We’ll get you to be doing it in this way.” The finger-pointing begins. Tech teams will blame the authors. Authors will blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations see that they’ve got to start making a change. They’re either going to double down and continue building that content debt, or they start looking for a scalable solution. SO: Right. And this is the point at which people look at it and say, “Why can’t we just use AI to fix all of this?” DA-C: Yep, and we all know what happens when you point AI at garbage in. 
We’ve got the saying, and this saying has been true from the beginning of computing, garbage in, garbage out, GIGO. SO: Time. DA-C: Yeah. I changed that to computing. SO: Yeah. It’s really interesting thou
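The principal-and-interest analogy from this episode can be made concrete with a toy calculation. Every number here, the 10% “interest rate,” the starting debt, the per-release figures, is invented purely for illustration; the point is only that unserviced debt compounds while steady paydown shrinks it.

```python
# Toy model of content debt as principal plus interest: each release adds
# new unfixed issues, and existing debt compounds as stale content breeds
# more rework. All numbers are invented for illustration only.

def debt_after(releases: int, principal: float = 100.0,
               new_debt: float = 10.0, rate: float = 0.10,
               paydown: float = 0.0) -> float:
    debt = principal
    for _ in range(releases):
        debt = debt * (1 + rate) + new_debt - paydown
    return round(debt, 1)

ignored = debt_after(10)                # debt left to compound
serviced = debt_after(10, paydown=25)   # steady paydown each release
```

After ten releases the untouched debt has roughly quadrupled, while steady paydown shrinks it well below the starting principal, which is the “pay off as much capital as you can” point Dipo makes about mortgages.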

    27 min
  7. 10/20/2025

    Balancing automation, accuracy, and authenticity: AI in localization

    How can global brands use AI in localization without losing accuracy, cultural nuance, and brand integrity? In this podcast, host Bill Swallow and guest Steve Maule explore the opportunities, risks, and evolving roles that AI brings to the localization process. The most common workflow shift in translation is to start with AI output, then have a human being review some or all of that output. It’s rare that enterprise-level companies want a fully human translation. However, one of the concerns that a lot of enterprises have about using AI is security and confidentiality. We have some customers where it’s written in our contract that we must not use AI as part of the translation process. Now, that could be for specific content types only, but they don’t want to risk personal data being leaked. In general, though, the default service now for what I’d call regular common translation is post editing or human review of AI content. The biggest change is that’s really become the norm. —Steve Maule, VP of Global Sales at Acclaro Related links: Scriptorium: AI in localization: What could possibly go wrong? Scriptorium: Localization strategy: Your key to global markets Acclaro: Checklist | Get Your Global Content Ready for Fast AI Scaling Acclaro: How a modular approach to AI can help you scale faster and control localization costs Acclaro: How, when, and why to use AI for global content Acclaro: AI in localization for 2025 LinkedIn: Steve Maule Bill Swallow Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Bill Swallow: Hi, I’m Bill Swallow, and today I have with me Steve Maule from Acclaro. In this episode, we’ll talk about the benefits and pitfalls of AI in localization. Welcome, Steve. Steve Maule: Thanks, Bill. Pleasure to be here. Thanks for inviting me. BS: Absolutely. Can you tell us a little bit about yourself and your work with Acclaro? SM: Yeah, sure, sure. So I’m Steve Maule, currently the VP of Global Sales at Acclaro, and Acclaro is a fast-growing language services provider. I’m based in Manchester in the UK, in the northwest of England, and I’ve been in this industry, and I say this industry, the language industry, the localization industry, for about 16 years, always in various sales, business development, or leadership roles. So like I say, we’re a language services provider. And I suppose the way we try and talk about ourselves is we try and be that trusted partner to some of the world’s biggest brands and the world’s fastest-growing global companies. And we see it, Bill, as our mission to harness that powerful combination of human expertise with cutting-edge technology, whether it be AI or other technology. And the mission is to put brands in the heads, hearts, and hands of people everywhere. BS: Actually, that’s a good lead-in, because my first question to you is going to be: where do you see AI and localization, especially with a focus of being kind of the trusted partner for human-to-human communication? SM: My first answer to that would be it’s no longer the future. AI is the now. 
And I think whatever role people play in our industry, whether you’re, like Acclaro, a language services provider offering services to those global brands, whether you are a technology provider, whether you run localization, localized content, in an enterprise, or even if you’re what I’d call an individual contributor, maybe a linguist or a language professional, I think AI has already changed what you do and how you go about your business. And I think that’s only going to continue and develop. So I actually think we’re going to stop talking about AI at some stage relatively soon. It’s just going to be all pervasive and all invasive. BS: It’ll be the norm. Yeah. SM: Absolutely. We don’t talk anymore about the internet in many, many industries, and we won’t talk about AI. It’ll just become the norm. And localization, I don’t think, is unique in that respect. But I do think that if you think about the genesis of large language models and where they came from, localization is probably one of the primary and one of the first use cases for generative AI and for LLMs. BS: Right. The industry started out decades ago with machine translation, which was really born out of pattern matching, and it’s just grown over time. SM: Absolutely. And I remember when I joined the industry, what did I say? So 2009, it would’ve been, when I joined the industry. And I had friends asking me, what do you mean people pay you for translation and pay for language services? I’ve just got this new thing on my phone, it’s called Google Translate. Why are we paying any companies for translation? So you’re absolutely right, and I think obviously machine translation had been around for decades before I joined the industry. So yeah, I think that question has come into focus a lot more with every sort of, I was going to say, every year that passes, quite honestly, it’s every three months. BS: If that. SM: Exactly, yeah. Why do companies like Acclaro still exist? 
And I think there are probably a lot of people in the industry who, if you think about the boom in gen AI over the last two, two and a half years, see it as a very real existential threat. But more and more, what I’m seeing among our client base, our competitors, and other actors in the industry, like the tech companies, is that a lot more people are seeing it as an opportunity for the language industry and for the localization industry.

BS: So about those opportunities, what are you seeing there?

SM: I think one of the biggest things, and it doesn’t matter what role you play, whether you’re an individual linguist or a company like ours, is a shift in roles. Most of what I dealt with 16 years ago was one human being doing translation and another human being doing some editing. There were obviously computers and tools involved, but it was a very human-led process. I think we’re seeing a lot of those roles changing now. Translators are becoming language strategists; they’re becoming quality guardians. Project managers are becoming almost like solutions architects or data owners. So I think there’s a real change. And personally, and I guess this is what this podcast is all about, I don’t see those roles going away, but I do see them changing and developing, and in some cases, I think it’s going to be for the better. Because there’s all this doubt and uncertainty and sense of threat, people want to be shown the way, and they want companies like ours and others like it to lead the way in terms of how people who manage localized content can implement AI.

BS: Yeah. We’re seeing something similar in the content space as well.
I know there was a big fear, certainly a couple of years ago, or even last year, that AI was going to take all the writing jobs, because everyone saw what ChatGPT could do, until they really started peeling back the layers and saying, well, this is great, it spit out a bunch of words and it sounds great, but it really doesn’t say anything. It just glosses over a lot of information and presents you with a summary. But what we’re seeing now, at least on the writing side, is that a lot of people are using AI as a tool to automate away the mechanical bits of the work so that the writers can focus on quality.

SM: We’re seeing exactly the same thing. I had a customer say to me that she wants AI to do the dishes while she concentrates on writing the poetry. So it is the mundane stuff, the stuff that has to be done but isn’t that exciting. It’s mundane, it’s repetitive. Those have always been the tasks first in line to be automated, first in line to be removed, first in line to be improved. And I think that’s what we’re seeing with AI.

BS: So on the plus side, you have AI potentially doing the dishes for you while you’re writing poetry or learning to play the piano. What are some of the pitfalls you’re seeing with regard to AI and translation?

SM: I think there are a few, and I think it depends on where AI is used in the workflow, Bill. The very act of translation itself is a very, very common use of AI now, but there are also what I’m going to call translation-adjacent tasks, like we’ve mentioned with the entire workflow. So the answer would depend on that. But I think one of the biggest pitfalls of AI, and it was the same in 2009 when I joined the industry and friends of mine had this new thing in their pocket called Google Translate, is that it’s not always right. It’s not always accurate.
And even though the technology has come on leaps and bounds since then, and you had neural MT before large language models, it still isn’t always accurate. And as you mentioned before, it almost always sounds smooth and fluid, like it’s very polished, and it

  8. 10/06/2025

    From classrooms to clicks: the future of training content

AI, self-paced courses, and shifting demand for instructor-led classes: what’s next for the future of training content? In this podcast, Sarah O’Keefe and Kevin Siegel unpack the challenges, the opportunities, and what it takes to adapt.

“There’s probably a training company out there that’d be happy to teach me how to use WordPress. I didn’t have the time, I didn’t have the resources, nothing. So I just did it on my own. That’s one example of how you can use AI to replace some training. And when I don’t know how to do something these days, I go right to YouTube and look for a video to teach me how to do it. That said, there are some industries where you can’t get away with that. Healthcare is an example; you’re not going to learn how to do brain surgery that someone could rely on with AI or through a YouTube video.” — Kevin Siegel

Related links:
Is live, instructor-led training dying? (Kevin’s LinkedIn post)
AI in the content lifecycle (white paper)
Overview of structured learning content
IconLogic

LinkedIn:
Kevin Siegel
Sarah O’Keefe

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

SO: Hi, everyone, I’m Sarah O’Keefe. I’m here today with Kevin Siegel. Hey, Kevin.

KS: Hey, Sarah. Great to be here. Thanks for having me.

SO: Yeah, it’s great to see you.
Kevin and I, for those of you who don’t know, go way back and have some epic stories about a conference in India that we went to together, where we had some adventures in shopping and haggling and bartering in the middle of downtown Bangalore, as I recall.

KS: I can only tell you that if you want to go shopping in Bangalore, take Sarah. She’s far better at negotiating than I am. I’m absolutely horrible at it.

SO: And my advice is to take Alyssa Fox, who was the one really doing all the bartering.

KS: Really good. Yes, yes.

SO: So anyway, we are here today to talk about challenges in instructor-led training, and this came out of a LinkedIn post that Kevin put up a little while ago, which we’ll include in the show notes. So Kevin, tell us a little bit about yourself and IconLogic, your company, and what you do over there.

KS: At IconLogic, we’ve always considered ourselves a three-headed dragon, a three-headed beast: we do computer training, software training, so vendor-specific; we do e-learning development; and I write books for a living as well. If you go to Amazon, you’ll find me well-represented there. Actually, I was one of the original micro-publishers on this new platform called Amazon, with my very first book posted there called “Aldus PageMaker: The Essentials.” Did I date myself with that reference? That led to a book on QuarkXPress, which led to Microsoft Office books. But my bread-and-butter books on Amazon even today are books on Adobe Captivate, Articulate Storyline, and TechSmith Camtasia, and I still keep those updated. So: publishing, training, and development. And the post you’re talking about, which got a lot of feedback, and I really loved that, was about training, and specifically what I see as the demise of the training portion of our business. It’s pretty terrifying. I thought it was just us, but I spoke with other organizations similar to mine in training, and we’re not talking about a small fall-off in training.
A 15 or 20% fall-off could be manageable. We’re talking a 90% fall-off in training, which led me to think originally, “Is it me?” Because I hadn’t talked to the other training companies. “Is it us? I mean, we’re dinosaurs at this point. Is it the consumer? Is it the industry?” But then I talked to a bunch of companies similar to mine, and they’re all seeing the same thing: 90% down. And just as an example of how horrifying that is, for some of our classes we’d expect a decent-sized class of 10, or a large class of 15 to 18. Those were the glory days. Now we’re getting twos and threes, if anyone signs up at all. And what I saw as the demise of training applies to both training companies and trainers: if you’re a training company and you’re hiring a trainer, one or two people in the room isn’t going to pay the bills. You’ve got to keep the lights on, and with your overhead running 50 or 60%, you know this as a business person, you’ve got to have five or six people minimum to pay those bills and pay your trainer any kind of rate.

SO: So we’re talking specifically about live instructor-led, in-person or online?

KS: Both, but we went more virtual long before the pandemic. We’ve been teaching more virtual than on-site for 30 years. Well, not virtual for 30 years; virtual wasn’t really viable until about 20 years ago, so we’ve been teaching virtual for 20 years. The pandemic made it all the more important. But while you would think that training would improve with the pandemic, it actually got even worse, and it never recovered. The pandemic was the genesis of that downward spiral, and AI has hastened the demise. But this is instructor-led training in both forms, virtual and on-site. I think it’s even worse for on-site.

SO: So let’s start with the pandemic. You’re already doing virtual classes, then along come COVID and lockdowns and everything goes virtual. You would think you’d be well-positioned for that, that you’d be good to go. What happened with training when the pandemic first hit?
KS: When the pandemic first hit, people panicked, went home, and just hugged their families. They weren’t getting trained on anything. So it wasn’t a question of whether we were well-positioned to offer training; nobody wanted training, period. And I think if you polled all training companies… well, there are certain markets where you need training no matter what. Healthcare is an example; they need training. Security needed training. But for the day-to-day operations of a business, people went home and they didn’t work for a long time. They were just like, “The world is ending.” And then, oh, the world didn’t end. So now they’ve got to go back to work, but they didn’t go back to work for a long time. Eventually people got back to work. Now, are you back to work on-site, or are you at home? That’s a whole other thing to think about. But just from a training perspective, when panic sets in, when the economy goes bad, training is one of the first things you get rid of. Go teach yourself. And the teach-yourself part is what has led to the further demise of training, because you realize, I can teach myself on YouTube. At least I think I can. And when you start teaching yourself on your own and you think you can, that training becomes good enough. So if you said, “Let’s focus on the pandemic,” that’s what started the downward spiral. But we even saw the downward spiral before the pandemic, and it was the vendors that started to offer the training we were offering.

SO: So instead of a third-party, mostly independent organization offering training on a specific software application, the vendors said, “We’re going to offer official training.”

KS: Correct. And it started with some of these vendors rolling out their training at conferences. And I attended these conferences as a speaker.
I won’t name the software, I won’t name the vendor, but I would go there and say, “Well, what’s this certificate thing you’re running?” It was a certificate of participation. But as I saw people walking around, they would say, “I’m now certified.” And I’d go, “You’re not certified after a three-hour program. You now have some knowledge.” They thought they were certified experts, but they wouldn’t find out they weren’t qualified until they were told to do a job. Then they’d discover, “I’m not qualified to do this job.” But that certificate course, which was just a couple of hours from this particular vendor, morphed into a full-day certificate that they were charging a lot of money for, which morphed into a multi-day thing, which has now destroyed any opportunity for training that we have. And that’s when I started noticing a downward spiral. If you were tracking it like your finances, it would be your investments going down, down, down. It’s like a plane, nose down.

SO: And we’ve seen something similar. I mean, back in the day, and I do actually… For those of you listening at home who are not of this generation, PageMaker was sort of the grandparent of InDesign. I’m also familiar with PageMaker, and I think my first work in computers was in that space. So now we’ve all dated ourselves. But back in the day, we did a decent amount of in-person training. We had a training classroom in one of our offices at one point. Now, we were never as focused on it as you are and were, but we did a decent business of public-facing, scheduled two-day or three-day “come to our office and we’ll train you on the things” classes. And then over time, that dropped off and we got away from doing training because it was so difficult. And this is longer ago than you’re talking about. So the pattern you’re describing, where instructor-led, in-person classroom training with everybody in the same room got disrupted, happened a while back.
We made a decent l

