I've spent a lot of years arguing that most organizations have the wrong mental model of what a UX team is for.

In the vast majority of organizations, UX is dramatically underinvested in. You have one UX person, or at most a small team, supporting an organization with dozens of developers, product managers, and business analysts. Or you have a small digital team of mixed disciplines and generalists, expected to raise the quality of every digital touchpoint across an organization of several thousand people.

In that environment, expecting UX to own and shape the entire user experience is not a strategy. It is wishful thinking dressed up as one.

The only approach that actually makes sense is democratization. Instead of trying to do everything yourselves, your job is to spread the capability: set the standards, train people, and give everyone who touches digital the knowledge and tools to apply UX best practice on their own.

I've written about this for years, and most UX professionals I talk to agree with the principle. The problem has always been the execution.

## The playbook was the best answer we had

For the past decade or so, the most sensible response to this challenge has been the digital playbook. A playbook, in this context, is a collection of policies, principles, standard operating procedures, and training material that documents how the organization should approach digital work.

Done well, it does several things at once: it educates people who don't have a UX background, it standardizes how work gets done, and it gives the UX or digital team something to point at when a stakeholder wants to skip testing or cram twelve things onto a homepage.

The UK Government Digital Service manual is probably the best public example of this. Comprehensive, well-structured, and genuinely useful. It also took a significant amount of work to produce, and presumably even more work to get people to actually use.

That last part is the problem with most playbooks. They ask a lot of the people you want to reach. If a product manager wants to run a quick survey to inform a decision, they now need to find the right section of the playbook, absorb methodology they've never thought about before, learn to apply it to their specific situation, and avoid the dozen ways this kind of thing typically goes wrong. That is a reasonable request if surveys are their job. It is a significant ask if they have three other priorities and a deadline on Friday.

The playbook shifts the burden of UX knowledge from the UX team onto everyone else. In theory, fine. In practice, people are busy, and busy people take shortcuts. I say this having spent years advocating for playbooks, so make of that what you will.

## What AI changes about this picture

I've been building out a library of AI skills for my own consulting practice over the past year or so, and somewhere along the way I realized they are doing the same job as a playbook, just in a radically different form.

An AI skill, if you haven't come across the term, is a reusable standard operating procedure that an AI can follow on demand. You write it once, document the process in enough detail that an AI can apply it reliably, and from that point on anyone can use it without needing to understand the underlying methodology.

This is what makes them interesting at an organizational level. A well-designed AI skills library doesn't ask your product manager to read the playbook before running a survey. It lets them say, "I need to design a survey to find out why users are dropping off at checkout," and have an AI walk them through the process, applying your organization's standards as it goes.

The best practice is embedded in the skill. The person using it doesn't need to have absorbed it first. That is a qualitatively different proposition from anything a static playbook can offer.
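To make that concrete, here is a minimal sketch of what a skill can look like under the hood, assuming you are calling a model through the Anthropic Python SDK. The skill text, model name, and example request are all illustrative rather than any official format; the point is simply that the methodology is written down once and sent along with every request.

```python
# A minimal, illustrative sketch of an AI skill: the methodology is written down
# once and sent as a system prompt with every request, so the person asking never
# has to learn it first. Skill text, model name, and request are placeholders.
import anthropic

# The "skill": your organization's survey-design procedure, written in enough
# detail that the model can apply it step by step.
SURVEY_DESIGN_SKILL = """
You help non-researchers at our organization design surveys to our standards.
Always work through these steps:
1. Clarify the decision the survey is meant to inform before drafting anything.
2. Draft neutral, single-topic questions; flag leading or double-barrelled wording.
3. Check structure: question order, response scales, and a realistic sample size.
4. Return the draft survey plus a list of risks the requester should review.
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reply = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model you have access to
    max_tokens=1500,
    system=SURVEY_DESIGN_SKILL,
    messages=[{
        "role": "user",
        "content": "I need to design a survey to find out why users are dropping off at checkout.",
    }],
)
print(reply.content[0].text)
```

In practice you would wrap something like this in a chat interface or an internal tool, so the person asking never sees the code at all.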
## What an organizational AI skills library actually looks like

The specific skills worth building will vary depending on the organization. But for a UX or digital team trying to extend their influence, the candidates tend to cluster around the tasks that non-specialists most often get wrong.

Survey design is an obvious one. Writing questions that don't inadvertently bias the answers is harder than it looks, and most people who aren't researchers have no idea how their phrasing is leading respondents astray. A skill that guides someone through question design, flags leading language, and checks for common structural problems would save a lot of quietly useless survey data from being collected.

Prototype testing is another. The basics of a usability test (what to observe, what to ask, how to avoid putting words in a participant's mouth) are genuinely learnable. The problem is that someone needs to learn them before running the test, not during it.

You could also build skills for:

- writing user stories that capture real intent rather than implementation detail;
- conducting a heuristic review of an interface;
- analyzing the results of an A/B test without drawing confident conclusions from a sample size of 40;
- assessing whether a proposed feature maps to an actual user need or is just something that sounded good in a meeting.

Each of these represents expertise that currently lives in the heads of a few specialists and gets applied only when those specialists have capacity and are directly involved. An AI skills library changes that dynamic. The expertise is no longer gated by headcount or availability. It is available whenever someone in the organization needs it, in a form they can actually use.

## The compounding effect

Building a skills library at an organizational level is different from building one for yourself. You're not just creating tools that save you time. You're creating tools that let anyone in the organization apply a consistent standard without needing to be a specialist first. The UX team's influence is no longer bounded by its headcount.

And these are still relatively early days. Most organizations haven't started thinking about AI skills at that level. The teams that build these libraries now will have a head start that gets harder to close over time.

----

Running an agency or working freelance? This is worth thinking about for your own practice too. A well-built skills library makes your service more consistent, helps you bring junior team members up to speed faster, and gives clients a reason to see you as something more than a pair of hands. If this is the kind of thing you'd like to work through alongside other freelancers and agency owners, my Agency Academy is probably the right place. It's a group coaching community where we get into exactly these kinds of challenges. £28 a month, cancel whenever.

----

## Where to start

If you're thinking about this for your own organization, the most practical starting point is to identify the five tasks that non-specialists most often get wrong, and that cause the most friction or quality problems when they do.

For each one, document how it should actually be done, in as much detail as you can. What does the process involve? What are the most common errors? What does good output look like? Then work with an AI to turn that documentation into a skill, test it against real examples, and refine it based on what it gets wrong.
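That testing step is easier to picture with an example. Below is a rough sketch of a harness that runs a drafted skill over a handful of inputs with known problems and reports which issues it missed. It assumes the Anthropic Python SDK again; the skill file path, test cases, model name, and pass check are all placeholders you would replace with your own material.

```python
# A rough harness for testing a drafted skill against real examples: run inputs
# with known problems through the skill and see which issues it fails to flag.
# The skill file path, test cases, model name, and pass check are placeholders.
import anthropic

client = anthropic.Anthropic()

# Hypothetical examples: survey questions with a flaw the skill should catch.
TEST_CASES = [
    {"question": "Don't you agree the new checkout is easier to use?",
     "expected_flag": "leading"},
    {"question": "How useful and easy to use is the dashboard?",
     "expected_flag": "double-barrelled"},
]


def review_question(skill_text: str, question: str) -> str:
    """Apply the skill (as a system prompt) to one question and return the critique."""
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder
        max_tokens=1000,
        system=skill_text,
        messages=[{"role": "user", "content": f"Review this survey question: {question}"}],
    )
    return reply.content[0].text


if __name__ == "__main__":
    skill_text = open("skills/survey-design.md").read()  # hypothetical skill file
    for case in TEST_CASES:
        critique = review_question(skill_text, case["question"])
        # Crude check: did the critique mention the flaw we expected it to catch?
        caught = case["expected_flag"] in critique.lower()
        print(f"{'PASS' if caught else 'MISS'} ({case['expected_flag']}): {case['question']}")
```

Every miss tells you which part of the documentation to sharpen before the next pass.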
The goal isn't perfection on the first pass. The goal is something good enough to use that improves each time you use it. That is a manageable starting point, and one that tends to produce visible results quickly enough that people want to keep going.

If you're working through this and would like a thinking partner, or if your organization is seriously considering building out an AI skills library and wants some help thinking through what that looks like, I'd genuinely enjoy that conversation. Book some time here and let's explore it.