If you've been reading this newsletter for a while, you'll have noticed I tend to focus on the big-picture stuff: organizational change, building design culture, getting stakeholder buy-in. This week I'm doing something different and getting into the weeds on generative imagery, a tool that's become part of my daily workflow. I'm genuinely curious whether you prefer the strategic content, the practical how-to pieces, or a mix of both. Hit reply and let me know.

Generative imagery is quickly becoming an essential tool in the modern designer's toolkit. Whether you're a UI designer crafting interfaces, a UX designer building prototypes, or a marketer creating campaign visuals, the ability to generate exactly the image you need (rather than settling for whatever stock libraries happen to have) is genuinely useful.

## The Ethical Dimension

There's an ethical dimension here that makes me uncomfortable. Using generative imagery does, in theory, take work away from illustrators and photographers. I don't love that. But I also recognize that this is a pattern we've seen throughout history. Technology has consistently made certain professions more niche rather than making them disappear entirely. Blacksmiths still exist. Vinyl records still sell. And I suspect custom photography and illustration will follow the same path, becoming more specialized rather than vanishing completely.

Besides, if we're being realistic, most of us weren't commissioning custom photography for every project anyway. We were pulling images from stock libraries, and I can't say I'll miss spending 45 minutes searching for a photo that almost works but has the person looking in the wrong direction.

So with that acknowledged, let's get into the practical side of things.

## When to Avoid Generative Imagery

Before diving into how to use these tools well, it's worth noting when you shouldn't use them at all. Generative imagery has no place when you need to represent real people or real events. If you're showing your actual team, documenting a real conference, or depicting genuine customer stories, you need real photography. Anything else would be misleading, and your audience will likely spot it anyway.

## Why It Beats Stock Libraries

For everything else, though, generative imagery offers some serious advantages over traditional stock. You can get exactly the pose you want, in exactly the style you need, matching your specific color palette. No more "this photo would be perfect if only the person was looking left instead of right" compromises.

This matters more than you might think. Research suggests that users form initial impressions of a website in roughly 50 milliseconds. That's not enough time to read anything, so those snap judgments are based almost entirely on imagery, layout, color, and typography. The right image doesn't just look nice; it shapes how users feel about your entire site before they've processed a single word.

Imagery also gives you a powerful tool for directing attention. A well-composed image can guide users toward your key content or calls to action in ways that feel natural rather than pushy.

## Copyright and Commercial Use

Before you start generating images for client work, you need to understand the legal landscape. And yes, it's a bit murky.

The short version: most major AI image generators allow commercial use of the images you create, but the terms vary. Midjourney allows commercial use for paid subscribers.
Adobe Firefly positions itself as "commercially safe" because it was trained on licensed content and Adobe Stock images. Google's Nano Banana Pro (accessible through Gemini) also permits commercial use.

The murkier issue is around training data. Several ongoing lawsuits are challenging whether AI companies had the right to train their models on copyrighted images in the first place. These cases haven't been resolved yet, and depending on how they play out, the landscape could shift.

For now, my practical advice is this: use reputable tools with clear commercial terms, avoid generating images that deliberately mimic a specific artist's recognizable style, and keep an eye on how the legal situation develops. For most standard commercial work (website imagery, marketing materials, UI mockups), you should be fine.

## Choosing the Right Tool: Style vs. Instructions

When selecting which AI model to use, you're essentially balancing two considerations: stylistic output and instructional accuracy.

### Stylistic Output

Every model has its own aesthetic fingerprint. No matter how specific your prompts are, Midjourney images have a certain look, and Nano Banana images have a different one. You need to find a model whose default aesthetic works for your project.

### Instructional Accuracy

The other consideration is how well the model follows detailed instructions. If you need a specific composition (person on the left, looking right, holding a coffee cup, with a window behind them), some models handle that brilliantly while others will give you something that vaguely resembles your request but takes creative liberties you didn't ask for.

### Use Multiple Models

The frustrating reality is that you rarely get both. The models with the most pleasing aesthetics tend to be worse at following precise instructions, and vice versa. This is why I often move between multiple models in a single workflow. I'll generate the initial image in Midjourney to get an aesthetic I like, then bring that image into Nano Banana Pro as a reference and use its stronger instruction-following capabilities to refine specific details. It's an extra step, but it gets you the best of both worlds.

## Tool Recommendations

There are plenty of tools out there, but here are three I'd recommend depending on your needs and experience level.

### Midjourney

Midjourney produces what I consider the most aesthetically pleasing results, particularly for images of people and anything photographic. It's what I use on my own website. The downside is that Midjourney is terrible at following detailed instructions. Ask for something specific and you'll get something beautiful that bears only a passing resemblance to what you requested. It's also only available through its own website, so you can't access it through multi-model platforms.

### Nano Banana Pro

Nano Banana Pro (Google's model, accessible through Gemini) is the opposite of Midjourney. It's remarkably good at following detailed prompts. You can specify gaze direction, facial expressions, items held, and positioning, and it will actually deliver something close to what you asked for. It can also produce transparent PNGs, which is genuinely useful for UI work where you need to overlay images on colored backgrounds. The aesthetic isn't quite as refined as Midjourney's, but for many projects that trade-off is worth it.
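If you want to sanity-check that a transparent cutout actually works over your brand palette before it goes anywhere near a layout, compositing it takes a few lines of Python. This is a minimal sketch using Pillow; the file name, canvas size, position, and color below are placeholder assumptions, not anything from a specific project.

```python
# Minimal overlay check using Pillow (pip install pillow).
# "hero-cutout.png", the canvas size, and the brand color are placeholders.
from PIL import Image

# Flat brand-colored background at the size of the UI panel
background = Image.new("RGBA", (1200, 800), "#1A2B4C")

# A transparent PNG generated by your image model
cutout = Image.open("hero-cutout.png").convert("RGBA")

# Paste the cutout onto a transparent layer at the desired position,
# then composite; alpha_composite respects the PNG's transparency
layer = Image.new("RGBA", background.size, (0, 0, 0, 0))
layer.paste(cutout, (600, 100), cutout)

result = Image.alpha_composite(background, layer)
result.save("hero-composited.png")
```

Nothing fancy, but it's a quick way to preview a cutout against several background colors without opening a design tool.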
### Krea

Krea is where I'd recommend starting if you're new to all this. It gives you access to multiple models, letting you experiment and find which one works best for your particular needs, without committing to a single tool's subscription. Unfortunately, Krea doesn't include Midjourney (since Midjourney doesn't make its model available to third parties), but it's still a great way to explore the landscape.

## Prompting Strategies

How you write your prompts depends largely on which model you're using.

For instruction-following models like Nano Banana Pro, you can be quite detailed. Describe the composition, the subject's position, their expression, what they're holding, the lighting, the background. The model will make a genuine attempt to deliver all of it. You won't get perfection every time, but you'll get something workable more often than not. (There's a worked example of this kind of detailed prompt in the P.S. at the end.)

For aesthetic-focused models like Midjourney, simpler prompts often work better. Focus on the overall mood, style, and subject matter rather than precise positioning. Fighting against the model's creative tendencies usually produces worse results than working with them.

## Reference Imagery for Consistency

One of the most useful techniques, particularly with models that struggle to follow detailed instructions, is using reference imagery. Most tools allow you to upload an "image prompt": an existing image that contains elements you want. The model will attempt to recreate those elements in whatever style you've specified, incorporating any changes you've requested. It's a way of showing the model what you want rather than trying to describe it in words.

Even more valuable is the style reference feature. If you need to produce multiple images that all share a consistent visual identity (which you almost certainly do for any real project), create one image that nails the style you're after. Then use that image as a style reference for every subsequent generation. In Midjourney, for example, that means appending `--sref` plus the URL of your style image to the prompt. This keeps your visuals cohesive rather than having each image feel like it came from a different designer. I use a style reference image to keep my website illustrations consistent.

## Getting Started

If you haven't experimented with generative imagery yet, now is a good time to start. Sign up for Krea, generate a few images for a project you're working on, and compare them to what you would have found in a stock library. You'll probably find that some results are worse, some are surprisingly good, and you'll start developing an intuition for what these tools can and can't do.

That intuition is valuable. Generative imagery isn't going away, and the designers who learn to use it well will have a genuine advantage over those who don't. Not because AI replaces skill, but because it gives skilled designers another tool to work with.
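P.S. If you'd rather experiment from code than a web interface, Google's image models are also reachable through the Gemini API. Here's a minimal sketch using the google-genai Python SDK; the model identifier is an assumption (Google renames these as versions ship, so check the current docs), and the prompt is just a template for the kind of detailed, compositional instruction these models reward.

```python
# Minimal sketch using the google-genai SDK (pip install google-genai).
# Assumes a GEMINI_API_KEY environment variable; the model id below is
# illustrative -- check Google's docs for the current image model name.
from google import genai

client = genai.Client()

# The kind of detailed, compositional prompt that
# instruction-following models handle well
prompt = (
    "Photorealistic portrait of a woman on the left third of the frame, "
    "looking to the right, holding a coffee cup, soft window light behind "
    "her, warm neutral palette, shallow depth of field"
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed id for a Nano Banana-class model
    contents=prompt,
)

# Image bytes come back as inline data alongside any text parts
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("portrait.png", "wb") as f:
            f.write(part.inline_data.data)
```

Run it a few times with small prompt variations and you'll see firsthand how much of the composition actually survives, which is exactly the intuition I'm talking about.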