# Building the Canvas Above AI Models

Ethan Proia is the Founding Head of Design at FLORA, a creative AI platform unifying 50+ generative models across text, image, video, and audio into one coherent workspace built for flow, control, and creative agency. Before FLORA, Ethan spent years exploring spatial computing, mixed reality, and human-computer interaction, building immersive experiences and investigating how humans interact with increasingly intelligent systems. His background spans installation art, interactive technology, and interface design for emerging paradigms.

FLORA’s founding manifesto makes a strong claim: current AI creative tools are made by non-creatives for other non-creatives to feel creative. Ethan is designing something different: a tool that honors the history of creative software while building scaffolding on top of AI models, not just aggregating them. We brought him in to talk about FLORA’s design philosophy, the specific interface decisions that make 50+ models feel coherent instead of chaotic, and where creative tools are heading as generative AI becomes the dominant material designers work with.

## At a glance

* FLORA’s core abstraction is modality (text, image, video, audio) rather than individual models, because modalities never change while models constantly evolve.
* The node-based canvas makes creative workflows visual and repeatable, turning the process itself into the deliverable, not just the output.
* Every new primitive on the canvas has to fight for its life, because complexity kills the beauty of node-based thinking for newcomers.
* Context and intent are the twin engines of UX in the AI era; every design decision comes down to answering those two questions.
* Ethan uses ChatGPT to dump months of FLORA context, and Cursor with the Figma MCP to prototype directly in code, moving away from Figma as the source of truth.
* FLORA isn’t just creative software; it’s positioning itself as a creative operating system with no allegiance to any form factor or surface area.
* The role of designers is changing fundamentally: expect more work directly in code, higher-fidelity prototypes that don’t take forever, and a shared language with engineers.
* When hiring designers, Ethan looks for proficiency across multiple creative tools, experience with current AI creative tools, and strong opinions about what works and what doesn’t.

## Topics

### Creative tools must respect established workflows while innovating where it matters

Ethan says a tool made by creatives for creatives needs to fundamentally acknowledge the history it’s coming from, and it needs to know when to abide by those rules and when to break them. That philosophy is core to how he thinks through FLORA and how to expand it: always asking how they can pay homage to and develop what’s already been done, improve it where it needs to be improved, and augment it where it needs to be augmented, specifically with new technologies. It needs to actually work, be scalable, be enjoyable, and be something you can build a personal relationship with.

What strikes Ethan when talking to creatives is how personal people’s relationships are with their tools. That is interesting because tools are made to be adopted by lots of different people, yet there is such an individualistic experience in using them. The way one person uses Figma is different from how another uses it, and that extends all the way down. So FLORA needs to be universally approachable, understandable, and adoptable, but also something you can build a relationship with.

### Models are building blocks, not the intelligence itself, and the abstraction should reflect that

Ethan thinks we shouldn’t regard models as intelligent in their own right, because they’re not: they’re basically input-output machines that are very good and novel in how they do that.
He believes we should be using the models themselves as the tools we’re building a foundation on top of, where it’s less about the individual model and more about the structures and scaffolding above them. Right now everyone is model-focused: new models come out all the time, and people post about which one can do what with better prompt adherence. But Ethan thinks the models are more fundamental than that. We should actually be building structures on top of the models, and that’s core to the philosophy of FLORA and this next generation of creative tools and how they’re integrating and building on these new abilities.

Professional users care about models only insofar as they let them get to the creative output they want. If FLORA could guarantee the same output without mentioning a model at all, Ethan bets the vast majority of creative professionals wouldn’t care. The reason model names and the hype around new model drops still matter is that a model name is much more explicit about the kind of control it enables. But the point is always the output: what am I trying to do, what will it look like, what will it feel like, how do I get the result I want?

### Modality became FLORA’s abstraction because it’s the only thing that never changes

When FLORA was figuring out the common language and substrate they’re building on top of, Ethan explains, they realized text-to-image models were easy to categorize because they all take text input and output an image; you could add complexity like aspect ratio or resolution, but it’s still manageable. Then multimodal models threw the whole paradigm out the window: a model that can accept both text and image, but needs both or some permutation of the inputs, makes that abstraction a lot messier. The conclusion they came to, and what Ethan thinks has contributed to FLORA’s success so far, is that you need to pick an abstraction that has nothing to do with the models.
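One way to read that design principle in code is a registry that routes requests by modality signature alone, so the workspace never names a model and swapping models never changes the interface. This is a minimal sketch under stated assumptions, not FLORA’s implementation; all names here (`ModelRegistry`, `text-to-image-v1`, etc.) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    TEXT = "text"
    IMAGE = "image"
    VIDEO = "video"
    AUDIO = "audio"


@dataclass
class ModelEntry:
    """One concrete model registered behind a modality signature."""
    name: str
    inputs: frozenset   # set of Modality values the model accepts
    output: "Modality"


class ModelRegistry:
    """Routes (inputs -> output) requests to whichever model currently fits.

    The canvas only ever asks in terms of modalities; adding or replacing
    models changes nothing about the abstraction it is built on.
    """

    def __init__(self):
        self._models = []

    def register(self, name, inputs, output):
        self._models.append(ModelEntry(name, frozenset(inputs), output))

    def resolve(self, inputs, output):
        """Return the first registered model that covers the request."""
        need = frozenset(inputs)
        for m in self._models:
            if need <= m.inputs and m.output == output:
                return m
        raise LookupError(
            f"no model for {sorted(i.value for i in need)} -> {output.value}"
        )


registry = ModelRegistry()
registry.register("text-to-image-v1", {Modality.TEXT}, Modality.IMAGE)
registry.register("multimodal-v2", {Modality.TEXT, Modality.IMAGE}, Modality.IMAGE)

# A plain text -> image request routes to the dedicated model...
print(registry.resolve({Modality.TEXT}, Modality.IMAGE).name)
# ...while a text + image request routes to the multimodal one,
# without the user ever choosing a model by name.
print(registry.resolve({Modality.TEXT, Modality.IMAGE}, Modality.IMAGE).name)
```

The point of the sketch is that the messy multimodal case from the paragraph above becomes just another signature behind the same lookup, which is what makes the modality abstraction stable as models churn.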
The abstraction they settled on was the modality. Ethan is obviously biased, but he thinks that is the correct abstraction to build on top of because it is never going to change: we will always have text, image, video, audio, and 3D models. Those atomic units are the building blocks, which is why they call them blocks on the canvas; then you can build on top of that. They’re always stress-testing that foundation to make sure it’s compatible with new models as they come out, and so far it has held up. People respond well to it because it’s more intuitive than nonsense model names for people just coming into this.

### Node-based canvases encourage divergence and convergence while making the creative process visible

Ethan loves node-based canvases because they’re inherently spatial, and we are spatial creatures who think in dimensionality and relativity. That’s why infinite canvases have become so popular in design tools: you can place things and organize them. A node-based canvas has all those benefits, but the fact that you’re actually connecting your train of thought together makes it very easy to follow. It encourages the theme of the double diamond, divergence and convergence, which is very visual on a node-based canvas because you’re quite literally connecting your thoughts together.

What FLORA specifically enables, and what they’re excited about, is that FLORA lets you codify and visualize the process, so the creative process becomes the material you’re working with, the deliverable, not just the output. You could generate a poster of whatever, but what if FLORA could give you the creative process and make it repeatable and scalable? That’s the new deliverable: the process.
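The "process as deliverable" idea can be sketched as a small dependency graph: each block on the canvas knows its upstream blocks, and re-running the graph replays the entire creative process. This is a hypothetical sketch (the `Block` class and the stub lambdas stand in for real model calls), not FLORA’s actual architecture.

```python
from enum import Enum


class Modality(Enum):
    TEXT = "text"
    IMAGE = "image"
    VIDEO = "video"


class Block:
    """One node on the canvas: consumes upstream outputs, produces one modality."""

    def __init__(self, name, output, fn, inputs=()):
        self.name = name
        self.output = output        # modality this block produces
        self.fn = fn                # placeholder for a model call
        self.inputs = list(inputs)  # upstream blocks ("noodles" point here)

    def run(self, cache):
        """Evaluate this block, recursively evaluating upstream blocks first."""
        if self.name not in cache:
            upstream = [b.run(cache) for b in self.inputs]
            cache[self.name] = self.fn(*upstream)
        return cache[self.name]


# A tiny workflow: text -> image -> video, with stub functions standing in
# for generative models. The graph itself, not just its final output,
# is the repeatable artifact.
prompt = Block("prompt", Modality.TEXT, lambda: "a poster of a garden")
image = Block("image", Modality.IMAGE, lambda t: f"image({t})", [prompt])
video = Block("video", Modality.VIDEO, lambda i: f"video({i})", [image])

print(video.run({}))  # video(image(a poster of a garden))
```

Swapping the prompt block’s text and calling `run` again reproduces the whole pipeline with a new input, which is what makes the process, rather than any single output, scalable.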
Ethan was convinced this paradigm has persisted for a reason. It has stayed kind of niche, but there has to be an explanation for its durability, and it is all the benefits he described. The reality still stands, though, that it is confusing for people used to traditional interfaces. By simplifying, reducing, and abstracting away the other complexity, he thinks they can let the beauty of that way of making shine through.

### Every new primitive on the canvas has to fight for its life to prevent overwhelming complexity

Ethan is always making sure every new primitive they consider introducing to the canvas has to fight for its life. He hates what he’s seen in past node-based tools, and he says this from a place of love because he’s done a lot of work in TouchDesigner, Max/MSP, and Pure Data: whenever you need a place to put something, the answer is just to make another node and put it on the canvas. That’s fine for people who understand and live in the universe of the software, but for a new person coming in it’s chaos. You’re telling them there are 400 nodes that all do different things, and they have to know what each one does, how to connect them, and in what combinations.

There has been really beautiful emergent community from that in tools like Blender’s geometry nodes, Unity’s Shader Graph, and Unreal Engine’s Blueprints: people not knowing what’s going on necessitates coming together, making, sharing, and knowledge-sharing, which is lovely. But Ethan thinks that community can exist without so many primitives on the canvas. By primitives he means the basic atomic units you’re stitching together to build something bigger, in FLORA’s case a creative process or workflow.
### Nodes create causal relationships, and “noodles” are where the context-sharing actually happens

The nodes inherently have a causal relationship and a chronology to them, Ethan explains, which is different from laying things out in Figma, where you might mentally paste horizontally for one iteration and vertically for something else. The nodes imply “this thing, and then this thing”: you start with text, then get an image, then the image turns into a video, and that video turns into something else. The question becomes what is the thing that’s actually