Double Diamond

Chatting product, design, and the future - live in NYC. doublediamondnyc.substack.com

Episodes

  1. John Allen - CEO & Co-Founder of Layo

    5 MAR

    John Allen - CEO & Co-Founder of Layo

    Building the platform layer for AI-native apps

    People don’t open apps the way they used to. Increasingly, they tell an AI what they need and expect the right tool to just appear. John Allen, Co-Founder and CEO of Layo, is building the platform that makes that possible - turning existing products into AI-native experiences that live inside ChatGPT and other AI systems. From OpenAI’s app announcement in October to paying enterprise customers in early 2026, John has moved with urgency matched to his conviction: the next operating system is a conversation. We sat down to talk about how he’s building the infrastructure underneath it.

    At a glance

    * John sees ChatGPT apps as the beginning of a fundamental platform shift, comparable to the early iPhone App Store but happening at internet speed.
    * His thesis centers on the shift from screen-driven workflows to intent-driven conversations, where users express what they want naturally rather than navigating complex interfaces.
    * Traditional web and mobile development tools can’t handle non-deterministic outputs or intent-based interaction patterns, creating a new category of infrastructure needs.
    * Layo provides the visual builders, analytics, and configuration tools designed specifically for AI-native apps, helping enterprises ship into the ChatGPT ecosystem in days rather than months.
    * The deeper transformation John envisions is software that adapts in real time to individual user preferences, enabled by AI platforms that maintain context across all applications.
    * Enterprise customers consistently ask the same two questions: what should we build, and when? His answer is “the sooner the better,” given dramatically lower development costs and first-mover learning advantages.
    * The analytics shift enables something unprecedented in software history: full insight into user intent, measuring why people want to use your product rather than just which buttons they clicked.
    * This leads toward a future of millions of interface variants rather than today’s A/B testing approach, where every user gets software built for exactly how they want to use it.

    Defining AI-native: intelligence and action as the foundation

    John draws a clear distinction between companies that “bolted on AI into their existing software” and those building AI-native products. “I think the real definition of AI-native are apps that actually have true intelligence as the baseline foundation. Intelligence and action, right?” He compares it to the data-native era of the 2010s, when companies competed on proprietary data. Now, data is getting commoditized and “all these apps are starting to talk to each other.” The value shifts to becoming “the system of action where people can go and do things,” with global intelligence as the foundation rather than siloed data.

    The operating system shift: from apps you open to systems that respond

    John frames the current moment as comparable to previous platform transitions from desktop to web to mobile. The fundamental change is that “instead of just having the foundation of like we are the system of record, right now you’re like we’re the system of action, we’re embedded into experiences, meeting customers where they already are.” ChatGPT represents the first true AI operating system, where users come to express intent. “People use software to go and do things or to get information,” and AI interfaces can meet users at that moment of intent without forcing them to learn new interaction patterns for every application they use.

    Why the “mobile moment” parallel matters: winning through learning, not timing

    Drawing explicit parallels to the iPhone App Store launch, John believes “the first thousand apps in the app store, they’re going to win the biggest. Not because they were the first thousand, but because they were the first to learn.” The ChatGPT app store launched December 17th, making this moment “like the cutting edge, like really day one.” The advantage comes from accumulating learning cycles about user intent patterns, optimization strategies, and interface design principles. “You never really get any product right on the first iteration,” so the companies that start learning earliest will have the biggest advantage as the ecosystem matures.

    The infrastructure gap: why traditional dev tools fail for conversational interfaces

    John discovered the platform opportunity through direct experience building ChatGPT apps. After taking his team to upstate New York for a week of building, they initially tried creating a “super app” that could access everything. “We just realized like we are abstracting this stuff way too much,” and “OpenAI already is the super connector.” The real insight came when they recognized that despite being technically skilled, “if we’re struggling with it then every company is going to struggle with it.” Traditional development tools assume deterministic outputs and screen-based interactions, making them poorly suited for conversational, non-deterministic AI interfaces that require entirely different architectural thinking.

    Enterprise speed: from months to days with 10x cost reduction

    The economics of AI-native development represent a dramatic shift for enterprise software teams. John explains that “most businesses once they’re established software companies they’re spending hundreds of thousands or millions of dollars on engineering” on their core products. With Layo, “they can go and actually use our product and spin up a ChatGPT app in like days,” with resource constraints “diminished by 10x.” This speed advantage enables rapid experimentation and learning cycles that weren’t economically feasible with traditional development approaches. The low cost of entry means companies can start learning immediately rather than waiting for a perfect strategy.

    The analytics revolution: measuring intent for the first time in software history

    John identifies a fundamental measurement problem that creates unprecedented opportunity for product teams. Traditional analytics tell you “user clicked button X,” but AI apps require understanding “user intended to do this. So your app was called. Why was it called?” This shift enables something entirely new: “this is probably the first time in history where you actually have full insight into why people want to use your product.” The new metrics focus on invocation, follow-ups, success/error rates, depth, latency, and reliability rather than traditional pageviews or clicks. Product teams can finally see the direct correlation between user intent and product performance.

    Customer acquisition through value demonstration: proving before asking

    John uses Clay as an example of how AI-native apps fundamentally change customer acquisition. When someone prompts for sales prospecting help, “Clay, if they’re well optimized, can just come up and say, ‘Great. We not only can serve you, but we actually already did the work for you. Do you like it? Great. Continue on.’” This approach proves value before asking for commitment, similar to how mortgage companies used calculators in 2016-2017. The result is warmer leads and higher conversion rates, because “you’re proving to them that you’re adding value” rather than just telling them you exist. For companies like Clay, this opens access to ChatGPT’s 800 million weekly active users, many of whom may not know the product exists.

    From deterministic to non-deterministic: designing systems instead of flows

    John envisions software development fundamentally changing as outputs become unpredictable. He learned from Perplexity’s head of design that “you don’t really know the output. The only way to actually design for the output is to try to build enough outcomes, potential outcome surfaces that make sense.” This leads to a future where “that whole era of A-to-B testing where your designers and your developers build three different flows and spend six weeks testing which one performed better” gets replaced by “A-to-Z testing. You’re going to have a million variants constantly flowing.” The vision is that “every single user will have software built exactly for how they want to use it” by 2030, enabled by AI that understands individual preferences and can adapt interfaces in real time.

    How product roles evolve: from screen designers to system architects

    The transformation from deterministic to non-deterministic software requires new types of product thinking. John sees “design engineers” emerging as tools merge: “every tool is merging into the same idea of you design and code in the same interface.” But the deeper change is conceptual: “You’re going to be the architect of outcomes and intents” rather than someone who builds specific screens or workflows. The new role requires “systematic thinkers” who can “build a million different components, a million different outputs so that the system can go and interact with your brand and your software in the way that is best suited for the user” rather than forcing everyone through the same predetermined flow.

    ---

    Thanks for reading. Stay in the loop on new episodes and upcoming events by subscribing. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit doublediamondnyc.substack.com
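    The intent-level metrics John describes for the episode above (invocations, follow-ups, success/error rates, depth, latency) can be made concrete with a small aggregation sketch. Everything here is a hypothetical illustration - the event fields, intent names, and sample numbers are made up for the example and are not Layo’s actual analytics schema.

    ```python
    # Hypothetical event log for an AI-native app: instead of counting clicks,
    # each record captures one invocation of the app for a classified intent.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Invocation:
        intent: str          # what the user was trying to do (illustrative label)
        follow_ups: int      # conversational turns after the app was invoked
        succeeded: bool      # did the app satisfy the intent?
        latency_ms: float    # time to first useful response

    def summarize(events: list[Invocation]) -> dict:
        """Aggregate intent-level metrics rather than click-level ones."""
        by_intent: dict[str, list[Invocation]] = {}
        for e in events:
            by_intent.setdefault(e.intent, []).append(e)
        return {
            intent: {
                "invocations": len(group),
                "success_rate": sum(e.succeeded for e in group) / len(group),
                "avg_depth": mean(e.follow_ups for e in group),
                "avg_latency_ms": mean(e.latency_ms for e in group),
            }
            for intent, group in by_intent.items()
        }

    # Made-up sample events for illustration only.
    events = [
        Invocation("find_sales_leads", follow_ups=3, succeeded=True, latency_ms=420.0),
        Invocation("find_sales_leads", follow_ups=1, succeeded=False, latency_ms=980.0),
        Invocation("draft_outreach_email", follow_ups=2, succeeded=True, latency_ms=350.0),
    ]
    report = summarize(events)
    print(report["find_sales_leads"]["success_rate"])  # 0.5
    ```

    The point of the sketch is the grouping key: the report answers “why was the app called, and did it deliver?” per intent, which is the measurement traditional pageview analytics cannot express.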

    57 min
  2. Carl Rivera - Chief Design Officer at Shopify

    19 FEB

    Carl Rivera - Chief Design Officer at Shopify

    At a glance…

    When Carl Rivera became Shopify’s first Chief Design Officer in eight years, he inherited a team of 170+ designers across one of the world’s largest commerce platforms. But rather than optimize the existing system, Rivera chose transformation. His approach combines philosophical clarity about design’s role with hands-on building, from shipping code himself to creating internal tools that replace industry standards like Figma presentations. What makes Rivera unique is how he operates simultaneously as strategic leader and individual contributor. During our evening at the Flatiron studio, he live-demoed internal tools his team built while explaining the organizational philosophy behind eliminating job titles, restructuring teams, and requiring all designers to ship code. This is leadership through example and systematic change.

    * Rivera discovered massive latent talent at Shopify operating at 70% capacity due to permission culture, which he unlocked by simply stating that design matters and giving people permission to do their best work.
    * The company abandoned universal design processes in favor of material-driven approaches, where each project gets the methodology it needs rather than forcing everything through the same workflow.
    * Over 50% of Shopify’s designers now actively ship code to production after completing mandatory engineer onboarding, with Rivera viewing technical fear as the primary barrier holding back the design discipline.
    * Teams have been restructured around “rovers” who move fluidly between projects and agency-style deployments that can pivot quickly to emerging opportunities rather than permanent assignments.
    * Rivera believes taste, aesthetics, and strong points of view become the key hiring differentiators as AI democratizes baseline design skills, favoring candidates who provoke strong reactions over safe consensus picks.
    * The apprenticeship program reflects his commitment to developing junior talent while the industry faces a K-shaped distribution, where seniors accelerate with AI but juniors struggle to find opportunities.
    * Internal tools like Artifact demonstrate his philosophy of building what you need rather than accepting limitations, combining project management with presentation capabilities that seamlessly integrate Figma designs, live prototypes, and traditional slides.

    Permission culture: unlocking latent talent through leadership clarity

    Rivera’s first major insight as CDO was recognizing that Shopify had tremendous design talent operating below capacity. “There was a ton of latent talent in the company in design specifically. There were a lot of people that were really outstanding but were producing at about 70% of their capabilities,” he observed. The root cause was a permission culture in which designers felt constrained from doing their best work. Rivera’s solution was direct leadership: “Just saying that design actually really matters and we’re going to become the best at it. Just saying that, my experience was that a lot of people just immediately became better designers because they felt that someone said that they were allowed to.” This experience shaped his broader philosophy about claiming leadership before having the title. The transformation demonstrates how clear communication about priorities can unlock existing talent without hiring.

    Material-driven process: why Shopify abandoned design frameworks

    Rather than implementing universal design processes, Rivera took the opposite approach, based on his belief that process should follow material. “Companies generally try to come up with the process because it feels like management, and then they take all of their material and squeeze it into that process,” he explained. At Shopify, working on Shopify Payments (processing billions of dollars) requires a completely different methodology than launching experimental merchant tools. Rivera’s approach demands active management to understand each project’s unique needs and adapt accordingly. This philosophy extends to team structure, where “all problems are people problems” but solutions depend on having the right people working together in the right way for that specific challenge. The approach requires more sophisticated management but produces better outcomes by matching process to problem rather than forcing standardization.

    Technical transformation: requiring all designers to ship code

    Rivera identified fear as the primary barrier preventing designers from embracing new capabilities, particularly around shipping code. “Fear combined with startup cost. You open up Cursor or you open up a terminal and they really don’t feel like welcoming interfaces,” he observed. His solution combined support with requirements: all designers now complete engineer onboarding and ship code to production as part of joining Shopify. The results are dramatic: over 50% of 170+ designers actively merge code, with 1,800 pull requests in six months. Rivera emphasizes that the goal is overcoming initial apprehension rather than turning designers into engineers. “Once you get past it, it’s very natural and fluid and very easy.” The transformation has fundamentally changed what design teams can accomplish and how they approach problems. Rivera noted an unexpected discovery: teams often start with code rather than wireframes, flipping traditional process assumptions.

    Structural fluidity: rovers and the end of permanent team assignments

    Rivera restructured Shopify’s design organization around flexibility rather than stability, abandoning the traditional model of permanent team assignments. “I wanted to achieve an organization that was much more fluid with far fewer people attached to single defined problems,” he explained. The new structure includes “rovers” who float between projects and agency-style teams that can be deployed quickly to emerging opportunities. The Molly acquisition exemplified this philosophy, introducing a new way of working where context moves around the organization and fresh perspectives can be applied to entrenched problems. This approach allows leadership to “flex resourcing up and down depending on what is the most important problem that you’re dealing with right now” without constant reorganization overhead. The model requires different management skills, but it enables rapid response to changing priorities and prevents teams from becoming stuck in solution spaces.

    Hiring philosophy: seeking taste and spiky perspectives over consensus

    As AI democratizes baseline design skills, Rivera has shifted hiring criteria toward qualities that remain uniquely human. “We hire for taste. For aesthetics. For a point of view. It’s the difference between utility and affinity. Anyone can generate a good baseline, designers reach for the ceiling.” His approach favors candidates who provoke strong reactions over those who generate safe consensus. “My favorite people were the ones and fours,” he explained, referring to interview ratings where most candidates receive neutral threes. “They have something that is like they’ll bring a point of view. It will maybe be a little difficult at times, but they’ll bring something that’s a little spicy.” This philosophy reflects his belief that memorable experiences require designers who won’t settle for AI-generated baselines but push toward truly exceptional outcomes. The approach supports building a culture where strong opinions and creative risk-taking are valued over playing it safe.

    Quality standards: building a no-average-work culture

    Rivera articulated an uncompromising stance on maintaining quality standards across a large organization: “There should be no space in our company for people that can’t produce amazing work.” When challenged on this bold statement, he compared it to sports teams, where having the best person in each position would be uncontroversial. His responsibility to Shopify’s mission (14% of US commerce flows through the platform) requires building “the world’s best design team.” Rivera believes this creates a self-reinforcing culture where “when you get really great people into a room, they look around and they get super excited and they work really hard and they inspire each other.” The approach prevents the drift toward mediocrity that affects many large organizations, but it requires active leadership to maintain standards. Rivera’s philosophy extends to the apprenticeship program, which develops junior talent while maintaining high expectations for performance and growth.

    Internal tooling: building what you need rather than accepting limitations

    Rivera demonstrated Shopify’s approach to internal tooling through Artifact, a project management and presentation platform that has replaced Figma presentations company-wide. The tool seamlessly combines project discovery with presentation capabilities, allowing fluid transitions between Figma designs, live coded prototypes, and traditional slides within a single interface. What makes Artifact special is its dual functionality: a practical replacement for slide decks and a discovery mechanism where designers find work happening across the organization. Rivera showed diverse projects, from Sidekick’s “teach” mode (where AI guides users with its own cursor) to SimJim (sending AI agents to shop and provide feedback). The tool represents his philosophy of building what teams need rather than accepting existing tool limitations. This approach extends to creating their own brand generation tools and design systems that enable the specific workflows Shopify requires.

    Leading before having the title: claiming ownership through action

    Rivera’s strongest advice for aspiring design leaders centers on taking ownership before formal authority. “Be a leader. You can claim that space and you can be a leader well before you have people report into you in an org chart.”

    1h 21m
  3. Los - Appstar at Danger Testing

    3 FEB

    Los - Appstar at Danger Testing

    Software as Cultural Performance

    Los Toure from Danger Testing joined us for something completely different. Instead of our usual interview format, he performed three of his apps live, turning the audience into participants in his cultural experiments. Working with co-founder Marc Müller, Danger Testing operates on a radical thesis: with AI making software faster and cheaper to build, apps can become cultural artifacts rather than products built for retention. They drop a new app every Thursday, treating software development the way a band releases songs. Their viral hits include vandalizing friend.com subway ads, a digital Labubu you hang on your hip, and an app that turns your camera roll into a TikTok feed with AI-generated captions. Software becomes a medium for cultural commentary and self-expression.

    At a glance

    * Danger Testing has shipped 50+ apps in one year, dropping a new one every Thursday like a band releases songs.
    * Their vandalizefriend.com project went viral by letting people digitally vandalize friend.com subway ads throughout NYC.
    * Apps don’t need retention or traditional metrics when they’re optimized for cultural resonance and shareability.
    * AI tools like Claude enable building software fast enough to react to cultural moments while they’re still relevant.
    * The “apper” category positions creators who use software as their artistic medium, not just a utility.
    * Making users “the main character” creates more engagement than passive consumption of traditional media.
    * Speed beats polish when capturing cultural moments that might only last days or weeks.
    * Designers who’ve never built apps before can contribute by treating interface design like other creative mediums.

    Topics

    The app as cultural artifact: why software can be disposable

    Traditional apps optimize for retention, daily active users, and long-term engagement. Danger Testing optimizes for cultural resonance and shareability instead. “Retention is the old world,” Los explained during the live performance. “Companies make their money by keeping your eyes on the phone as much as possible. I can make my dollar by providing you value in 3 minutes and you enriching your own life and feeling cool.” Their apps function more like songs or viral content than traditional software products. Users might engage for minutes rather than months, but those minutes create shareable moments worth posting on Instagram.

    Building at the speed of culture with AI acceleration

    The economics of software development have fundamentally shifted with AI tools. Where apps once required months of development, Danger Testing can build and ship in days using tools like Claude. “Sometimes Marc will text me Sunday night about a cultural moment, and we’ll scrap our whole planned app to build something new,” Los shared. This speed enables them to capture cultural moments while they’re still relevant. When Timothée Chalamet’s Zoom call went viral, they quickly built Zoom Timothée, letting users join a recreated version of that same chaotic meeting with AI-generated participants.

    App performances: Zoom Timothée, My Brainrot, and Girl Hinge

    Los demonstrated three apps live, treating each as a performance rather than a traditional demo. Zoom Timothée recreated Chalamet’s viral Zoom meeting, complete with his desktop and chaotic participants. My Brainrot transformed personal camera rolls into TikTok-style feeds with AI commentary like “realizing I’m the only one who showed up to class.” Girl Hinge let male users experience dating apps as women, complete with terrible pickup lines from AI-generated characters. Each app made audience members the protagonist rather than passive observers.

    Making everyone the main character instead of passive observers

    Traditional media positions audiences as observers of other people’s stories. Danger Testing’s apps make users the protagonist of cultural moments. “When you saw Seinfeld and Frasier on TV, you sort of relate to George, but it’s not really you,” Los explained. Their approach transforms users from voyeurs into participants. The My Brainrot app exemplified this perfectly, making users feel like their own lives were worth celebrating rather than comparing themselves to TikTok influencers.

    The weekly drop cadence: treating software like music releases

    Danger Testing operates like a band rather than a traditional startup, releasing new apps every Thursday. This consistent cadence creates anticipation in their community while helping the creators overcome creative paralysis. “The cadence helps me be less scared,” Los shared. “I’m not built different, I’m just building different.” The weekly rhythm forces rapid experimentation and prevents perfectionism from slowing down cultural commentary. Just as audiences anticipate what Saturday Night Live will cover in the news each week, users look forward to what Danger Testing will drop next.

    From t-shirt designers to app creators: expanding who can build software

    AI tools are democratizing software creation beyond traditional engineers. Danger Testing works with designers who’ve never built apps before, treating interface design like any other creative medium. “Someone who made t-shirts designed one of our apps, which is actually very sick,” Los mentioned. A friend who’s typically a writer created the storylines for the terrible guys in Girl Hinge. The collaborative approach mirrors how bands work together, with different roles contributing to the final product. Los encourages others to start their own “app bands” rather than going solo.

    Overcoming creative fear through structured output

    Creative paralysis affects everyone, but Los found that consistent output helps overcome it. The weekly release schedule eliminates the pressure to make each app perfect, since another one drops seven days later. “Everyone in here has real life experience, you have a story to tell,” he told the audience. “Maybe I can use software as a way to tell my story. Maybe someone else can download my life.” The key insight: fear diminishes when you commit to regular creation rather than waiting for the perfect moment or idea.

    Why retention metrics miss the point of cultural software

    Traditional app metrics focus on keeping users engaged for as long as possible, but Danger Testing values impact over time spent. A three-minute experience that makes someone feel cooler and gives them something to share with friends can be more valuable than hours of passive consumption. Los compared their approach to movies: “You can derive the same value from 3 minutes of this than you would a 2-hour movie on Netflix that you actually don’t enjoy, that doesn’t empower you.” Cultural software succeeds when it creates moments worth capturing and sharing.

    Building the future category of appstars

    Los positions himself as an “apper” rather than a traditional founder, using software as his artistic medium the same way musicians use sound. He envisions a future where people graduate from computer science programs and choose to become appstars instead of joining traditional companies. “There’s going to be like an Odd Future of apps, or like Jack Boys, or like Brockhampton where everyone has a different role and something different to bring,” he predicted. The category combines technical skills with cultural intuition and creative storytelling.

    33 min
  4. Ethan Proia - Head of Design at FLORA

    9 DEC 2025

    Ethan Proia - Head of Design at FLORA

    Building the Canvas Above AI Models Ethan Proia is the Founding Head of Design at FLORA, a creative AI platform that’s unifying 50+ generative models across text, image, video, and audio into one coherent workspace built for flow, control, and creative agency. Before FLORA, Ethan spent years exploring spatial computing, mixed reality, and human-computer interaction, building immersive experiences and investigating how humans interact with increasingly intelligent systems. His background spans installation art, interactive technology, and interface design for emerging paradigms. FLORA’s founding manifesto makes a strong claim: current AI creative tools are made by non-creatives for other non-creatives to feel creative. Ethan’s designing something different, a tool that honors the history of creative software while building scaffolding on top of AI models, not just aggregating them. We brought him in to talk about FLORA’s design philosophy, the specific interface decisions that make 50+ models feel coherent instead of chaotic, and where creative tools are heading as generative AI becomes the dominant material designers work with. At a glance * FLORA’s core abstraction is modality (text, image, video, audio) rather than individual models, because modalities never change but models constantly evolve. * The node-based canvas makes creative workflows visual and repeatable, turning the process itself into the deliverable, not just the output. * Every new primitive on the canvas has to fight for its life because complexity kills the beauty of node-based thinking for newcomers. * Context and intent are the twin engines of UX in the AI era, every design decision comes down to answering those two questions. * Ethan uses ChatGPT to dump months of Flora context and Cursor with Figma MCP to prototype directly in code, moving away from Figma as source of truth. 
* FLORA isn’t just creative software, it’s positioning itself as a creative operating system with no allegiance to any form factor or surface area. * The role of designers is changing fundamentally, expect more work directly in code, higher fidelity prototypes that don’t take forever, and a shared language with engineers. * When hiring designers, Ethan looks for proficiency across multiple creative tools, experience with current AI creative tools, and strong opinions about what works and what doesn’t. Topics Creative tools must respect established workflows while innovating where it matters Ethan says a tool made by creatives for creatives needs to fundamentally acknowledge the history it’s coming from, and it needs to know when to abide by those rules and when to break them. That philosophy is core to how he thinks through FLORA and how to expand it, always asking how they can pay homage to and develop what’s already been done, improve it where it needs to be improved, augment it where it needs to be augmented, specifically with new technologies. It needs to actually work, be scalable, be enjoyable, and be something you can build a personal relationship with. What strikes Ethan when talking to creatives is how personal people’s relationships are with their tools, which is interesting because tools are made to be adopted by lots of different people, yet there’s such an individualistic experience when using them. The way one person uses Figma is different than how another uses it, and that extends all the way down. So FLORA needs to be universally approachable and understandable and adoptable, but also something you can build a relationship with. Models are building blocks, not the intelligence itself, and the abstraction should reflect that Ethan thinks we shouldn’t be thinking about models as intelligent in their own right because they’re not, they’re basically input output machines that are very good and novel in the way they’re doing that. 
He believes we should be using the models themselves as the tools we’re building a foundation on top of, where it’s less about the individual model and more about the structures and scaffolding on top of them. Right now everyone is model focused, new models come out all the time and people post about which one can do what with better prompt adherence, but Ethan thinks the models are more fundamental than that. We should actually be building structures on top of the models, and that’s really core to the philosophy of FLORA and this next generation of creative tools and how they’re integrating and building on these new abilities. Professional users care about models only insofar as it lets them get to the creative output they want, so if FLORA could guarantee the same output without mentioning a model at all, Ethan bets the vast majority of creative professionals wouldn’t care. The reason model names and hype around new model drops is still important is because it’s much more explicit about the kind of control that enables, but always the point is the output, what am I trying to do, what will it look like, what will it feel like, how do I get the result I want. Modality became FLORA’s abstraction because it’s the only thing that never changes When FLORA was figuring out the common language and substrate they’re building on top of, Ethan explains they realized text to image models were easy to categorize because they all take text input and output an image, and you could add complexity like what aspect ratio or resolution you get, but it’s still manageable. But then multimodal models threw the whole paradigm out the window because a model that can support both text and image but needs to be both or some permutation of the inputs makes that abstraction a lot messier. The conclusion they came to, and what Ethan thinks has contributed to FLORA being successful so far, is you need to pick an abstraction that has nothing to do with the models. 
The abstraction they settled on was the modality, and Ethan, obviously biased, thinks that’s the correct abstraction to build on top of because it’s never going to change. We will always have text, image, video, audio, and 3D models, and whatever those atomic units are, the building blocks, that’s why they call them blocks on the canvas. Then you can build on top of that. They’re always stress-testing that foundation to make sure it’s compatible with new models that come out, and so far it’s held up; people respond really well to it because it’s more intuitive than nonsense model names for people just coming into this.

Node-based canvases encourage divergence and convergence while making the creative process visible

Ethan loves node-based canvases because they’re inherently spatial, and we are spatial creatures who think in dimensionality and relativity, which is why infinite canvases have become so popular in design tools: you can place things and organize them. A node-based canvas has all those benefits, and the fact that you’re actually connecting your train of thought together makes it very easy to follow. It encourages the theme of the double diamond, divergence and convergence, which becomes very visual when you look at a node-based canvas because you’re quite literally connecting your thoughts together. What FLORA enables specifically, and what excites the team, is allowing you to codify and visualize the process, so the creative process becomes the material you’re working with, the deliverable, not just the output. You could generate a poster of whatever, but what if FLORA could give you the creative process and make it repeatable and scalable? That’s the new deliverable: the process.
Ethan was convinced by how long this paradigm has persisted; it’s been kind of niche, but there has to be a reason it endures, and it’s all the things he explained. Still, the reality stands that it is confusing for people used to traditional interfaces. By simplifying, reducing, and abstracting away the other complexity, he thinks they can let the beauty of that way of making shine through.

Every new primitive on the canvas has to fight for its life to prevent overwhelming complexity

Ethan is always making sure every new primitive they consider introducing to the canvas has to fight for its life. He hates what he’s seen in past node-based tools, and he says this from a place of love because he’s done a lot of work in TouchDesigner, Max/MSP, and Pure Data: if you need a place to put something, the answer is always to make another node and put it on the canvas. That’s cool for people who understand and are in the universe of the software, but for a new person coming in, that’s chaos; you’re telling them there are 400 nodes that all do different things, and they have to know what each does, how to connect them, and in what combinations. Really beautiful emergent communities have come from that in tools like Blender’s geometry nodes, Unity’s Shader Graph, and Unreal Engine’s Blueprints, where not knowing what’s going on necessitates coming together, making, sharing, and knowledge sharing, which is lovely. But Ethan thinks that can exist without needing so many primitives on the canvas, and by primitives he means the basic atomic units you stitch together to build something bigger, in FLORA’s case a creative process or workflow.
Nodes create causal relationships, and “noodles” are where the context-sharing actually happens

The nodes inherently have a causal relationship and a chronology to them, Ethan explains, which is different from laying things out in Figma, where you might mentally paste horizontally for one iteration and vertically for something else. The nodes imply this thing and then this thing: you start with text, then get an image, then the image turns into a video, and that video turns into something else. The question becomes what is the thing that’s actually…

    1h 18m
  5. Xiulung Choy - Head of Design at Graphite

    13/11/2025

    Xiulung Choy - Head of Design at Graphite

Craft, Code, and Designing for Developers

Xiulung Choy is Head of Design at Graphite, a developer tools startup reimagining code review with AI-native workflows. Before Graphite, Xiulung spent his career at the intersection of creativity and technology: leading design at Nest, where he helped bring smart home products to life; managing design teams at Adobe Creative Cloud to build extensibility and AI-driven experiences for millions of creators; and working at Google on experimental products in Area 120. He joined Graphite as the founding designer with a mission to build not just a product, but a design team and culture from the ground up. His perspective on where designers should work, how AI changes collaboration, and what craft actually means is shaped by seeing what works across companies at every stage.

At a glance

* Startups are best for designers with strong foundations looking for ownership, not designers still building skills.
* Graphite Agent helps engineers ship code faster by reviewing changes, answering questions, and merging without bottlenecks.
* The team designed their Agent as a distinct entity in the UI to match user expectations and trust levels.
* v0 lets designers build hyper-specific tools for narrow problems, from testing diff colors to exploring animations.
* Graphite’s Agent interface shows intentional design decisions about where AI lives and how users control it.
* Design and engineering work in parallel toward the same end state rather than sequential handoffs.
* Designers don’t need to code, but understanding how software is built is critical.
* Portfolios should showcase taste and what you would have done with full control, not just what shipped.
* Keep developing your craft even when the job market is tough, because specialized skills stay in demand.
Topics

Designing at startups

Xiulung thinks startups are great for designers who have a really strong foundation and tool set and are just looking for more ownership and responsibility, but most designers are still looking to grow their design skills, and at a startup, if you’re the first, second, or third designer, everyone is so busy working on the product that they don’t have time to mentor others. At agencies you have specialists, the motion design person, the copywriter, the strategist, the color expert, who have honed their craft into narrow aspects of design, and you get to learn how to do it at a very high level. At large companies you probably have an experienced design manager and other principal or staff designers who can participate in crits and show you the ropes, but at a startup everyone’s frantically looking for product-market fit and trying to execute on the next feature. Xiulung says it’s always easier and less painful to learn from other people’s mistakes than your own. If you’re a staff or principal designer with a great foundation looking for an environment where you can have more impact, startups are perfect for that.

What Graphite does and why code review needs AI

Xiulung explains that Graphite is a code review platform built for this modern age where engineers are using AI to ship a lot of code, but all those code changes still need to be tested, reviewed by co-workers, and merged into the codebase. They have Graphite Agent as an AI code reviewer, a first-in-class code review experience that helps co-workers understand code, using AI to find answers and formulate responses, and a merge queue to merge everything while making sure it passes tests. The value has only increased as engineers adopt more AI tools to write code changes faster than ever, which puts a bottleneck on code review.
Xiulung thinks of AI as an always-awake, superintelligent co-worker who’s ready to jump in and give you answers, especially for teams remotely distributed across time zones, where you might wait hours before anybody is awake to review your pull request.

[Demo] Inside Graphite Agent’s interface: designing where AI lives in the product

Xiulung walks through Graphite Agent’s interface and explains the key design decisions. They experimented with having the Agent show up as a reviewer just like any other human co-worker, but that felt too hidden and didn’t match user expectations, so they ultimately decided it should show up separately, be clear about what it is and what it’s done, and have its own affordance to rerun. The interface shows what the Agent has access to, what it knows about, and what it’s able to do, and it analyzes the pull request to suggest specific prompts the user might want related to that PR. Users can add comments to chat and ask Graphite about them, which helps short-circuit the round trip between authors and reviewers. Xiulung also shares a concept called code tours, which annotates the pull request with natural-language explanations of what the code is doing, almost like having a guide tour you through all the changes so reviewers can understand what’s happening and why.

[Demo] Earning trust by being honest about what AI can and can’t do

Xiulung thinks earning the trust of engineers is similar to designing for any user base in that you really need to show you understand what they’re trying to accomplish, but engineers are very opinionated and understand the technology, so they can see through marketing speak. Graphite tries to be very honest with its user base not only about capabilities but also limitations, and it tries to show it understands what engineers are there to do by being predictive and providing things before they even ask.
For instance, if a pull request is failing CI, Graphite has an action card right at the top that encourages someone to fix it then and there; that pulls up the Agent panel, which showcases the changes it’s proposing so CI passes. They keep the human in the loop, making sure users are in the driver’s seat and still the ones to click apply changes after reviewing.

[Demo] Using v0 to build hyper-specific design tools on the fly

Xiulung describes how he recently wanted to update some diff-highlighting colors, but Figma doesn’t have the best color picker, so he jumped into v0 and asked it for syntax-highlighting themes he could switch between to see what looked interesting, then for a bunch of red and green color options to flip between really quickly. He had it add HSL sliders so he could fine-tune them to exactly what he wanted, and it gave him a way to quickly build a very specific design tool for the exact problem he was solving. Xiulung thinks you’re now able to design the ideal tool for the job, where a lot of design tools are still general-purpose and trying to do everything; sometimes you just want something very specific and hyper-optimized for one thing. Using AI to prototype a design tool gives you a very customized tool for a very narrow ask, and perhaps gets you to the end result a lot faster.

How design and engineering work in parallel when AI makes iteration cheap

Xiulung explains that at Graphite, engineers have leaned into using AI and as a result can ship code changes a lot more quickly, and the engineering org has grown quite a bit, so they found design becoming a bottleneck. They asked how they could change not only the design process but also the goals of design when engineering iteration is much faster and cheaper.
What they do now is have design, engineering, and product get in a room together at the beginning and define the big pieces, the mental model, the rough shape of the end state they’re working towards, forgetting about polish and how it’s actually going to look and focusing on the scaffolding and foundations. They’re much more comfortable being 60 or 70 percent confident that the solution is somewhere in that realm, so they just start building toward it and refine as they go. That unblocks engineering to kick things off while design works on the ultimate end state, sometimes working backwards to meet engineering at various points for beta releases.

Should designers code? Not necessarily, but they need to understand how software is built

Xiulung says throughout his career he’s heard many things designers supposedly should do: get an MBA, know strategy, UX copywriting, marketing, become PMs, and he thinks there’s some level of insecurity where designers almost always feel like designing is not enough. Will designers who don’t know how to code be out of a job in a year or two? Absolutely not; there’s going to be continued value in being a really great visual problem solver. On the other hand, if you’re designing a software product, as most designers are, it’s really critical to understand how that software is built, so even though you might not be coding or shipping code, you should understand how your designs get translated into code. Part of that is experimenting with AI, with v0, with Lovable, to understand how these tools translate Figma frames into code. When Xiulung considers org structures and division of labor, he currently doesn’t see designers as the owners of the codebase.
What to look for when hiring designers: craft, taste, and creative problem solving

Xiulung says they look for designers who can creatively problem-solve, ideally in a very visual way, can talk through tradeoffs, and show they really understand the context they’re designing for. They heavily weight portfolios and case studies and try to ignore what company you’re at or what school you went to, because designers can come from all sorts of backgrounds, but they look for people who are naturally curious and creative. Xiulung thinks a lot of designers feel held back on the job by real-world constraints like timelines, existing design systems, existing patterns, or engineering bandwidth, and he sees designers say that if they had it their way they would do things differently…

    1h 17m
  6. Simon Corry - Senior Director of Product Design at Ramp

    16/10/2025

    Simon Corry - Senior Director of Product Design at Ramp

Ramp’s Second Era: The Return to Zero-to-One

Simon Corry is Senior Director of Product Design at Ramp, where he leads design for a company that’s rediscovering its startup DNA while operating at scale. Before Ramp, Simon spent 25 years navigating every evolution of digital design, from the pre-iPhone world through the app store explosion to today’s AI inflection point. He’s built teams, shipped products, and watched design transform from a scrappy creative pursuit into a strategic discipline and back again. Now he’s helping Ramp make a bet that financial automation powered by intelligence could reshape an entire industry.

At a glance

* Simon explains why Ramp is entering a second zero-to-one era six years in.
* Simon describes how AI tools are buying back time without replacing designers.
* Simon shares how Ramp Labs is building an industry-defining AI R&D team.
* Simon traces design’s generational shift back toward curiosity and creativity.
* Simon outlines what he hires for: velocity, autonomy, and natural curiosity over polished portfolios.
* Simon lays out Ramp’s vision: context-aware software that’s proactive instead of reactive.

Topics

Ramp’s second era: why Ramp is 0-to-1 again

Simon thinks curious designers look for moments of maximum impact, usually at the beginning, when every decision shapes the foundation, but Ramp is different. Six years in, they’ve already made the enterprise transition, and now they’re on the other side, entering a second zero-to-one era driven by AI and financial automation. Some areas have strong product-market fit and need careful stewardship, but premium features without deep validation become testing grounds where designers can take real risks with startup energy inside a well-funded company. Simon is clear: “We could play it safe and Ramp will continue to be a very successful company, no doubt.
Or we can lean in to the opportunity we’re seeing in front of us and really take that bet on AI automation.”

Velocity as DNA: maintaining Ramp’s competitive edge at scale

Simon divides Ramp’s work into two mental models: PMF areas that get attention to detail and rigorous process, and pre-PMF areas that get looser constraints and faster iteration. The core principle stays the same across both: saving customers time and money. The real test is whether Ramp can maintain velocity in the PMF areas while taking bigger swings in the pre-PMF spaces, a balance that separates sustained success from companies that either slow to a crawl or break what’s working while chasing the new thing. Simon is direct about the stakes: “We absolutely want to take care of all of our existing customers. So you have to balance the velocity that we’re known for with that care, that attention to detail that we’re also pretty well known for.”

AI tools are buying back more time for the work that matters

Simon doesn’t see designers prompting their way to the app store. Instead, he sees AI replacing tedious kickoff sprint meetings that used to burn days on alignment, letting small groups prompt ideas in real time and iterate before investing serious cycles. What used to take a month of red tape and sprints now takes days, and that recovered time goes into work that actually matters: accessibility, design systems, and refining details that get brushed aside when teams are rushed. Simon puts it simply: “If all you were doing was drawing wireframes, you’ve probably done something wrong. You got into product design to go develop the end-to-end journey.
What we’ve done with AI is giving you more time to go explore the end-to-end journey.”

What designers are responsible for knowing now

Simon draws a parallel to the old “should designers code” debate, and the answer is the same: you don’t need to become an engineer, but you need enough understanding to be empathetic about the technology, the customers, and the people you’re working with. Simon thinks curiosity is the baseline, and while understanding models deeply is optional, the real challenge is designing for non-deterministic interfaces, where intelligence means designers can’t predict exactly what users will see. At Ramp, Simon is building around the idea that different customers want different levels of exposure to AI, so the answer is meeting people where they are, showing the work through suggestions and previews, and letting confidence build over time.

Ramp’s vision: context-aware, proactive software

Simon describes Ramp’s dream as no more cold starts. The insight is that Ramp already knows a lot about your business because it’s embedded in your policy, spend, and workflows, so why build a product with a thousand sections and fifteen clicks when you already know the user is an accountant whose main job is closing the books? Simon compares it to Apple versus Microsoft: Microsoft exposed everything; Apple had an opinion about what to surface but kept complexity under the hood for those who wanted it. Simon thinks traditional left-hand navigation with fifteen clicks can now disappear: tailor the experience to the user, surface the jobs that need doing, learn from interactions, and keep automating until the goal is calm, proactive oversight instead of reactive firefighting.
Ramp Labs: building an industry-defining AI R&D team

Simon gives credit to Alex Stauffer for founding Ramp Labs on the insight that most companies have trapped value, experiments that almost became products but didn’t fit the roadmap, and with AI those experiments can spin up in hours instead of months, which means trapped value can actually ship. Labs serves a second purpose: engaging the broader community thinking about AI in interesting ways, putting experiments out in public with a fun spin, and attracting like-minded builders, which has worked exactly as hoped, with DMs blowing up from founders and AI talent. Simon is clear that Labs has already created huge impact across the core product; because it isn’t restricted by the roadmap or the existing engineering stack, it exposes what’s possible, and other teams learn from both the mistakes and the wins without taking those risks themselves.

Hiring super ICs: velocity, autonomy, and curiosity

Simon hires very differently at Ramp because the culture values velocity above almost everything else, which means being super comfortable with ambiguity, living in it, and thriving on autonomy. Ramp doesn’t want designers who need to be led into problem spaces; they want designers who embrace AI tooling to move faster and go face-to-face with customers using prototypes. Simon looks for natural curiosity in candidates, which shows up in how they ask probing questions and engage in back-and-forth conversation, and he has strong opinions about portfolios: make case studies optional, show the narrative and personality, tell the fun anecdotes instead of boring him with data, and please stop making black-and-white websites.
Simon is direct: metrics are table stakes, not what makes you distinctive. “Show me the unvarnished truth and let that come out on the page.”

Design’s generational shift: rediscovering curiosity and creativity

Simon traces how design went from a scrappy creative pursuit in the 90s to a six-figure career accessible through boot camps in the last 10 years, which changed the culture dramatically: indie software companies that built for fun died out, natural curiosity faded, and UI became sterile because success metrics drove everything toward hero treatments and rectangles. Simon is sympathetic to mid-weight designers who started 5-10 years ago because they have no relationship to the scrappy fun of the early 2000s and just know how to build patterns and design systems, which is why everything from Notion to OpenAI to Anthropic feels sterile despite new technology. But Simon is bullish about what’s next because the sea change with AI is forcing a rethink, playfulness is returning, and he encourages everyone to live a little: take photographs, study color theory, travel to countries where you don’t speak the language, and if you’re in your 20s, leave your hometown unless there’s a family reason keeping you there.

Why we’ll stop calling ourselves designers

Simon doesn’t think rigid role labels will last because there are diminishing returns to overspecializing. Instead, he expects more hybrids, where designers jump into Cursor and get 80% of the way there while working with someone from a PM background who has transitioned into building cloud-based architecture through AI tools. Tool sets have always evolved, from Photoshop to Sketch to Figma, and the job is solving communication problems, not protecting a title. Simon’s hope is that this allows more people to spin up ideas, work collaboratively, and kill off middle management. Simon says it plainly: “I basically think you’re going to start seeing a lot of these labels kind of drop away.
And I think it’s clear from a design perspective that to be successful, even in the era that we’re living right now, you need to embrace this new paradigm with AI or you will get left behind.”

Thanks for checking out this episode. Stay in the loop on new episodes and upcoming events by subscribing. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit doublediamondnyc.substack.com

    1h 11m
  7. Arjun Mahesh - Head of Design at Hebbia

    01/10/2025

    Arjun Mahesh - Head of Design at Hebbia

From Chat to Alpha: Designing Useful Agents

Arjun Mahesh is Head of Design at Hebbia. Before Hebbia, he spent time in the studio of Frank Gehry, then BCG, Kickstarter, Stripe, and Verse, bouncing between consumer and B2B work and building the muscles to ship in high-stakes domains. He lives at the intersection of architecture, computer science, and product design, and he brings that mix to a single obsession at Hebbia: making intelligence usable in finance and law, where accuracy, auditability, and speed actually decide whether software survives.

At a glance

* Arjun says chat is the right on-ramp, not the destination.
* Arjun explains how he introduces agentic software to conservative teams.
* Arjun shows how Hebbia codifies a firm’s “alpha” into one-click agents tied to real deliverables.
* Arjun walks through how the Hebbia experience is evolving from complexity to approachable, high-power flows.
* Arjun outlines the role of Grid as the power surface when chat is not enough.
* Arjun talks about tooling, input quality, and short iteration loops.
* Arjun shares how he measures success and what he is hiring for next.

Topics

Chat is the doorway

Arjun thinks chat meets people where they are and reduces time-to-first-value. He treats it as the entry point that proves utility fast, then graduates users to higher-power surfaces only when the job requires more control or throughput.

Introducing agents to conservative contexts

Arjun says many finance users are new to agentic UX, so he frames first-run around known jobs to be done. He leans on safe defaults, concrete outcomes like diligence prep or meeting prep, and language that maps cleanly to existing analyst workflows.

Codifying “alpha” into one-click workflows

Arjun’s view is that Hebbia’s edge is codification. Proprietary research steps become templates, and proven templates graduate to one-click agents that output the artifacts teams already use in review and reporting.
Grid as the power surface

Arjun describes Grid as the surface for complex analysis once chat hits its ceiling. It spans multi-document retrieval, large prompt sets, and parameterization, making heavy workflows repeatable and auditable without forcing low-level prompt wrangling.

How the Hebbia experience is evolving

Arjun walks through the shift from an early, visually dense interface that validated the concept to a design that pins common jobs, makes inputs and outputs explicit, and supports one-click flows for meeting notes and grid analyses, with deeper controls available on demand.

Tooling and loops that actually move work

Arjun says the team uses Hebbia itself alongside v0, Midjourney or Visual Electric for visuals, and assistants like ChatGPT, Perplexity, and Elicit. His emphasis is that outputs track inputs, references and instructions matter, and short iteration loops beat one-shot prompts.

What “success” means in this category

Arjun is explicit about the signals: is it selling, is it used by the target roles at depth, and is it differentiated from consumer tools and direct competitors. Novelty is fine; durable adoption that replaces legacy workflows is the bar.

What’s next and who he is hiring

Arjun thinks the next fronts are deeper codification, agent autonomy with appropriate oversight, and mobile contexts with tighter attention windows and latency budgets. On hiring, he looks for blended design-product-engineering profiles, evidence of agentic interaction thinking, understanding of retrieval and guardrails, and working prototypes over perfect decks.

Thanks for checking out our first episode. Stay in the loop on new episodes and upcoming events by subscribing. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit doublediamondnyc.substack.com

    1h 13m
