High Output: The Future of Engineering

Maestro AI

A window into tomorrow's software organizations through conversations with visionary engineering leaders who are redefining the profession. Join us to explore how leadership will evolve, what makes high-performing teams tick, and where the true value of engineering lies as technology and human creativity continue to intersect in unexpected ways. maestroai.substack.com

Episodes

  1. 11 SEP

    The 35-Engineer Hiring Sprint

    Most hypergrowth startups hire fast and regret it later. They compromise on quality to fill seats, rationalize cultural misfits as "learning experiences," and spend months fixing the technical debt created by rushed hires. Adam Kirk took a different approach: he hired 35 engineers in 12 months and raised his quality bar with each hire.

    Adam is co-founder and CTO of Jump, which has built a note-taking platform specifically for financial advisors; the company grew from 4 people to 50 in one year. But here's what makes his story different: instead of the typical startup hiring playbook, he invented a process that lets him see how candidates actually work before making decisions. The result? A team scaling at breakneck speed without the usual quality compromises. Here's how he did it—and why traditional hiring advice fails when you're moving this fast.

    🎧 Subscribe and Listen Now →

    The Extreme Ownership Filter

    Traditional hiring advice says "hire for potential and train for skills." Adam learned this doesn't work when you're drowning in growth and can't afford hand-holding.

    "As a matter of survival, I'm looking for somebody who can just, you know, I can say like, look, take this part of the product and own it, and make it so that I don't have to think about it anymore," he explained. "Make excellent decisions, build it really fast. Fix all the bugs. Deliver a ton of value to customers such that I don't really have to think about it anymore."

    This isn't just preference—it's survival. When you're making 100 decisions per day and reading hundreds of Slack messages, you need people who can take complete ownership without ongoing guidance. The contrast with most companies is stark: "One feedback I got from somebody was that most companies want engineers to constantly be asking for permission or getting validation on what they're doing. The way that we work is sort of like, no, I trust you. Just go get it done."

    This filter eliminates 90% of candidates immediately. But for hypergrowth companies, hiring someone who needs constant direction is actually more expensive than not hiring at all.

    The Trial Week Revolution

    Here's where Adam's approach gets innovative: instead of trying to predict performance through interviews, he pays candidates to work for a week on real problems. "We give them an actual real difficult challenge, like a ticket that we need built. We get to see them actually working on our code base, actually building something that we need," he said.

    The process starts with a 30-minute coding exercise to screen basic proficiency, then moves directly to a paid trial week. No multi-round interviews, no theoretical problems, no whiteboarding sessions.

    What this reveals that traditional interviews can't:

    * How they handle ambiguity when requirements aren't perfectly defined
    * How they communicate when they're stuck
    * How they integrate with the existing team and codebase
    * Whether they can actually deliver results in your specific environment

    "Usually by halfway through the week, we know that this is somebody that we want to work with," Adam noted. The approach is intensive—"you're onboarding them, getting them set up, you're evaluating their work constantly"—but it dramatically reduces hiring mistakes.

    Most importantly, it works for candidates too: "It's good for applicants because they get to see how's the team? Do they like the code? Do they like the tech stack? Working with us for a week, they're pretty sure whether or not it's a good fit for them."
    When the CTO Can't Scale

    Adam is honest about the personal cost of hypergrowth: the system that got them here isn't sustainable for him personally. "I make too many decisions every day. I have too much context switching fatigue. I'm reading hundreds of messages in Slack every day. I am being asked to make a hundred decisions every day," he told me. "I wouldn't describe my state as sustainable right now."

    The challenge isn't just volume—it's that the hardest problems naturally filter up: "All the crap kind of filters to the exec, up to the CEO or to the top. All engineering problems, the worst problems filter to me. And a lot of them are stuff that are not fun to deal with."

    But here's his insight: there are only two "glass balls" he can't drop that will compound over time—code quality and hiring quality. Everything else can be managed or delegated, but these two mistakes get more expensive every day you don't fix them. His solution is to hire people who can completely remove decision-making burden: "I need you to take one of these bricks that I'm holding and take it completely from me."

    The AI Productivity Reality Check

    While the industry debates whether AI will replace engineers, Adam has measured actual results in a hypergrowth environment where productivity matters immediately. "For some things it makes you 10 times faster, like writing tests. AI is so good at writing tests you should never write your own tests anymore," he said. "Maybe it five Xs you while you're typing out the code, maybe it two Xs you answering questions about the codebase."

    His measured assessment: "The effective increase over all those things combined is probably around 1.5 to 2x more productive." (A back-of-the-envelope sketch of how those per-task gains combine appears at the end of these notes.)

    But here's the strategic insight most leaders miss: this doesn't reduce hiring needs. "If you take your 10 engineers, give them AI, and now they're 20, your competitors are gonna hire 20 and double it to 40. You can't hire less engineers." The competitive advantage isn't using AI to hire fewer people—it's using AI to make your existing team more capable while continuing to hire aggressively. Adam's team uses AI extensively during trial weeks to see how candidates leverage these tools in real work.

    What This Means for Your Startup

    First, design your hiring process around actual work, not theoretical scenarios. If you can't afford to pay someone for a week of real work, you probably can't afford to hire them full-time either.

    Second, hire for extreme ownership when scaling fast. Hand-holding kills velocity at hypergrowth speeds. Look for people who can take complete ownership of product areas without ongoing management overhead.

    Third, accept that hypergrowth means trading some sustainability for speed. The question isn't whether you'll be overwhelmed—it's whether you're building systems that will eventually let you delegate the right decisions.

    Fourth, use AI to amplify your best people, but don't expect it to replace hiring. The 1.5-2x productivity gains are real, but your competitors will use the same tools. The advantage goes to whoever can hire and scale the fastest while maintaining quality.

    The trial week test: Before your next hire, ask yourself: "Would I be comfortable paying this person for a week to work on a real problem?" If not, keep looking. The cost of a failed trial week is much less than the cost of a bad hire.

    This conversation with Adam Kirk originally appeared on the High Output podcast.
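    As flagged above, here is the back-of-the-envelope sketch behind Adam's 1.5-2x estimate. The per-task speedups are the ones he quotes; the time fractions are our own illustrative assumptions, not measurements from his team. The point is Amdahl's law: the share of the job AI doesn't touch caps the overall gain.

    ```python
    # Rough model of how per-task AI speedups combine into an overall
    # productivity gain. Task time fractions are illustrative assumptions.
    tasks = {
        # task: (fraction of an engineer's time, AI speedup on that task)
        "writing tests":            (0.15, 10.0),  # "10 times faster"
        "typing out code":          (0.25, 5.0),   # "five Xs you"
        "codebase Q&A":             (0.10, 2.0),   # "two Xs you"
        "design, review, meetings": (0.50, 1.0),   # largely unaffected
    }

    old_time = sum(frac for frac, _ in tasks.values())                  # 1.0
    new_time = sum(frac / speedup for frac, speedup in tasks.values())  # 0.615

    print(f"effective speedup: {old_time / new_time:.2f}x")  # ~1.63x
    ```

    With these assumptions the combined gain lands at roughly 1.6x, squarely inside the range Adam reports, because the unaccelerated half of the work dominates the total.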
    For more from-the-trenches insights from engineering leaders navigating hypergrowth and AI transformation, subscribe here.

    High Output is brought to you by Maestro AI. Adam talked about drowning in "hundreds of Slack messages every day" and making "a hundred decisions every day"—but that decision fatigue creates a visibility problem that most engineering leaders face. When your team is distributed across Slack, Jira, and GitHub, it becomes impossible to see who's actually delivering and where bottlenecks are forming. Maestro cuts through that chaos with daily briefings that reveal where your team's time and energy actually go, so you can spot the high performers worth promoting and the blockers slowing everyone down. Visit https://getmaestro.ai to see how we help engineering leaders make better decisions about their teams and projects.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit maestroai.substack.com

    36 min
  2. 3 SEP

    Why AI Won't Kill Engineering Jobs

    Mike Weaver followed every rule in the engineering playbook. When LLMs started disrupting Replicant's voice automation platform, he did exactly what conventional wisdom dictates: "Let's incorporate it in parts of the product. Let's improve these parts of the experience." Keep the legacy system running, gradually migrate functionality, minimize risk.

    But months into this approach, Mike hit the wall every engineering leader knows: "The way that story always seems to end is the new one never launched."

    That's when Mike discovered something that changed everything. AI coding tools had fundamentally altered the economics of rebuilds. "The velocity you can move at with greenfield development is just preposterous," he realized. Instead of years-long migrations that typically fail, his team could rebuild Replicant's core platform in 90 days with 12 people. The result? They "threw things away and started over"—successfully.

    This reveals what most leaders are missing: AI fundamentally changes the strategic calculus of engineering decisions. Before AI coding tools, engineering leaders faced an impossible choice—keep maintaining legacy systems that block innovation, or attempt risky, years-long rebuilds that usually fail. AI creates a new strategic option: rebuilds that are both fast and low-risk, fundamentally shifting how leaders think about technical debt and resource allocation.

    🎧 Subscribe and Listen Now →

    The Demand Multiplier Nobody Talks About

    Here's what Mike discovered at Replicant: AI doesn't reduce the need for engineers—it multiplies what's economically possible to build.

    Mike started programming in the late '80s with BBSs, graduated during the dot-com boom, and learned hardcore C++ from "absolute badasses" who came out of Bell Labs. Over 16 years of engineering leadership, he's seen every technology wave—and this one is different. "The overall need and desire for software in the world just seems to completely outstrip supply," he told me. He just doesn't see AI killing software jobs: "It'll just be more software."

    "I was just talking to a guy today who runs this gym that I work at," Mike explained. "He says he cannot find any software anywhere that will do this biomechanics type analysis that he wants to do. They do motion capture on people who work out there to analyze all this stuff about how their performance is and create custom workout plans for them."

    Mike's reaction: "How does that not exist? Like all that, none of that technology is like hard to do. Like the motion capture stuff's all figured out. It's just sort of a data munging application with like a UI and some ML in there." The answer: "I think it's just 'cause the market's too small." But here's the insight: "If you change the economics, maybe not."

    This is the job creation mechanism nobody sees. AI isn't replacing engineers—it's making thousands of niche markets suddenly viable. Software that was too expensive to build for small markets becomes profitable when development costs drop 10x.

    Why Companies Do More, Not Hire Less

    Mike's experience challenges the conventional wisdom about AI-driven downsizing. When I asked if he thought people would still have engineering jobs, his response was immediate: "I do, I do. I think there's this whole talk about companies will be smaller. I just think companies are just gonna do more."

    He made it concrete: "If you're at a startup right now, how long is your to-do list?
    Like if I double the size of your engineering team tomorrow, would you have stuff for them to do? Yes."

    This matches what Mike saw at Replicant. With AI tools like Cursor, his team rebuilt their entire conversation automation platform in 90 days—something that would have taken two years before. The result wasn't fewer engineers. It was more ambitious product goals and rapid team growth. "There's so many parts of the business world, the industrial world, that have not really been penetrated by technology," Mike observed. "Those are all ripe for people to go in there and do it."

    The Skills That Matter More

    Mike sees the real challenge differently than most. It's not about AI replacing engineers—it's about engineers adapting to work effectively with AI tools. "Experienced engineers who can use those tools, have experience with the tools as well—'cause I think it takes, there's some learning curve to get the most out of them—can just produce an enormous quantity of reasonable quality software very, very quickly."

    But the bigger shift is social, not technical. "As you become a more senior engineer, I see it's all about communication," Mike explained. "There's only so much you can get done as an individual. And then you need to go outside yourself." With AI handling more routine coding, human coordination becomes the bottleneck. "If everyone can produce code at four times the rate they used to, it's like, okay, well now, there are no more solo projects. You're gonna have to coordinate."

    What This Means for You

    Mike's experience reveals four principles for engineering leaders preparing for the AI-driven future:

    First, stop asking whether AI will replace engineers. Start asking what becomes possible when engineering costs drop dramatically. The constraint isn't demand for software—it's our ability to build it economically. AI removes that constraint.

    Second, embrace the rebuild option. Mike's 90-day platform rebuild would have been impossible before AI coding tools. If you're carrying technical debt from pre-AI architectures, the cost-benefit analysis of rebuilds has fundamentally changed.

    Third, invest in communication skills across your team. "I think the only challenge junior engineers are gonna have is that their expectations will be higher, quicker," Mike noted. When AI handles routine coding, human coordination becomes the new bottleneck.

    Fourth, think about untapped markets, not just optimization. Mike's gym example illustrates thousands of niche applications that were previously uneconomical. The next decade belongs to engineers who can identify and serve these newly viable markets.

    The question for your next strategic decision: instead of asking "How do we stay competitive?" ask "What becomes possible now that wasn't possible before?"

    High Output is brought to you by Maestro AI. Mike talked about how AI tools let his team "produce an enormous quantity of reasonable quality software very, very quickly"—but that speed creates a new challenge. When your team can build 4x faster, staying aligned on what you're actually building becomes critical. Maestro cuts through the noise with daily briefings that reveal where your team's time and energy actually go, so you can spot when that increased velocity is pointing in the wrong direction. Visit https://getmaestro.ai to see how we help engineering leaders maintain alignment in the age of acceleration.

    Have your own story about AI changing your engineering strategy? We'd love to hear it.
    Schedule a chat with our team → https://cal.com/team/maestro-ai/chat-with-maestro

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit maestroai.substack.com

    36 min
  3. 22 AUG

    Turning Model Limits Into Moats with Troy Astorino

    Most engineering leaders think about AI wrong. When they see a new model, they ask: "How fast can we ship this?" But the more interesting question is: "Where will this break?"

    Troy Astorino figured this out early. He's the CTO of PicnicHealth, and they've built something remarkable: an 8-billion-parameter model that beats much larger frontier models at medical tasks. He could do it because he understood exactly where large, general models fall short—and he engineered around those constraints.

    His company built what might be the world's best LLM for medical records—working with 7 of the 10 largest pharma companies and collecting 350 million clinician annotations over a decade. But Troy's most valuable insight isn't about AI's capabilities. It's about the immovable constraints that determine whether your AI implementation succeeds or becomes expensive theater.

    The Filing Cabinet Problem

    Troy grew up around medicine. Both his parents were doctors. As a kid, he worked in their offices and was "horrified by walls of filing cabinets." When the government spent $40 billion digitizing medical records, Troy thought: finally. Software will fix this mess.

    It didn't. Most EMR systems made doctors less efficient, not more. This taught Troy something important: you can't just layer technology onto broken processes. The process has to change too. This insight shaped everything that came after.

    🎧 Subscribe and Listen Now →

    When Leaders Need to Code Again

    Here's what's interesting about leadership during technological shifts: engineering leaders may need to get more technical, not less.

    Troy started PicnicHealth in 2014, writing code all day. As the company grew, he did what every engineering leader does: stepped back from implementation to focus on team building. "The highest leverage way for me to work is less building everything directly and more building out the team."

    But when LLMs emerged, Troy had to reverse course. "The ability to understand where opportunity is requires more direct hands-on experience," he told me. Why? Because understanding real constraints requires hands-on experience. Where does fine-tuning actually help? Which domains are narrow enough for reliable automation? You can't evaluate these opportunities from team status reports, because the technology is changing too fast.

    Troy recognized that during periods of rapid technological change, engineering leaders need deeper technical fluency to make good decisions. He had to balance staying close enough to the technology to spot constraints while still enabling his teams to do their best work. This isn't micromanaging. It's strategic intelligence gathering about what's actually possible.

    The Data Moat

    PicnicHealth's advantage isn't the size of their models. It's their data. They have 350 million annotations from real doctors using their system over a decade. Every time a doctor corrects the AI, the model gets better. "That kind of medical record data is not in the public training corpus," Troy explains.

    This creates something interesting: a feedback loop that gets stronger over time. The more doctors use the system, the better it gets. The better it gets, the more doctors want to use it. Most AI companies focus on building more powerful models. PicnicHealth focused on building better data collection systems.
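    The mechanics of that loop are simple enough to sketch. This is a minimal, hypothetical illustration; the names, schema, and fine-tuning step are our own stand-ins, not PicnicHealth's actual pipeline:

    ```python
    # Sketch of a data flywheel: each clinician correction becomes a
    # supervised training example. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Correction:
        record_text: str    # raw medical record the model read
        model_output: str   # what the model extracted
        clinician_fix: str  # what the clinician says it should be

    training_pairs: list[tuple[str, str]] = []

    def capture(c: Correction) -> None:
        """Log every correction as an (input, target) pair."""
        training_pairs.append((c.record_text, c.clinician_fix))

    def fine_tune(pairs: list[tuple[str, str]]) -> None:
        """Stand-in for a real fine-tuning job."""
        print(f"fine-tuning on {len(pairs)} correction pairs")

    def retrain_if_ready(threshold: int = 10_000) -> None:
        """Fold accumulated corrections into the next model version."""
        if len(training_pairs) >= threshold:
            fine_tune(training_pairs)
            training_pairs.clear()
    ```

    The compounding comes from the usage side: a better model attracts more clinician usage, which generates more corrections, which no competitor scraping public data can replicate.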
    The Application Layer Surprise

    In 2022, everyone thought AI value would flow primarily to model creators—OpenAI, Anthropic, Google. The reasoning seemed sound: models are the hardest part to build, so they should capture the most value. This turned out to be incomplete. "I'm very glad that we live in a world where a lot of value is delivered and captured by the application layer," Troy says.

    Here's why: foundation models are commoditizing, but domain expertise isn't. A general-purpose model might have broad knowledge, but it doesn't know the specific workflows of clinical trials, or how doctors actually review patient records, or which edge cases matter most in your domain. This is where constraints become advantages. By focusing on medical records exclusively, PicnicHealth could optimize for things that matter in healthcare but nowhere else.

    The Narrow Domain Strategy

    Most AI implementations fail because they try to solve everything at once. PicnicHealth builds AI agents that operate within their integrated clinical trial system. This sounds limiting, but it's actually powerful.

    When you control the entire workflow—from data ingestion to final output—you can build in validation loops, human oversight, and error correction at every step. You can define clear success metrics and create tight feedback cycles. General-purpose AI tools can't do this. They have to work for everyone, which means they're optimized for no one.

    Bottlenecks Don't Disappear

    Here's the thing about technological progress: it doesn't eliminate bottlenecks, it just moves them. AI accelerates drug discovery, but regulatory approval still takes 7-10 years. "Even if there's way more potential assets," Troy observes, "you're still 10 years away from people actually being able to use that."

    This pattern repeats everywhere. Technical capabilities advance at an amazing pace, but distribution into real industries and workflows takes much longer. It requires changing human behavior, not just building better software. The leadership lesson: don't assume AI will solve your bottlenecks. Assume it will create new ones. Your job is figuring out where.

    What This Means for You

    If you're building with AI, Troy's approach offers a different path:

    First, understand your constraints before you optimize for capabilities. Most processes have hidden bottlenecks that no amount of AI will fix. Find those first.

    Second, build data flywheels, not just models. Look for workflows where user corrections create proprietary datasets. This is how you build moats in a world of commoditized models.

    Third, go narrow before you go wide. Start with controlled environments where you can measure success precisely and iterate quickly. Reliable automation in a narrow domain beats unreliable automation everywhere.

    Fourth, during technological shifts, technical leaders need to stay technical. You can't evaluate AI opportunities from conference rooms. You need to understand the constraints firsthand.

    The question for your next AI decision: are you solving a real constraint, or just adding sophisticated automation to a broken process? The difference determines whether you build a moat or just an expensive feature.

    A Note About Maestro AI

    Troy described a challenge most engineering leaders face: as you grow from writing code to leading teams, you lose visibility into what's actually happening. When the work is scattered across Slack, Jira, GitHub, and more, it becomes impossible to see where time goes or what's blocking progress. Maestro AI solves this with daily insights that show where your team's energy actually goes, so you can spot problems before they compound.
    If you're tired of guessing what's really happening with your team, visit getmaestro.ai or schedule a chat with us here: https://cal.com/team/maestro-ai/chat-with-maestro

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit maestroai.substack.com

    35 min
  4. 25 JUL

    Why AI Mandates Fail with Adil Ajmal

    This week on High Output, Adil Ajmal, CTO of Fandom, tackles the question every engineering leader is wrestling with: how do you get an entire organization to adopt AI without simply mandating it?

    While many engineering leaders are navigating AI adoption with a mix of top-down encouragement and bottom-up experimentation, Adil took a more structured path. He set a specific, measurable goal: 80% AI adoption across Fandom's engineering organization this year. Not 100%. Not "eventually." Exactly 80%—because he learned that successful technology adoption is about deliberate change management, not wishful thinking.

    Managing AI strategy for 350 million monthly users across 250,000 communities, Adil has discovered that the challenge isn't the technology itself—it's getting distributed teams to embrace it while maintaining the stability that massive scale demands.

    🎧 Subscribe and Listen Now →

    What's inside (34 min):

    → The 80% strategy. Why Fandom set a specific AI adoption goal rather than hoping it happens: "We've set a goal of AI adoption for our teams... we measure our usage." The framework includes enterprise licenses, internal champions, and treating it as deliberate change management, not a tech rollout.

    → Measuring what matters. How Fandom tracks adoption across different tools and teams: "We also bring in the right tool for the right thing. So if you're doing more front end development, you know, we have Cursor that you can use... Copilot is better for a bunch of other things." (A toy sketch of an adoption metric like this appears after the "Why it matters" section below.)

    → The champion strategy. Instead of mandating from the top, finding internal advocates who can show others what works: "We try to find champions within our team who've had good experiences so that they can be the promoters of it for other team members."

    → Experimenting safely at scale. How Fandom balances AI adoption with stability for 350 million users: "You don't want to skip the code review part. You don't want to skip the automated test suites." The key is knowing what AI is good at—and what it's not.

    → The global content challenge. Why AI translation works perfectly for Shogun but breaks for Expedition 33: "Your translation may be factually correct, but if it doesn't actually match with how people are using it, it's not going to work out." Human oversight becomes critical at scale.

    Why it matters

    We're past the point of debating whether to adopt AI—the question now is how to do it effectively across entire engineering organizations. Most companies are taking one of two approaches: mandate it from the top or hope engineers adopt it naturally. Both strategies fail.

    Adil's 80% goal reveals a third way: set specific, measurable targets and treat AI adoption like any other major organizational change. It requires champions, metrics, enterprise-grade tools, and deliberate change management. His experience managing multiple company acquisitions (Twitter, Amazon, Intuit) taught him that successful technology adoption isn't about the technology—it's about people. The same principles that work for integrating acquired teams work for AI adoption: alignment, context-setting, and giving people time to internalize change.

    At Fandom's scale, the stakes are higher. With 350 million users depending on platform stability, they can't afford to experiment recklessly. But they also can't afford to fall behind on AI capabilities. Adil's approach shows how to thread that needle.
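    A goal like "exactly 80%" presupposes that adoption is something you can compute. Here is the toy sketch promised above; the event format, threshold, and names are our own assumptions for illustration, not Fandom's actual tooling:

    ```python
    # Toy adoption metric: an engineer counts as an AI adopter in a given
    # period if they logged at least MIN_EVENTS AI-tool events.
    # Data shape and threshold are illustrative assumptions.
    from collections import Counter

    usage_events = [  # (engineer, tool) pairs from license/usage exports
        ("ana", "cursor"), ("ana", "copilot"), ("ben", "copilot"),
        ("ana", "cursor"), ("ben", "copilot"), ("chi", "cursor"),
    ]
    team = ["ana", "ben", "chi", "dev", "eli"]

    MIN_EVENTS = 2  # below this, usage is too incidental to count

    events_per_engineer = Counter(eng for eng, _tool in usage_events)
    adopters = [e for e in team if events_per_engineer[e] >= MIN_EVENTS]

    print(f"adoption: {len(adopters) / len(team):.0%}")  # -> 40%
    ```

    However you define the threshold, the point is the same as Adil's: pick a definition, measure it continuously, and manage toward the number rather than hoping it moves on its own.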
    Your turn

    Adil's framework challenges us to be more deliberate. Here are two questions to consider:

    * Can you actually measure your team's AI adoption, or are you flying blind? What would change if you could see exactly how AI tools are impacting your team's velocity and output?
    * As AI creates more efficiency, where are you reinvesting that time—into your product, your platform, or your people?

    If you're wrestling with AI adoption strategy for your engineering team, we'd love to hear your story. Schedule a chat with us → https://cal.com/team/maestro-ai/chat-with-maestro

    High Output is brought to you by Maestro AI. As your teams adopt AI tools to ship faster, staying aligned on what you're actually building becomes the critical challenge. Maestro cuts through the noise with narrative status updates that digest every ticket, code change, and team discussion—because in a world where you can build anything, you need clarity on what you should build. Visit https://getmaestro.ai to see how we help engineering leaders maintain alignment in the age of acceleration.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit maestroai.substack.com

    35 min
  5. 24 JUN

    Building in the Age of Abundance

    This week on High Output, Tacita Morway, CTO of Textio, reveals why the biggest challenge in the AI era isn't learning new tools—it's learning what not to build.

    From landscape design to heavy machinery operation to leading engineering teams, Tacita's unconventional path taught her that management isn't about having all the answers. It's about asking better questions. Now, as AI transforms how we build software, she's applying that same principle to help teams navigate an overwhelming abundance of possibilities.

    🎧 Subscribe and Listen Now →

    What's inside (34 min):

    → The brick wall moment. Tacita's transition from engineer to manager felt like "running full force into a brick wall." Her solution? Stop asking "what did I ship today?" and start asking "how are my people growing?"

    → The information fire hose problem. With AI breakthroughs arriving faster than anyone can absorb them, Tacita's filtering strategy: problems first, technology second. "What challenges am I trying to solve right now?"

    → AI as your most honest critic. Tacita uses AI to get the unvarnished feedback her team might be too polite to give: "These systems won't hold back—they'll tell you when you're missing something your colleagues might be too polite to mention."

    → The end of rigid roles. Her prediction: role boundaries between PM, engineering manager, and designer will collapse as AI gives everyone superpowers across disciplines.

    → The focus challenge. When you can build anything quickly, staying focused on user value becomes the critical leadership skill: "You can do so much now—that's not what they're trying to buy right now."

    Why it matters

    We're entering an era where the constraint isn't what we can build—it's what we should build. As Tacita puts it: AI will "unlock creativity and invention," but only if we resist the temptation to build everything just because we can.

    The companies that win won't be those with the fastest AI-assisted development cycles. They'll be the ones whose leaders can cut through the noise of infinite possibilities to focus on real human problems. This requires a fundamentally different kind of engineering leadership—one that prioritizes strategic thinking over technical prowess.

    Tacita's vision of "smaller teams" and "fluid roles" isn't about cutting headcount—it's about unlocking organizational agility. AI enables large companies to move like small ones, where people collaborate more intensely, ideate more rapidly, and cross-pollinate ideas across dissolving role boundaries. The focus becomes what machines can't replicate: critical thinking, user empathy, and business judgment.

    Your turn

    Tacita's approach raises the essential question: in an age where you can prototype ideas in minutes and ship features in hours, how are you maintaining focus on what actually matters to your users and business? In this age of abundance, what are you saying no to?

    If you're wrestling with these challenges, we'd love to hear your story. Schedule a chat with us → https://cal.com/team/maestro-ai/chat-with-maestro

    High Output is brought to you by Maestro AI. When AI lets your team ship faster, staying focused on what matters becomes the critical leadership challenge. As your engineers become more productive, Maestro cuts through the noise with narrative status updates that digest every ticket, code change, and team discussion. Because in a world where you can build anything, you need clarity on what you're actually building.
    Visit https://getmaestro.ai to see how we help engineering leaders maintain focus in the age of abundance.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit maestroai.substack.com

    34 min
  6. 3 JUN

    People-First Leadership in the AI Era

    As AI reshapes the what of engineering, leadership is about focusing on the who. This week on High Output, Glenn Veil, SVP of Engineering at Order.co, shares how three decades of unplanned leadership taught him the most important lesson of all: technology will always evolve—but people remain the constant.

    Glenn didn't start out aiming to lead. One sudden promotion, a broken website, and a confused loft full of engineers later, he found himself in charge—and completely unprepared. What followed was a trial-by-fire journey from tech-first problem solver to people-first builder of teams, careers, and future leaders.

    What's inside (23 min):

    → The accidental promotion. Glenn thought he was getting fired when the VP was waiting for him at the front door. Instead: "Glenn, you're director of technology." He had to tell his new team: "Hey, I think I'm in charge now."

    → The hard lesson: people over code. Glenn started by focusing on what he knew—the technology. But he learned that engineering leadership isn't about fixing code; it's about developing the people who write it.

    → The great wave of leaders. Glenn's mission today: leaving behind a generation of engineering leaders who know they can succeed by being authentically themselves.

    → Reading people, not trends. His secret to staying ahead isn't predicting the next great technology—it's anticipating how people and teams will evolve through change.

    Why it matters

    As AI handles more of the technical heavy lifting, a counterintuitive truth is emerging: the human side of engineering leadership becomes exponentially more valuable.

    Glenn's prediction is already playing out—companies will operate at dramatically lower costs within five years as AI optimizes processes. But the real competitive advantage won't come from deploying the smartest AI tools. It'll come from using those tools to create space for deeper people development.

    Smart engineering leaders aren't just automating code—they're choosing AI solutions that help them understand their teams better, spot growth opportunities faster, and develop the kind of human leadership capabilities that no algorithm can replace. As Glenn puts it: "I don't think we'll ever not need software engineers. But we will be leaner. And we'll need stronger leaders to guide the way."

    The question isn't whether to adopt AI tools. It's whether you're choosing the ones that multiply your people's potential—not just their productivity.

    Your turn

    Glenn's approach raises the critical question: how are you using AI's efficiency gains? Are you reinvesting that time and mental bandwidth into developing your people—or just pushing for faster delivery cycles?

    If you're ready to move beyond productivity theater and start building the kind of human leadership capabilities that will define tomorrow's engineering orgs, we'd love to hear your story. Schedule a chat with us → https://cal.com/team/maestro-ai/chat-with-maestro

    High Output is brought to you by Maestro AI. If you're an engineering leader looking to improve velocity while using AI efficiency gains to develop stronger teams, visit https://getmaestro.ai to learn how we help teams navigate the future of software development.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit maestroai.substack.com

    23 min
  7. 21 MAY

    Scaling Lean Startups with Raquel Rodriguez

    We're thrilled to share something we've been working on at Maestro AI behind the scenes: High Output, a weekly conversation series where visionary engineering leaders unpack how they're shipping faster, scaling smarter, and future-proofing their teams. These are unfiltered, from-the-trenches takes on running engineering teams amidst seismic transformations in the tech industry.

    🌌 Why these conversations matter—right now

    The discipline of software engineering is staring at two vastly different futures:

    * Future 1 - In 10 years, there will be no more software engineers. AI will write all the code, and entire career paths will vanish.
    * Future 2 - In 10 years, there will be 10x more software engineers. Barriers to entering the career will drop. Meanwhile, AI will super-charge every developer, and the demand for talent will explode.

    Of course, reality is somewhere between these extremes—but that "somewhere" spans an entire universe of possibilities. The uncertainty is feeding real anxiety in engineering orgs everywhere. High Output exists to cut through the noise with candid, first-hand stories from leaders who are making concrete decisions today.

    🎧 Episode 1—Scaling Lean with Raquel Rodriguez

    Our debut conversation features Raquel Rodriguez, Head of Engineering at TYB, the community-rewards startup powering Urban Outfitters, Rare Beauty, and 100+ other brands.

    Listen now → https://maestroai.substack.com/podcast

    What's inside (25 min):

    * Scaling without bloat. Hiring six new ICs, zero managers—how Raquel reorganized into tech-lead pods with one shared stand-up to keep the org flat and decisions moving.
    * AI as force multiplier. A candid take on using LLMs to amplify, not replace, dev velocity—and the litmus test she uses before trusting an "AI dev agent" with critical work.
    * Social dynamics at scale. How a 10× surge in community engagement exposed new cultural and reward-economy pitfalls—and how TYB contained them.
    * Time-management hacks. The "mornings for meetings, afternoons for deep work" rhythm that sticks.

    🔄 Navigating the new frontier together

    Engineering leadership isn't just adapting to AI—it's actively shaping how this technology transforms our industry. High Output brings you the leaders making pivotal decisions today that will define tomorrow's engineering landscape.

    Each week, we'll spotlight innovators who are striking that critical balance: using AI to augment human creativity while maintaining lean, resilient teams. Raquel's approach is just the beginning. The gap between AI replacing engineers and AI empowering them isn't just theoretical—it's being decided right now, in organizations like yours. Our ask to you: don't just witness the transformation—help shape it.

    At Maestro, these conversations are the fuel for our mission. We've built an AI-first approach to measuring and tracking progress in software teams, motivated by the exact challenges engineering leaders share with us every day. If any of this resonates with you, we want to hear your story! Schedule a chat with us.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit maestroai.substack.com

    29 min
