Strong Opinions...Loosely Held

Challenging the status quo, one tech leader at a time

"Strong Opinions...loosely held" is a dynamic podcast that dives deep into the world of technology leadership. Hosted by a seasoned tech executive with over 30 years of industry experience, this show brings you unfiltered insights, bold ideas, and adaptable strategies for navigating the ever-evolving tech landscape. Each fortnight, we explore the intersection of innovation, leadership, and personal growth. From managing high-performing teams to fostering diversity and inclusion, from cutting-edge tech trends to maintaining work-life balance, we cover it all. Our guests are forward-thinking leaders who aren't afraid to share their successes, failures, and the lessons learned along the way. Whether you're an aspiring tech leader, a seasoned professional looking to stay ahead of the curve, or simply curious about the human side of technology, this podcast offers something for you. We believe in the power of strong convictions balanced with open-mindedness and continuous learning. Join us as we challenge assumptions, debate ideas, and uncover the qualities that make exceptional tech leaders. Subscribe now to "Strong Opinions...loosely held" and be part of a community that's shaping the future of technology leadership. Remember, in the fast-paced world of tech, the only constant is change – and we're here to help you lead through it. New episodes drop every two weeks. Tune in, join the conversation, and let's elevate tech leadership together! darrylmunro.substack.com

  1. 12/12/2025

    The Slow Wash: How AI is Quietly Absorbing You Into Someone Else’s Culture

    I grew up with Guy Fawkes. The fifth of November was almost a national bloody holiday. Bonfires at school, fireworks at home, and nearly blowing your fingers off because the fireworks we were buying were genuinely dangerous. It was ours – a piece of cultural fabric woven through generations. Now? My colleagues’ kids come home begging to go trick-or-treating. Halloween costumes fill the shops from September. Guy Fawkes has faded to a footnote.

    Here’s the thing: nobody chose this. No New Zealander woke up one morning and decided to abandon their cultural traditions for American ones. It just... happened. Through exposure. Through content. Through the slow normalisation of someone else’s cultural calendar arriving in our feeds, our screens, our children’s expectations. And now we’re doing it again with AI. Only faster. And with far less awareness of what we’re absorbing.

    The Distinction Nobody’s Making

    Everyone talks about AI hallucinating facts. We’ve trained ourselves to watch for invented statistics, fabricated quotes, made-up citations. Good. We should. But nobody talks about hallucinated values. Chris Parsons captured this brilliantly in a recent infographic: every AI model is trained on data that reflects specific cultural assumptions about what constitutes “good communication,” “professional behaviour,” “appropriate decision-making,” and “correct priorities.” These aren’t universal truths. They’re cultural artefacts. And we absorb them without noticing precisely because they feel like common sense.

    Ask an AI about work-life balance? You’ll get answers shaped by US norms. Ask about organisational hierarchy? You’ll get Silicon Valley flat-org assumptions baked in. When a UK company uses US-trained AI for HR decisions, whose cultural values are shaping those decisions? This isn’t hypothetical. It’s happening now. We instinctively question AI that comes from different cultures – we’d scrutinise outputs from DeepSeek with appropriate scepticism about embedded Eastern perspectives. But we don’t question AI that reflects assumptions we already hold, or assumptions so pervasive we’ve mistaken them for universal truth. We absorb those without noticing.

    The Numbers Game We’re Already Losing

    Let me walk you through the mathematics of cultural absorption.

    Mid-1500s: A town crier hollers in the square. Message reaches maybe a hundred people, slowly, through word of mouth.

    Gutenberg press arrives: Now you can print flyers, slap them around town. Maybe thousands see your message over weeks.

    2024: I take a photo and within 30 seconds it’s across a hundred eyeballs – and I don’t even have a significant following. Someone with millions of followers? The perpetuation of information is happening at a pace I’m losing words to describe because it’s just so f*****g fast.

    Now apply that to cultural influence. Five million people in New Zealand. Let’s say 1.5 million are actively engaging with social media daily. Meanwhile, individual American cities have populations ten times that – all producing content, all feeding the algorithm, all training the models that shape how we communicate, think, and understand the world. How can it not influence us?

    Recently, a colleague returned from Hawaii with her daughters. The kids were excited about one thing above all else: trying American fast food chains they’d never experienced but knew intimately. Where did that knowledge come from? It didn’t come from New Zealand media.
    It came from American Instagram, TikTok, YouTube – the constant drip of cultural information our children are devouring like nothing any generation before them has seen.

    The Mirror That Shapes What It Reflects

    I’ve used the phrase myself: AI tools are just a reflection of what humans have created. Of course they are. How else did the data get there? Large language models consumed information from the internet, from curated datasets, from scanned books – all human-made. But here’s what that actually means: if the majority of that information came from a particular belief system, a particular cultural context, a particular demographic, then the “reflection” isn’t neutral. It’s weighted. The mirror has a tint.

    And when we’re talking about the volume of information – whether it’s created, consumed, policed, or left to complete free speech – the numbers always win. The bigger demographics produce more content. More content trains the models. The models then shape how everyone else communicates. It’s the same dynamic that’s driven cultural influence throughout human history. Dominant narratives have always shaped smaller cultural groups. But – and I cannot stress this enough – that doesn’t make it f*****g right. We’ve seen horror stories throughout history where majority viewpoints denigrated and destroyed minority populations and perspectives.

    The difference now isn’t the mechanism. It’s the speed. Town crier to Gutenberg took centuries. Gutenberg to broadcast media took decades. Broadcast to social media took years. Social media to AI-mediated everything is taking months. We don’t have time to notice what we’re absorbing before we’ve already absorbed it.

    The Power Question Hiding in Plain Sight

    Here’s where this stops being a cultural curiosity and becomes a democratic concern. Our geopolitical systems are built on elected officials helping govern and lead our collectives – neighbourhoods, cities, regions, nations. We chose this structure deliberately. Accountability to the people. Now look at where the actual influence is concentrating: the boardrooms of a handful of technology companies, predominantly in San Francisco and a few other tech hotbeds. These aren’t elected officials. They’re not accountable to populations. They’re accountable to shareholders.

    Elon Musk recently suggested there will be no jobs and people will have universal income. Maybe that’s fine – I’m not opposed to reimagining work. But how does society transition to that in a way that’s equitable, rather than just lining the pockets of the small few who created these tools? When the dominant narrative about our future is being shaped in boardrooms rather than parliaments, shouldn’t that ring alarm bells? F**k, it better.

    What This Actually Looks Like in Practice

    I spend my days as Head of Architecture for a meat processing company in New Zealand. I field pitches from Microsoft, ServiceNow, Oracle – all pushing their agentic AI capabilities. All built on models trained predominantly on US-centric data with US-centric assumptions about what “good” looks like. And I’ve spent many moments swearing at Claude and ChatGPT for defaulting to US English. You might call that pedantic. I call it significant. Words matter. Every communication – verbal, written, visual – portrays cultural significance. When the tools that help us communicate are trained on one dominant culture’s norms, they nudge us toward those norms whether we notice or not.

    Data sovereignty is the easy part. We can audit where servers are located.
    We understand jurisdictions. We can see where data physically lives. Values sovereignty is harder. You cannot audit embedded assumptions. There is no such thing as culturally neutral AI. Every choice in training, every decision about what constitutes “helpful” or “professional” or “appropriate” – these embed specific values. The question worth sitting with isn’t whether AI is biased. Of course it is. The question is whether you know what those biases are, and whether you’re consciously choosing to adopt them.

    Maintaining Sovereignty Over Your Own Thinking

    So what do we actually do? I’m not suggesting we stop the rolling snowball. I’m a technologist – I’ve made a career out of this stuff for thirty-plus years. I love the shiny shit. But I’m also a humanist. Just because you can doesn’t mean you should. And that’s something I will die on a hill defending.

    At the organisational level: Take time to critically evaluate why you’re applying these tools to the problems you have. Don’t spend months working out every possible use case, but don’t blindly apply technology to problems you don’t even understand you’ve got – or assume you’ve got them because somebody else told you they did.

    At the personal level: Don’t outsource your judgment. Use AI to gather information, not to make decisions directly. When AI produces something for you, read it with your own critical lens intact. Ask yourself: does this sound like me, or does it sound like a US-washed, AI-smoothed version of my ideas?

    I’ll be honest about my own practice: I’m dictating this article to an AI assistant trained predominantly on US-centric data. There’s irony there, and it’s not lost on me. But I still have the choice to take what’s produced and say “no, I’m not publishing that.” I can rewrite in my own words. I can push back when the phrasing doesn’t sound like Daz from Dunners. The technology doesn’t remove my agency. But maintaining that agency requires vigilance. It requires continuing to think critically rather than accepting outputs at face value. It requires remembering that the mirror has a tint.

    The Wild Ride We Haven’t Imagined

    Here’s a consequence nobody’s thinking about yet: what happens when everyone’s talking to their AI assistants in open-plan offices designed for typing, not dictating? The cacophony of everyone speaking to their computers simultaneously. The complete redesign of workspaces we’ll need. The social norms we haven’t developed for AI interaction in shared spaces.

    We didn’t think through the long-term impacts of sharing our lives on Facebook and Instagram. We’re only now, years later, seeing parents blur their children’s faces in photos – reclaiming a privacy they gave away without considering the consequences. What are we giving away now?

    13 min
  2. 12/05/2025

    Agentic AI: The Next Wave (With Big Bloody Caveats)

    I’ve sat through several vendor pitches in the last few months. Every single one had an AI story. Not just “we’ve added some AI features” - full-blown agentic AI roadmaps. Microsoft’s pushing it hard through Dynamics 365 CE and Finance & Operations. ServiceNow’s got their version. Oracle’s in the game. Everyone’s racing to tell us how autonomous AI agents are going to revolutionise our operations.

    Here’s the thing: they’re not entirely wrong. But they’re not entirely right either. And the gap between those two realities is where organisations are about to burn a lot of cash, waste a lot of time, and piss off a lot of people. The question isn’t “should we use agentic AI?” anymore. When all five vendors on your shortlist have some variation of it baked into their stack, that ship has sailed. The real question - the leadership question - is: where and how do we apply this stuff intelligently? And what problems are we actually trying to solve that existed before the vendor walked in the door?

    The Blockchain Lesson We’ve Already Forgotten

    Remember blockchain? About seven years ago, it was going to solve everything. Identity management. Supply chain transparency. Data fidelity. Contract execution. The hype was deafening. And what happened? Bitcoin works brilliantly for its intended purpose. A handful of narrow applications found genuine utility. But the grand enterprise transformation we were promised? Still waiting. Blockchain ran headfirst into the messy reality of existing systems, regulatory complexity, and the inconvenient fact that most organisations couldn’t clearly articulate what problem they were trying to solve with it. They just knew they needed to “do blockchain” because everyone else was. Sound familiar?

    Agentic AI is following the same trajectory, just faster. The hype cycle is compressed. The vendor pressure is intense. And the promises are even grander - not just automating tasks, but autonomous decision-making, multi-agent systems coordinating across your enterprise, AI that “learns your business rules” and gets smarter over time. I’m not saying it’s all b******t. Some of it genuinely works. But if you can’t see the blockchain parallel playing out in real-time, you’re not paying attention.

    The 40% Failure Rate Nobody’s Talking About

    Gartner recently predicted that over 40% of agentic AI projects will be cancelled by the end of 2027. The reasons? Escalating costs, unclear business value, and inadequate risk controls.[1] Forty percent. Nearly half. And that’s not coming from AI sceptics. That’s Gartner - the same analysts whose research these vendors cite when it suits their pitch.

    McKinsey’s latest State of AI survey tells a similar story from a different angle. They found that 23% of organisations are scaling agentic AI somewhere in their enterprise, with another 39% experimenting. But here’s the kicker: most of those scaling are only doing it in one or two functions. In any given business function, no more than 10% of respondents said they’re actually scaling AI agents.[2] Translation: lots of pilots, lots of experiments, very few organisations actually getting enterprise-level value.

    Meanwhile, an EY survey of 500 senior leaders found that more than half feel like they’re failing amid AI’s rapid growth.
    Half reported declining company-wide enthusiasm for AI adoption.[3] The term “AI fatigue” has entered the lexicon - defined as “the collective exhaustion experienced by individuals and organisations in response to the unrelenting pace of AI innovation.”[4] It’s not just about being tired of hearing about AI. It’s about the exhaustion of trying to match reality to the promises.

    LangChain’s State of AI Agents report adds another dimension: while about 51% of respondents are using agents in production today, 41% cite performance as the primary bottleneck.[5] The tech works in demos. Making it work reliably at scale is a different story entirely.

    When I read these numbers, I don’t feel vindicated. I feel concerned. Because I know what happens when organisations throw money at technology without understanding what problem they’re solving. I’ve watched it happen with ERP implementations, digital transformation programs, and yes, blockchain initiatives. The pattern is depressingly familiar.

    A Use Case That Actually Works (With Caveats)

    Let me be clear: I’m not an AI sceptic. I’ve been experimenting with these tools since ChatGPT launched. This morning - literally before starting this article - I was working on ingesting New Zealand Qualification Authority standards into a structured database, testing how well an LLM could help interpret and validate documents against prescriptive requirements. And in my organisation, we’re seeing genuine value in specific applications.

    Invoice reconciliation is a good example. We’ve been doing OCR on scanned invoices for years. Nothing new there. But combining AI vision technologies with workflow automation has genuinely streamlined the process. The system can extract header information, line items, supplier details. It can match against purchase orders. It can flag exceptions for human approval through email and Teams - reducing friction, keeping things moving. This isn’t science fiction. It works. Today.

    But - and this is a significant but - it’s not without problems. The moment you get variation in how invoices are structured, things get complicated. Even when data appears structured - an invoice header with supplier address, a table with line items - the underlying challenges of extracting reliable data from documents haven’t magically disappeared. We’ve just masked them with a slicker interface.

    Here’s a concrete example: Say you’re managing a project at item level detail, tracking costs across specific work packages. Your supplier sends you an aggregated invoice - one line item covering multiple deliverables. The AI extracts the data perfectly. The invoice matches the PO total. Everything looks clean. But your cost attribution is now f****d. You can’t allocate costs to the right GL codes, can’t track against project budgets, can’t report accurately on where money is actually going. The AI didn’t fail. It did exactly what it was designed to do. But it couldn’t apply business rules that exist in people’s heads, in tribal knowledge, in the accumulated understanding of how your organisation actually operates.

    Now multiply that across an enterprise with thousands of suppliers, multiple cost centres, capitalisation rules, tax implications, depreciation schedules for physical assets versus amortisation of intangibles. The complexity doesn’t go away. It just moves.

    The Business Rules Paradox

    Here’s where I get properly frustrated. We’ve been arguing in the technology industry for decades about abstracting business logic from applications.
    “Don’t hard-code your rules! Keep them separate! Make them configurable!” Hang on a minute. The entire notion of software is about logic. The codification of logic at different levels of abstraction so that humans can interact with computer hardware. That’s literally what software is. So when vendors tell me their AI will “learn your business rules” and apply them autonomously, I have questions. Lots of questions.

    Where are those business rules documented today? Because in most organisations I’ve worked with, they’re not. They live in spreadsheets maintained by people who’ve been there for 20 years. They live in the heads of the finance team who know that “this type of invoice always gets coded to that cost centre, except when it doesn’t.” They live in email chains and meeting notes and institutional memory that walks out the door when someone retires. The AI can’t magically divine rules you haven’t articulated. It might infer some things from the structure of your chart of accounts or your organisational hierarchy. But inference isn’t understanding. And confident inference that’s wrong is worse than no inference at all.

    Then there’s the context window problem - something vendors conveniently gloss over. Your ERP has accumulated thousands of business rules over years of implementation and customisation. Complex interdependencies. Edge cases. Exceptions to exceptions. The AI can’t hold all of that context in working memory any more than a human can. So what happens? You end up compartmentalising. Creating boundaries. Building silos. Sound familiar? It should. We’ve been doing that with human organisations forever. And I suspect we’ll see the same patterns emerge with AI agents - because the underlying complexity hasn’t changed. We’ve just given it a new interface.

    The LLM Is Just the Headline Act

    This morning’s experiment with NZQA documents taught me something important. When you let an LLM just riff on document interpretation, you’ll be lucky to get any level of confidence in the output. The LLM is the flashy headline act. It’s what gets demonstrated in vendor pitches and conference keynotes. Look at it understand context! Watch it reason through problems! But behind the scenes, there’s an entire ecosystem doing the unglamorous work. Parsing tools that extract text from PDFs. Validation layers that check data formats. Orchestration frameworks that manage the flow between components. Confidence scoring that determines when to escalate to humans.

    Upload a three-page invoice with clear structure - nice headers, well-defined sections, consistent formatting - and the system handles it beautifully. Get variation on that structure, and suddenly you need rules to deal with it. Human rules. Explicit logic that someone had to think through and implement. The vendors won’t tell you this in the pitch. They’ll show you the happy path demo where everything works perfectly. They won’t show you what happens when real-world messiness hits their carefully constructed scenarios.
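    To make that concrete, here is a minimal sketch of the escalation pattern described above. It is illustrative only: the names, the 0.85 confidence threshold, and the stubbed data are assumptions for the sake of the example, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class ExtractedLine:
    """One invoice line as reported by an OCR/vision extraction step (stubbed here)."""
    description: str
    amount: float
    confidence: float  # 0.0-1.0, as scored by the extraction model

@dataclass
class POLine:
    """One line of the purchase order, at work-package granularity."""
    description: str
    amount: float

CONFIDENCE_FLOOR = 0.85  # assumed threshold; would be tuned per document type

def reconcile(invoice_lines: list[ExtractedLine], po_lines: list[POLine]) -> str:
    # Rule 1: escalate on low extraction confidence before any matching runs.
    if any(line.confidence < CONFIDENCE_FLOOR for line in invoice_lines):
        return "escalate: low-confidence extraction, route to human review"

    invoice_total = sum(line.amount for line in invoice_lines)
    po_total = sum(line.amount for line in po_lines)

    # Rule 2: totals that simply don't match are an obvious exception.
    if abs(invoice_total - po_total) >= 0.01:
        return "escalate: invoice total does not match PO"

    # Rule 3: the failure mode from the article. Totals agree, but one
    # aggregated invoice line cannot be attributed to item-level PO lines.
    if len(invoice_lines) < len(po_lines):
        return "escalate: totals match but line granularity differs, cost attribution unclear"

    return "auto-approve"

# One bundled invoice line against a two-line PO: totals agree, attribution doesn't.
invoice = [ExtractedLine("Project deliverables (bundled)", 10_000.0, 0.97)]
po = [POLine("Work package A", 6_000.0), POLine("Work package B", 4_000.0)]
print(reconcile(invoice, po))
```

    Note where the interesting logic lives: the case where totals agree but line granularity doesn't is explicit, human-written rule code. No amount of extraction accuracy supplies that rule for you.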

    18 min
  3. 11/28/2025

    The Specification Revolution: Why Leaders Who Can Articulate Problems Will Own the Next Decade

    Spec-driven development isn’t just changing how we build software. It’s redefining what technology leadership actually means. When the value shifts from code to specifications, the skills that matter most become asking the right questions, understanding problems deeply, and communicating with radical specificity. That’s not a minor adjustment. It’s a complete inversion of how we’ve valued expertise for the past 30 years.

    There’s a moment I keep returning to. I’m sitting at my desk, describing what I want to build in plain language—narrative, story, context—and watching Claude Code translate that into functioning software at a speed I could never match manually. Not because I can’t understand code. I can read JavaScript, follow the logic, and troubleshoot when things break. But I’ve never been able to look at code and see what it will become. I think in systems, in relationships, in outcomes. The translation layer from that architectural thinking to actual implementation was always where I got stuck. Until now.

    Spec-driven development has removed the necessity for me to become an expert in code syntax and instead lets me focus on what I’ve always done well: clearly articulating what’s required. And that shift—from valuing code as output to valuing specifications as input—has implications that extend far beyond my personal productivity gains. It’s reshaping what leadership in technology actually requires.

    No-Code/Low-Code Platforms Set the Stage

    I was an eager and early adopter of the low-code movement. Platforms like OutSystems, Power Platform, and Mendix promised to remove the mundane aspects of getting to value creation. They challenged a paradigm and delivered real results. For organisations that couldn’t attract or afford traditional development talent, these platforms opened doors. But like most large software vendors, they’ve reached a point where they’re cracking at the seams. They’re carrying architectural debt—fundamental choices baked into their platforms that make pivoting to new approaches genuinely difficult. The very abstractions that made them powerful are now constraints.

    Spec-driven development with AI tools like GitHub Speckit and Claude Code represents the evolution of what those platforms started. The critical difference? The artefacts produced are malleable. A specification written in markdown isn’t locked into a proprietary runtime. It’s portable, interpretable, and can be reimplemented as tools and platforms evolve. We spent decades putting code in escrow, treating it as the valuable artefact worth protecting. Turns out the real value was always upstream—in the understanding of what needed to be built and why. The specifications. The problem definition. The radical specificity about what “bookings” actually means in your organisation versus the seven other things it could mean.

    Understand the Problem

    Let me be emphatic about this: the foundation of spec-driven development isn’t the tools. It’s discipline around problem understanding. Document the problem. Validate that it’s the problem everyone thinks you’re solving. Get the people responsible for it in a business context to read the description and confirm it captures reality. Workshop it. Shape it. Argue about the words until you’ve agreed on what they mean. If you have a concept like “bookings” in your domain, get painfully specific about what that means.
    In the meat processing world I work in, bookings could refer to a mob of livestock arriving at a plant, or picking up bobby calves from a dairy farm, or half a dozen other things. Generalisms create architectural debt that’s far harder to unwind than any technology choice you’ll make.

    This isn’t new wisdom. Requirements analysis, user stories, functional specifications—we’ve been talking about this for decades. But spec-driven development with AI tools makes the cost of ambiguity immediate and visible. When you can go from specification to working software in hours rather than weeks, the feedback loop on whether you actually understood the problem gets brutally short. The organisations that will thrive aren’t the ones with the best AI tools. They’re the ones with the discipline to understand problems deeply before they start building.

    The Skills That Now Matter Most

    Here’s where it gets interesting for leaders thinking about their teams. If specifications become the primary value artefact, then the skills that matter most are: asking questions that surface the real problem, building rapport across functional boundaries, translating between business context and technical possibility, and communicating with precision and narrative clarity. That sounds a lot like what good architects, business analysts, and product owners do. And I’d argue these more analytically focused roles are about to become more valuable than they realise—not because their role stays the same, but because their core competencies are exactly what spec-driven development demands.

    But here’s the evolution: we need to bring these roles closer to the developer, and the developer closer to these roles. The infrastructure specialist needs to understand the business context. The person with the agriculture science degree needs to be in the room when we’re defining how livestock movements get tracked. The siloed thinking—ops team, apps team, projects team—is dead. Or it should be. What replaces it are multidisciplinary groups of people who come together around problems and solve them together. Put the BA, the dev, the value stream lead, the business owner, the domain expert, and the infrastructure specialist in a room together and start solving things. Stop being so sequential about it.

    Leadership Flows to Expertise

    This has implications for how we think about leadership itself. The traditional model—someone makes decisions on Monday, things get done by Friday, information flows up and down a hierarchy—still has its place. Decision-making matters. Arbitration matters. But the most effective teams I’ve worked with have a different dynamic. Leadership flows to whoever has the relevant expertise in the moment. Sometimes that’s the person with the official title. Sometimes it’s the junior developer who spots a pattern no one else saw. Sometimes it’s the business analyst who asks the question that reframes the entire problem.

    This isn’t about eliminating hierarchy. It’s about creating conditions where diverse skills can contribute their value without artificial barriers. It’s about being observant enough to recognise when things are stalling and inserting leadership to move things forward—not to dictate the answer, but to create space for the team to find it. The research on motivation backs this up. Autonomy, purpose, mastery—the things that actually drive human performance—all depend on people feeling like their contributions matter. Autocracy and bureaucratic control kill that.
    They might produce compliance, but they don’t produce the kind of creative problem-solving that spec-driven development makes possible.

    The Experience Paradox

    Here’s where I need to be honest about a tension in everything I’ve described. Spec-driven development with AI tools makes powerful capabilities more accessible. That’s genuinely liberating for someone like me—thirty years of context about how technology works, what the principles are, why security and testing and infrastructure considerations matter. I can describe what I want, and Claude Code builds it with all the discipline I’d expect from a senior developer. But what about someone without that foundation? The abstractions that make spec-driven development powerful also hide complexity. The OSI model layers don’t disappear just because you can describe your application in plain language. The security implications of architectural choices don’t vanish because an AI wrote the code.

    I worry about people coming into our industry who never learn the basics. Apps appeared on phones with such low friction that it’s easy to forget the extraordinary complexity that sits behind pushing an icon to a home screen. That complexity isn’t going away. Quantum computing might change the substrate eventually, but for now, it’s still ones and zeros on silicon, still data paths coursing through system boards, still the same fundamental principles I learned in the nineties. As leaders, we need to ensure those foundations keep getting taught. Not because everyone needs to configure firewalls manually, but because understanding the underpinnings is what lets you ask the right questions when something goes wrong. The answer isn’t to restrict access to powerful tools. It’s to pair accessibility with education, to create environments where people can experiment and learn, and to be honest that experience still matters—not as gatekeeping, but as context that improves outcomes.

    What This Means for Your Team Right Now

    If you’re a technology leader trying to figure out how to position yourself and your people for this shift, here’s where I’d start.

    Experiment with the tools. Pick something small and low-risk. If GitHub Speckit and Claude Code aren’t your preference, choose something else. The point is to understand what becomes possible when you can go from specification to working software in hours. That understanding will reshape how you think about team composition and process.

    Get religious about problem definition. Before you build anything, document what problem you’re solving. Make the business owners read it. Argue about the words. Get specific about terminology. This discipline will pay dividends regardless of what tools you use.

    Start breaking down silos. Look at how your teams are structured. Are you still doing sequential handoffs between analysis, development, testing, and operations? Start experimenting with multidisciplinary groups that swarm on problems.
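    As a toy illustration of the point about “bookings”, here is a minimal sketch of what radical specificity can look like in a domain model. The type names and fields are invented for this example, not drawn from any real system.

```python
from dataclasses import dataclass
from datetime import date
from typing import Union

# Instead of one vague "Booking" record, each meaning gets its own explicit type.

@dataclass
class LivestockArrivalBooking:
    """A mob of livestock booked to arrive at a processing plant."""
    plant_id: str
    mob_size: int
    arrival_date: date

@dataclass
class BobbyCalfPickupBooking:
    """A scheduled pickup of bobby calves from a dairy farm."""
    farm_id: str
    calf_count: int
    pickup_date: date

# Code that handles "a booking" must now say which kind it means.
Booking = Union[LivestockArrivalBooking, BobbyCalfPickupBooking]

def describe(booking: Booking) -> str:
    if isinstance(booking, LivestockArrivalBooking):
        return f"Mob of {booking.mob_size} arriving at plant {booking.plant_id} on {booking.arrival_date}"
    return f"Pickup of {booking.calf_count} bobby calves from farm {booking.farm_id} on {booking.pickup_date}"

print(describe(LivestockArrivalBooking("PLANT-01", 240, date(2025, 12, 1))))
```

    Once each meaning is its own explicit type, the ambiguity becomes a visible design decision the team can argue about, rather than an assumption buried in a generic record.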

    13 min
  4. 11/21/2025

    Writing Isn’t Thinking: What Three Years of AI Has Actually Taught Us

    This morning, I read yet another LinkedIn post proclaiming that “writing is critical reasoning’s most important tool.” The post went on to warn that AI would atrophy our brains, that we’d lose the ability to think deeply if we let machines write for us. Almost three years ago to the day—November 30, 2022—ChatGPT launched and changed everything. And somehow, we’re still having the same tired debate: Team Human versus Team Robot, AI doom versus AI salvation. Your brain will rot, versus AI will save us all. But there’s a third position that nobody’s talking about. And it’s the one that actually matters.

    The Equation That Never Made Sense

    Let me be blunt: Writing well does not equal thinking well. They’re just not the same thing. You know who couldn’t write easily? Einstein. Tesla. My father, who spent his entire career as a panel beater and never once put pen to paper to solve a complex mechanical problem. He could reshape a car panel from raw steel, conceiving the entire process in his head and executing it flawlessly. Zero drawings. Zero written plans. Pure spatial reasoning and mechanical intelligence operating at the highest level. Tesla famously constructed entire designs in his mind before building them. No sketches. No notes. Just extraordinary thinking that bypassed writing entirely.

    But according to the “writing is thinking” crowd, these people weren’t engaging in critical reasoning? That’s complete horseshit, and there’s no logical way to validate that claim. The uncomfortable truth is this: When people say “writing is critical thinking,” what they’re really saying is “the only thinking that counts is the kind I can do.” It’s intellectual narcissism disguised as rigour.

    The Pattern We Keep Repeating

    Here’s what’s wild: We’ve had this exact panic before. Multiple times. Socrates argued that writing itself would weaken people’s memories. Scholars worried the Gutenberg printing press would flood the world with low-quality thinking. When calculators arrived, everyone panicked that people would forget how to do maths. Every single communication technology—from pen and paper to lithographic representation to the printing press to calculators to AI—has faced the same visceral resistance. And it’s always framed as concern about our thinking capacity. But what if it was never really about thinking at all?

    What’s Really at Stake

    Let me tell you what I think this is actually about: power, access, and control over who gets to participate in intellectual discourse. If AI levels the playing field for neurodivergent thinkers, for people who think brilliantly but write “differently,” for people who have revolutionary ideas but not traditional academic training—that’s threatening to existing power structures. Knowledge is power, right? That quote gets bandied around constantly. But here’s the question nobody asks: Is knowledge powerful when you hoard it, or when you share it? Because if you’re hoarding it, who exactly gets to see that power? The real anxiety isn’t about critical thinking. It’s about WHO gets to be considered an intellectual. And if writing is no longer the barrier to entry, the gatekeepers lose their gates.

    How I Actually Use AI (And Why It’s Not Lazy)

    Let me walk you through my actual process, because this is important. I start with stream of consciousness—whether typed or spoken, depending on my environment that day. It might be messy. It’s definitely not well-structured prose.
    But here’s what it IS: it’s me drawing on decades of experience, connecting concepts from yesterday or last week or ten f*****g years ago, reasoning through problems in real-time. Then I use an LLM to help structure that thinking. The AI translates my stream of consciousness into something more organised. And here’s the crucial part that the critics miss: I review everything. I go back and read what the AI produced and ask myself: Did it capture the essence of what I said? Did it use my words to articulate my message? Is this congruent with my actual thinking?

    If you blindly publish AI output without review, that’s not an AI problem. That’s not a critical reasoning problem. That’s just f*****g lazy. But that laziness isn’t about the tool—it’s about the person. And pretending the tool is the problem lets intellectually lazy people off the hook.

    I’m doing cognitive labour in three places:

    * Before/during the stream of consciousness - reasoning, connecting, drawing on experience
    * In the review process - checking congruence, catching gaps
    * In the iteration - refining, sharpening the argument

    The “writing is thinking” crowd only counts step 1 if you type it yourself. But I’m doing all three steps—just using different tools for different parts.

    The Communication Equation

    Here’s something else that matters: communication is always two-sided. There’s a sender and a receiver. If those two things are mismatched, the communication has failed. As a neurodivergent professional, I’ve been told my whole life that I communicate wrong. Too verbose. Too tangential. Not structured properly. But that’s framing it as MY failure, when really, it’s a mismatch between my natural communication style and conventional expectations.

    I’m phenomenal at speaking. I can hear arguments, assimilate information, and respond in ways that help teams succeed. Third chair on my school debating team for two years running. The thinking was always there. Getting it from my ADHD brain onto paper? That was the nightmare. Until AI changed everything. Now I can do what I do well—speak, connect concepts, tell stories—and use a tool to translate that into writing that has the same impact. Who’s winning here? I think I am.

    What People Are Still Getting Wrong

    Three years in, here’s what people still don’t understand:

    AI doesn’t solve everything. We’re talking about a specific subset of tools—LLMs using machine learning, neural networks, and technologies that have existed for decades but are now more accessible. Don’t lump everything into the “ChatGPT or Claude” bucket. That’s one tool in a massive AI superset.

    The “hallucination” framing is wrong. We’ve humanised these tools in ways that obscure what’s actually happening. A hallucination is a drug-induced state—how can a computer hallucinate? If you get an answer and don’t validate it, that’s not the AI’s fault. That’s you failing to apply critical reasoning. It’s assuming what you see is a true and fair representation of fact.

    The real skill is knowing how to use the tools. If you’re rolling into ChatGPT with zero context, no system instructions, no narrowing of scope—you might as well walk into a casino with $100 and hope to walk out with $110. Good luck. In Claude, I use Projects. I provide specific system instructions about how I want the AI to respond to me, what voice to use, and what to prioritise. That’s not cheating—that’s expertise. That’s understanding your tools. And you know what?
    People in my industry—technology—must be quivering in their boots, thinking anyone can ask an LLM a question and get an answer. But there’s still expertise required to understand that output, to validate it, to know when it’s right and when it’s complete b******t.

    The Call: What You Should Actually Do

    So here’s what I want from you:

    Try the damn tools. If you haven’t used them, don’t criticise them. Don’t throw rocks from a position of ignorance.

    Challenge your cognitive biases. We all have them—it’s scientifically proven. Don’t assume everything you’re seeing and hearing in your bubble is a true, fair, accurate representation of all things out there.

    Stop gatekeeping intellectual discourse. How do we get better as individuals, as communities, as societies? Through discourse. And if you’re unwilling to participate because you hold an opinion so strongly that you can’t see your own biases? F**k off.

    Validate, don’t blindly trust. If you use these tools, don’t just publish whatever comes out. Review it. Share it with someone else if you’re uncertain. But don’t poo-poo things just because you don’t understand them.

    Experiment and learn. Use projects. Set system instructions. Help shape the answers you get by narrowing the field of view. If you choose not to do that, you’re rolling the dice.

    The Third Position

    We don’t have to choose between “AI will save us” and “AI will rot our brains.” There’s a third position: AI as a translation tool, a way to make different forms of intelligence visible and valuable. For neurodivergent thinkers, for people whose brilliance doesn’t fit conventional moulds, for anyone who’s been told they don’t think “properly” because they don’t write “properly”—these tools are revolutionary. Not because they do our thinking for us, but because they translate our thinking into forms the world can finally see and hear.

    Three years after ChatGPT launched, we’re still asking the wrong questions. The question isn’t whether AI helps or hurts thinking. The question is: Are we ready to value different kinds of thinking? Are we ready to admit that writing was never the only path to rigorous thought? Because from Socrates to the printing press to ChatGPT, we’ve always feared the tools that level the playing field. And we’ve always dressed up that fear as concern for intellectual standards. It’s time to call that what it is: gatekeeping. And it’s time to move past it.

    Get full access to Digital Leadership Academy at darrylmunro.substack.com/subscribe
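    As a rough sketch of the “system instructions” point, here is what narrowing the field of view looks like via the Anthropic Python SDK. The instruction text and model string are illustrative assumptions, not an actual Project configuration.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative system instructions: set voice, scope, and guardrails up front
# rather than rolling in with zero context.
SYSTEM_INSTRUCTIONS = (
    "You help a New Zealand technology leader turn spoken stream-of-consciousness "
    "into structured prose. Use New Zealand English spelling. Preserve the "
    "author's voice and word choices; reorganise, don't rewrite. If a factual "
    "claim needs checking, flag it instead of asserting it."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name; substitute whatever is current
    max_tokens=1024,
    system=SYSTEM_INSTRUCTIONS,
    messages=[{"role": "user", "content": "Here's today's stream of consciousness: ..."}],
)
print(response.content[0].text)
```

    The specifics matter less than the principle: shape the answers you get by narrowing the field of view before you ask the question.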

    13 min
  5. 10/07/2024

    Strong Opinions, Loosely Held - David Reiss

    Keywords: technology, business transformation, digital transformation, human element, successful projects, business goals, process, complexity, innovation, change management

    Summary: In this conversation, Darryl Munro hosts David Reiss, a seasoned executive with over 30 years of experience in technology and business transformation. They discuss the importance of the human element in technology projects, the definition of successful technology initiatives, and the alignment of technology with business goals. David emphasises that technology should serve a purpose, primarily to improve lives and business outcomes. The conversation also explores the complexities of technology adoption, the necessity of process in transformation, and the evolving role of technology in business. David shares insights from his experiences, highlighting the significance of understanding the 'why' behind technology changes and the need for effective communication and collaboration in driving successful outcomes.

    Takeaways

    * Successful innovation is primarily about people, not just technology.
    * Technology should be viewed as a tool to improve lives and business outcomes.
    * Aligning technology initiatives with business goals is crucial for success.
    * Understanding the human element is critical to successful technology projects.
    * Change is often painful, and organisations must support their teams.
    * Effective communication is essential in technology adoption.
    * A solid business case is vital for technology investments.
    * Organisations should embrace experimentation in complex projects.
    * Learning and adapting quickly is crucial in technology transformation.
    * Building trust and understanding the 'why' behind changes foster successful outcomes.

    Sound Bites

    * "It's 99% about people behind those things."
    * "Technology is only there to do something."
    * "There's no such thing as a technology project."

    Chapters

    00:00 Introduction to David Reiss and His Journey
    06:20 The Human Element in Technology
    09:43 Defining Successful Technology Projects
    19:21 Aligning Technology with Business Goals
    27:00 The Evolution of Technology in Business
    30:52 Case Studies of Successful Technology Projects
    34:34 Delivering Business Value through Technology
    43:32 Balancing Innovation with Business Needs
    50:53 Embracing Co-Creation in Change Management
    56:39 Podcast Outro

    Get full access to Digital Leadership Academy at darrylmunro.substack.com/subscribe

    57 min
