Scrum Master Toolbox Podcast: Agile storytelling from the trenches

From Deterministic to AI-Driven—The New Paradigm of Software Development | Markus Hjort

In this BONUS episode, we dive deep into the emerging world of AI-assisted coding with Markus Hjort, CTO of Bitmagic. Markus shares his hands-on experience with what's being called "vibecoding" - a paradigm shift where developers work more like technical product owners, guiding AI agents to produce code while focusing on architecture, design patterns, and overall system quality. The conversation explores not just the tools, but the fundamental changes in how we approach software engineering as a team sport.

Defining Vibecoding: More Than Just Autocomplete

"I'm specifying the features by prompting, using different kinds of agentic tools. And the agent is producing the code. I will check how it works and glance at the code, but I'm a really technical product owner."

Vibecoding represents a spectrum of AI-assisted development approaches. Markus positions himself between pure "vibecoding" (where developers don't look at code at all) and traditional coding. He produces about 90% of his code using AI tools, but maintains technical oversight by reviewing architectural patterns and design decisions. The key difference from traditional autocomplete tools is the shift from deterministic programming languages to non-deterministic natural language prompting, which requires an entirely different way of thinking about software development.

The Paradigm Shift: When AI Changed Everything

"It's a different paradigm! Looking back, it started with autocomplete where Copilot could implement simple functions. But the real change came with agentic coding tools like Cursor and Claude Code."

Markus traces his journey through three distinct phases. First came GitHub Copilot's autocomplete features for simple functions - helpful but limited. Next, ChatGPT enabled discussing architectural problems and getting code suggestions for unfamiliar technologies. The breakthrough arrived with agentic tools like Cursor and Claude Code that can autonomously implement entire features. This progression mirrors the historical shift from assembly to high-level languages, but with a crucial difference: the move from deterministic to non-deterministic communication with machines.

Where Vibecoding Works Best: Knowing Your Risks

"I move between different levels as I go through different tasks. In areas like CSS styling where I'm not very professional, I trust the AI more. But in core architecture where quality matters most, I look more thoroughly."

Vibecoding effectiveness varies dramatically by context. Markus applies different levels of scrutiny based on his expertise and the criticality of the code. For frontend work and styling where he has less expertise, he relies more heavily on AI output and visual verification. For backend architecture and core system components, he maintains closer oversight. This risk-aware approach is essential for startup environments where developers must wear multiple hats. The beauty of this flexibility is that AI enables developers to contribute meaningfully across domains while maintaining appropriate caution in critical areas.

Teaching Your Tools: Making AI-Assisted Coding Work

"You first teach your tool to do the things you value. Setting system prompts with information about patterns you want, testing approaches you prefer, and integration methods you use."

Success with AI-assisted coding requires intentional configuration and practice. Key strategies include:

  • System prompts: Configure tools with your preferred patterns, testing approaches, and architectural decisions

  • Context management: Watch context length carefully; when the AI starts making mistakes, reset the conversation

  • Checkpoint discipline: Commit working code frequently to Git - at least every 30 minutes, ideally after every small working feature

  • Dual AI strategy: Use ChatGPT or Claude for architectural discussions, then bring those ideas to coding tools for implementation

  • Iteration limits: Stop and reassess after roughly 5 failed iterations rather than letting AI continue indefinitely

  • Small steps: Split features into minimal increments and commit each piece separately
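As an illustration of the system-prompt strategy above, most agentic tools read a project-level instruction file (Claude Code looks for a CLAUDE.md file, Cursor uses project rules; the specific conventions below are invented examples, not Markus's actual configuration):

```markdown
# Project instructions (illustrative example)

## Patterns
- Prefer small, pure functions; do not add new dependencies without asking.

## Testing
- Write a failing test first, then implement until it passes.
- Never stub a test with a bare `assert true`.

## Workflow
- Split features into minimal increments.
- After each increment passes its tests, stop so I can commit before continuing.
```

Because the file travels with the repository, every session starts with the same patterns, testing approach, and integration conventions instead of re-prompting them each time.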

In this segment we refer to the episode with Alan Cyment on AI-assisted coding and the "Pachinko coding" anti-pattern.

Team Dynamics: Bigger Chunks and Faster Coordination

"The speed changes a lot of things. If everything goes well, you can produce so much more stuff. So you have to have bigger tasks. Coordination changes - we need bigger chunks because of how much faster coding is."

AI-assisted coding fundamentally reshapes team workflows. The dramatic increase in coding speed means developers need larger, more substantial tasks to maintain flow and maximize productivity. Traditional approaches of splitting stories into tiny tasks become counterproductive when implementation speed increases 5-10x. This shift impacts planning, requiring teams to think in terms of complete features rather than granular technical tasks. The coordination challenge becomes managing handoffs and integration points when individuals can ship significant functionality in hours rather than days.

The Non-Deterministic Challenge: A New Grammar

"When you're moving from low-level language to higher-level language, they are still deterministic. But now with LLMs, it's not deterministic. This changes how we have to think about coding completely."

The shift to natural language prompting introduces fundamental uncertainty absent from traditional programming. Unlike the progression from assembly to C to Python - all deterministic - working with LLMs means accepting probabilistic outputs. This requires developers to adopt new mental models: thinking in terms of guidance rather than precise instructions, maintaining checkpoints for rollback, and developing intuition for when AI is "hallucinating" versus producing valid solutions. Some developers struggle with this loss of control, while others find liberation in focusing on what to build rather than how to build it.

Code Reviews and Testing: What Changes?

"With AI, I spend more time on the actual product doing exploratory testing. The AI is doing the coding, so I can focus on whether it works as intended rather than syntax and patterns."

Traditional code review loses relevance when AI generates syntactically correct, pattern-compliant code. The focus shifts to testing actual functionality and user experience. Markus emphasizes:

  • Manual exploratory testing becomes more important as developers can't rely on having written and understood every line

  • Test discipline is critical - AI can write tests that always pass (assert true), so verification is essential

  • Test-first approach helps ensure tests actually verify behavior rather than just existing

  • Periodic test validation: Randomly modify test outputs to verify they fail when they should

  • Loosening review processes to avoid bottlenecks when code generation accelerates dramatically
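To make the "tests that always pass" risk concrete, here is a minimal Python sketch; the apply_discount function and its tests are invented for illustration. The first test asserts nothing about behavior, while the second pins down an observable result and will fail if the implementation regresses.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function an AI agent might have implemented."""
    return round(price * (1 - percent / 100), 2)


def test_discount_placeholder():
    # An AI-generated "test" that always passes: it never calls
    # apply_discount, so it verifies nothing about the code.
    assert True


def test_discount_behavior():
    # A behavioral test: it pins down a concrete expected value,
    # so it fails if apply_discount regresses.
    assert apply_discount(100.0, 20) == 80.0
```

Temporarily changing the expected 80.0 to a wrong value and confirming the suite fails is a cheap way to apply the "randomly modify test outputs" advice above.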

Anti-Patterns and Pitfalls to Avoid

Several common mistakes emerge when developers start with AI-assisted coding:

  • Continuing too long: When AI makes 5+ iterations without progress, stop and reset rather than letting it spiral

  • Skipping commits: Without frequent Git checkpoints, recovery from AI mistakes becomes extremely difficult

  • Over-reliance without verification: Trusting AI-generated tests without confirming they actually test something meaningful

  • Ignoring context limits: Continuing to add context until the AI becomes confused and produces poor results

  • Maintaining traditional task sizes: Splitting work too granularly when AI enables completing larger chunks

  • Forgetting exploration: Reading about tools rather than experimenting hands-on with your own projects

The Future: Autonomous Agents and Automatic Testing

"I hope that these LLMs will become larger context windows and smarter. Tools like Replit are pushing boundaries - they can potentially do automatic testing and verification for you."

Markus sees rapid evolution toward more autonomous development agents. Current trends include:

  • Expanded context windows enabling AI to understand entire codebases without manual context curation

  • Automatic testing generation where AI not only writes code but also creates and runs comprehensive test suites

  • Self-verification loops where agents test their own work and iterate without human intervention

  • Design-to-implementation pipelines where UI mockups directly generate working code

  • Agentic tools that can break down complex features autonomously and implement them incrementally

The key insight: we're moving from "AI helps me code" to "AI codes while I guide and verify" - a fundamental shift in the developer's role from implementer to architect and quality assurance.

Getting Started: Experiment and Learn