Technology Explorations in Data

AI Code Reviews with CodeRabbit and Sourcery

Code reviews can be a pain: rushed approvals, “rubber stamping,” and bugs slipping into production. With AI-assisted coding accelerating how much code we produce, the review bottleneck only gets worse.

In this episode of Technology Explorations at Dataminded, Hannes De Smet (data & platform engineer) shows Jonny Daenen what he learned from exploring AI code reviewers and demos CodeRabbit and Sourcery on a real code change.

You’ll see:

  • How AI reviewers work as a pre-flight check (before opening a PR) and in-PR (via GitHub integration)
  • What kinds of issues they catch well (e.g., type mismatches, logic errors like list handling, division-by-zero)
  • Where they struggle (e.g., noisy “PII exposure” warnings without enough context)
  • UX differences in IDE integrations (Cursor/VS Code), “apply fix” vs “fix with AI,” and why context still matters
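To make the “issues they catch well” point concrete, here is a hypothetical Python snippet (not from the episode) showing the kind of division-by-zero and list-handling bugs these reviewers typically flag, alongside a guarded version an “apply fix” suggestion might produce:

```python
def average(values):
    # Bug an AI reviewer would flag: crashes with ZeroDivisionError
    # when `values` is an empty list.
    return sum(values) / len(values)


def average_safe(values):
    # Hardened version: coerce a single number into a list (a common
    # list-handling mix-up) and handle the empty case explicitly.
    if isinstance(values, (int, float)):
        values = [values]
    if not values:
        return 0.0
    return sum(values) / len(values)
```

A human reviewer can easily skim past edge cases like these; they are exactly the mechanical checks where AI reviewers shine.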

Links:
CodeRabbit: https://www.coderabbit.ai/
Sourcery: https://www.sourcery.ai/