Digital After Dark

Digital After Dark

Two mates talking about all things Digital. Topics can cover Digital Analytics, Data, Transformation, Technology, Concepts and everything in between. If it is related to Digital, and we find it interesting, we are going to discuss it.

  1. DAD016-The Illusion of Uplift: A Deep Dive Into Noise in Digital Analytics

    1 DAY AGO

    DAD016-The Illusion of Uplift: A Deep Dive Into Noise in Digital Analytics

    Theme: Understanding Noise in Digital Analytics and Experimentation

    In this episode of Digital After Dark, we dig into a topic that quietly shapes every KPI, every experiment, and every “uplift” teams think they’ve achieved. Not tagging issues. Not tool errors. Noise: the natural wobble of stochastic processes that makes digital measurement behave nothing like the physical world. This conversation breaks down what noise actually is, why it’s so misunderstood, and how it silently distorts the numbers organisations rely on.

    What We Cover
    - Why digital measurement behaves differently from the physical world: you can measure a table three times and get the same result, but you will never measure a conversion rate three times and get the same number.
    - How stochastic noise affects KPIs and experiments: even when the underlying conversion rate doesn’t change, the outcomes do.
    - Why identical A/B tests can produce completely different results, and why teams often blame traffic, intent, or seasonality when nothing actually changed.
    - Why weekly KPIs swing up and down for no real reason: noise alone can create “wins” and “drops” that look meaningful but aren’t.
    - Why statistical significance isn’t the safety net people think it is: many real winners never cross the threshold, and some false winners do.
    - Why segmentation increases noise instead of reducing it: smaller samples mean bigger wobble.
    - How tools like Noise Explorer and Noise Check help teams see the true range of possible outcomes, and why visualising noise changes how you interpret data.
    What You’ll Learn
    Listeners will walk away with a clearer understanding of:
    - How much of their analytics volatility is actually noise
    - Why relying on a single observed result can be misleading
    - How to distinguish real change from random fluctuation
    - Why many “insights” are actually noise-driven illusions
    - How to make more grounded decisions in experimentation and KPI tracking
    - How to avoid chasing ghosts in the data

    If you work in analytics, CRO, experimentation, or digital strategy, this episode will fundamentally shift how you interpret your numbers.

    Some websites created by Andrea to help demonstrate this at work:
    - Noise Explorer (experience how noise affects the outcomes of an experiment and get a real grasp on what to expect): https://confidentstory.com/noise/
    - Noise Check (check whether an effect is real or due to noise; it replaces significance): https://confidentstory.com/noisecheck/
    - GTMsplit (run split tests for free with GTM): https://confidentstory.com/gtmsplit/
    - GTMsplit Documentation: https://confidentstory.com/docs/gtmsplit/
    - Whitepaper “The First-Exposure Contamination Problem”: https://confidentstory.com/docs/whitepapers/first-exposure-contamination-problem/
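    The “wobble” the episode describes is easy to see for yourself. Below is a minimal sketch in Python (not one of the tools linked above), simulating repeated measurements of a fixed true conversion rate plus an A/A test where both arms share the same true rate and any “uplift” is pure noise:

```python
import random

def observed_rate(true_rate, visitors, rng):
    """One measurement of a conversion rate whose true value never changes."""
    conversions = sum(1 for _ in range(visitors) if rng.random() < true_rate)
    return conversions / visitors

rng = random.Random(42)

# Measure the same 3% true rate three times; the observed
# rate wobbles around 0.03 rather than landing on it.
print([observed_rate(0.03, 2000, rng) for _ in range(3)])

# An "A/A" test: both arms share the same true 3% rate, yet the observed
# rates differ, producing an apparent (entirely noise-driven) uplift.
a = observed_rate(0.03, 2000, rng)
b = observed_rate(0.03, 2000, rng)
print(f"A: {a:.4f}  B: {b:.4f}  apparent uplift: {(b - a) / a:+.1%}")
```

    Re-run it with different seeds and the “apparent uplift” swings from positive to negative with no change to the underlying rate, which is exactly the illusion the episode unpacks.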

    1hr 7min
  2. AT014 - From Hits to Insights- Walter’s Journey to Adobe Analytics Champion

    2 MAR

    AT014 - From Hits to Insights- Walter’s Journey to Adobe Analytics Champion

    Episode 14 of Andrew Talks brings you a conversation packed with energy, honesty, and deep industry insight. This time, Andrew sits down with Walter: Personalisation & Analytics Specialist, storyteller at heart, and officially recognised Adobe Analytics Champion for 2025–2026. Together, they explore:
    - Walter’s unconventional path into digital analytics, sparked by the early days of hits and curiosity (“I remember them presenting this data around how people were engaging with websites… I gotta get into that space.”)
    - His leap into Adobe Analytics during the Omniture era: documentation chaos, deep-end learning, and the grind of early implementations (“The documentation was poor… we had to figure out how to use this thing.”)
    - The massive multi-app standardisation project that shaped his Adobe Analytics Champion application (“We wanted to redo all our analytics and get our data in a standardized way… boy oh boy, it is a nightmare.”)
    - What it really means to be an Adobe Analytics Champion: the learning, the community, the influence, and the responsibility (“Being a champion is not easy. You gotta put in the hours.”)
    - The future of Adobe Analytics (yes, it’s not dead) and why upcoming changes will surprise the industry (“Just watch out for the next couple of months… very big, impressive changes.”)

    Plus, the two friends share war stories, laugh about implementation nightmares, and dive into a lightning round that reveals Walter’s favourite KPIs, his most-hated metrics, and the biggest myth in analytics.

    Connect with Walter on LinkedIn: 👉 https://www.linkedin.com/in/walter-sibanda-a1616868/

    1hr 12min
  3. DAD014: Discussing Compliance, GTM/GA4 & Automation with Dan Truman

    5 JAN

    DAD014: Discussing Compliance, GTM/GA4 & Automation with Dan Truman

    This episode explores the state of digital analytics across consent and ethics, UK/EU regulatory shifts, implementation pitfalls in GA4 and GTM (client-side and server-side), what “good” governance looks like, misconceptions that hold businesses back, and how automation and AI will reshape MarTech. The discussion balances non-legal guidance (we are not lawyers; we discuss how we would guide our clients) and ethical nuance (cookie consent, PECR/ePrivacy, “ads-or-data” paywalls, consent mode ambiguity) with hands-on implementation guidance (trigger ordering, config tags, enhanced measurement pitfalls, server-side GTM on first-party endpoints). It closes with pragmatic views on analytics as a revenue function and near-term opportunities to productise repeatable work with automation and AI agents.

    Topics covered:
    - Rising public awareness of data collection and the messy reality of consent banners, paywalls, and browser-level signals, and how this varies by market.
    - Regulatory ambiguity (UK guidance, PECR/ePrivacy/DUAA interplay, “statistical analysis” carve-outs) and why organisations must define a clear legal/ethical risk posture, not just a technical stance.
    - Consent Mode, Google Signals, and the “German GTM ruling”: what actually triggered panic, why context matters, and how intent and downstream controls are key.
    - GA4/GTM mistakes: firing order and race conditions, multiple config tags, over-reliance on Enhanced Measurement, noisy form submits, undocumented “cute” renames, legacy tags, and excessive custom JS.
    - Server-side GTM: value, common missteps (not truly first-party endpoints, A-record/IP mismatches), and SaaS vs self-host trade-offs.
    - Analytics isn’t “plug-and-play”; “capture everything” promises just shift effort from engineering to data teams. Analytics is a revenue function that powers activation and models.
    - AI/automation: use agents and scripts to productise repeatable tasks, orchestrate tools, and summarise outputs rather than “let AI do it all.”
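    To make the consent-gating idea discussed here concrete, a hypothetical miniature in Python (this is not GTM or Consent Mode code, and the tag and signal names are invented for illustration): a tag only fires when the consent signal it depends on has been granted.

```python
# Hypothetical consent state, keyed by signal name (illustrative only).
CONSENT_STATE = {"analytics_storage": "granted", "ad_storage": "denied"}

# Hypothetical tags, each declaring the consent signal it depends on.
TAGS = [
    {"name": "GA4 config", "requires": "analytics_storage"},
    {"name": "Ads remarketing", "requires": "ad_storage"},
]

def tags_to_fire(consent, tags):
    """Return the names of tags whose required consent signal is granted."""
    return [t["name"] for t in tags if consent.get(t["requires"]) == "granted"]

print(tags_to_fire(CONSENT_STATE, TAGS))  # → ['GA4 config']
```

    The point of the sketch is the asymmetry: with analytics storage granted but ad storage denied, the analytics tag fires while the marketing tag is suppressed, which is the downstream-control behaviour the episode argues matters more than the banner itself.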

    1hr 12min
  4. DAD013: Our Presentations at MeasureCamp London: Part 2

    30/12/2025

    DAD013: Our Presentations at MeasureCamp London: Part 2

    In this episode of Digital After Dark, Matt and Andrew dive deep into data layer quality, JSON schema validation, and automated monitoring at scale. Using real-world examples from MeasureCamp and client implementations, they explore how teams can move from messy, inconsistent analytics data to a reliable, validated, and scalable data ecosystem. Andrew focuses on how JSON schemas bring structure and confidence to data layers, empowering developers, QA, and analysts to catch issues early. Matt then builds on that foundation by showing how to operationalise schema validation at scale using tools like ObservePoint, automation, and APIs, ensuring data quality doesn’t break when changes ripple across large sites or multiple domains. The conversation blends technical depth with practical workflows, developer empathy, and a healthy dose of humour (including an unforgettable “number two before number one” moment).

    Key Takeaways
    - Your data layer is the schema: the events are temporary, but the schema defines long-term data quality.
    - Validate early, not after launch: catching issues in dev saves exponential time later.
    - JSON Schema turns analytics specs into enforceable contracts, not just documentation.
    - Data quality deserves the same rigour as UX, even if the consequences appear later.
    - Manual testing doesn’t scale: automation and monitoring are essential for modern analytics stacks.
    - Schema validation builds confidence across teams, from developers to analysts to stakeholders.
    - Start small (MVP): even basic type validation delivers immediate value.
    - At scale, governance beats heroics: automation, APIs, and shared standards win every time.
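    The “start small (MVP)” takeaway can be sketched in a few lines of pure Python: a validator that checks only required fields and basic types, the kind of minimal contract a full JSON Schema toolchain would later formalise. The event shape and field names below are hypothetical, not the spec discussed in the episode.

```python
# Minimal "MVP" data layer validation: required keys and basic types only.
# The purchase-event shape here is invented for illustration.
REQUIRED_FIELDS = {
    "event": str,
    "transaction_id": str,
    "value": (int, float),
}

def validate_push(push):
    """Return a list of problems with a dataLayer push (empty means valid)."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in push:
            errors.append(f"missing required field: {field}")
        elif not isinstance(push[field], expected):
            errors.append(f"wrong type for {field}: {type(push[field]).__name__}")
    return errors

print(validate_push({"event": "purchase", "transaction_id": "T123", "value": 49.99}))  # → []
print(validate_push({"event": "purchase", "value": "49.99"}))
# → ['missing required field: transaction_id', 'wrong type for value: str']
```

    Even this crude check catches the two most common data layer failures, missing fields and stringified numbers, before launch rather than in the reports; swapping it for a real JSON Schema turns the same contract into shareable, enforceable documentation.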

    1hr 20min
  5. DAD-013: Measurecamp 2025 Review Part 1

    29/11/2025

    DAD-013: Measurecamp 2025 Review Part 1

    In this episode of Digital After Dark, Andrew and Matt dive into their experiences at MeasureCamp London 2025, an unconference powered entirely by the analytics community. They reflect on the energy that comes from 450+ analysts giving up a Saturday to learn, share, and collaborate, as well as reminisce on the evolution of MeasureCamp, from its early days of beanbags and pizza to today’s polished format with sponsors, merchandise and expertly managed logistics. They walk through memorable moments from the day: the mad “Black Friday dash” to claim session slots, the humour and chaos of handwritten talk cards, and the joy of reconnecting with industry friends. As both presenters and attendees, Matt and Andrew experienced the day from multiple angles, comparing notes on crowd sizes, room selection strategy, session clashes, and the sense of community that continues to define MeasureCamp. The episode then moves into a rapid-fire discussion of the sessions they each attended, ranging from Simo Ahava’s exploration of server-side tagging philosophy, to clever GA4 anomaly detection approaches, to compliance innovation at Condé Nast, to TV analytics “fiendish questions” from ITV. The hosts also tease upcoming podcast guests they met at the event and share key personal takeaways: new tools, new ideas, and renewed appreciation for the digital analytics community. The second part, in which Matt and I present to each other the sessions we delivered at MeasureCamp, is still being edited. Listen through to the closing to hear why there was a break in the middle.
    TOPICS COVERED:
    - What MeasureCamp is, why it matters, and how the London 2025 edition was organised
    - The “session board rush” and discussion of fairness, first-timers, and room allocation
    - Overall vibe of the day: community, conversations, introverts surviving social overload
    - Session breakdowns (list below)
    - Themes: schema validation, data quality, consent & compliance, server-side tooling

    SESSIONS ATTENDED
    - Unsolved Problems with Server-Side Tagging – Simo Ahava
    - GA4 Anomaly Detection and Data Quality Checks at Scale – Marco Tognon
    - Enhancing Condé Nast’s Compliance Methodology with Snowplow
    - Discussion on AEP, CJA and CJO with Max Lagace
    - When Data Talks but Nobody Listens: How to Present with Confidence – Parveen Downar
    - Three Fiendish Questions from Streaming & TV Analytics – Tom Milne (ITV)
    - Open-Source GTM Alternative – Alexander Kurzel (elbwalker.com)
    - Server-Side Circus: End-to-End Server-Side Setup in 15 Minutes – Annie Salo, CEO of Tracklution
    - GA4 Custom Events and Event Schema Documentation – Hawa Teladia
    - iFrames Are a Pain (But Don’t Have to Be) – Kaail Bigos

    1hr 10min
