The Eighth

Avraham Raskin

A future-facing podcast that explores how augmented reality will show up in our everyday lives, off-screen and all around us. Each episode features everyday people (not tech insiders) grappling with how this new layer of reality could reshape the way we shop, move, learn, or just get through the day. These conversations take abstract ideas and ground them in the practical, turning sci-fi into Saturday morning.

  1. 10/20/2025 · BONUS

    [BS] When Cameras Learn: The Rise of Video-Language Models

    A concise, investigative tour of how security video evolved from passive CCTV to intelligent, searchable footage powered by local AI and video-language models. It maps the technical lineage (motion sensing, smart detections, face and plate ID, the "AI Key," and scene-level VLM search) and explains why pattern discovery at scale is the next operational leap for site security and investigations. "Video that used to be passive now becomes a searchable narrative."

    🎧 Listen on Spotify, YouTube, Apple Podcasts
    🔗 More episodes → https://avrahamraskin.com/podcast

    TL;DR: Security cameras have graduated from passive recorders to active, searchable sensors. Video-language models (VLMs) and local LLM-like agents enable natural-language scene search and condensed pattern visualisations: powerful for investigations, but constrained today by compute and edge deployment. The next frontier is real-time, site-wide pattern detection running at the edge.

    Timestamps
    00:00 | Introduction and context
    00:23 | The evolution: CCTV → motion → smart detections
    01:56 | Face detection, license plates, and granular ID
    02:19 | The "AI Key": local LLM-style analytics (what it adds)
    03:35 | Video-language models: frame description → search
    05:03 | Practical investigative tools and scene search examples
    06:03 | Pattern discovery: BriefCam and condensed timelines
    07:40 | Limitations today: compute, edge, and the next step
    09:56 | Closing thoughts and what's next

    10 min
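The "frame description → search" step discussed in this episode can be pictured with a short sketch. This is my own illustrative example, not code from the episode: the VLM is stubbed out as a dict of precomputed captions, and similarity is a simple bag-of-words cosine; a real system would caption frames with a model and use learned embeddings.

```python
# Illustrative sketch of natural-language scene search over VLM frame captions.
# The captions below are hypothetical; a real pipeline would generate them
# per frame with a video-language model.
from collections import Counter
from math import sqrt

frame_captions = {
    "cam1_t0010": "a person in a red jacket walks toward the loading dock",
    "cam1_t0450": "a white van parks near the entrance and the driver exits",
    "cam2_t0300": "an empty corridor with no motion",
}

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def scene_search(query: str, captions: dict[str, str]) -> list[tuple[str, float]]:
    """Rank frames by similarity between the query and each frame's caption."""
    q = bag_of_words(query)
    scored = [(fid, cosine(q, bag_of_words(cap))) for fid, cap in captions.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

results = scene_search("white van near the entrance", frame_captions)
print(results[0][0])  # frame id whose caption best matches the query
```

The point of the sketch is the shape of the pipeline, not the scoring: once every frame has a text description, "searchable narrative" reduces to ordinary text retrieval over those descriptions.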
  2. 09/24/2025 · BONUS

    [BS] The Future of Security: From Reactive Cameras to Predictive Intelligence

    Most security systems still behave like they did 20 years ago: reactive, limited, and blind to the context hidden inside their own recordings. In this Brainstream, we explore why the real frontier in security isn't better alerts or higher-resolution cameras, but AI systems that can learn a site's patterns, behaviours, anomalies, and risks from months of recorded footage. This episode outlines the shift from "review after the incident" to "predict before it happens," and why the intelligence trapped inside our footage is the most valuable, unused asset in modern security.

    TL;DR: Security cameras shouldn't just replay the past; they should understand it. When indexed, analysed, and contextualised, months of footage can power predictive, site-specific intelligence far beyond traditional monitoring.

    🎧 Listen on Spotify, YouTube, Apple Podcasts
    🔗 More episodes → https://avrahamraskin.com/podcast

    Timestamps
    00:00 | Opening: Why talk about the future of security
    00:05 | Why this topic needs multiple videos
    00:08 | A new product direction after years in the field
    00:19 | The core problem: cameras are reactive
    00:26 | Footage as an investigative tool, not a live one
    00:34 | Tools like BriefCam and condensed investigations
    00:51 | The inevitability of deep pattern analysis
    01:17 | Rethinking what recorded footage really contains
    01:26 | On-site storage vs cloud motion clips
    01:48 | Why modern systems rarely store "everything"
    02:14 | The hidden value inside long-term footage
    02:27 | Thought experiment: downloading 6 months of footage into a guard
    03:06 | Scale: 25–100 cameras, months of data
    03:25 | What context a human misses vs what the data contains
    03:58 | Reviewing footage: hours, days, weeks
    04:25 | Pattern detection after the fact
    04:54 | The industry's stuck in reactive mode
    05:02 | Moving from reactive to predictive
    05:17 | Connecting dots before the incident
    05:24 | Trends, anomalies, and site-specific patterns
    05:34 | What good security guards actually do
    06:00 | Knowing who belongs and who doesn't
    06:13 | Cameras should be able to learn the same
    06:22 | Context → patterns → prediction
    06:34 | Generations of camera evolution
    07:00 | Smart detections: person, car, face, plate
    07:14 | More granular detection: clothing, colours, models
    07:36 | Natural-language retrieval: next-generation search
    07:56 | But still mostly reactive
    08:03 | True intelligence: learning the site itself
    08:12 | Threat assessment powered by context
    08:26 | The massive, untapped value in indexed footage
    08:51 | Behaviour understanding vs object detection
    09:04 | AI as a security operator/assistant
    09:14 | Cameras becoming proactive
    09:20 | Future episodes: alarms, sensors, monitoring
    09:33 | Industry progress & uneven advancement
    09:42 | Why pattern understanding changes everything
    09:57 | Closing: A new era is coming

    10 min
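The episode's "learn the site's patterns, then flag what doesn't fit" idea can be sketched in a few lines. This is my own minimal example, not from the episode: it assumes footage has already been reduced to hourly person-detection counts for one camera, learns a baseline from that history, and flags hours whose z-score deviates sharply from it. Real site-level systems would model far richer behaviour than counts.

```python
# Illustrative sketch: anomaly flagging against a baseline learned from
# long-term footage metadata. The counts below are hypothetical hourly
# "person detected" totals for a single camera.
from statistics import mean, pstdev

baseline_counts = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2]

def is_anomalous(count: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag a new hourly count whose z-score against history exceeds threshold."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

print(is_anomalous(3, baseline_counts))   # an hour consistent with the baseline
print(is_anomalous(25, baseline_counts))  # an hour far outside the learned pattern
```

Even this crude baseline captures the shift the episode describes: the decision is driven by what is normal for this site, not by a fixed vendor-set alert rule.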
