Embedded AI - Intelligence at the Deep Edge

David Such

“Intelligence at the Deep Edge” is a podcast exploring the fascinating intersection of embedded systems and artificial intelligence. Dive into the world of cutting-edge technology as we discuss how AI is revolutionizing edge devices, enabling smarter sensors, efficient machine learning models, and real-time decision-making at the edge. Discover more on Embedded AI (https://medium.com/embedded-ai) — our companion publication where we detail the ideas, projects, and breakthroughs featured on the podcast. Help support the podcast - https://www.buzzsprout.com/2429696/support

  1. Why Humans and Robots must Dream

    APR 25

    Why Humans and Robots must Dream

    Put a blindfold on a sighted adult and the visual cortex starts being colonised by touch and hearing within forty-five minutes. Not weeks. Not days. Forty-five minutes. This is not a quirk of extreme cases. It is how the cortex works all the time. Every region of the brain is in continuous low-grade negotiation with its neighbours over territory, and the currency of that negotiation is activity. Stop using a subsystem and the neighbours move in, fast. This is the empirical foundation of a hypothesis from neuroscientist David Eagleman called the defensive activation theory: that REM sleep exists specifically to keep the visual cortex active during the eight hours each night when external input is unavailable, defending its territory against takeover by senses that never go offline.

    The theory itself is plausible but not yet directly proven. What is proven, and what matters more for engineers, is the underlying principle: a complex system with reconfigurable resources will silently lose capability in any subsystem that is not regularly exercised, even when nothing is actively trying to take that capability away. This is not catastrophic forgetting in the usual machine learning sense, where new training overwrites old parameters. It is something subtler and arguably more dangerous: passive territorial loss in any system that supports continuous adaptation. It shows up wherever capabilities go unexercised in long-running adaptive AI: rarely-routed experts in mixture-of-experts models, underused sensor pipelines in multi-modal robotics, capabilities that drift out of online reinforcement learning agents over months of deployment. Most current architectures treat their structure as fixed by design. Biology treats its structure as continuously contested.

    This episode looks at what defensive activation reveals about a missing primitive in modern AI architecture. Current systems have two fundamental modes, training and inference. Brains have at least three, and the third, the maintenance mode that operates during REM sleep, has no clean equivalent in the systems we build. We examine what this mode is doing structurally, why generative replay in continual learning is mechanistically closer to dreaming than the field usually acknowledges, and what a telemetry-driven maintenance subsystem might look like for embedded and edge AI. The closing argument is straightforward: if biology has been running this experiment for a few hundred million years and converged on internally driven activation as the way to maintain a plastic computational substrate, the absence of an equivalent mechanism in our architectures is not a neutral design choice. It is a gap.

    Support the show. If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!
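    The telemetry-driven maintenance idea lends itself to a sketch. The class below is a toy illustration under our own assumptions, not the episode's design: the names (`MaintenanceScheduler`, `record_inference`, `maintenance_pass`) are hypothetical, and `replay` is a stub where a real system would route generated or cached samples through the underused pathway.

```python
class MaintenanceScheduler:
    """Toy telemetry-driven maintenance mode: track how often each
    capability (expert, sensor pipeline, skill) is exercised during
    normal inference, then spend a fixed budget of internally driven
    "replay" activity on whichever capabilities have gone quiet."""

    def __init__(self, capabilities, budget=3):
        self.usage = {name: 0 for name in capabilities}
        self.budget = budget  # replay slots per maintenance pass

    def record_inference(self, capability):
        # Called from the normal inference path: pure telemetry,
        # no change to the inference result itself.
        self.usage[capability] += 1

    def maintenance_pass(self):
        # Analogue of REM sleep: exercise the least-used capabilities,
        # then reset the counters for the next wake cycle.
        quiet_first = sorted(self.usage, key=self.usage.get)
        selected = quiet_first[: self.budget]
        for name in selected:
            self.replay(name)
        for name in self.usage:
            self.usage[name] = 0
        return selected

    def replay(self, capability):
        # Stub: a real system would drive generated or stored samples
        # through this capability to keep its parameters exercised.
        pass
```

    The point of the sketch is the separation of concerns: inference records usage but never blocks on maintenance, and the maintenance pass runs on its own schedule, defending quiet capabilities exactly as defensive activation defends the visual cortex.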

    24 min
  2. Sovereign AI and the End of the Borderless Cloud

    APR 19

    Sovereign AI and the End of the Borderless Cloud

    The borderless cloud era is ending. In the second week of January 2026, four government decisions announced in rapid succession made that shift undeniable: the UK activated its £500 million Sovereign AI Unit, France committed €109 billion, the UAE consolidated a $40 billion data centre portfolio, and the Trump administration revised chip export rules to China. In this episode, we examine why AI infrastructure is now being treated as a strategic national utility on par with energy and water, and what that means for engineers and boards making architectural decisions today.

    We map the global sovereign AI landscape, roughly 130 national initiatives across more than 50 countries, and separate political rhetoric from engineering reality. We examine the distinction between regulatory sovereignty (the legal authority to govern AI) and compute sovereignty (the physical capacity to run it), and explain why most nations have the first without the second. We cover China's full-stack response through Huawei's Ascend and CloudMatrix programme, a deliberate trade-off of efficiency for independence that is becoming a template other regions may follow. We draw on the Clipper chip precedent from the 1990s to show why enforcement mechanisms embedded in silicon create durable market incentives that are difficult to reverse.

    20 min
  3. The High Interest of Leveraged AI Technical Debt

    APR 6

    The High Interest of Leveraged AI Technical Debt

    Developers feel 20% faster. They are measurably 19% slower. That 39-point gap between perception and reality is not a rounding error. It is the opening symptom of a productivity paradox now visible across every serious dataset on AI-assisted software development. This episode examines the mounting evidence that AI coding assistants are not accelerating delivery. They are mortgaging it. Review time has climbed 91%. Refactoring has collapsed by 60%. Code cloning has risen eightfold. Logic errors and security vulnerabilities are propagating at rates that outpace the review capacity of the teams shipping them. The output looks like speed. The system behaves like debt.

    We investigate the structural mechanism behind the paradox. AI tools raise the floor of code production while quietly lowering the ceiling of code comprehension. Developers ship code they did not write, cannot fully explain, and increasingly cannot debug. The skill most essential for validating machine-generated output is the exact skill that atrophies fastest when that output is trusted. Meanwhile, additive patterns (copy, paste, regenerate) displace the consolidative patterns (refactor, reuse, move) that historically kept codebases maintainable. The result is a fragmentation signature now measurable at industry scale.

    The interest rate on this debt is high because it compounds along three axes simultaneously: generation velocity, human comprehension decay, and architectural fragmentation. Traditional debt accrues linearly with deferred cleanup. AI-induced debt accrues superlinearly because the mechanism that produces it also erodes the capacity to repay it.

    We close with the emerging countermeasures. Spec-driven development. Automated governance guardrails. Architectural review gates positioned upstream of the commit, not downstream of the incident. The organizations treating AI velocity as a raw productivity input are accumulating liabilities they cannot yet see. The organizations treating it as a force multiplier that demands new governance infrastructure are the ones that will still be shipping in three years. The question is not whether AI makes coding faster. The question is what you are borrowing against to get that feeling of speed, and when the repayment comes due.
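    The linear-versus-superlinear distinction can be made concrete with a toy model. The functions below and the 5% weekly compounding rate are purely illustrative assumptions of ours, not figures from the episode; the shape of the curves, not the numbers, is the point.

```python
def traditional_debt(weeks, accrual=1.0):
    """Deferred cleanup accrues roughly linearly: each week adds a
    fixed unit of remediation work."""
    return accrual * weeks

def leveraged_ai_debt(weeks, accrual=1.0, compounding=1.05):
    """Toy superlinear model: each week the outstanding debt is
    amplified before new debt is added, because generation velocity
    rises while the comprehension needed to repay it decays."""
    debt = 0.0
    for _ in range(weeks):
        debt = debt * compounding + accrual
    return debt
```

    Under these made-up parameters the two models start out indistinguishable, which is exactly why the liability is hard to see early: the divergence only becomes obvious once the compounding term dominates, at which point the repayment capacity has already eroded.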

    25 min
  4. Pi and the Mirage of Patternicity

    APR 4

    Pi and the Mirage of Patternicity

    In April 2025, a claim began circulating online: pi is gradually increasing around the 7,237th decimal place. A math enthusiast in Cincinnati named April Simons had apparently flagged the anomaly. Prof F.O. Olsday, head of the Number Theory Group at Princeton, was quoted confirming it. Cosmologists were linking it to the accelerating expansion of the universe. The same algorithm, the same hardware, different results. A 4 becoming a 5. Persistent. Inexplicable.

    Except that "F.O. Olsday" is an anagram of "Fool's Day". And April Simons was posting from Cincinnati on the first of April. Pi has not changed. It cannot change. It is a fixed ratio determined by Euclidean geometry, and every one of its digits is as immutable as the definition that produces them. The 7,237th digit was a 4 before 2016, it was a 4 after 2016, and it will remain a 4 until the heat death of the universe and beyond.

    But here is what matters: the joke worked. It worked on humans, and it would work on machines. This episode examines why both biological and artificial neural networks are structurally vulnerable to detecting patterns in structurally empty data, a phenomenon with a clinical name: apophenia. We trace the evolutionary logic behind false positive pattern detection, from Skinner's superstitious pigeons to the fusiform face area that fires on toast. We then show how the same asymmetry, optimising for recall at the expense of precision, is recapitulated in trained neural networks through simplicity bias, the documented tendency of gradient-descent-trained models to latch onto whichever statistical regularity is easiest to extract, regardless of whether it reflects causal structure.
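    The premise is easy to verify for yourself: anyone can recompute pi's digits and confirm that the same mathematics yields the same result every time. Below is a minimal sketch using Python's standard `decimal` module and Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239); the function name is ours, and the trailing digits of the returned string are only trustworthy because of the guard digits carried beyond the requested precision.

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """Return pi as a string with n decimal places, computed with
    Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = n + 15  # working precision plus guard digits

    def arctan_recip(x):
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1))
        x = Decimal(x)
        power = 1 / x           # holds 1 / x^(2k+1), starting at k = 0
        total = power
        x2 = x * x
        threshold = Decimal(10) ** -(n + 10)
        k = 0
        while power > threshold:
            power /= x2
            k += 1
            term = power / (2 * k + 1)
            total += -term if k % 2 else term
        return total

    pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)
    return str(+pi)[: n + 2]    # "3." plus the first n decimals
```

    With enough places requested, `pi_digits(7240)[7238]` reads off the 7,237th decimal (the d-th decimal sits at string index d + 1), and it comes out the same on every run, on every machine.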

    21 min
  5. The Missing Clock: Why Intelligence Needs Time

    MAR 28

    The Missing Clock: Why Intelligence Needs Time

    Every living organism on Earth keeps time. Not metaphorically. Not approximately. From single-celled cyanobacteria running a three-protein molecular oscillator to the nested circadian hierarchies governing mammalian physiology, intrinsic timekeeping is not a feature of complex life. It is a prerequisite for life itself. Modern AI has no such clock. Transformers encode position, not time. Recurrent networks carry state but generate no rhythm. Reinforcement learning agents step forward on externally imposed ticks. Time in artificial intelligence is metadata, a column in the dataset, not a computational substrate shaping how information is processed moment to moment.

    This distinction is not academic. It determines what these systems can and cannot do. Biological clocks enable anticipation, not just reaction. They gate energy expenditure to predicted demand. They provide phase context that changes the meaning of identical inputs depending on when they arrive. They synchronize distributed systems without central authority. None of these capabilities emerge naturally from architectures that treat time as data rather than as structure.

    In this episode, we trace intrinsic timekeeping from its minimal biochemical origins through its multi-scale biological architecture and into the engineering consequences for AI at the edge. We examine why resource-constrained embedded systems, where power budgets, latency, and autonomy matter most, are precisely where the absence of an internal clock creates the sharpest design limitations. And we look at emerging approaches, from neural ordinary differential equations to coupled oscillator models, that begin to close the gap between processing sequences about time and processing in time.
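    The coupled-oscillator approach has a canonical minimal form, the Kuramoto model, in which decentralized units synchronize through pairwise phase differences alone, with no central clock issuing ticks. A sketch (function names are ours):

```python
import math

def kuramoto_step(phases, omegas, coupling, dt):
    """One Euler step of the Kuramoto model:
        d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    Each oscillator adjusts only to its phase differences with the
    others, so any synchrony that appears is emergent, not imposed."""
    n = len(phases)
    stepped = []
    for theta, omega in zip(phases, omegas):
        pull = sum(math.sin(other - theta) for other in phases) / n
        stepped.append(theta + (omega + coupling * pull) * dt)
    return stepped

def coherence(phases):
    """Kuramoto order parameter r in [0, 1]; r near 1 means the
    population is phase-locked."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)
```

    Start twenty oscillators at random phases with natural frequencies a few percent apart and coupling well above the critical value, and coherence climbs from near zero toward one within a few simulated seconds; set the coupling to zero and the phases drift apart again. That is the biological trick in miniature: distributed timekeeping without central authority.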

    21 min
  6. Will Robots Evolve into Crabs?

    MAR 26

    Will Robots Evolve into Crabs?

    Nature keeps reinventing the crab. At least five times, unrelated crustacean lineages have independently converged on the same compact, flat, modular body plan. Biologists call it carcinisation. Engineers should be paying attention. In this episode, we look at what the crab's repeated emergence tells us about the deep constraints that shape both biological and artificial systems. The crab body succeeds not because it is optimal in the abstract, but because its modularity creates a platform for downstream specialisation. The same logic applies to robotic morphology: compact, laterally stable, segment-based designs consistently outperform human-mimicking forms when the selection pressure is efficiency rather than aesthetics.

    We extend the analogy into AI architecture, where the Transformer has undergone its own carcinisation, colonising vision, audio, robotics, and protein folding from its origins in language modelling. That convergence reflects shared hardware and training constraints, not architectural perfection. And just as crab-like forms have been lost at least seven times in nature through decarcinisation, the emergence of hybrid architectures signals that the Transformer monoculture may be a local optimum, not a final destination.

    The core argument is that convergence signals constraint, modularity enables both convergence and escape, and the platform matters more than the form. Engineers chasing human mimicry or constant architectural reinvention may be solving the wrong problem. Nature solved it by building modular platforms and letting selection do the rest.

    19 min

