Embedded AI - Intelligence at the Deep Edge

David Such

“Intelligence at the Deep Edge” is a podcast exploring the fascinating intersection of embedded systems and artificial intelligence. Dive into the world of cutting-edge technology as we discuss how AI is revolutionizing edge devices, enabling smarter sensors, efficient machine learning models, and real-time decision-making at the edge. Discover more on Embedded AI (https://medium.com/embedded-ai) — our companion publication where we detail the ideas, projects, and breakthroughs featured on the podcast. Help support the podcast - https://www.buzzsprout.com/2429696/support

  1. The Missing Clock: Why Intelligence Needs Time

    8 HR AGO

    The Missing Clock: Why Intelligence Needs Time

    Send us Fan Mail

    Every living organism on Earth keeps time. Not metaphorically. Not approximately. From single-celled cyanobacteria running a three-protein molecular oscillator to the nested circadian hierarchies governing mammalian physiology, intrinsic timekeeping is not a feature of complex life. It is a prerequisite for life itself.

    Modern AI has no such clock. Transformers encode position, not time. Recurrent networks carry state but generate no rhythm. Reinforcement learning agents step forward on externally imposed ticks. Time in artificial intelligence is metadata, a column in the dataset, not a computational substrate shaping how information is processed moment to moment.

    This distinction is not academic. It determines what these systems can and cannot do. Biological clocks enable anticipation, not just reaction. They gate energy expenditure to predicted demand. They provide phase context that changes the meaning of identical inputs depending on when they arrive. They synchronize distributed systems without central authority. None of these capabilities emerges naturally from architectures that treat time as data rather than as structure.

    In this episode, we trace intrinsic timekeeping from its minimal biochemical origins through its multi-scale biological architecture and into the engineering consequences for AI at the edge. We examine why resource-constrained embedded systems, where power budgets, latency, and autonomy matter most, are precisely where the absence of an internal clock creates the sharpest design limitations. And we look at emerging approaches, from neural ordinary differential equations to coupled oscillator models, that begin to close the gap between processing sequences about time and processing in time.

    Support the show

    If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!
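    The "coupled oscillator models" mentioned above can be illustrated with a minimal sketch. The episode does not name a specific model, so the classic Kuramoto model is used here purely as an example of the property being described: distributed oscillators synchronizing with no central authority.

    ```python
    # Hypothetical sketch: a Kuramoto coupled-oscillator model, chosen as
    # the canonical example; the episode does not specify which model it uses.
    import math

    def kuramoto_step(phases, natural_freqs, coupling, dt):
        """Advance each oscillator's phase by one forward-Euler step.

        Each oscillator drifts at its own natural frequency but is pulled
        toward the phases of the others, so the group phase-locks without
        any central clock coordinating it.
        """
        n = len(phases)
        updated = []
        for theta, omega in zip(phases, natural_freqs):
            # Mean sine of phase differences acts as the coupling force.
            pull = sum(math.sin(other - theta) for other in phases) / n
            updated.append(theta + dt * (omega + coupling * pull))
        return updated

    # Three oscillators with different natural frequencies lock over time.
    phases = [0.0, 1.0, 2.0]
    freqs = [1.0, 1.1, 0.9]
    for _ in range(2000):
        phases = kuramoto_step(phases, freqs, coupling=2.0, dt=0.01)
    spread = max(phases) - min(phases)  # small once the group has locked
    ```

    With `coupling=0.0` the same loop leaves the oscillators drifting apart at their own rates; with coupling switched on, the phase spread collapses to a small, stable offset, which is the synchronization-without-central-authority property the episode attributes to biological clocks.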

    21 min
  2. Will Robots Evolve into Crabs?

    2 DAYS AGO

    Will Robots Evolve into Crabs?

    Send us Fan Mail

    Nature keeps reinventing the crab. At least five times, unrelated crustacean lineages have independently converged on the same compact, flat, modular body plan. Biologists call it carcinisation. Engineers should be paying attention.

    In this episode, we look at what the crab's repeated emergence tells us about the deep constraints that shape both biological and artificial systems. The crab body succeeds not because it is optimal in the abstract, but because its modularity creates a platform for downstream specialisation. The same logic applies to robotic morphology: compact, laterally stable, segment-based designs consistently outperform human-mimicking forms when the selection pressure is efficiency rather than aesthetics.

    We extend the analogy into AI architecture, where the Transformer has undergone its own carcinisation, colonising vision, audio, robotics, and protein folding from its origins in language modelling. That convergence reflects shared hardware and training constraints, not architectural perfection. And just as crab-like forms have been lost at least seven times in nature through decarcinisation, the emergence of hybrid architectures signals that the Transformer monoculture may be a local optimum, not a final destination.

    The core argument is that convergence signals constraint, modularity enables both convergence and escape, and the platform matters more than the form. Engineers chasing human mimicry or constant architectural reinvention may be solving the wrong problem. Nature solved it by building modular platforms and letting selection do the rest.

    Support the show

    If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!

    19 min
  3. LLM Coding Assistants: Scaling Limits and the AGI Thesis

    4 MAR

    LLM Coding Assistants: Scaling Limits and the AGI Thesis

    Send us Fan Mail

    In this episode, we take a hard look at one of the most debated questions in artificial intelligence: do LLM-based coding assistants face structural scaling limits that prevent them from becoming a pathway to Artificial General Intelligence? Critics argue that transformer models suffer from quadratic attention costs, lack persistent memory, and process code as flat token streams rather than structured systems. These concerns raise serious questions about whether today’s architectures can scale to handle large, real-world codebases or sustain long-horizon reasoning.

    But the story is more complex. We explore how engineering innovations such as retrieval-augmented generation, hybrid architectures, sub-quadratic attention methods, and agentic plan–execute–revise loops are actively mitigating many of these constraints. Research in mechanistic interpretability also challenges the “flat sequence” narrative, revealing that models form surprisingly rich internal representations of control flow, structure, and semantics. While human experts still hold an edge on deep architectural reasoning and large-scale system design, that gap is shifting as test-time compute scaling and structured reasoning frameworks improve performance on real-world software benchmarks.

    Rather than describing permanent ceilings, this episode frames current limitations as active research frontiers. The central question is not whether scaling hits a wall, but whether architectural diversification and hybrid systems can carry LLM-based coding assistants beyond today’s boundaries and closer to general intelligence.

    Support the show

    If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!
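    The "quadratic attention cost" the critics point to can be made concrete with a back-of-the-envelope sketch. The two-bytes-per-score figure (fp16) below is an illustrative assumption, not a claim about any particular model discussed in the episode.

    ```python
    # Back-of-the-envelope sketch: full self-attention compares every token
    # with every other token, so one head's score matrix has n * n entries.
    # Assumption for illustration: 2 bytes per score (fp16).

    def score_matrix_bytes(n_tokens, bytes_per_score=2):
        """Memory for one head's full n x n attention score matrix."""
        return n_tokens * n_tokens * bytes_per_score

    small = score_matrix_bytes(4_096)   # 4k-token context
    large = score_matrix_bytes(8_192)   # 8k-token context
    growth = large / small              # doubling context quadruples cost
    ```

    This quadratic growth is what the sub-quadratic attention methods mentioned above aim to avoid, typically by approximating or sparsifying the full score matrix so cost grows closer to linearly with context length.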

    18 min
  4. Why AI Makes Experts Worse

    28 FEB

    Why AI Makes Experts Worse

    Send us Fan Mail

    Recent research points to a “leveling effect” in knowledge work. Generative AI dramatically improves the performance of novices by acting as a cognitive scaffold, raising productivity and output quality. Yet for elite professionals, the same tools can subtly degrade performance. Automation bias, overcorrection, skill atrophy, and the jagged, uneven reliability of AI systems create a situation where partial collaboration produces weaker results than either human or machine alone.

    We examine how this shift disrupts the traditional apprenticeship model. When entry-level tasks are automated, junior professionals lose the structured repetition that once built deep, intuitive mastery. At the same time, experts risk outsourcing the very cognitive processes that made them exceptional.

    The episode argues that the solution is not to reject AI, but to use it differently. Instead of treating AI as a co-author, experts should deploy it as an adversarial sparring partner to stress-test ideas, surface blind spots, and challenge assumptions. As the economy integrates AI more deeply, the value of human work moves away from procedural competence and toward strategic judgment, ethical reasoning, and contextual awareness. In this new landscape, the advantage belongs to those who can orchestrate intelligent systems without surrendering their own intellectual edge.

    Support the show

    If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!

    17 min
