Embedded AI - Intelligence at the Deep Edge

David Such

“Intelligence at the Deep Edge” is a podcast exploring the fascinating intersection of embedded systems and artificial intelligence. Dive into the world of cutting-edge technology as we discuss how AI is revolutionizing edge devices, enabling smarter sensors, efficient machine learning models, and real-time decision-making at the edge. Discover more on Embedded AI (https://medium.com/embedded-ai) — our companion publication where we detail the ideas, projects, and breakthroughs featured on the podcast. Help support the podcast - https://www.buzzsprout.com/2429696/support

  1. LLM Coding Assistants: Scaling Limits and the AGI Thesis

    MAR 4

    LLM Coding Assistants: Scaling Limits and the AGI Thesis

    In this episode, we take a hard look at one of the most debated questions in artificial intelligence: do LLM-based coding assistants face structural scaling limits that prevent them from becoming a pathway to Artificial General Intelligence? Critics argue that transformer models suffer from quadratic attention costs, lack persistent memory, and process code as flat token streams rather than as structured systems. These concerns raise serious questions about whether today’s architectures can scale to handle large, real-world codebases or sustain long-horizon reasoning.

    But the story is more complex. We explore how engineering innovations such as retrieval-augmented generation, hybrid architectures, sub-quadratic attention methods, and agentic plan–execute–revise loops are actively mitigating many of these constraints. Research in mechanistic interpretability also challenges the “flat sequence” narrative, revealing that models form surprisingly rich internal representations of control flow, structure, and semantics.

    While human experts still hold an edge in deep architectural reasoning and large-scale system design, that gap is narrowing as test-time compute scaling and structured reasoning frameworks improve performance on real-world software benchmarks. Rather than describing permanent ceilings, this episode frames current limitations as active research frontiers. The central question is not whether scaling hits a wall, but whether architectural diversification and hybrid systems can carry LLM-based coding assistants beyond today’s boundaries and closer to general intelligence.

    If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!

    18 min
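    The quadratic attention cost discussed in this episode is easy to see in a toy sketch (a hypothetical illustration, not code from the show): in scaled dot-product attention, the score matrix holds one entry per pair of tokens, so its size, and the memory and compute it demands, grows as n² with sequence length n.

    ```python
    import numpy as np

    def naive_attention(Q, K, V):
        """Plain scaled dot-product attention.

        The intermediate `scores` matrix is (n, n): doubling the
        sequence length quadruples its size — the quadratic cost
        that sub-quadratic attention methods aim to avoid.
        """
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # (n, n) — the quadratic term
        # Numerically stable softmax over each row of scores.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V, scores.shape

    # Toy sizes chosen for illustration only.
    n, d = 512, 64
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(n, d))
    K = rng.normal(size=(n, d))
    V = rng.normal(size=(n, d))

    out, score_shape = naive_attention(Q, K, V)
    print(score_shape)  # (512, 512): the pairwise score matrix
    ```

    The output is still (n, d), but the hidden (n, n) score matrix is why long contexts, such as a large codebase fed in as one token stream, become expensive so quickly.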
  2. Why AI Makes Experts Worse

    FEB 28

    Why AI Makes Experts Worse

    Recent research points to a “leveling effect” in knowledge work. Generative AI dramatically improves the performance of novices by acting as a cognitive scaffold, raising productivity and output quality. Yet for elite professionals, the same tools can subtly degrade performance. Automation bias, overcorrection, skill atrophy, and the jagged, uneven reliability of AI systems create a situation where partial collaboration produces weaker results than either human or machine alone.

    We examine how this shift disrupts the traditional apprenticeship model. When entry-level tasks are automated, junior professionals lose the structured repetition that once built deep, intuitive mastery. At the same time, experts risk outsourcing the very cognitive processes that made them exceptional.

    The episode argues that the solution is not to reject AI, but to use it differently. Instead of treating AI as a co-author, experts should deploy it as an adversarial sparring partner to stress-test ideas, surface blind spots, and challenge assumptions. As the economy integrates AI more deeply, the value of human work moves away from procedural competence and toward strategic judgment, ethical reasoning, and contextual awareness. In this new landscape, the advantage belongs to those who can orchestrate intelligent systems without surrendering their own intellectual edge.

    17 min
