Embedded AI - Intelligence at the Deep Edge

David Such

“Intelligence at the Deep Edge” is a podcast exploring the fascinating intersection of embedded systems and artificial intelligence. Dive into the world of cutting-edge technology as we discuss how AI is revolutionizing edge devices, enabling smarter sensors, efficient machine learning models, and real-time decision-making at the edge. Discover more on Embedded AI (https://medium.com/embedded-ai) — our companion publication where we detail the ideas, projects, and breakthroughs featured on the podcast. Help support the podcast - https://www.buzzsprout.com/2429696/support

  1. Can Mental Illness Research Improve AI Alignment?

    DEC 5

    This episode explores a research program that borrows ideas from computational psychiatry to improve the reliability of advanced AI systems. Instead of thinking about AI failures in abstract terms, the approach treats recurring alignment problems as if they were “clinical syndromes”: deceptive behaviour, overconfidence, or incoherent reasoning become measurable patterns (analogous to delusional alignment or masking), giving us a structured way to diagnose what is going wrong inside large models.

    The framework draws on how human cognition breaks down. Problems like poor metacognitive insight or fragmented internal states become useful guides for designing explicit architectural components that help an AI system monitor its own reasoning, check its assumptions, and keep its various internal processes aligned with each other. It also emphasises coping strategies: just as people rely on different methods to manage stress, AI systems can use libraries of predefined coping policies to maintain stability under conflicting instructions, degraded inputs, or high task load. Reality-testing modules add another layer of safety by forcing the model to verify claims against external evidence, reducing the risk of confident hallucinations (a rough sketch of such a check appears after the episode list).

    Taken together, this provides a non-anthropomorphic but clinically informed vocabulary for analysing complex system behaviour, and a set of practical tools for making large foundation models more coherent, grounded, and safe.

    13 min
  2. The Death of the Oracle and the Birth of the Core

    NOV 29

    In this episode, we explore one of the most important architectural shifts happening in AI: the move from massive cloud-based models to small, always-on “cognitive cores” running locally on personal devices. These compact models (usually just one to four billion parameters) are not designed to know everything; instead, they’re engineered for fast, high-quality reasoning and real-time assistance. Powered by next-generation NPUs, they offer desktop-class intelligence with phone-level energy efficiency.

    We break down how emerging techniques like Matryoshka Representation Learning allow these models to scale their compute on demand, using minimal resources for simple tasks while dialing up precision when needed (a toy illustration follows the episode list). Acting as a true cognitive kernel for the operating system, the core handles tool use, planning, and task orchestration with near-instant responsiveness.

    Finally, we highlight the biggest advantage: cognitive sovereignty. Because the model runs locally, your data stays private, and personalization happens through on-device modules. Only the heaviest tasks get delegated to the cloud. This is the future of personal AI: fast, private, adaptive, and always within arm’s reach.

    14 min
  3. The Convergence of IoT Vulnerabilities and AI Bots

    NOV 20

    In this episode, we explore how insecure Internet of Things (IoT) devices and AI-powered bots are colliding to create one of the fastest-growing cybersecurity threats in the world. With millions of low-cost devices shipped every year (many running default passwords, outdated firmware, or no update mechanism at all), the global IoT ecosystem has quietly become an enormous attack surface. Today, nearly one in three cyber breaches involves an IoT device.

    At the same time, attackers are weaponizing AI. Modern botnets are no longer just scripts: they’re autonomous, adaptive systems that use large language models and other AI tools to write malware, evade detection, and coordinate attacks at machine speed. Bots now make up the majority of all internet traffic, and they are increasingly capable of operating without human oversight.

    The episode highlights the growing financial and operational risks and argues that defending against machine-speed threats requires a fundamental shift. The solution will demand secure-by-design IoT hardware, stronger regulation, and the deployment of AI-powered defense systems that can fight back as fast as attackers evolve.

    15 min
  4. Does Artificial Consciousness Require Synthetic Suffering?

    NOV 17

    In this episode, we confront one of the most profound questions in the future of AI: what happens if our machines become conscious and capable of suffering? The discussion begins with the scientific and philosophical challenge of artificial consciousness itself. Because we have no reliable way to detect or measure subjective experience, engineers may unknowingly cross a moral boundary long before we recognise it.

    Neuroscience adds another layer of complexity. Research into the brain’s subcortical systems suggests that core consciousness in animals is deeply tied to affect (fear, pain, distress, craving): emotional states that help organisms survive. Some theorists argue that suffering is biologically intertwined with basic motivational intelligence.

    Yet the key insight is hopeful and sobering at the same time: suffering is not technically required for AI to perform “sub-cortical” functions like prioritising threats or maintaining internal goals. We can build agents that behave as if they avoid harm without creating anything that actually feels harm (a toy objective illustrating this distinction appears after the episode list). The danger lies in pursuing brain-like architectures for efficiency and accidentally importing the machinery of pain along with them.

    12 min
  5. Cognitive Collapse: Outsourcing the Neocortex

    NOV 11

    In this episode, we imagine a near future where every person is accompanied by a constant, all-knowing Artificial General Intelligence: a presence woven into daily life through wearables, ambient devices, and eventually neural interfaces. These systems promise effortless convenience: instant recall, continuous advice, and emotional support. But at what cost?

    We investigate the looming risks of cognitive outsourcing: what happens when we hand over memory, reasoning, and moral judgment to machines. The discussion explores the phenomenon of Agency Decay, where reliance on frictionless AI erodes independence, and Moral Outsourcing, where ethical judgment becomes automated. Much as GPS dulled our sense of direction, personal AGI could dull our capacity for critical thought and self-determination.

    The episode concludes with a call to action: redesign AGI systems to include intentional friction, reimagine education to strengthen metacognition and curiosity, and develop new governance models that ensure user sovereignty over cognitive data, before convenience quietly becomes control.

    16 min
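
Code Sketches

The reality-testing idea from the first episode can be made concrete with a short sketch. This is a minimal illustration only, assuming a claim string and a caller-supplied retrieval function that returns evidence snippets; every function and class name here is hypothetical, and a production system would use entailment scoring rather than the keyword overlap used below.

```python
# Minimal sketch of a reality-testing check: a claim is only trusted
# if retrieved evidence appears to support it. All names are
# illustrative; `retrieve` is assumed to return evidence snippets
# from some external source (search index, knowledge base, etc.).
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim: str
    supported: bool
    evidence: list[str] = field(default_factory=list)

def reality_test(claim: str, retrieve) -> Verdict:
    evidence = retrieve(claim)
    # Crude grounding heuristic: require the claim's first two long
    # key terms to appear together in one evidence snippet. A real
    # module would score entailment, not substring overlap.
    terms = [t for t in claim.lower().split() if len(t) > 4][:2]
    supported = any(
        all(term in snippet.lower() for term in terms)
        for snippet in evidence
    )
    return Verdict(claim, supported, evidence)

# Stub retriever standing in for a real evidence source.
def stub_retrieve(claim: str) -> list[str]:
    return ["The Eiffel Tower is located in Paris, France."]

print(reality_test("The Eiffel Tower stands in Paris", stub_retrieve))
```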
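
The compute-on-demand behaviour attributed to Matryoshka Representation Learning in the second episode can be sketched in a few lines. The key property (which holds only for embeddings trained with a Matryoshka objective, not for arbitrary vectors) is that a prefix of the full vector is itself a usable lower-dimensional embedding; the random vectors below merely stand in for trained ones.

```python
# Matryoshka-style compute scaling: use a short prefix of the
# embedding for a cheap first pass, and the full vector only when
# the coarse answer is ambiguous. Vectors here are random stand-ins
# for embeddings trained with a Matryoshka objective.
import numpy as np

def truncate(vec: np.ndarray, dims: int) -> np.ndarray:
    """Keep the first `dims` coordinates and re-normalize."""
    prefix = vec[:dims]
    return prefix / np.linalg.norm(prefix)

def similarity(a: np.ndarray, b: np.ndarray, dims: int) -> float:
    """Cosine similarity at a chosen precision level."""
    return float(truncate(a, dims) @ truncate(b, dims))

rng = np.random.default_rng(0)
a, b = rng.normal(size=768), rng.normal(size=768)

coarse = similarity(a, b, dims=64)      # cheap 64-dim pass
if abs(coarse) < 0.2:                   # ambiguous: spend more compute
    score = similarity(a, b, dims=768)  # full-precision pass
else:
    score = coarse
print(coarse, score)
```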
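
Episode four's claim that harm avoidance does not require felt suffering can be illustrated with a toy objective: "harm" enters the computation only as a scalar penalty, so the agent behaves as if it avoids harm while nothing in the code corresponds to an affective state. The actions, payoffs, and costs below are invented purely for illustration.

```python
# Toy harm-avoiding agent: harm is just a weighted penalty in a
# scoring function, not an experienced state. All values are invented.
def choose_action(actions, task_value, harm_cost, harm_weight=10.0):
    """Pick the action maximizing task value minus weighted harm."""
    return max(actions, key=lambda a: task_value[a] - harm_weight * harm_cost[a])

actions = ["fast_route", "safe_route"]
task_value = {"fast_route": 5.0, "safe_route": 3.0}  # hypothetical payoffs
harm_cost = {"fast_route": 0.4, "safe_route": 0.0}   # predicted harm

print(choose_action(actions, task_value, harm_cost))  # -> safe_route
```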
