EDGE AI POD

EDGE AI FOUNDATION

Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed covering all things edge AI from the world's largest EDGE AI community. It features shows such as EDGE AI Talks and EDGE AI Blueprints, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics. Join us to stay informed and inspired!

  1. 1D AGO

    What happens when you use AI to optimize AI and make AI models run fast anywhere?

    Tired of choosing between performance and freedom? We sit down with Stefan Crossin, CEO and co-founder of YASP, to unpack how a hardware-aware AI compiler can speed up training, simplify deployment, and finally make model portability real. The story starts with a distributed team in Freiburg and Montreal and moves straight into the heart of the problem: most AI teams burn time on infrastructure and juggle separate stacks for training and inference, all while staying tethered to one dominant vendor's software ecosystem.

    Stefan lays out a different path. YASP converts models into a clean intermediate representation, plugs into the tools teams already use, and applies a closed-loop optimization system that learns the target hardware. Instead of forcing a new language or workflow, a few lines of integration unlock dynamic kernel generation, graph-level tuning, and one-click deployment to different chips, clouds, or edge devices (a sketch of the general intermediate-representation idea follows this entry). The result is a practical bridge between "write once" ideals and real-world performance, where being hardware-aware rather than hardware-bound delivers speed without lock-in.

    We also dive into the market dynamics behind portability. Incumbents protect moats; challengers need bridges. Cloud providers fear shorter runtimes but win when customers get more value per dollar and per watt. With credible benchmarks showing meaningful gains in training and inference, YASP is courting chip makers, cloud service providers, and end users through a focused beta, a clear roadmap to launch, and a business model that combines free access with subscription tiers.

    If you've been waiting for proof that AI can be both faster and freer across architectures, this conversation makes the case with clarity and detail. Enjoy the episode? Follow the show, share it with a colleague, and leave a quick review: what platform or accelerator would you target first with true portability? Send a text. Support the show. Learn more about the EDGE AI FOUNDATION: edgeaifoundation.org

    24 min
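
    A hedged aside for the curious: the portable intermediate-representation idea discussed in this episode can be illustrated with ONNX, a widely used open IR. This is a minimal sketch in Python, not YASP's actual toolchain or API; the toy model and file name are invented for illustration.

        # Illustrative only: ONNX stands in for the portable IR described above.
        import torch
        import torch.nn as nn

        # A tiny stand-in model; any trained nn.Module would export the same way.
        model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
        model.eval()

        example_input = torch.randn(1, 16)  # example tensor the exporter traces with

        # One call produces a hardware-neutral graph that downstream, target-aware
        # compilers can then optimize for a specific chip, cloud, or edge device.
        torch.onnx.export(model, example_input, "model.onnx", opset_version=17)
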
  2. FEB 11

    2026 and Beyond - The Edge AI Transformation

    What if the smartest part of AI isn't in the cloud at all, but right next to the sensor where data is born? We pull back the curtain on the rapid rise of edge AI and explain why speed, privacy, and resilience are pushing intelligence onto devices themselves. From self-driving safety and zero-lag user experiences to battery-friendly wearables, we map the forces reshaping how AI is built, deployed, and trusted.

    We start with the hard constraints: latency that breaks real-time systems, the explosion of data at the edge, and the ethical costs of giant data centers in energy, water, and noise. Then we dive into the hardware leap that makes on-device inference possible: neural processing units delivering 10–100x better efficiency per watt. You'll hear how a hybrid model emerges, where the cloud handles heavy training and oversight while tiny, optimized models make instant decisions on sensors, cameras, and controllers. Using our BLERP framework (bandwidth, latency, economics, reliability, privacy), we give a clear rubric for deciding when edge AI wins.

    From there, we walk through the full edge workflow: on-device pre-processing and redaction, cloud training with MLOps, aggressive model optimization via quantization and pruning, and robust field inference with confidence thresholds and human-in-the-loop fallbacks (a sketch of the quantization and fallback steps follows this entry). We spotlight the technologies driving the next wave: small language models enabling generative capability on constrained chips, agentic edge systems that act autonomously in warehouses and factories, and neuromorphic, event-driven designs ideal for always-on sensing. We also unpack orchestration at scale with Kubernetes variants and the compilers that unlock cross-chip portability.

    Across manufacturing, mobility, retail, agriculture, and the public sector, we connect real use cases to BLERP, showing how organizations cut bandwidth, reduce costs, protect privacy, and operate reliably offline. With 2026 flagged as a major inflection point for mainstream edge-enabled devices and billions of chipsets on the horizon, the opportunity is massive, and so are the security stakes. Join us to understand where AI will live next, how it will run, and what it will take to secure a planet of intelligent endpoints.

    If this deep dive sparked ideas, subscribe, share with a colleague, and leave a review to help others find the show. Send a text. Support the show. Learn more about the EDGE AI FOUNDATION: edgeaifoundation.org

    18 min
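
    A hedged companion to the workflow above: post-training dynamic quantization in PyTorch, followed by inference guarded by a confidence threshold and a human-in-the-loop fallback. The toy model, the 0.80 cutoff, and the fallback behavior are illustrative assumptions, not a reference implementation from the episode.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
        model.eval()

        # Dynamic quantization: Linear weights shrink to int8, activations stay float.
        quantized = torch.quantization.quantize_dynamic(
            model, {nn.Linear}, dtype=torch.qint8
        )

        CONFIDENCE_THRESHOLD = 0.80  # assumed, application-specific cutoff

        def classify(x: torch.Tensor):
            """Return a class label on-device, or None to defer to a human reviewer."""
            with torch.no_grad():
                probs = F.softmax(quantized(x), dim=-1)
            confidence, label = probs.max(dim=-1)
            if confidence.item() < CONFIDENCE_THRESHOLD:
                return None  # human-in-the-loop fallback path
            return label.item()

        print(classify(torch.randn(1, 64)))
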
  3. FEB 11

    Edge Computing Revolutionized: MemryX's New AI Accelerator

    Ready to revolutionize your approach to edge AI? Keith Kressin, a veteran with 13 years at Qualcomm before joining MemryX, shares a breakthrough technology that's transforming how AI operates in resource-constrained environments.

    MemryX has developed an architecture that defies conventional wisdom about AI acceleration. Unlike traditional systems dependent on memory buses and controllers, their solution features autonomous parallel cores with localized memory, eliminating bottlenecks and enabling linear scaling from small devices to powerful edge servers. The result? About 20 times better performance per watt than common alternatives like NVIDIA's Jetson platform, all packaged in a simple M.2 form factor that consumes just half a watt to two watts depending on workload.

    What truly sets MemryX apart is their software approach. While many AI accelerators require extensive model optimization, MemryX offers one-click compilation for over 4,000 models without modifications. This accessibility has opened doors across industries, from manufacturing defect detection to construction safety monitoring, medical devices to multi-camera surveillance systems. The technology proves particularly valuable for "brownfield" computing environments where legacy hardware needs AI capabilities without complete system redesigns.

    The company embodies efficiency at every level. While competitors have raised $250+ million in funding, MemryX has built their complete hardware and software stack with just $60 million. This resourcefulness extends to their community approach: they offer free software, extensive documentation, and support for educational initiatives including robotics camps and hackathons.

    Curious about bringing AI acceleration to your next project? Visit MemryX's developer hub for free resources and examples, or purchase their M.2 accelerator directly through Amazon. Whether you're upgrading decades-old industrial equipment or designing cutting-edge multi-camera systems, this plug-and-play solution might be exactly what you need. Send a text. Support the show. Learn more about the EDGE AI FOUNDATION: edgeaifoundation.org

    22 min
  4. FEB 3

    Atym and WASM are revolutionizing edge AI computing for resource-constrained devices

    Most conversations about edge computing gloss over the enormous challenge of actually deploying and managing software on constrained devices in the field. As Jason Shepherd, Atym's founder, puts it: "I've seen so many architecture diagrams with data lakes and cloud hubs, and then this tiny little box at the bottom labeled 'sensors and gateways' - which means you've never actually done this in the real world, because that stuff is some of the hardest part."

    Atym tackles this challenge head-on by bringing cloud principles to devices that traditionally could only run firmware. Their revolutionary approach uses WebAssembly to enable containerization on devices with as little as 256 kilobytes of memory, creating solutions thousands of times lighter than Docker containers (a sketch of the core WebAssembly idea follows this entry).

    Founded in 2023, Atym represents the natural evolution of edge computing. While previous solutions focused on extending cloud capabilities to Linux-based edge servers and gateways, Atym crosses what they call "the Linux barrier" to bring containerization to microcontroller-based devices. This fundamentally changes how embedded systems can be developed and maintained.

    The impact extends beyond technical elegance. By enabling containers on constrained devices, Atym bridges the skills gap between embedded engineers who understand hardware and firmware, and application developers who work with higher-level languages and AI. A machine learning engineer can now deploy models to microcontrollers without learning embedded C, while the embedded team maintains the core device functionality.

    This capability becomes increasingly crucial as edge AI proliferates and cybersecurity regulations tighten. Devices that once performed simple functions now need to run sophisticated intelligence that may come from third parties and require frequent updates - a scenario traditional firmware development approaches cannot efficiently support.

    Ready to revolutionize how you manage your edge devices? Explore how Atym's lightweight containerization could transform your edge deployment strategy. Send us a text. Support the show. Learn more about the EDGE AI FOUNDATION: edgeaifoundation.org

    25 min
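
    To make the containerization idea concrete, here is a minimal sketch of compiling and running a tiny WebAssembly module from a host program, using the wasmtime Python bindings. This is a desktop-class runtime used purely for illustration; Atym's microcontroller-side runtime and tooling are not shown or implied.

        from wasmtime import Store, Module, Instance

        # A tiny WebAssembly module in text format, exporting one function.
        WAT = """
        (module
          (func (export "add") (param i32 i32) (result i32)
            local.get 0
            local.get 1
            i32.add))
        """

        store = Store()
        module = Module(store.engine, WAT)      # compile once, deploy anywhere
        instance = Instance(store, module, [])  # instantiate inside a sandbox
        add = instance.exports(store)["add"]

        print(add(store, 2, 3))  # prints 5, computed inside the WASM sandbox
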
  5. JAN 27

    Honey, I Shrunk the LLMs: Edge-Deployed AI Agents

    The landscape of artificial intelligence is experiencing a profound transformation, with AI capabilities moving from distant cloud servers directly to edge devices where your data lives. This pivotal shift isn't just about running small models locally; it represents a fundamental reimagining of how we interact with AI systems.

    In this fascinating exploration, Dell Technologies' Aruna Kolluru takes us deep into the world of edge-deployed AI agents that can perceive their surroundings, generate language, plan actions, remember context, and use tools, all without requiring cloud connectivity. These aren't simple classification systems but fully autonomous digital partners capable of making complex decisions where your data is generated. Discover how miniaturized foundation models like Mistral and TinyLlama, combined with agentic frameworks and edge-native runtimes, have made this revolution possible (a sketch of on-device generation with a quantized small model follows this entry).

    Through compelling real-world examples, Aruna demonstrates how these systems are transforming industries today: autonomous factory agents detecting defects and triggering interventions, rural healthcare assistants providing offline medical guidance, disaster response drones generating situational awareness, and personalized retail advisors creating real-time offers for shoppers.

    The technical journey doesn't stop at deployment. We examine the sophisticated optimization techniques making these models edge-friendly, the memory systems enabling contextual awareness, and the planning frameworks orchestrating multi-step workflows. Importantly, we tackle the critical governance considerations for these autonomous systems, including encrypted storage, tool access control, and comprehensive audit logging.

    Whether you're a developer looking to build edge AI solutions, an enterprise decision-maker exploring AI deployment options, or simply curious about where AI is headed, this episode offers invaluable insights into a technology that's bringing intelligence directly to where it's needed most. Subscribe to our podcast and join the conversation about the future of AI at the edge! Send us a text. Support the show. Learn more about the EDGE AI FOUNDATION: edgeaifoundation.org

    42 min
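
    For readers who want to try on-device generation, here is a minimal, hedged sketch using llama-cpp-python to run a GGUF-quantized small model entirely locally. The model file name is a placeholder (any quantized small model, such as a TinyLlama build, would slot in), and the prompt is invented for illustration.

        from llama_cpp import Llama

        llm = Llama(
            model_path="tinyllama-1.1b-chat.Q4_K_M.gguf",  # placeholder local file
            n_ctx=2048,      # modest context window for constrained devices
            verbose=False,
        )

        # No network round trip: generation happens entirely on the edge device.
        result = llm(
            "Summarize the last three sensor anomalies in one sentence:",
            max_tokens=64,
        )
        print(result["choices"][0]["text"])
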
  6. JAN 20

    Ambient Scientific's Journey: From Personal Tragedy to Ultra-Low Power AI Innovation

    When personal tragedy strikes, some find a way to transform pain into purpose. Such is the remarkable story behind Ambient Scientific, where founder GP Singh's mission to prevent falls after losing a family member evolved into groundbreaking semiconductor technology enabling AI at the ultra-low power edge.

    The journey wasn't simple. Creating chips that could run sophisticated deep learning algorithms on tiny batteries proved more challenging than building data center processors. This demanded innovation at every level, from custom instruction sets and compilers to complete software stacks. What emerged wasn't just a single-purpose chip but a programmable platform with the versatility to support diverse applications while consuming a fraction of the power of conventional solutions.

    Most fascinating is what GP calls the "gravitational pull" toward edge computing. Applications initially deployed in the cloud inevitably migrate closer to where data originates, moving from data centers to on-premises, to desktops, to mobile devices, and ultimately to tiny wearables. This migration stems from fundamental business concerns: operating costs, data sovereignty, vendor lock-in, and the inherent distrust organizations have for cloud dependencies. The evidence? In hundreds of customer conversations, GP has yet to meet a single organization content with keeping their AI exclusively in the cloud.

    Ready to explore ultra-low power AI? Ambient Scientific offers development kits accessible to anyone familiar with embedded systems programming and Python-based deep learning. Join the revolution bringing intelligence to where data is created rather than shipping it off for remote processing. Your next innovation might be powered by a chip that sips power while delivering remarkable AI capabilities. Send us a text. Support the show. Learn more about the EDGE AI FOUNDATION: edgeaifoundation.org

    23 min
  7. JAN 13

    How EMASS is Revolutionizing Battery-Powered AI Applications

    Power efficiency has become the new currency in AI, and no company exemplifies this shift better than EMASS. Founded by Professor Mohamed Ali as a spinoff from his groundbreaking research at NTU Singapore, this innovative startup is revolutionizing edge AI with semiconductor technology that delivers unprecedented power efficiency for battery-constrained devices.

    The story begins in 2018, when Ali and his team set out to examine the entire computing stack from applications down to nanotechnology devices. Their research led to a remarkable breakthrough: a chip architecture that brings memory and compute components closer together, resulting in power efficiency 10-100 times better than competing solutions. Unlike other processors that claim low power consumption only during standby, EMASS's chip maintains ultra-low power usage while actively processing data, which is the true measure of efficiency for AI applications.

    Mark Gornson, CEO of EMASS's Semiconductor Division, brings 46 years of industry experience to the team, having worked with giants like Intel and ON Semiconductor. After seeing the benchmarks of EMASS's technology, he came out of retirement to help commercialize what he recognized as a game-changing innovation perfectly timed for the edge AI explosion.

    The applications are vast and growing. Drones can achieve dramatically longer flight times with lighter batteries. Wearable devices gain extended battery life without compromising functionality. Agricultural equipment benefits from real-time monitoring without frequent recharging. Industrial machinery can be equipped with predictive maintenance capabilities that identify subtle anomalies in vibration, temperature, or current draw before failures occur. Robotics systems gain critical safety features through microsecond decision-making capabilities.

    For developers, EMASS has prioritized accessibility by ensuring compatibility with familiar frameworks like TensorFlow and PyTorch. Their backend engine handles the translation to optimized binaries, eliminating the learning curve typically associated with specialized hardware (a sketch of this framework-in, binary-out flow follows this entry).

    Ready to experience this breakthrough technology? EMASS offers development kits for hands-on testing and even provides remote access to their hardware for preliminary evaluation. See them in person at upcoming industry events in Amsterdam and Taipei, where they'll showcase how their innovative approach is redefining what's possible with battery-powered intelligent devices. Join the edge AI revolution and discover how EMASS is making efficient intelligence accessible everywhere it matters. Send us a text. Support the show. Learn more about the EDGE AI FOUNDATION: edgeaifoundation.org

    23 min
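
    As a hedged illustration of that framework-in, binary-out flow, the sketch below converts a small Keras model into a compact TensorFlow Lite binary with default optimizations. TFLite is used here as a widely known analog; EMASS's proprietary backend engine is not shown or implied, and the toy anomaly-scoring model is an assumption.

        import tensorflow as tf

        # Toy model scoring anomalies from vibration, temperature, and current draw.
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(3,)),
            tf.keras.layers.Dense(16, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # anomaly score
        ])

        # Convert to a compact flat binary suitable for small, low-power targets.
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables quantization

        with open("anomaly_model.tflite", "wb") as f:
            f.write(converter.convert())
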

Ratings & Reviews

4 out of 5 · 2 Ratings
