This week’s deep dive explores a powerful theme shaping the modern threat landscape: invisible signals. From the devices we wear and drive to the AI systems we increasingly rely on, our technology is constantly emitting data, sometimes to protect us, sometimes to expose us.

We begin with a new Android app called Nearby Glasses, which alerts users when smart glasses like Meta’s Ray-Bans are detected nearby via their Bluetooth manufacturer identifiers. It’s a citizen-built countermeasure to always-on wearable cameras, and it highlights the rising tension between convenience and consent in public spaces.

Next, we examine research showing that tire pressure monitoring systems (TPMS), mandatory in U.S. vehicles since 2007, broadcast unencrypted, persistent identifiers. Researchers captured millions of signals and demonstrated how vehicles can be passively tracked using inexpensive radio equipment. No hacking is required: poorly designed IoT architecture has turned cars into rolling beacons.

From physical signals to digital footprints: a new study reveals that AI can deanonymize social media users by correlating small details across platforms. What once required nation-state resources can now be done with commodity large language models, fundamentally challenging the concept of online anonymity.

We then dive into the “Truman Show” investment scam, a sophisticated fraud operation that uses AI-generated personas, fake group chats, fabricated media coverage, and sham trading apps to create a fully immersive illusion of legitimacy. Rather than stealing trust directly, scammers now manufacture entire digital realities in which trust feels inevitable.

AI agents themselves are also reshaping security assumptions. Modern assistants can access files, write code, and interact with online services using a user’s privileges.
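The privilege problem can be sketched in a few lines. The tool names and the `authorize` helper below are hypothetical, not from any real agent framework, but the pattern is the point: an explicit allowlist plus a default-deny rule for any tool that can move data out of the environment.

```python
# Hypothetical sketch: treating an AI assistant as a governed identity with
# an explicit tool allowlist, instead of letting it inherit the user's
# full privileges. All tool names here are invented for illustration.

# Tools that can exfiltrate data (outbound communication channels).
OUTBOUND_TOOLS = {"http_post", "send_email"}

def authorize(tool: str, allowed: set, allow_outbound: bool = False) -> bool:
    """Permit a tool call only if it is allowlisted, and deny outbound
    channels unless egress has been explicitly granted."""
    if tool not in allowed:
        return False
    if tool in OUTBOUND_TOOLS and not allow_outbound:
        return False
    return True

# An agent allowed to read files and send mail, with egress off by default:
agent_tools = {"read_file", "send_email"}
print(authorize("read_file", agent_tools))    # True  (allowlisted, local)
print(authorize("send_email", agent_tools))   # False (allowlisted but outbound)
print(authorize("delete_repo", agent_tools))  # False (not allowlisted)
```

Default-deny on egress matters because, as the next item shows, the content an agent reads cannot be trusted not to steer it.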
Researchers warn that prompt injection attacks, in which hidden malicious instructions are embedded in content, can manipulate these agents into leaking data or performing harmful actions. When AI combines sensitive access, untrusted input, and outbound communication, it becomes a new form of insider risk.

That risk was underscored by the OpenClaw vulnerability, which allowed malicious web pages to brute-force a local AI agent gateway and potentially hijack it. The lesson: “local” no longer means secure. Any system with elevated privileges must be treated as a governed identity.

On the defensive side, AI is accelerating security improvements. Anthropic used a large language model to analyze Firefox’s codebase, identifying over 100 flaws in two weeks, including 22 confirmed security bugs. AI is compressing months of review into days, but the same acceleration applies to attackers.

Finally, Operation Candy in Sweden demonstrates how digital evidence can unravel vast criminal networks. Two seized phones exposed an international drug and money laundering operation spanning multiple continents, proving that even small data points can collapse large hidden systems.

Zooming out, the pattern is clear: wearables broadcast presence, cars broadcast identity, AI strips away anonymity, scams construct synthetic realities, assistants act autonomously, and devices quietly record history. Signals are everywhere, visible and invisible, and AI is amplifying their impact. The question is no longer whether your technology emits signals. It’s who is listening, and whether they’re protecting you or profiling you.
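To make the TPMS “rolling beacon” point concrete, here is a minimal sketch (sensor IDs and receiver names are invented) of everything a passive listener has to do once a device broadcasts a persistent, unencrypted identifier: log sightings and group them by ID.

```python
# Minimal sketch of passive tracking via a persistent broadcast ID.
# No decryption, no hacking: just (sensor_id, time, receiver) tuples,
# grouped by ID, reconstruct a vehicle's movements. All values invented.
from collections import defaultdict

sightings = [
    ("3fa9c2", "09:02", "receiver-A"),
    ("7b11d0", "09:03", "receiver-A"),
    ("3fa9c2", "09:41", "receiver-B"),  # same sensor seen again, elsewhere
]

tracks = defaultdict(list)
for sensor_id, time, receiver in sightings:
    tracks[sensor_id].append((time, receiver))

print(tracks["3fa9c2"])  # [('09:02', 'receiver-A'), ('09:41', 'receiver-B')]
```

The defense is equally simple to state and hard to retrofit: rotate or encrypt the identifier so sightings can no longer be joined.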