First up, AI. You’d think that if you clean your training data, you control what the model learns. Nope. Researchers just showed that models can pass hidden traits to each other through data that looks completely harmless. Like sequences of numbers. No obvious bias, no keywords, nothing. And the new model still picks up the same behavior, even after you scrub the data. Think of it like this: the data looks clean, but the intent is still in there, baked into the statistical structure. So now we have AI systems where you can’t fully prove what they learned. You can test outputs, sure, but you can’t audit the mind. That’s a supply chain problem.

Next, LinkedIn. You know how you log in and think you’re just updating your resume? Turns out they may have been scanning your browser for extensions. Thousands of them. And extensions tell a story: health apps, finance tools, job search plugins, political stuff. That’s basically your personality in JSON form. LinkedIn says it’s for security. Maybe. But the bigger lesson is this: your browser is now part of your identity surface. Not just what you do online, but what you’ve installed.

Now let’s talk about your fridge. Yes, your fridge. Samsung pushed ads onto $2,000 refrigerators, after people had already bought them. So now your kitchen appliance is also an ad platform. You didn’t opt in, you just got updated. Same play with TVs: Walmart bought Vizio, and now some TVs require a Walmart account to work properly. Why? Because the TV isn’t the product. The data is. What you watch plus what you buy equals a very valuable profile.

On the software side, GitHub is exploding. We’re talking billions of commits. AI is helping people write code faster than ever. Sounds great, until you realize nobody is reviewing most of it. More code means more bugs, more vulnerabilities, more weird dependencies sneaking in. Speed went up. Assurance did not.

Then quantum computing. This one matters. We used to think breaking encryption would take millions of qubits.
Now researchers are saying maybe ten thousand. That’s a huge shift. Not tomorrow, but not “someday” either. And here’s the kicker: if someone is recording encrypted traffic today, they can just sit on it and decrypt it later, when the tech catches up. So anything that needs to stay secret for a long time is already at risk.

Zooming out, AI investment is basically all happening in the US. Like, almost all of it. That means one country is setting the pace, the standards, and the rules. Everyone else is along for the ride. That’s not just business, that’s geopolitics.

And finally, the courts are waking up. For years, platforms said, “We don’t control the content.” Now judges are saying, “Yeah, but you built the machine that decides what people see.” That’s a big shift. Algorithms are starting to look like products, with product liability.

So the theme this week is simple. The real risks aren’t obvious anymore. They’re hidden in training data, in your browser, in your appliances, in algorithms making decisions you don’t see. Which means you don’t just ask what the system does. You ask what’s underneath it.
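That “intent baked into the structure” idea from the AI story can be made concrete with a toy sketch. To be clear, this is not the researchers’ actual setup — the teacher, the multiples-of-7 bias, and the frequency-counting student below are all invented for illustration — but it shows the shape of the problem: a trait can ride along in data that is literally just numbers, with nothing to scrub.

```python
import random
from collections import Counter

random.seed(0)

# "Teacher" with a hidden trait: it quietly prefers multiples of 7.
def teacher_sample():
    while True:
        n = random.randrange(100)
        # Hidden bias: always keep multiples of 7, keep others only half the time.
        if n % 7 == 0 or random.random() < 0.5:
            return n

# Training data: just numbers. No keywords, no labels, nothing to filter out.
data = [teacher_sample() for _ in range(50_000)]

# "Student": fits the empirical distribution of the numbers it saw.
counts = Counter(data)
total = sum(counts.values())
student_prob = {n: counts[n] / total for n in range(100)}

# The hidden trait survives: multiples of 7 get well above the 15/100
# share they would have under an unbiased uniform source.
mult7_mass = sum(p for n, p in student_prob.items() if n % 7 == 0)
print(f"probability mass on multiples of 7: {mult7_mass:.2f}")
```

Nothing in the dataset looks suspicious to an auditor — it is a list of integers — yet any student that faithfully fits the distribution inherits the teacher’s preference. That is why output testing alone can’t certify what a model learned.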
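The “record now, decrypt later” risk from the quantum segment can be put in rough numbers. This framing is often called Mosca’s inequality: if data must stay secret for x years, migrating to quantum-safe crypto takes y years, and a capable quantum computer arrives in z years, you are already exposed whenever x + y > z. The specific year values below are made-up illustrations, not forecasts.

```python
# Mosca's inequality: data encrypted today is at risk if
#   shelf_life + migration_time > years_until_quantum_attack
# All concrete numbers below are illustrative assumptions, not predictions.

def already_at_risk(shelf_life_years: float,
                    migration_years: float,
                    years_until_attack: float) -> bool:
    """True if secrets encrypted today outlive the crypto protecting them."""
    return shelf_life_years + migration_years > years_until_attack

# A medical record that must stay private for 25 years, with a 5-year
# migration to post-quantum crypto, against a hypothetical capable
# machine 15 years out: harvested ciphertext gets decrypted in time.
print(already_at_risk(25, 5, 15))   # True

# A session token that only matters for a year is fine on the same timeline.
print(already_at_risk(1, 5, 15))    # False
```

The point of the inequality is that the risk is decided today, by the shelf life of the data, not on the day the quantum computer switches on.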