Lock and Code

Malwarebytes

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.

  1. 4D AGO

    Did DOGE "breach" Americans' data? (feat. Sydney Saubestre)

    If you don’t know about the newly created US Department of Government Efficiency (DOGE), there’s a strong chance they already know about you. Created on January 20 by US President Donald Trump through Executive Order, DOGE’s broad mandate is “modernizing Federal technology and software to maximize governmental efficiency and productivity.” To fulfill its mission, though, DOGE has taken great interest in Americans’ data. On February 1, DOGE team members without the necessary security clearances accessed classified information belonging to the US Agency for International Development. On February 17, multiple outlets reported that DOGE sought access to IRS data that includes names, addresses, social security numbers, income, net worth, bank information for direct deposits, and bankruptcy history. The next day, the commissioner of the Social Security Administration stepped down after DOGE requested access to information stored there, too, which includes records of lifetime wages and earnings, social security and bank account numbers, the type and amount of benefits individuals received, citizenship status, and disability and medical information. And last month, one US resident filed a data breach notification report with his state’s Attorney General alleging that his data was breached by DOGE and the man behind it, Elon Musk. In speaking with the news outlet Data Breaches Dot Net, the man, Kevin Couture, said: “I filed the report with my state Attorney General against Elon Musk stating my privacy rights were violated as my Social Security Number, banking info was compromised by accessing government systems and downloading the info without my consent or knowledge. What other information did he gather on me or others? This is wrong and illegal. I have no idea who has my information now.” Today on the Lock and Code podcast with host David Ruiz, we speak with Sydney Saubestre, senior policy analyst at New America’s Open Technology Institute, about what data DOGE has accessed, why the government department is claiming it requires that access, and whether or not it is fair to call some of this access a “data breach.” “[DOGE] haven’t been able to articulate why they want access to some of these data files other than broad ‘waste, fraud, and abuse.’ That, ethically, to me, points to it being a data breach.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes...

    37 min
  2. APR 6

    Is your phone listening to you? (feat. Lena Cohen)

    It has probably happened to you before. You and a friend are talking—not texting, not DMing, not FaceTiming—but talking, physically face-to-face, about, say, an upcoming vacation, a new music festival, or a job offer you just got. And then, that same week, you start noticing some eerily specific ads. There’s the Instagram ad about carry-on luggage, the TikTok ad about earplugs, and the countless ads you encounter simply scrolling through the internet about laptop bags. And so you think, “Is my phone listening to me?” This question has been around for years and, today, it’s far from a conspiracy theory. Modern smartphones can and do listen to users for voice searches, smart assistant integration, and, obviously, phone calls. It’s not too outlandish to believe, then, that the microphones on smartphones could be used to listen to other conversations without users knowing about it. Recent news stories don’t help, either. In January, Apple agreed to pay $95 million to settle a lawsuit alleging that the company had eavesdropped on users’ conversations through its smart assistant Siri, and that it shared the recorded conversations with marketers for ad targeting. The lead plaintiff in the case specifically claimed that she and her daughter were recorded without their consent, which resulted in them receiving multiple ads for Air Jordans. In agreeing to pay the settlement, though, Apple denied any wrongdoing, with a spokesperson telling the BBC: “Siri data has never been used to build marketing profiles and it has never been sold to anyone for any purpose.” But statements like this have done little to ease public anxiety. Tech companies have been caught in multiple lies in the past, privacy invasions happen thousands of times a day, and ad targeting feels extreme precisely because it is. Where, then, does the truth lie? Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Staff Technologist Lena Cohen about the most mind-boggling forms of corporate surveillance—including an experimental ad-tracking technology that emitted ultrasonic sound waves—specific audience segments that marketing companies make when targeting people with ads, and, of course, whether our phones are really listening to us. “Companies are collecting so much information about us and in such covert ways that it really feels like they’re listening to us.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer...

    40 min
  3. MAR 23

    What Google Chrome knows about you, with Carey Parker

    Google Chrome is, by far, the most popular web browser in the world. According to several metrics, Chrome accounts for anywhere between 52% and 66% of the current global market share for web browser use. At that higher estimate, if the 5.5 billion internet users around the world were to open up a web browser right now, 3.6 billion of them would open Google Chrome. And because the browser is the most common portal to our daily universe of online activity—searching for answers to questions, looking up recipes, applying for jobs, posting on forums, accessing cloud applications, reading the news, comparing prices, recording Lock and Code, buying concert tickets, signing up for newsletters—the company that controls that browser likely knows a lot about its users. In the case of Google Chrome, that’s entirely true. Google Chrome knows the websites you visit, the searches you make (through Google), the links you click, and the device model you use, along with the version of Chrome you run. That may sound benign, but when collected over long periods of time, and when coupled with the mountains of data that other Google products collect about you, this wealth of data can paint a deeply intimate portrait of your life. Today, on the Lock and Code podcast with host David Ruiz, we speak with author, podcast host, and privacy advocate Carey Parker about what Google Chrome knows about you, why that data is sensitive, what “Incognito mode” really does, and what you can do in response. We also explain exactly why Google would want this data, and that’s to help it run as an ad company. “That’s what [Google is]. Full stop. Google is an ad company who just happens to make a web browser, and a search engine, and an email app, and a whole lot more than that.” Tune in today. You can also listen to "Firewalls Don't Stop Dragons," the podcast hosted by Carey Parker, here: https://firewallsdontstopdragons.com/ You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    50 min
  4. MAR 9

    How ads weirdly know your screen brightness, headphone jack use, and location, with Tim Shott

    Something’s not right in the world of location data. In January, a location data broker named Gravy Analytics was hacked, with the alleged cybercriminal behind the attack posting an enormous amount of data online as proof. Though relatively unknown to most of the public, Gravy Analytics is big in the world of location data collection, and, according to an enforcement action from the US Federal Trade Commission last year, the company claimed to “collect, process, and curate more than 17 billion signals from around a billion mobile devices daily.” Those many billions of signals, because of the hack, were now on display for security researchers, journalists, and curious onlookers to peruse, and when they did, they found something interesting. Listed amongst the breached location data were occasional references to thousands of popular mobile apps, including Tinder, Grindr, Candy Crush, My Fitness Pal, Tumblr, and more. The implication, though unproven, was obvious: The mobile apps were named alongside specific lines of breached data because those apps were the source of that breached data. And, considering how readily location data is traded directly from mobile apps to data brokers to advertisers, this wasn’t too unusual a suggestion. Today, nearly every free mobile app makes money through ads. But ad purchasing and selling online is far more sophisticated than it used to be for newspapers and television programs. While companies still want to place their ads in front of demographics they believe will have the highest chance of making a purchase—think wealth planning ads inside the Wall Street Journal or toy commercials during cartoons—most of the process now happens through pieces of software that can place bids at data “auctions.” In short, mobile apps sometimes collect data about their users, including their location, device type, and even battery level. The apps then bring that data to an advertising auction, and separate companies “bid” on the ability to send their ads to, say, iPhone users in a certain time zone or Android users who speak a certain language. This process happens every single day, countless times every hour, but in the case of the Gravy Analytics breach, the makers of some of the apps referenced in the data said that, one, they’d never heard of Gravy Analytics, and, two, no advertiser had the right to collect their users’ location data. In speaking to 404 Media, a representative from Tinder said: “We have no relationship with Gravy Analytics and have no evidence that this data was obtained from the Tinder app.” A representative for Grindr echoed the sentiment: “Grindr has never worked with or provided data to Gravy Analytics. We do not share data with data aggregators or brokers and have not shared geolocation with ad partners for many years.” And a representative for a Muslim prayer app, Muslim Pro, said much of the same: “Yes, we display ads through several ad networks to support the free version of the app. However, as mentioned above, we do not authorize these networks to collect location data of our users.” What all of this suggested was that some other mechanism was allowing for users of these apps to have their locations leaked and collected online. And to try to prove that, one independent researcher conducted an experiment: Could he find himself in his own potentially leaked data? Today, on the Lock and Code podcast with host David Ruiz, we speak with independent researcher Tim Shott about his investigation into leaked location data. In his experiment, Shott installed two mobile games that were referenced in the breach: an old game called Stack, and a more current game...
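    To make the auction mechanics described above concrete, here is a minimal, hypothetical sketch of the kind of bid request an app's ad SDK might broadcast to bidders. The field names loosely follow the public OpenRTB 2.x convention, but the app bundle, the values, and the battery extension are invented for illustration; the episode does not describe any specific ad network's schema.

    ```python
    # A minimal sketch (not any real network's schema) of an OpenRTB-style
    # bid request. Field names loosely follow the public OpenRTB 2.x spec;
    # the "battery_pct" extension is a hypothetical SDK field, not standard.
    import json

    bid_request = {
        "id": "auction-5521",
        "app": {"bundle": "com.example.stackgame"},  # hypothetical app bundle
        "device": {
            "os": "iOS",
            "model": "iPhone14,5",
            "language": "en",
            "geo": {"lat": 37.7749, "lon": -122.4194},  # GPS- or IP-derived
            "ext": {"battery_pct": 41},                 # assumed SDK extension
        },
        "user": {"id": "ifa-9f8e..."},  # resettable mobile advertising ID
    }

    # Every company that receives this request sees the location and device
    # fields whether or not it wins the auction -- one plausible route for a
    # broker an app has "never heard of" to end up holding its users' locations.
    print(json.dumps(bid_request, indent=2))
    ```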

    44 min
  5. FEB 23

    Surveillance pricing is "evil and sinister," explains Justin Kloczko

    Insurance pricing in America makes a lot of sense so long as you’re one of the insurance companies. Drivers are charged more for traveling long distances, having low credit, owning a two-seater instead of a four-seater, being on the receiving end of a car crash, and—increasingly—for any number of non-determinative data points that insurance companies use to assume higher risk. It’s a pricing model that most people find distasteful, but it’s also a pricing model that could become the norm if companies across the world begin implementing something called “surveillance pricing.” Surveillance pricing is the term used to describe companies charging people different prices for the exact same goods. That 50-inch TV could be $800 for one person and $700 for someone else, even though the same model was bought from the same retail location on the exact same day. Or, airline tickets could be more expensive because they were purchased from a more expensive device—like a Mac laptop—and the company selling the airline ticket has decided that people with pricier computers can afford pricier tickets. Surveillance pricing is only possible because companies can collect enormous arrays of data about their consumers and then use that data to charge individual prices. A test prep company was once caught charging customers more if they lived in a neighborhood with a higher concentration of Asians, and a retail company was caught charging customers more if they were looking at prices on the company’s app while physically located in a store’s parking lot. This matter of data privacy isn’t some invisible invasion online, and it isn’t some esoteric framework of ad targeting; this is you paying the most that a company believes you will pay, for everything you buy. And it’s happening right now. Today, on the Lock and Code podcast with host David Ruiz, we speak with Consumer Watchdog Tech Privacy Advocate Justin Kloczko about where surveillance pricing is happening, what data is being used to determine prices, and why the practice is so nefarious. “It’s not like we’re all walking into a Starbucks and we’re seeing 12 different prices for a venti mocha latte,” said Kloczko, who recently authored a report on the same subject. “If that were the case, it’d be mayhem. There’d be a revolution.” Instead, Kloczko said: “Because we’re all buried in our own devices—and this is really happening on e-commerce websites and online, on your iPad, on your phone—you’re kind of siloed in your own world, and companies can get away with this.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial...
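    As a toy sketch of the logic described above—prices nudged by device type and physical location—consider the following. The function name, multipliers, and signals are entirely invented for illustration; real systems reportedly rely on far more opaque scoring.

    ```python
    def quoted_price(base: float, device_os: str, in_parking_lot: bool) -> float:
        """Toy surveillance-pricing sketch; all multipliers are made up."""
        price = base
        if device_os == "macOS":   # pricier device, assumed higher willingness to pay
            price *= 1.15
        if in_parking_lot:         # shopper likely committed to buying right now
            price *= 1.10
        return round(price, 2)

    # Two people, same TV, same retailer, same day -- different prices.
    print(quoted_price(700.00, "macOS", True))      # 885.5
    print(quoted_price(700.00, "Windows", False))   # 700.0
    ```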

    28 min
  6. FEB 9

    A suicide reveals the lonely side of AI chatbots, with Courtney Brown

    In February 2024, a 14-year-old boy from Orlando, Florida, committed suicide after confessing his love to the one figure who absorbed nearly all of his time—an AI chatbot. For months, Sewell Setzer III had grown attached to an AI chatbot modeled after the famous “Game of Thrones” character Daenerys Targaryen. The Daenerys chatbot was not a licensed product; it had no relation to the franchise’s actors, writers, or producers, but none of that mattered, as, over time, Setzer came to entrust Daenerys with some of his most vulnerable emotions. “I think about killing myself sometimes,” Setzer wrote one day, and in response, Daenerys pushed back, asking Setzer, “Why the hell would you do something like that?” “So I can be free,” Setzer said. “Free from what?” “From the world. From myself.” “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.” On Setzer’s first reported reference to suicide, the AI chatbot pushed back, a guardrail against self-harm. But months later, when Setzer discussed suicide again, his words weren’t so clear. After reportedly telling Daenerys that he loved her and that he wanted to “come home,” the AI chatbot encouraged Setzer. “Please, come home to me as soon as possible, my love,” Daenerys wrote, to which Setzer responded, “What if I told you I could come home right now?” The chatbot’s final message to Setzer said “… please do, my sweet king.” Daenerys Targaryen was originally hosted on an AI-powered chatbot platform called Character.AI. The service reportedly boasts 20 million users—many of them young—who engage with fictional characters like Homer Simpson and Tony Soprano, along with historical figures, like Abraham Lincoln, Isaac Newton, and Anne Frank. There are also entirely fabricated scenarios and chatbots, such as the “Debate Champion” who will debate anyone on, for instance, why Star Wars is overrated, or the “Awkward Family Dinner” that users can drop into to experience a cringe-filled, entertaining night. But while these chatbots can certainly provide entertainment, Character.AI co-founder Noam Shazeer believes they can offer much more. “It’s going to be super, super helpful to a lot of people who are lonely or depressed.” Today, on the Lock and Code podcast with host David Ruiz, we speak again with youth social services leader Courtney Brown about how teens are using AI tools today, who to “blame” in situations of AI and self-harm, and whether these chatbots actually aid in dealing with loneliness, or if they further entrench it. “You are not actually growing as a person who knows how to interact with other people by interacting with these chatbots, because that’s not what they’re designed for. They’re designed to increase engagement. They want you to keep using them.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk...

    38 min
  7. JAN 26

    Three privacy rules for 2025

    It’s Data Privacy Week right now, and that means, for the most part, that you’re going to see a lot of well-intentioned but clumsy information online about how to protect your data privacy. You’ll see articles about iPhone settings. You’ll hear acronyms for varying state laws. And you’ll probably see ads for a variety of apps, plug-ins, and online tools that can be difficult to navigate. So much of Malwarebytes—from Malwarebytes Labs, to the Lock and Code podcast, to the engineers, lawyers, and staff across the company—works on data privacy, and we fault no advocate or technologist or policy expert earnestly trying to inform the public about the importance of data privacy. But, even with good intentions, we cannot ignore the reality of the situation: data breaches every day, broad disrespect of user data, and a lack of consequences for some of the worst offenders. To be truly effective against these forces, data privacy guidance has to encompass more than fiddling with device settings or making onerous legal requests to companies. That’s why, for Data Privacy Week this year, we’re offering three pieces of advice that center on behavior. These changes won’t stop some of the worst invasions against your privacy, but we hope they provide a new framework to understand what you actually get when you practice data privacy, which is control. You have control over who sees where you are and what inferences they make from that. You have control over whether you continue using products that don’t respect your data privacy. And you have control over whether a fast food app is worth giving up your location data for just a few measly coupons. Today, on the Lock and Code podcast, host David Ruiz explores his three rules for data privacy in 2025. In short, he recommends: Less location sharing—only when you want it, only from those you trust, and never in the background, 24/7, for your apps. More accountability—if companies can’t respect your data, respect yourself by dropping their products. No more data deals—that fast-food app offers more than just $4 off a combo meal; it creates a pipeline into your behavioral data. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    38 min
  8. JAN 12

    The new rules for AI and encrypted messaging, with Mallory Knodel

    The era of artificial intelligence everything is here, and with it come everyday surprises about exactly where the next AI tools might pop up. There are major corporations pushing customer support functions onto AI chatbots, Big Tech platforms offering AI image generation for social media posts, and even Google now includes AI-powered overviews in everyday searches by default. The next gold rush, it seems, is in AI, and for a group of technical and legal researchers at New York University and Cornell University, that could be a major problem. But to understand their concerns, there’s some explanation needed first, and it starts with Apple’s own plans for AI. Last October, Apple released a service it calls Apple Intelligence (“AI,” get it?), which provides the latest iPhones, iPads, and Mac computers with AI-powered writing tools, image generators, proofreading, and more. One notable feature in Apple Intelligence is Apple’s “notification summaries.” With Apple Intelligence, users can receive summarized versions of a day’s worth of notifications from their apps. That could be useful for an onslaught of breaking news notifications, or for an old college group thread that won’t shut up. The summaries themselves are hit-or-miss with users—one iPhone customer learned of his own breakup from an Apple Intelligence summary that said: “No longer in a relationship; wants belongings from the apartment.” What’s more interesting about the summaries, though, is how they interact with Apple’s messaging and text app, Messages. Messages is what is called an “end-to-end encrypted” messaging app. That means that only a message’s sender and its recipient can read the message itself. Even Apple, which moves the message along from one iPhone to another, cannot read the message. But if Apple cannot read the messages sent on its own Messages app, then how is Apple Intelligence able to summarize them for users? That’s one of the questions that Mallory Knodel and her team at New York University and Cornell University tried to answer with a new paper on the compatibility between AI tools and end-to-end encrypted messaging apps. Make no mistake, this research isn’t into whether AI is “breaking” encryption by doing impressive computations at never-before-observed speeds. Instead, it’s about whether or not the promise of end-to-end encryption—of confidentiality—can be upheld when the messages sent through that promise can be analyzed by separate AI tools. And while the question may sound abstract, it’s far from being so. Already, AI bots can enter digital Zoom meetings to take notes. What happens if Zoom permits those same AI chatbots to enter meetings that users have chosen to be end-to-end encrypted? Is the chatbot another party to that conversation, and if so, what is the impact? Today, on the Lock and Code podcast with host David Ruiz, we speak with lead author and encryption expert Mallory Knodel on whether AI assistants can be compatible with end-to-end encrypted messaging apps, what motivations could sway current privacy champions into chasing AI development instead, and why these two technologies cannot co-exist in certain implementations. “An encrypted messaging app, at its essence is encryption, and you can’t trade that away—the privacy or the confidentiality guarantees—for something else like AI if it’s fundamentally incompatible with those features.” Tune in today. You can also find us on Apple Podcasts, a...
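    As a minimal sketch of the end-to-end property at stake—using the open-source PyNaCl library rather than anything Apple-specific, since Messages' actual protocol is not shown here—note that a relay in the middle only ever handles ciphertext:

    ```python
    # pip install pynacl -- illustrative only; iMessage uses its own protocol.
    from nacl.public import PrivateKey, Box

    alice, bob = PrivateKey.generate(), PrivateKey.generate()

    # Alice encrypts to Bob's public key. A server relaying this blob
    # (the "Apple" in the middle) sees only ciphertext, never the message.
    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at 6?")

    # Only Bob's private key recovers the plaintext at the endpoint.
    assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at 6?"
    ```

    This is precisely the tension the paper examines: any AI summary of such messages must either run on the endpoint itself or be handed the decrypted plaintext, which strains the confidentiality guarantee the sketch demonstrates.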

    47 min
4.7 out of 5 (39 Ratings)

