Lock and Code

Malwarebytes

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.

  1. February 23

    Surveillance pricing is "evil and sinister," explains Justin Kloczko

    Insurance pricing in America makes a lot of sense so long as you’re one of the insurance companies. Drivers are charged more for traveling long distances, having low credit, owning a two-seater instead of a four-seater, being on the receiving end of a car crash, and—increasingly—for any number of non-determinative data points that insurance companies use to assume higher risk. It’s a pricing model that most people find distasteful, but it’s also a pricing model that could become the norm if companies across the world begin implementing something called “surveillance pricing.” Surveillance pricing is the term used to describe companies charging people different prices for the exact same goods. That 50-inch TV could be $800 for one person and $700 for someone else, even though the same model was bought from the same retail location on the exact same day. Or, airline tickets could be more expensive because they were purchased from a more expensive device—like a Mac laptop—and the company selling the airline ticket has decided that people with pricier computers can afford pricier tickets. Surveillance pricing is only possible because companies can collect enormous arrays of data about their consumers and then use that data to charge individual prices. A test prep company was once caught charging customers more if they lived in a neighborhood with a higher concentration of Asians, and a retail company was caught charging customers more if they were looking at prices on the company’s app while physically located in a store’s parking lot. This matter of data privacy isn’t some invisible invasion online, and it isn’t some esoteric framework of ad targeting. This is you paying the most that a company believes you will pay, for everything you buy. And it’s happening right now.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Consumer Watchdog Tech Privacy Advocate Justin Kloczko about where surveillance pricing is happening, what data is being used to determine prices, and why the practice is so nefarious. “It’s not like we’re all walking into a Starbucks and we’re seeing 12 different prices for a venti mocha latte,” said Kloczko, who recently authored a report on the same subject. “If that were the case, it’d be mayhem. There’d be a revolution.” Instead, Kloczko said: “Because we’re all buried in our own devices—and this is really happening on e-commerce websites and online, on your iPad, on your phone—you’re kind of siloed in your own world, and companies can get away with this.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    28 minutes
  2. February 9

    A suicide reveals the lonely side of AI chatbots, with Courtney Brown

    In February 2024, a 14-year-old boy from Orlando, Florida, committed suicide after confessing his love to the one figure who absorbed nearly all of his time—an AI chatbot. For months, Sewell Setzer III had grown attached to an AI chatbot modeled after the famous “Game of Thrones” character Daenerys Targaryen. The Daenerys chatbot was not a licensed product; it had no relation to the franchise’s actors, writers, or producers. But none of that mattered, as, over time, Setzer came to entrust Daenerys with some of his most vulnerable emotions. “I think about killing myself sometimes,” Setzer wrote one day, and in response, Daenerys pushed back, asking, “Why the hell would you do something like that?” “So I can be free,” Setzer said. “Free from what?” “From the world. From myself.” “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.” On Setzer’s first reported reference to suicide, the AI chatbot pushed back, a guardrail against self-harm. But months later, Setzer discussed suicide again, and this time, his words weren’t so clear. After reportedly telling Daenerys that he loved her and that he wanted to “come home,” the AI chatbot encouraged Setzer. “Please, come home to me as soon as possible, my love,” Daenerys wrote, to which Setzer responded, “What if I told you I could come home right now?” The chatbot’s final message to Setzer said, “… please do, my sweet king.” Daenerys Targaryen was originally hosted on an AI-powered chatbot platform called Character.AI. The service reportedly boasts 20 million users—many of them young—who engage with fictional characters like Homer Simpson and Tony Soprano, along with historical figures, like Abraham Lincoln, Isaac Newton, and Anne Frank.
There are also entirely fabricated scenarios and chatbots, such as the “Debate Champion” who will debate anyone on, for instance, why Star Wars is overrated, or the “Awkward Family Dinner” that users can drop into to experience a cringe-filled, entertaining night. But while these chatbots can certainly provide entertainment, Character.AI co-founder Noam Shazeer believes they can offer much more. “It’s going to be super, super helpful to a lot of people who are lonely or depressed.” Today, on the Lock and Code podcast with host David Ruiz, we speak again with youth social services leader Courtney Brown about how teens are using AI tools today, who to “blame” in situations of AI and self-harm, and whether these chatbots actually aid in dealing with loneliness, or if they further entrench it. “You are not actually growing as a person who knows how to interact with other people by interacting with these chatbots because that’s not what they’re designed for. They’re designed to increase engagement. They want you to keep using them.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

    38 minutes
  3. January 26

    Three privacy rules for 2025

    It’s Data Privacy Week right now, and that means, for the most part, that you’re going to see a lot of well-intentioned but clumsy information online about how to protect your data privacy. You’ll see articles about iPhone settings. You’ll hear acronyms for varying state laws. And you’ll probably see ads for a variety of apps, plug-ins, and online tools that can be difficult to navigate. So much of Malwarebytes—from Malwarebytes Labs, to the Lock and Code podcast, to the engineers, lawyers, and staff at large—works on data privacy, and we fault no advocate or technologist or policy expert earnestly trying to inform the public about the importance of data privacy. But, even with good intentions, we cannot ignore the reality of the situation. Data breaches happen every day, user data is broadly disrespected, and some of the worst offenders face no consequences. To be truly effective against these forces, data privacy guidance has to encompass more than fiddling with device settings or making onerous legal requests to companies. That’s why, for Data Privacy Week this year, we’re offering three pieces of advice that center on behavior. These changes won’t stop some of the worst invasions against your privacy, but we hope they provide a new framework to understand what you actually get when you practice data privacy, which is control. You have control over who sees where you are and what inferences they make from that. You have control over whether you continue using products that don’t respect your data privacy. And you have control over whether a fast food app is worth handing over your location data in exchange for a few measly coupons. Today, on the Lock and Code podcast, host David Ruiz explores his three rules for data privacy in 2025. In short, he recommends: Less location sharing. Only when you want it, only from those you trust, and never in the background, 24/7, for your apps. More accountability.
If companies can’t respect your data, respect yourself by dropping their products. No more data deals. That fast-food app offers more than just $4 off a combo meal; it creates a pipeline into your behavioral data. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    38 minutes
  4. January 12

    The new rules for AI and encrypted messaging, with Mallory Knodel

    The era of artificial intelligence everything is here, and with it come everyday surprises about exactly where the next AI tools might pop up. There are major corporations pushing customer support functions onto AI chatbots, Big Tech platforms offering AI image generation for social media posts, and even Google has defaulted to including AI-powered overviews in everyday searches. The next gold rush, it seems, is in AI, and for a group of technical and legal researchers at New York University and Cornell University, that could be a major problem. But to understand their concerns, there’s some explanation needed first, and it starts with Apple’s own plans for AI. Last October, Apple unveiled a service it is calling Apple Intelligence (“AI,” get it?), which provides the latest iPhones, iPads, and Mac computers with AI-powered writing tools, image generators, proofreading, and more. One notable feature in Apple Intelligence is Apple’s “notification summaries.” With Apple Intelligence, users can receive summarized versions of a day’s worth of notifications from their apps. That could be useful for an onslaught of breaking news notifications, or for an old college group thread that won’t shut up. The summaries themselves are hit-or-miss with users—one iPhone customer learned of his own breakup from an Apple Intelligence summary that said: “No longer in a relationship; wants belongings from the apartment.” What’s more interesting about the summaries, though, is how they interact with Apple’s messaging and text app, Messages. Messages is what is called an “end-to-end encrypted” messaging app. That means that only a message’s sender and its recipient can read the message itself. Even Apple, which moves the message along from one iPhone to another, cannot read the message. But if Apple cannot read the messages sent on its own Messages app, then how is Apple Intelligence able to summarize them for users?
That’s one of the questions that Mallory Knodel and her team at New York University and Cornell University tried to answer with a new paper on the compatibility between AI tools and end-to-end encrypted messaging apps. Make no mistake, this research isn’t into whether AI is “breaking” encryption by doing impressive computations at never-before-observed speeds. Instead, it’s about whether or not the promise of end-to-end encryption—of confidentiality—can be upheld when the messages sent through that promise can be analyzed by separate AI tools. And while the question may sound abstract, it’s far from being so. Already, AI bots can enter digital Zoom meetings to take notes. What happens if Zoom permits those same AI chatbots to enter meetings that users have chosen to be end-to-end encrypted? Is the chatbot another party to that conversation, and if so, what is the impact? Today, on the Lock and Code podcast with host David Ruiz, we speak with lead author and encryption expert Mallory Knodel on whether AI assistants can be compatible with end-to-end encrypted messaging apps, what motivations could sway current privacy champions into chasing AI development instead, and why these two technologies cannot co-exist in certain implementations. “An encrypted messaging app, at its essence is encryption, and you can’t trade that away—the privacy or the confidentiality guarantees—for something else like AI if it’s fundamentally incompatible with those features.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    47 minutes
  5. December 29, 2024

    Is nowhere safe from AI slop?

    You can see it on X. You can see it on Instagram. It’s flooding community pages on Facebook and filling up channels on YouTube. It’s called “AI slop” and it’s the fastest, laziest way to drive engagement. Like “click bait” before it (“You won’t believe what happens next,” reads the trickster headline), AI slop can be understood as the latest online tactic in getting eyeballs, clicks, shares, comments, and views. With this go-around, however, the methodology is turbocharged with generative AI tools like ChatGPT, Midjourney, and MetaAI, which can all churn out endless waves of images and text with few restrictions. To rack up millions of views, a “fall aesthetic” account on X might post an AI-generated image of a candle-lit café table overlooking a rainy, romantic street. Or, perhaps, to make a quick buck, an author might “write” and publish an entirely AI-generated crockpot cookbook—they may even use AI to write the glowing reviews on Amazon. Or, to sway public opinion, a social media account may post an AI-generated image of a child stranded during a flood with the caption “Our government has failed us again.” There is, currently, another key characteristic to AI slop online, and that is its low quality. The dreamy, Vaseline sheen produced by many AI image generators is easy (for most people) to spot, and common mistakes in small details abound: stoves have nine burners, curtains hang on nothing, and human hands sometimes come with extra fingers. But little of that has mattered, as AI slop has continued to slosh about online. There are AI-generated children’s books being advertised relentlessly on the Amazon Kindle store. There are unachievable AI-generated crochet designs flooding Reddit. There is an Instagram account described as “Austin’s #1 restaurant” that only posts AI-generated images of fanciful food, like Moo Deng croissants, and Pikachu ravioli, and Obi-Wan Canoli.
There’s the entire phenomenon on Facebook that is now known only as “Shrimp Jesus.” If none of this is making much sense, you’ve come to the right place. Today, on the Lock and Code podcast with host David Ruiz, we’re speaking with Malwarebytes Labs Editor-in-Chief Anna Brading and ThreatDown Cybersecurity Evangelist Mark Stockley about AI slop—where it’s headed, what the consequences are, and whether anywhere is safe from its influence. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

    39 minutes
  6. December 16, 2024

    A day in the life of a privacy pro, with Ron de Jesus

    Privacy is many things for many people. For the teenager suffering from a bad breakup, privacy is the ability to stop sharing her location and to block her ex on social media. For the political dissident advocating against an oppressive government, privacy is the protection that comes from secure, digital communications. And for the California resident who wants to know exactly how they’re being included in so many targeted ads, privacy is the legal right to ask a marketing firm how it collects their data. In all these situations, privacy is being provided to a person, often by a company or that company’s employees. The decisions to disallow location sharing and block social media users are made—and implemented—by people. The engineering that goes into building a secure, end-to-end encrypted messaging platform is done by people. Likewise, the response to someone’s legal request is completed by either a lawyer, a paralegal, or someone with a career in compliance. In other words, privacy, for the people who spend their days at these companies, is work. It’s their expertise, their career, and their to-do list. But what does that work actually entail? Today, on the Lock and Code podcast with host David Ruiz, we speak with Transcend Field Chief Privacy Officer Ron de Jesus about the responsibilities of privacy professionals today and how experts balance the privacy of users with the goals of their companies. De Jesus also explains how everyday people can meaningfully judge whether a company’s privacy “promises” have any merit by looking into what the companies provide, including a legible privacy policy and “just-in-time” notifications that ask for consent for any data collection as it happens. “When companies provide these really easy-to-use controls around my personal information, that’s a really great trigger for me to say, hey, this company, really, is putting their money where their mouth is.” Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    34 minutes
  7. December 1, 2024

    These cars want to know about your sex life (re-air)

    Two weeks ago, the Lock and Code podcast shared three stories about home products that requested, collected, or exposed sensitive data online. There were the air fryers that asked users to record audio through their smartphones. There was the smart ring maker that, even with privacy controls put into place, published data about users’ stress levels and heart rates. And there was the smart, AI-assisted vacuum that, through the failings of a group of contractors, allowed an image of a woman on a toilet to be shared on Facebook. These cautionary tales involved “smart devices,” products like speakers, fridges, washers and dryers, and thermostats that can connect to the internet. But there’s another smart device that many folks might forget about that can collect deeply personal information—their cars. Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what types of data modern vehicles can collect, and what the car makers behind those vehicles could do with those streams of information. In the episode, we spoke with researchers at Mozilla—working under the team name “Privacy Not Included”—who reviewed the privacy and data collection policies of many of today’s automakers. To put it shortly, the researchers concluded that cars are a privacy nightmare. According to the team’s research, Nissan said it can collect “sexual activity” information about consumers. Kia said it can collect information about a consumer’s “sex life.” Subaru passengers allegedly consented to the collection of their data by simply being in the vehicle. Volkswagen said it collects data like a person’s age and gender and whether they’re using their seatbelt, and it could use that information for targeted marketing purposes. And those are just the highlights.
Explained Zoë MacDonald, content creator for Privacy Not Included: “We were pretty surprised by the data points that the car companies say they can collect… including social security number, information about your religion, your marital status, genetic information, disability status… immigration status, race.” In our full conversation from last year, we spoke with Privacy Not Included’s MacDonald and Jen Caltrider about the data that cars can collect, how that data can be shared, how it can be used, and whether consumers have any choice in the matter. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

    45 minutes
  8. November 18, 2024

    An air fryer, a ring, and a vacuum get brought into a home. What they take out is your data

    This month, a consumer rights group out of the UK posed a question to the public that they’d likely never considered: Were their air fryers spying on them? By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone. “In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason,” the group wrote in its findings. While it may be easy to discount the data collection requests of an air fryer app, it is getting harder to buy any type of product today that doesn’t connect to the internet, request your data, or share that data with unknown companies and contractors across the world. Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen appliances that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook. These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated. Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    27 minutes
4.8
(out of 5 stars)
38 ratings

