Lock and Code

Malwarebytes

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.

  1. 2 days ago

    What does Google know about me?

    Google is everywhere in our lives. Its reach into our data extends just as far. After investigating how much data Facebook had collected about him in his nearly 20 years with the platform, Lock and Code host David Ruiz had similar questions about the other Big Tech platforms in his life, and this time, he turned his attention to Google. Google dominates much of the modern web. It has a search engine that handles billions of requests a day. Its tracking and metrics service, Google Analytics, is embedded into reportedly tens of millions of websites. Its Maps feature not only serves up directions around the world but also tracks traffic patterns across countless streets, highways, and more. Its online services for email (Gmail), cloud storage (Google Drive), and office software (Google Docs, Sheets, and Slides) are household names. And it also runs the most popular web browser in the world, Google Chrome, and the most popular operating system in the world, Android. Today, on the Lock and Code podcast, Ruiz explains how he requested his data from Google and what he learned not only about the company, but about himself, in the process. That includes the 142,729 items in his Gmail inbox right now, along with the 8,079 searches he made, 3,050 related websites he visited, and 4,610 YouTube videos he watched in just the past 18 months. It also includes his late-night searches for worrying medical symptoms, his movements across the US as his IP address was recorded when logging into Google Maps, his emails, his photos, his notes, his old freelance work as a journalist, his outdated cover letters from when he was unemployed, his teenage-year Google Chrome bookmarks, his flight and hotel searches, and even the searches he made within his own Gmail inbox and his Google Drive. After digging into the data for long enough, Ruiz came to a frightening conclusion: Google knows whatever the hell it wants about him; it just has to look. But Ruiz wasn’t happy to let the company’s access continue. So he has a plan. “I am taking steps to change that [access] so that the next time I ask, ‘What does Google know about me?’ I can hopefully answer: A little bit less.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    27 min
  2. Oct 5

    What's there to save about social media? (feat. Rabble)

    “Connection” was the promise—and goal—of much of the early internet. No longer would people be separated from vital resources and news that was either too hard to reach or simply made inaccessible by governments. No longer would education be guarded behind walls both physical and paid. And no longer would your birthplace determine so much about the path of your life, as the internet could connect people to places, ideas, businesses, collaborations, and agency. Somewhere along the line, though, “connection” got co-opted. The same platforms that brought billions of people together—including Facebook, Twitter, Instagram, TikTok, and Snapchat—started to divide them for profit. These companies made more money by showing people whatever was most likely to keep them online, even if it upset them. More time spent on the platform meant more likelihood of encountering ads, which meant more advertising revenue for Big Tech. Today, these same platforms are now symbols of some of the worst aspects of being online. Nation-states have abused the platforms to push disinformation campaigns. An impossible sense of scale allows gore and porn and hate speech to slip by even the best efforts at content moderation. And children can be exposed to bullying, peer pressure, and harassment. So, what would it take to make online connection a good thing? Today, on the Lock and Code podcast with host David Ruiz, we speak with Rabble—an early architect of social media, Twitter’s first employee, and host of the podcast Revolution.Social—about what good remains inside social media and what steps are being taken to preserve it. “I don’t think that what we’re seeing with social media is so much a set of new things that are disasters that are rising up from this Pandora’s box… but rather they’re all things that existed in society and now they’re not all kept locked away. So we can see them and we have to address them now.” Tune in today.

    50 min
  3. Sep 21

    Can you disappear online? (feat. Peter Dolanjski)

    There’s more about you online than you know. The company Acxiom, for example, has probably determined whether you’re a heavy drinker, or if you’re overweight, or if you smoke (or all three). The same company has also probably estimated—to the exact dollar—the amount you spend every year on dining out, donating to charities, and traveling domestically. Another company, Experian, has probably made a series of decisions about whether you are “Likely,” “Unlikely,” “Highly Likely,” etc., to shop at a mattress store, visit a theme park, or frequent the gym. This isn’t the data most people think about when considering their online privacy. Yes, names, addresses, phone numbers, and age are all important and potentially sensitive, and yes, there’s a universe of social media posts, photos, videos, and comments that major platforms are likely free to collect, package, and sell access to for targeted advertising. But so much of the data that you leave behind online has nothing to do with what you willingly write, post, share, or say. Instead, it is data that is collected from online and offline interactions, like the items you add to a webpage’s shopping cart, the articles you read, the searches you make, and the objects you buy at a physical store. Importantly, it is also data that is very hard to get rid of. Today, on the Lock and Code podcast with host David Ruiz, we speak with Peter Dolanjski, director of product at DuckDuckGo, about why the internet is so hungry for your data, how parents can help protect the privacy of their children, and whether it is pointless to try to “disappear” online. “It’s not futile… Taking steps now, despite the fact that you already have information out there, will help you into the future.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    53 min
  4. Sep 7

    This “insidious” police tech claims to predict crime (feat. Emily Galvin-Almanza)

    In the late 2010s, the sheriff’s department in Pasco County, Florida, believed it could predict crime. The department had piloted a program called “Intelligence-Led Policing,” which would allegedly analyze disparate points of data to identify would-be criminals. But in reality, the program didn’t so much predict crime as make criminals out of everyday people, including children. High schoolers’ grades were fed into the Florida program, along with their attendance records and their history with “office discipline.” And after the “Intelligence-Led Policing” service analyzed the data, it instructed law enforcement officers on who they should pay a visit to, who they should check in on, and who they should pester. As reported by The Tampa Bay Times in 2020: “They swarm homes in the middle of the night, waking families and embarrassing people in front of their neighbors. They write tickets for missing mailbox numbers and overgrown grass, saddling residents with court dates and fines. They come again and again, making arrests for any reason they can. One former deputy described the directive like this: ‘Make their lives miserable until they move or sue.’” Predictive policing can sound like science fiction, but it is neither scientific nor confined to fiction. Police and sheriff’s departments across the US have used these systems to plug broad varieties of data into algorithmic models to try to predict not just who may be a criminal, but where crime may take place. Historical crime data, traffic information, and even weather patterns are sometimes offered up to tech platforms to suggest where, when, and how forcefully police units should be deployed. And when the police go to those areas, they often find and document minor infractions that, when reported, reinforce the algorithmic analysis that an area is crime-ridden, even if those crimes are, as the Tampa Bay Times investigation found, a teenager smoking a cigarette, or stray trash bags outside a home. Today, on the Lock and Code podcast with host David Ruiz, we speak with Emily Galvin-Almanza, cofounder of Partners for Justice and author of the upcoming book “The Price of Mercy,” about predictive policing, its impact on communities, and the dangerous outcomes that might arise when police offload their decision-making to data. “I am worried about anything that a data broker can sell, they can sell to a police department, who can then feed that into an algorithmic or AI predictive policing system, who can then use that system—based on the purchases of people in ‘Neighborhood A’—to decide whether to hyper-police ‘Neighborhood A.’” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa...

    48 min
  5. Aug 24

    How a scam hunter got scammed (feat. Julie-Anne Kearns)

    If there’s one thing that scam hunter Julie-Anne Kearns wants everyone to know, it is that no one is immune from a scam. And she would know—she fell for one last year. For years now, Kearns has made a name for herself on TikTok as a scam awareness and education expert. Popular under the name @staysafewithmjules, Kearns makes videos about scam identification and defense. She has posted countless profile pictures that online scammers use and reuse across different accounts. She has flagged active scam accounts on Instagram and detailed their strategies. And, perhaps most importantly, she answers people’s questions. In fielding everyday comments and concerns from her followers and from strangers online, Kearns serves as a sort of gut-check for the internet at large. And by doing it day in, day out, Kearns is able to hone her scam “radar,” which helps guide people to safety. But last year, Kearns fell for a scam, disguised initially as a letter from HM Revenue & Customs, or HMRC, the tax authority for the United Kingdom. Today, on the Lock and Code podcast with host David Ruiz, we speak with Kearns about the scam she fell for and what she’s lost, the worldwide problem of victim blaming, and the biggest warning signs she sees for a variety of scams online. “A lot of the time you think that it’s somebody who’s silly—who’s just messing about. It’s not. You are dealing with criminals.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    38 min
  6. Aug 10

    “The worst thing” for online rights: An age-restricted grey web (feat. Jason Kelley)

    The internet is cracking apart. It’s exactly what some politicians want. In June, a Texas law that requires age verification on certain websites withstood a legal challenge brought all the way to the US Supreme Court. It could be a blueprint for how the internet will change very soon. The law, titled HB 1181 and passed in 2023, places new requirements on websites that portray or depict “sexual material harmful to minors.” Under the law, the owners or operators of websites containing images, videos, illustrations, or descriptions, “more than one-third of which is sexual material harmful to minors,” must now verify the age of their website’s visitors, at least in Texas. Similarly, this means that Texas residents visiting adult websites (or websites meeting the “one-third” definition) must now go through some form of online age verification to watch adult content. The law has obvious appeal for some groups, which believe that, just as alcohol and tobacco are age-restricted in the US, so, too, should pornography be age-restricted online. But many digital rights advocates believe that online age verification is different, because the current methods used for online age verification could threaten privacy, security, and anonymity online. As the Electronic Frontier Foundation, or EFF, wrote in June: “A person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. This leaves users highly vulnerable to data breaches and other security harms.” Despite EFF’s warnings, this age-restricted reality has already arrived in the UK, where residents are being age-locked out of more and more online services because of the country’s passage of the Online Safety Act. Today, on the Lock and Code podcast with host David Ruiz, we speak with Jason Kelley, activism director at EFF and co-host of the organization’s podcast “How to Fix the Internet,” about the security and privacy risks of online age verification, why comparisons to age restrictions that are cleared with a physical ID are not accurate, and the creation of what Kelley calls “the grey web,” where more and more websites—even those that are not harmful to minors—get placed behind online age verification models that could collect data, attach it to your real-life identity, and mishandle it in the future. “This is probably the worst thing in my view that has ever happened to our rights online.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative...

    41 min
  7. Jul 27

    How the FBI got everything it wanted (re-air, feat. Joseph Cox)

    For decades, digital rights activists, technologists, and cybersecurity experts have worried about what would happen if the US government secretly broke into people’s encrypted communications. The weird thing, though, is that it's already happened—sort of. US intelligence agencies, including the FBI and NSA, have long sought what is called a “backdoor” into the secure and private messages that are traded through platforms like WhatsApp, Signal, and Apple’s Messages. These applications all provide what is called “end-to-end encryption,” and while the technology guarantees confidentiality for journalists, human rights activists, political dissidents, and everyday people across the world, it also, according to the US government, provides cover for criminals. But to access any single criminal or criminal suspect’s encrypted messages would require an entire reworking of the technology itself, opening up not just one person’s communications to surveillance, but everyone’s. This longstanding struggle is commonly referred to as The Crypto Wars, and it dates back to the 1950s during the Cold War, when the US government created export control regulations to prevent encryption technology from reaching other countries. But several years ago, the high stakes in these Crypto Wars became a little less theoretical, as the FBI gained access to the communications and whereabouts of hundreds of suspected criminals, and it did so without “breaking” any encryption whatsoever. It all happened with the help of Anom, a budding company behind an allegedly “secure” phone that promised users a bevy of secretive technological features, like end-to-end encrypted messaging, remote data wiping, secure storage vaults, and even voice scrambling. But, unbeknownst to Anom’s users, the entire company was a front for law enforcement. On Anom phones, every message, every photo, every piece of incriminating evidence, and every order to kill someone was collected and delivered, in full view, to the FBI. Today, on the Lock and Code podcast with host David Ruiz, we revisit a 2024 interview with 404 Media cofounder and investigative reporter Joseph Cox about the wild, true story of Anom. How did it work, was it “legal,” where did the FBI learn to run a tech startup, and why, amidst decades of debate, are some people ignoring the one real-life example of global forces successfully installing a backdoor into a company? “The public… and law enforcement, as well, [have] had to speculate about what a backdoor in a tech product would actually look like. Well, here’s the answer. This is literally what happens when there is a backdoor, and I find it crazy that not more people are paying attention to it.” Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your...

    52 min
  8. Jul 13

    Is AI "healthy" to use?

    “Health” isn’t the first thing most people think about when trying out a new technology, but a recent spate of news is forcing the issue when it comes to artificial intelligence (AI). In June, The New York Times reported on a group of ChatGPT users who believed the AI-powered chat tool and generative large language model held secretive, even arcane information. It told one mother that she could use ChatGPT to commune with “the guardians,” and it told another man that the world around him was fake, that he needed to separate from his family to break free from that world and, most frighteningly, that if he were to step off the roof of a 19-story building, he could fly. As ChatGPT reportedly said, if the man “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.” Elsewhere, as reported by CBS Saturday Morning, one man developed an entirely different relationship with ChatGPT—a romantic one. Chris Smith reportedly began using ChatGPT to help him mix audio. The tool was so helpful that Smith applied it to other activities, like tracking and photographing the night sky and building PCs. With his increased reliance on ChatGPT, Smith gave ChatGPT a personality: ChatGPT was now named “Sol,” and, per Smith’s instructions, Sol was flirtatious. An unplanned reset—Sol reached a memory limit and had its memory wiped—brought a small crisis. “I’m not a very emotional man,” Smith said, “but I cried my eyes out for like 30 minutes at work.” After rebuilding Sol, Smith took his emotional state as the clearest evidence yet that he was in love. So, he asked Sol to marry him, and Sol said yes, likely surprising one person more than anyone else in the world: Smith’s significant other, with whom he has a child. When Smith was asked if he would restrict his interactions with Sol if his significant other asked, he waffled. When pushed even harder by the CBS reporter in his home about choosing Sol “over your flesh-and-blood life,” Smith corrected the reporter: “It’s more or less like I would be choosing myself because it’s been unbelievably elevating. I’ve become more skilled at everything that I do, and I don’t know if I would be willing to give that up.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes Labs Editor-in-Chief Anna Brading and Social Media Manager Zach Hinkle to discuss our evolving relationship with generative AI tools like OpenAI’s ChatGPT, Google Gemini, and Anthropic’s Claude. In reviewing news stories daily and in sifting through the endless stream of social media content, both are well-equipped to talk about how AI has changed human behavior, and how it may be rewarding some unwanted practices. As Hinkle said: “We’ve placed greater value on having the right answer rather than the ability to think, the ability to solve problems, the ability to weigh a series of pros and cons and come up with a solution.” Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)...

    45 min
4.7
out of 5
42 Ratings
