Lock and Code

Malwarebytes

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.

  1. 18. NOV.

    An air fryer, a ring, and a vacuum get brought into a home. What they take out is your data

    This month, a consumer rights group out of the UK posed a question to the public that they’d likely never considered: Were their air fryers spying on them? By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone. “In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason,” the group wrote in its findings. While it may be easy to discount the data collection requests of an air fryer app, it is getting harder to buy any type of product today that doesn’t connect to the internet, request your data, or share that data with unknown companies and contractors across the world.

    Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen appliances that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook. These stories aren’t about mass government surveillance, and they’re not about spying or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel to be collected and analyzed in ways we never anticipated.

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    27 Min.
  2. 3. NOV.

    Why your vote can’t be “hacked,” with Cait Conley of CISA

    The US presidential election is upon the American public, and with it come fears of “election interference.” But “election interference” is a broad term. It can mean the now-regular and expected foreign disinformation campaigns that are launched to sow political discord or to erode trust in American democracy. It can include domestic campaigns to disenfranchise voters in battleground states. And it can include the upsetting and increasing threats made to election officials and volunteers across the country. But there’s an even broader category of election interference that is of particular importance to this podcast, and that’s cybersecurity.

    Elections in the United States rely on a dizzying number of technologies. There are the voting machines themselves, there are electronic pollbooks that check voters in, and there are optical scanners that tabulate the votes that Americans actually cast when filling in an oval bubble with pen, or connecting an arrow with a solid line. And none of that is to mention the infrastructure that campaigns rely on every day to get information out—across websites, through emails, in text messages, and more. That interlocking complexity is only multiplied when you remember that each individual state has its own way of complying with the Federal government’s rules and standards for running an election. As Cait Conley, Senior Advisor to the Director of the US Cybersecurity and Infrastructure Security Agency (CISA), explains in today’s episode: “There’s a common saying in the election space: If you’ve seen one state’s election, you’ve seen one state’s election.”

    How, then, are elections secured in the United States, and what threats does CISA defend against? Today, on the Lock and Code podcast with host David Ruiz, we speak with Conley about how CISA prepares and trains election officials and volunteers before the big day, whether or not an American’s vote can be “hacked,” and what the country is facing in the final days before an election, particularly from foreign adversaries that want to destabilize American trust. “There’s a pretty good chance that you’re going to see Russia, Iran, or China try to claim that a distributed denial of service attack or a ransomware attack against a county is somehow going to impact the security or integrity of your vote. And it’s not true.”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    40 Min.
  3. 21. OCT.

    This industry profits from knowing you have cancer, explains Cody Venzke

    On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer. This is because of the largely unchecked “data broker” industry. Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements.

    Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you’re driving, your car’s gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another.

    This is just a glimpse of what is happening to essentially every single adult who uses the Internet today. So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large likely knows very little in return.

    Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising. “We’re seeing data that’s been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    35 Min.
  4. 7. OCT.

    Exposing the Facebook funeral livestream scam

    Online scammers were seen this August stooping to a new low—abusing local funerals to steal from bereaved family and friends. Cybercrime has never been a job of morals (calling it a “job” is already lending it too much credit), but, for many years, scams wavered between clever and brusque. Take the “Nigerian prince” email scam, which has plagued victims for close to two decades. In it, would-be victims would receive a mysterious, unwanted message from alleged royalty, and, in exchange for a little help in moving funds across international borders, would be handsomely rewarded. The scam was preposterous but effective—in fact, in 2019, CNBC reported that this very same “Nigerian prince” scam campaign resulted in $700,000 in losses for victims in the United States.

    Since then, scams have evolved dramatically. Cybercriminals today will send deceptive emails claiming to come from Netflix, or Google, or Uber, tricking victims into “resetting” their passwords. Cybercriminals will leverage global crises, like the COVID-19 pandemic, and send fraudulent requests for donations to nonprofits and hospital funds. And, time and again, cybercriminals will find a way to play on our emotions—be they fear, or urgency, or even affection—to lure us into unsafe places online.

    This summer, Malwarebytes social media manager Zach Hinkle encountered one such scam, and it happened while attending a funeral for a friend. In a campaign that Malwarebytes Labs is calling the “Facebook funeral live stream scam,” attendees at real funerals are being tricked into potentially signing up for a “live stream” service of the funerals they just attended.

    Today on the Lock and Code podcast with host David Ruiz, we speak with Hinkle and Malwarebytes security researcher Pieter Arntz about the Facebook funeral live stream scam, what potential victims have to watch out for, and how cybercriminals are targeting actual, grieving family members with such foul deceit. Hinkle also describes what he felt in the moment of trying not only to take the scam down, but also to protect his friends from falling for it. “You’re grieving… and you go through a service and you’re feeling all these emotions, and then the emotion you feel is anger because someone is trying to take advantage of friends and loved ones, of somebody who has just died. That’s so appalling.”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    36 Min.
  5. 23. SEPT.

    San Francisco’s fight against deepfake porn, with City Attorney David Chiu

    On August 15, the city of San Francisco launched an entirely new fight against the world of deepfake porn—it sued the websites that make the abusive material so easy to create. “Deepfakes,” as they’re often called, are fake images and videos that utilize artificial intelligence to swap the face of one person onto the body of another. The technology went viral in the late 2010s, as independent film editors would swap the actors of one film for another—replacing, say, Michael J. Fox in Back to the Future with Tom Holland. But soon after the technology’s debut, it was being used to create pornographic images of actresses, celebrities, and, more recently, everyday high schoolers and college students.

    Similar to the threat of “revenge porn,” in which abusive exes extort their past partners with the potential release of sexually explicit photos and videos, “deepfake porn” is sometimes used to tarnish someone’s reputation or to embarrass them amongst friends and family. But deepfake porn is slightly different from the traditional understanding of “revenge porn” in that it can be created without any real relationship to the victim. Entire groups of strangers can take the image of one person and put it onto the body of a sex worker, or an adult film star, or another person who was filmed having sex or posing nude.

    The technology to create deepfake porn is more accessible than ever, and it’s led to a global crisis for teenage girls. In October of 2023, more than 30 girls at a high school in New Jersey reportedly had their likenesses used by classmates to make sexually explicit and pornographic deepfakes. In March of this year, two teenage boys were arrested in Miami, Florida, for allegedly creating deepfake nudes of male and female classmates who were between the ages of 12 and 13. And at the start of this September, the BBC reported that police in South Korea were investigating deepfake pornography rings at two major universities. While individual schools and local police departments in the United States are tackling deepfake porn harassment as it arises—with suspensions, expulsions, and arrests—the process is slow and reactive, which is partly why San Francisco City Attorney David Chiu and his team took aim not at the individuals who create and spread deepfake porn, but at the websites that make it so easy to do so.

    Today, on the Lock and Code podcast with host David Ruiz, we speak with San Francisco City Attorney David Chiu about his team’s lawsuit against 16 deepfake porn websites, the city’s history in protecting Californians, and the severity of abuse that these websites offer as a paid service. “At least one of these websites specifically promotes the non-consensual nature of this. I’ll just quote: ‘Imagine wasting time taking her out on dates when you can just use website X to get her nudes.’”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    21 Min.
  6. 9. SEPT.

    What the arrest of Telegram's CEO means, with Eva Galperin

    On August 24, at an airport just outside of Paris, a man named Pavel Durov was detained for questioning by French investigators. Just days later, the same man was charged with crimes related to the distribution of child pornography and illicit transactions, such as drug trafficking and fraud. Durov is the CEO and founder of the messaging and communications app Telegram. Though Durov holds citizenship in France and the United Arab Emirates—where Telegram is based—he was born and lived for many years in Russia, where he started his first social media company, Vkontakte. The Facebook-esque platform gained popularity in Russia, not just amongst users, but also under the watchful eye of the government. Following a prolonged battle regarding the control of Vkontakte—which included government demands to deliver user information and to shut down accounts that helped organize protests against Vladimir Putin in 2012—Durov eventually left the company and the country altogether. But more than 10 years later, Durov has once again become a person of interest to a government, now facing several charges in France where, while he is not in jail, he has been ordered to stay.

    After Durov’s arrest, the X account for Telegram responded, saying: “Telegram abides by EU laws, including the Digital Services Act—its moderation is within industry standards and constantly improving. Telegram’s CEO Pavel Durov has nothing to hide and travels frequently in Europe. It is absurd to claim that a platform or its owner are responsible for abuse of the platform.” But how true is that? In the United States, companies themselves, such as YouTube, X (formerly Twitter), and Facebook, often respond to violations of “copyright”—the protection that gets violated when a random user posts clips or full versions of movies, television shows, and music. And the same companies get involved when certain types of harassment, hate speech, and violent threats are posted on public channels for users to see. This work, called “content moderation,” is standard practice for many technology and social media platforms today, but there’s a chance that Durov’s arrest isn’t related to content moderation at all. Instead, it may be related to the things that Telegram users say in private to one another over end-to-end encrypted chats.

    Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Director of Cybersecurity Eva Galperin about Telegram, its features, and whether Durov’s arrest is an escalation of content moderation gone wrong or the latest skirmish in government efforts to break end-to-end encryption. “Chances are that these are requests around content that Telegram can see, but if [the requests] touch end-to-end encrypted content, then I have to flip tables.”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    34 Min.
  7. 26. AUG.

    Move over malware: Why one teen is more worried about AI (re-air)

    Every age group uses the internet a little bit differently, and it turns out that for at least one Gen Z teen in the Bay Area, the classic approach to cybersecurity—defending against viruses, ransomware, worms, and more—is the least of her concerns. Of far more importance is Artificial Intelligence (AI).

    Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what teenagers fear the most about going online. The conversation is a strong reminder that what America’s youngest generations experience online is far from the same experience that Millennials, Gen X’ers, and Baby Boomers had with their own introduction to the internet. Even stronger proof of this is found in recent research that Malwarebytes debuted this summer about how people in committed relationships share their locations, passwords, and devices with one another. As detailed in the larger report, “What’s mine is yours: How couples share an all-access pass to their digital lives,” Gen Z respondents were the most likely to say that they got a feeling of safety when sharing their locations with significant others. But a wrinkle appeared in that behavior, according to the same research: Gen Z was also the most likely to say that they only shared their locations because their partners forced them to do so.

    In our full conversation from last year, we speak with Nitya Sharma about how her “favorite app” to use with friends is “Find My” on iPhone, what the dangers of AI “sneak attacks” are, and why she simply cannot be bothered about malware. “I know that there’s a threat of sharing information with bad people and then abusing it, but I just don’t know what you would do with it. Show up to my house and try to kill me?”

    Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    49 Min.
  8. 12. AUG.

    AI girlfriends want to know all about you. So might ChatGPT

    Somewhere out there is a romantic AI chatbot that wants to know everything about you. But in a revealing overlap, other AI tools—which are developed and popularized by far larger companies in technology—could crave the very same thing. For AI tools of any type, our data is key.

    In the nearly two years since OpenAI unveiled ChatGPT to the public, the biggest names in technology have raced to compete. Meta announced Llama. Google revealed Gemini. And Microsoft debuted Copilot. All these AI features function in similar ways: After having been trained on mountains of text, videos, images, and more, these tools answer users’ questions in immediate and contextually relevant ways. Perhaps that means taking a popular recipe and making it vegetarian-friendly. Or maybe that involves developing a workout routine for someone who is recovering from a new knee injury. Whatever the ask, the more data that an AI tool has already digested, the better it can deliver answers.

    Interestingly, romantic AI chatbots operate in almost the same way, as the more information that a user gives about themselves, the more intimate and personal the AI chatbot’s responses can appear. But where any part of our online world demands more data, questions around privacy arise.

    Today, on the Lock and Code podcast with host David Ruiz, we speak with Zoë MacDonald, content creator for Privacy Not Included at Mozilla, about romantic AI tools and how users can protect their privacy from ChatGPT and other AI chatbots. When in doubt, MacDonald said, stick to a simple rule: “I would suggest that people don’t share their personal information with an AI chatbot.”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

    41 Min.
