Numlock Sunday: Karen Hao on Facebook's AI crisis

By Walt Hickey

Welcome to the Numlock Sunday edition. This week, another podcast edition!

This week, I spoke to MIT Technology Review editor Karen Hao, who frequently appears in Numlock and wrote the bombshell story “How Facebook Got Addicted to Spreading Misinformation.”

The story was a fascinating look inside one of the most important companies on the planet and their struggles around the use of algorithms on their social network. Facebook uses algorithms for far more than just placing advertisements, but has come under scrutiny for the ways that misinformation and extremism have been amplified by the code that makes their website work.

Karen’s story goes inside Facebook’s attempts to address that, and how their focus on rooting out algorithmic bias may ignore other, more important problems related to the algorithms that print them money.

Karen can be found on Twitter at @_Karenhao, at MIT Technology Review, and at her newsletter, The Algorithm, which goes out every Friday.

This interview has been condensed and edited.

You wrote this really outstanding story quite recently called “How Facebook Got Addicted to Spreading Misinformation.” It's a really cool profile of a team within Facebook that works on AI problems, and ostensibly was working towards an AI solution. But as you get into the piece, it's really complicated. We talk a lot about algorithms. Do you want to go into what algorithms are in the context of Facebook?

What a question to start with! In the public conversation, when people say that Facebook uses AI, I think most people are thinking, oh, they use AI to target users with ads. And that is 100 percent true, but Facebook is also running thousands of AI algorithms concurrently, not just the ones that they use to target you with ads. They also have facial recognition algorithms that are recognizing your friends in your photos. They also have language translation algorithms, the ones where, when someone posts something in a different language, there's that little option to say, translate into English, or whatever language you speak. They also have News Feed ranking algorithms, which are ordering what you see in your News Feed, and other recommendation algorithms that are telling you, hey, you might like this page, or you might want to join this group. So there are just a lot of algorithms being used on Facebook's platform in a variety of different ways. But essentially, every single thing that you do on Facebook is somehow supported in part by algorithms.

You wrote they have thousands of models running concurrently, but the thing that you also highlighted, and one reason that this team was thrown together, was that almost none of them have been vetted for bias.

Most of them have not been vetted for bias. In terms of what algorithmic bias is, it's this field of study that has recognized that when algorithms learn from historical data, they will often perpetuate the inequities that are present in that historical data. Facebook is currently facing a lawsuit from the Department of Housing and Urban Development, in which HUD alleges that Facebook's ad-targeting algorithms show different people different housing opportunities based on their race, which is illegal. White users more often see houses for sale, whereas minority users more often see houses for rent, and it's because the algorithms are learning from this historical data. Facebook has a team called Responsible AI, but there's also a field of research called responsible AI that's all about understanding how algorithms impact society, and how we can redesign them from the beginning to make sure that they don't have harmful unintended consequences. And so this team, when they spun up, they were like "none of these algorithms have been audited for bias and that is an unintended consequence that can happen that can legitimately harm people, so we
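The kind of audit Karen describes can be made concrete with a small worked example. The sketch below is a minimal, hypothetical demographic-parity check, not Facebook's actual tooling: it assumes only a binary classifier's 0/1 predictions and a parallel list of protected-group labels, and compares positive-prediction rates across groups. The function name and the toy data are illustrative assumptions.

```python
# Hypothetical sketch of the kind of bias audit described above; it is not
# Facebook's actual tooling. Assumes a binary classifier's 0/1 predictions
# and a parallel list of protected-group labels for the same users.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Positive-prediction rate per group (a demographic-parity check)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Toy data standing in for an ad-delivery model trained on skewed history:
# 1 = user was shown a for-sale listing, 0 = a rental listing.
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = positive_rate_by_group(preds, groups)
print(rates)  # {'a': 0.8, 'b': 0.2} -- a gap this large would flag the model
```

In practice an audit would look at more than one metric (equalized odds, calibration, and so on) and would have to be repeated for every model, which is part of why vetting thousands of concurrently running algorithms is such a large undertaking.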