Unpacking complex ideas to build a deeper understanding of how technology is changing the world. We're produced at the Berkman Klein Center for Internet and Society at Harvard University in Cambridge, Massachusetts.
A spotlight on Nieman-Berkman Klein Fellow Jonathan Jackson
Jonathan Jackson is a co-founder of Blavity Inc., a technology and media company for Black millennials. Blavity’s mission is to "economically and creatively support Black millennials across the African diaspora, so they can pursue the work they love, and change the world in the process." Blavity has grown immensely since its founding in 2014 — among other things, spawning five unique sites, reaching over 7 million visitors a month, and organizing a number of technology, activism, and entrepreneurship conferences.
Jonathan Jackson is also a Joint Fellow with the Nieman Foundation and the Berkman Klein Center for Internet & Society for 2018-2019. During his time here, he says, he is looking for frameworks and novel ways to measure Black cultural influence (and the economic impact of Black creativity) in the US and around the world.
Jonathan sat down with the Berkman Klein Center’s Victoria Borneman to talk about his work.
Music from this episode:
"Jaspertine" by Pling - Licensed under Creative Commons Attribution Noncommercial (3.0)
More information about this work, including a transcript, can be found here:
A spotlight on 2018 Berkman Klein Fellow Amy Zhang
Berkman Klein Center interns sat down with 2018 Berkman Klein Center Fellow Amy Zhang to discuss her work on combating online harassment and misinformation, as well as her research as a Fellow.
How Youth Are Reinventing Instagram and Why Having Multiple Accounts Is Trending
According to a recent Pew Research Center study, Instagram is the second most popular platform among 13 to 17-year-olds in the US, after YouTube. Some 72 percent of US teenagers are on the image-sharing platform.
Our Youth & Media team looked at how teens are using Instagram to figure out who they are. While Instagram may seem like just a photo-sharing platform, its users have molded it into a more complex social media environment, with dynamics and a shared internal language almost as intricate as those of a typical middle or high school.
This episode was produced by Tanvi Kanchinadam, Skyler Sallick, Quinn Robinson, Jessi Whitby, Sonia Kim, Alexa Hasse, Sandra Cortesi, and Andres Lombana-Bermudez.
More information about this work, including a transcript, can be found here:
When a Bot is the Judge
We encounter algorithms all the time. There are algorithms that can predict, with remarkable accuracy, whether you’ll like a certain movie on Netflix, a post on Facebook, or a link in a Google search.
But risk assessment tools now being adopted by criminal justice systems all across the country - from Arizona, to Kentucky, to Pennsylvania, to New Jersey - are designed to predict whether you’re likely to flee the jurisdiction of your trial, or commit another crime if you are released.
With stakes as high as this — human freedom — some are asking for greater caution and scrutiny regarding the tools being developed.
Chris Bavitz, managing director of the Cyberlaw Clinic at Harvard Law School, helped draft an open letter to the Massachusetts state legislature about risk assessment tools, co-signed by a dozen researchers working on the Ethics and Governance of Artificial Intelligence. He spoke with Gretchen Weber about why we need more transparency and scrutiny in the adoption of these tools.
Read the open letter here: https://cyber.harvard.edu/publications/2017/11/openletter
Fake News & How To Stop It
Even before Election Day 2016, observers of technology and journalism were delivering warnings about the spread of fake news. Headlines like “Pope Francis Shocks World, Endorses Donald Trump For President” and “Donald Trump Protestor Speaks Out, Was Paid $3500 To Protest” would pop up, seemingly out of nowhere, and spread like wildfire.
Both of those headlines, and hundreds more like them, racked up millions of views and shares on social networks, gaining enough traction to earn mentions in the mainstream press. Fact checkers only had to dig one layer deeper to find that the original publishers of these stories were entirely fake, clickbait news sites - fabricating sources, quotes, and images, often impersonating legitimate news outlets like ABC, and taking home thousands of dollars a month in ad revenue. But by that time, the damage was done - the story of the $3500 protestor had already calcified in the minds of casual news observers as fact.
It turns out that it’s not enough to expect the average person to tell the difference between news that is true and news that merely seems true. Unlike the food companies that create the products on our grocery shelves, news media are not required by law to be licensed, inspected, or to bear a label of ingredients and nutrition facts - nor, perhaps, should or could they be.
But the gatekeepers of news media that we encounter in the digital age - the social media platforms like Facebook and Twitter, search engines like Google, and content hosts like YouTube - could and should be pitching in to help news consumers navigate the polluted sea of content they interact with on a daily basis.
That’s according to Berkman Klein Center co-founder Jonathan Zittrain and Zeynep Tufekci, a techno-sociologist who researches the intersection of politics, news, and the internet. They joined us recently to discuss the phenomenon of fake news and what platforms can do to stop it.
Facebook and Google have recently instituted processes to remove fake news sites from their ad networks. And since this interview, Facebook has also announced options allowing users to flag fake news, and a partnership with the fact-checking website Snopes to offer a layer of verification on questionable sites.
For more on this episode visit:
CC-licensed content this week:
Neurowaxx: “Pop Circus” (http://ccmixter.org/files/Neurowaxx/14234)
Photo by Flickr user gazeronly (https://www.flickr.com/photos/gazeronly/10612167956/)
The Chilling Effect
The effects of surveillance on human behavior have long been discussed and documented in the real world. That nervous feeling you get when you notice a police officer or a security camera? The one that forces you to straighten up and be on your best behavior, even if you're doing nothing wrong? It's quite common.
The sense of being monitored can also cause you to stop engaging in activities that are perfectly legal, even desirable. It's a kind of "chilling effect." And it turns out it even happens online.
Researcher Jon Penney wanted to know how the feeling of being watched or judged online might affect Internet users' behavior. Does knowledge of the NSA's surveillance programs affect whether people feel comfortable looking at articles on terrorism? Do threats of copyright law retaliation make people less likely to publish blog posts?
Penney's research showed that, yes, the chilling effect has hit the web. On today's podcast we talk about how he did his research, and why chilling effects are problematic for free speech and civil society.
Creative Commons photo via Flickr user fotograzio (https://www.flickr.com/photos/fotograzio/23587980033/)
Find out more about this episode here: