Ken Yeung

The Scene: The battle over free speech and fake news on the Internet hasn’t lost any steam, and Facebook founder Mark Zuckerberg continues to find himself right in the middle of it. In an interview with Recode last week, Zuckerberg said: “I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong.” The spread of false information on the Internet has become a widespread problem with real consequences. However, social media sites like Facebook continue to struggle with deciding what content should be taken down and what should be left up, even if the information is completely false or offensive.


The Takes:

Washington Square News (NYU)

Mert Erenel

    “On social media, it’s easy for us to spout our dislike for those on the opposing side of the political spectrum. We can go on for hours debating someone on Facebook... And this can all be done without the slightest worry of legal penalty. Many young Americans take that for granted. In countries like Turkey, you don’t just lose your job if you criticize the government, you may also go to prison and have your life ruined.”

    “... is a bad tweet really deserving of punishment in jail?

    “Think of all the tweets of celebrities and politicians in support of National Football League players bending the knee during the American national anthem, Facebook posts of photos in support of the American flag burning, memes or jokes against political figures, both current and historical ones. The abundance of these on social media in the United States is an example of the degree to which the First Amendment allows its citizens to use such a platform without the fear of legal persecution from either state or federal levels of government. For others, this may be seen as a natural fundamental right; for me, it is a privilege that I wish not to lose.”


Columbia Political Review

Bani Sapra

    “In the aftermath of the presidential election, many Americans have blamed Facebook for exerting undue influence during the campaign season. The company has been accused of contributing to the spread of misinformation, racist language, and “alt-right” memes that were employed by many pro-Trump supporters. It has also been criticized for creating a so-called “filter bubble,” in which users’ newsfeeds only connected them to others who held the same political views, thus distorting their impressions of the level of support existing for each candidate.”

    “Facebook remains dedicated to crafting a public image of its product as a blank canvas that depends on the users to personalize itself. Users’ newsfeeds, photos, and even the advertisements they receive are all shaped by the history of their preferences and previous behavior on the website.

    “Facebook is not registered as and does not see itself as a media company; it argues that it is under no moral or professional obligation to behave as an arbiter of truth. However, it is also one of the largest distributors of news online. A study by the Pew Research Center even concluded that nearly half the American adult population relies on Facebook as its primary news source. Traditionally, people have trusted that the news is reliable.”

    “However, if Facebook wishes to present itself as a reliable news source, some level of editorial oversight is needed. Mark Zuckerberg may claim that the staff at Facebook are not “arbiters of truth” right now, but the rampant misinformation that has occurred this election season has made one fact clear: they need to be.”


The Harvard Crimson

Gabriel H. Karger

    “What kind of debate qualifies as legitimate in Facebook’s eyes? The company doesn’t say. One approach is to classify hateful content, like much-scrutinized fake news, as a subset of false speech. Group-focused hate speech contains generalizations or arguments that take no time to debunk, while more involved political content requires prohibitive resources to fact-check properly.”

    “Facebook’s selective moderation suggests that legitimate content for the company is not necessarily true or respectful content, but material whose publication it deems valuable from the public’s point of view. Even if the social network could have stopped users from hearing Trump’s Muslim ban speech, for instance, doing so would have prevented voters from learning something important about the candidate’s policy preferences.”

    “This desire to inform citizens just illustrates how any outfit’s censorship practices—or lack thereof—reflect a normative set of ideas about what best serves the interests of users. When Facebook, Google, or others frame content regulation as concerned with the “safety” of users, they mask the extent to which that safety is just one piece of a broader, and possibly controversial, conception of how we should lead our digital lives.”


The Bottom Line: Later in the interview, Zuckerberg said, “The principles that we have on what we remove from the service are: If it’s going to result in real harm, real physical harm, or if you’re attacking individuals, then that content shouldn’t be on the platform.” He did go on to say, though, that the site would not take down accounts for saying things that are “wrong,” noting that everyone makes mistakes sometimes. To suggest that false information cannot cause physical harm is a bold claim, and only time will tell whether Zuckerberg sticks to his principles or changes his mind.
