“I don’t believe that not fact-checking political ads is pro-conservative.” It’s with this awkward double negative that Mark Zuckerberg justified Facebook’s refusal to fact-check blatantly misleading political advertisements.
Zuckerberg spoke at Georgetown University this week, arguing that allowing Trump’s false ads to run on Facebook was fundamental to free speech. His position recalls the “marketplace of ideas,” a concept often traced to English philosopher John Stuart Mill: if everyone is allowed to say whatever they want, good ideas will eventually drown out the bad. Except in Mill’s day, speech was not distributed on social media platforms with global reach, with AI algorithms deciding which content to display, and how often.
In an era of coordinated misinformation campaigns, private companies can no longer remain the sole gatekeepers of the digital marketplace, especially when their laissez-faire approach invites lies and foreign interference. It is time for governments to regulate Internet content more proactively in order to preserve the functioning of democracies.
Russian meddling in the 2016 U.S. presidential elections is perhaps the most notorious example of the dangers unregulated content poses. Studies conducted since 2016 estimate that there were 760 million instances of an American user clicking through a fake news story during the 2016 election cycle. That’s three fake stories per American adult.
When Zuckerberg testified before the U.S. Congress, he confirmed that the Internet Research Agency (IRA)—a Russian company behind many misinformation campaigns—generated over 80,000 Facebook posts between 2016 and 2018. In addition, the IRA spent an estimated $300,000 on false political advertisements on Facebook and Instagram that reached over 11 million people in the U.S. Exactly how many good ideas in Facebook’s idyllic marketplace would it take to drown out a targeted, cash-infused influence campaign?
Facebook’s strategy for combating fake news is to deploy AI technology that would, in theory, detect and remove fake accounts. However, in his congressional testimony, Zuckerberg admitted that it would take Facebook an additional five to ten years to develop AI tools that can account for “the linguistic nuances of different types of content…and [that are] more accurate in flagging things.”
For the next decade—that’s three U.S. presidential elections vulnerable to foreign meddling—the company expects us to entrust it and its inadequate technology with our electoral process. And the only concerted effort by the U.S. government to stem the spread of online misinformation so far has been to consider adopting the Honest Ads Act. Sponsored by Senator Amy Klobuchar (D-MN), the bill was introduced in the Senate nearly two years ago; it aims to regulate political advertisements on online platforms, notably Facebook and Google.
With the 2020 elections looming closer, it is high time for lawmakers to move beyond mere discussions and adopt regulatory measures that would hold companies like Facebook accountable for the misinformation proliferating on their platforms. Such regulation is not unprecedented: France offers an example of the swift action the government could take in this domain.
During the 2017 French presidential elections, misinformation ran rampant in French online publications and on social media. News of Emmanuel Macron’s secret offshore accounts in the Bahamas spread like wildfire hours before his televised debate against Marine Le Pen. A fake website mimicking the Belgian newspaper Le Soir broke the story of Saudi Arabian nationals funding Macron’s campaign. These allegations earned Macron some bad press, but unlike his American counterpart in 2016, the candidate targeted in fake news campaigns went on to win the presidency.
In 2018, Macron’s government adopted a law against fake news, defined as “inexact allegations or imputations, or news that falsely report facts, with the aim of changing the sincerity of a vote.” Thanks to this legislation, the French government can now remove false content and block websites that publish it during the three months leading up to elections. In addition, political candidates are able to sue for the removal of contested online news stories, and tech companies are obliged to disclose the sources of funding for all sponsored political content.
France has yet to hold another national election, so it is still difficult to assess the efficacy of the law and the ability of the government to enforce it. Nonetheless, it is a crucial step in the right direction—toward sensible regulation, limited in scope to safeguarding the electoral process without granting the government sweeping control over online speech.
Of course, not every government can be entrusted with policing online discourse. Authoritarian regimes in countries like China routinely abuse their power to crush dissenting voices and to spread propaganda. In mainland China, for instance, users will find no mention of the concentration camps in the Xinjiang region where Uighur Muslims are being held and tortured today. But in democratic societies, where elected officials can be held accountable for their actions, government regulation need not become synonymous with oppression.
When the Internet first came into being, it represented a dream of an unbounded global space with no rules, no authorities, and ample opportunity for exchanging ideas. As with most utopian projects, it did not quite live up to its potential. Just as oppressed citizens could tap into the power and reach of social media networks to build revolutionary movements, so too could nefarious actors weaponize them to pump out propaganda. Finding a sensible balance between free expression and regulation, between safety and oppression, will remain a formidable challenge for decades to come. But it is time for us as citizens and for our governments to take the first steps toward finding that balance, rather than allowing fake news to run rampant and hoping that Mark Zuckerberg will save us.
This article was written as part of a class exercise in my grad program.