Online Disinformation in the United States

Implications for Latin America

The 2016 presidential election in the United States was arguably the event that thrust online disinformation into the public consciousness. Indeed, research subsequently revealed that as many as 65 million Americans visited a disinformation website in the weeks leading up to the 2016 election, and the phenomenon of so-called “fake news” was hotly debated both online and off. In the years since, the role of online disinformation in the 2016 election, and its potential impact on the victory of President Donald Trump, have been the subject of a growing body of empirical research. These studies show that disinformation on social media was indeed widespread. At the same time, most research finds that such disinformation did not influence the outcome of the election, though the broader Russian information warfare campaign may have.

The comparatively early experience of the United States with online disinformation and the subsequent efforts to document and measure its impact—as well as the evolving responses of policymakers, social media platforms, and others—render the U.S. a useful case study for other countries confronting this challenge. In Latin America, disinformation—false information deliberately and often covertly spread to influence public opinion—has been a feature of recent elections in countries including Brazil, Colombia, and Mexico. As social media’s relevance as a source of political news expands, so too will the potential reach and impact of disinformation. For regulators, platforms, and citizen groups working to respond to this challenge, the U.S. experience offers essential insight that argues against both complacency and overreach.

This policy brief, based on publicly available information and a survey of the existing academic literature, summarizes what we know about the role of online disinformation in the most recent U.S. elections and distills relevant policy implications with Latin America in mind. Taken together, the recommendations that we derive from the U.S. experience—for governments, technology companies, and civil society—suggest that there is no silver bullet against online disinformation. Instead, disinformation is best addressed via agile, collaborative, multistakeholder responses that combine carefully conceived, rights-respecting regulation; technological adaptations by social media platforms; and civil society-driven efforts in areas such as fact-checking and digital literacy.

Disinformation, in short, is about discrediting. From elections to the media to partisan politics to the institutions of Congress and the Presidency, doubt, chaos, and distrust have taken hold in the United States. Only a concerted and sober effort to rebuild trust can win these institutions back.

 

By the Numbers:

  • Between 2013 and 2018, Russian social media campaigns reached tens of millions of Americans; between 2015 and 2017 alone, their content was shared more than 30 million times.
  • A far larger number—as many as 65 million people—visited a disinformation website, not necessarily one linked to Russia, in the final weeks of the 2016 election.
  • Russian campaigns used Twitter, Facebook, Instagram, YouTube, and other platforms and were spread both through advertisements and organic activity.
  • Russian disinformation campaigns did not stop after the 2016 election, or even after the U.S. Department of Justice indicted 13 Russian individuals and three companies for “information warfare against the United States.”
  • Despite the widespread use of disinformation, there is limited evidence that it shifted voter opinions or had a decisive, or even significant, impact on the results of the 2016 or 2018 elections in the United States. Instead, the primary result has been growing distrust, derision, and partisanship across the ideological spectrum.

Policy Implications for Latin America

FOR GOVERNMENTS

  • Social media matters. In the weeks following the 2016 election, Facebook was the site most consulted for political information in the United States, cited by 21% of users, compared with just 2% for the Washington Post and 1.3% for the New York Times. Social media’s role as a digital “public square” is already formidable and likely only to grow. As such, the democratizing function of social media should be protected, even as governments and societies remain attentive to the impact of disinformation, whose power and reach are dramatically amplified by these platforms.
  • Maintain perspective and avoid overreach. While the empirical research regarding the impact of disinformation on voter behavior is not entirely conclusive, it suggests that even vast exposure to disinformation (one in four Americans visited a fake news website in 2016) may have only a minimal impact on the electoral process. Disinformation might have additional pernicious effects on democracy that warrant careful consideration, but evidence from the U.S. experience in 2016 argues in favor of a rigorous, careful approach to the challenge of disinformation, one that avoids overreach. Policy solutions that make governments the arbiters of truth or authorize prior censorship are not only inconsistent with freedom of expression standards but also unsupported by the existing evidence.
  • Focus on social media advertising. The financial incentive of advertising revenue may be connected to the proliferation of disinformation. Platforms can sharply curtail both the incentive and the disinformation it fuels, either voluntarily or, if necessary, through regulation. At a minimum, full transparency should be required regarding the identity and geographic location of any entity purchasing political advertising online.
  • Follow disinformation where it goes. Disinformation will naturally track users to the platforms and online spaces they use most. In the United States, Russian operatives quickly found Instagram to be the most effective platform and shifted many of their resources there. In Latin America, messaging platforms like WhatsApp are likely to play an outsize role, as was the case in Brazil. The characteristics of particular platforms—such as the peer-to-peer, encrypted nature of WhatsApp—will demand tailored solutions.
  • Be agile. One of the biggest challenges in responding to online disinformation is that governments often start several steps behind. They must develop monitoring and enforcement strategies that are proactive rather than reactive, including the ability to quickly expose disinformation and foreign intervention. While the U.S. intelligence community had been tracking Russia’s Internet Research Agency for years, there was no serious effort to identify and publicize its activities until after the 2016 election.
  • Improve information security and data protection. While the political impact of disinformation is debatable, the consequences of hacking and information theft are far clearer. Politicians, governments, political parties, and other public sector organizations must modernize and strengthen their information security infrastructure or risk escalating attacks. Protecting citizens’ data privacy, including by strengthening legislation, is an essential complement to these efforts.

FOR TECHNOLOGY COMPANIES 

  • Take (a share of) responsibility. Following the 2016 U.S. elections, Facebook CEO Mark Zuckerberg famously dismissed the idea that propaganda and disinformation on his company’s platform influenced the outcome, a statement he later said he regretted. Social media companies have always jealously guarded their status as mere conduits of content, wary of crossing the line from platform to publisher lest they be held liable for the content they host. In practice, however, they remain very far from this line. As Zuckerberg subsequently acknowledged, the experience of 2016 showed that the major social media platforms have evolved into democratic protagonists and must therefore play an active role in the search for solutions to online disinformation and related challenges.
  • Adapt the algorithms. Algorithms that determine what information users see can be adapted to incorporate measures of information quality, rather than simply surfacing whatever content is most likely to keep users glued to a site, and to demote content that has been identified as disinformation or artificially promoted by bots. Social media platforms should, to the extent reasonable, make algorithmic information available to the public so that users understand why they are seeing the content they see. Content oversight bodies established by companies should have full access to the algorithmic data necessary to their oversight function.
  • Maximize transparency. Information about who purchases political advertisements online should be readily visible to users, as should the country from which page administrators most commonly access their accounts (with administrators prevented from using virtual private networks to disguise their locations).

FOR CIVIL SOCIETY

  • Promote digital literacy. Research suggests that voters are savvier and less susceptible to disinformation than is sometimes assumed. In the 2016 elections, voters over 60 were the most susceptible to disinformation, suggesting that digital literacy plays a role in citizens’ capacity to detect and discount it. In the long term, healthy democracies will need to be inoculated against the falsehoods enabled by the digital age, especially as purveyors of disinformation grow more sophisticated (including through synthetic media, or so-called deepfake videos).
  • Expand research. Social scientists have dedicated significant resources to understanding online disinformation in the 2016 U.S. elections, helping shed light on the extent and impact of this phenomenon. As disinformation itself evolves, so too will the need for continuing research to ensure that policy solutions are grounded in empirical reality. Platforms should provide researchers access to the necessary information while scrupulously protecting user data.
  • Iterate on fact-checking. Fact-checking is a logical and laudable response to the challenge of disinformation. Evidence from the 2016 U.S. elections, however, suggests that fact-checking sites and disinformation sites rarely reach the same users. This is not a reason to give up, but rather to continue experimenting with fact-checking solutions and measuring their impact.
  • Remember the big picture. Disinformation’s power lies in amplifying and exploiting divisions, distrust, and norm violations in the real-world political sphere. Cleaning up online spaces is only half of the puzzle when it comes to improving democratic discourse. From politicians and political parties to multinational corporations and media outlets, the task of restoring citizens’ faith in institutions and returning facts and civility to the public sphere goes well beyond the specific challenge of online disinformation.
 
