Crossroads The ACM Magazine for Students


Association for Computing Machinery


Filter bubbles and fake news

By Dominic DiFranzo and Kristine Gloria-Garcia


Tags: Computing / technology policy, Electronic commerce, Networks, World Wide Web


The E.U. referendum in the U.K. and the U.S. presidential election shocked journalists, pollsters, and citizens around the world. The outcomes, the U.K. voting to leave the E.U. and Donald Trump being elected President, raise the question of how traditional media and polls could have been so wrong in their predictions [1]. While plenty of fingers still point to outside interference, changing demographics, and economic concerns, one scapegoat, social media, has received special attention. Some critics place the fault with Facebook, Google, and other social media platforms for allowing the spread of "fake news," pointing to, for example, the creation and facilitation of echo chambers, where users are no longer exposed to outside opinions and views [2]. The New York Times reported that following Trump's victory, executives at Facebook began to privately consider the role their company and platform had played in the election [1]. However, Facebook CEO Mark Zuckerberg downplayed the company's role, saying, "Voters make decisions based on their lived experience," and that the theory that fake news shared on Facebook "influenced the election in any way, is a pretty crazy idea" [3].

So we ask: Did social media play a role in these election upsets? In this article, we examine whether it did, whether fake news and filter bubbles had an effect, and, if so, what can be done about them in the future.

Social Media Effect

Social media trends (in terms of, say, numbers of posts, shares, and likes) on the day of the elections favored both Brexit and Trump [4], which ran counter to the narrative of reputable polling and traditional media. Trump had more followers across social media platforms, pushed (and continues to push) his messages through social media rather than traditional media channels, and had higher engagement rates than his opponent. One of the most-shared posts on social media leading up to the election was "Why I'm Voting For Donald Trump," a blog post by a well-known conservative female blogger [4]. The trend was similar during the E.U. referendum in the U.K. The official "Leave" campaign had more followers and engagement on social media platforms, as well as more success in spreading pro-Leave hashtags and messages [5].

Moreover, according to Pew Research, 61% of millennials use Facebook as their primary source for political news [6]. Such news has been shown to have a direct effect on political actions, attitudes, and outcomes. In 2012, a study reported in Nature described a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 U.S. congressional elections. It found the messages directly influenced political self-expression, information seeking, and real-world voting behavior [7]. This is not surprising, as the dominant funding strategy of most social-media platforms follows the assumption that sponsored social-media posts, or advertisements, can change the buying behavior of their users [8]. In another example, the past decade has seen a rise in political movements (such as the Arab Spring and Black Lives Matter) that start on and are sustained through social-media platforms like Twitter and Facebook.

Hampton and Hargittai [9] offered evidence that the demographic most disconnected from social media and the web was also the most likely to support Trump. Voters without a college degree supported Trump by a nine-percentage-point margin, whereas in past elections they were as likely to support Democrats as Republicans. The gap widened greatly when pollsters looked at white non-college-educated voters, who supported Trump by 39 percentage points in 2016, the largest margin of support from this demographic since 1980 [9]. This group in particular, white non-college-educated voters, was also the most likely to lack access to the Internet, and those who did have access were the least likely to use social media [6].

Additionally, Pew Research reported that while millennials (people born 1977 to 1995) get their political news from social-media platforms, most Americans still rely on their local TV news stations and other traditional mass-media sources [6]. These are the same mass-media sources where Trump received significantly more attention and coverage compared to Hillary Clinton [6]. And while social-media-savvy millennials overwhelmingly supported Clinton, voter turnout from this demographic was lower than in both the 2008 and 2012 presidential elections. Further data analysis shows Clinton supporters were most likely to be engaged on social media platforms like Twitter and Reddit [9].

Why, then, did a first-pass look at social data trends show more support for Trump than Clinton if social-media users seemed to show more support for Clinton? The answer is still being investigated, but one explanation may be botnets. Two recent studies (one from researchers at the University of Southern California, the other from Oxford University, the University of Washington, and Corvinus University of Budapest) showed AI-controlled bots were spreading pro-Trump content in overwhelming numbers. Kollanyi et al. [11] from Oxford University estimated that one-third of pro-Trump tweets came from automated bots, which they identified by how often the accounts tweeted, at what time of day, and their relation to other accounts. This created the illusion of more support for Trump on Twitter than there may have been naturally [10,11]. Kollanyi et al. [11] noted similar automated patterns on Twitter leading up to the E.U. referendum in the U.K., in which pro-Leave tweets greatly outnumbered pro-Remain tweets.
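To make the classification signals concrete (posting frequency, time of day, and relations to other accounts), here is a toy rule-based scorer. All thresholds, weights, and field names are invented for illustration; this is not the classifier used in the cited studies.

```python
# Toy rule-based bot scorer built on the kinds of signals described above:
# posting frequency, round-the-clock activity, and skewed account relations.
# Thresholds and weights are illustrative assumptions only.

def bot_score(account):
    """Return a score in [0, 1]; higher suggests automation."""
    score = 0.0
    # Heavy, sustained posting: few humans exceed ~50 tweets per day.
    if account["tweets_per_day"] > 50:
        score += 0.5
    # Round-the-clock activity: posts in nearly every hour of the day.
    if len(account["active_hours"]) >= 20:  # hours 0-23 with any posts
        score += 0.25
    # Skewed relations: follows far more accounts than follow it back.
    if account["following"] > 10 * max(account["followers"], 1):
        score += 0.25
    return score

suspect = {"tweets_per_day": 120, "active_hours": set(range(24)),
           "following": 5000, "followers": 40}
print(bot_score(suspect))  # → 1.0
```

Real bot-detection systems rely on supervised machine learning over hundreds of features rather than hand-set thresholds, but the underlying intuition is the same: automated accounts behave in statistically unusual ways.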

Filter Bubble

Another criticism of social media is that it constructs "filter bubbles," digital echo chambers where users see only content and posts that agree with their preexisting beliefs [12]. While there is an active dialogue as to whether filter bubbles exist, here we highlight work that explores whether they contributed to the 2016 election results.

In 2015, Facebook funded a study showing that while Facebook's own news-feed algorithm might favor posts that support a user's political beliefs, the related filter-bubble effect is due to the user's network and past engagement behavior (such as clicking only on certain news stories); that is, it is not the fault of the news-feed algorithm but of the choices of users themselves. The study also found this favoritism effect is small overall: Users are only 6% less likely to see a post that conflicts with their political views than they would be with an unfiltered news feed [13].

Personal recommendation systems, or systems that learn and react to individual users, are claimed to be one cause of filter bubbles [12]. Other studies have shown that personalized recommendations can actually expose users to content they might not have found on their own [14] and that personalized recommendations are not used as extensively as once thought [15]. A 2011 national survey by Pew Research found Facebook use is actually correlated with knowing and interacting with a greater variety of people from different backgrounds and demographics [16]. This correlation persists even after controlling for the demographic differences between Facebook users and the U.S. population as a whole. In a sense, social media may actually be bursting filter bubbles. The same survey showed that people who are offline are more likely to be socially isolated and to have less-diverse social relationships, thereby being exposed to less-diverse ideas and viewpoints [16].

While these studies are compelling, evidence of filter bubbles and their effect on users continues to grow. For example, several researchers have criticized the 2015 Facebook news-feed study mentioned earlier. Specifically, Zeynep Tufekci rebutted many of the study's findings and methods [17], accusing it of underplaying its most important conclusion: that the news-feed algorithm decides the placement of posts, and this placement greatly influences what users click and read. Tufekci also highlighted that the sampling was not random and thus the results cannot be generalized across all Facebook users. Even taken at face value, the Facebook study still shows the filter-bubble effect is real and that the algorithm actively suppresses posts that conflict with a user's political viewpoint. Other recent studies (such as Del Vicario et al. [18] on the sharing of scientific and conspiracy stories on Facebook) found evidence of the formation of echo chambers that cause "confusion about causation, and thus encourage speculation, rumors, and mistrust."

Fake News

On the theme of "speculation, rumors, and mistrust," fake news is another issue that has plagued social media platforms during, as well as after, the U.S. elections. "Fake news" is a recent, popular, and purposefully ambiguous term for false news stories that are packaged and published as if they were genuine. The ambiguity of the term, an inherent property of what it tries to label, makes its use attractive across the political spectrum, where any information that conflicts with an ideology can be labeled "fake." The New York Times published an article [19] chronicling the spread of fake news on social media, reporting that one such fake story was shared at least 16,000 times on Twitter and more than 350,000 times on Facebook. According to BuzzFeed [20], in the months before the U.S. elections, fake news stories on Facebook actually outperformed real news from mainstream news outlets, and these fake stories overwhelmingly favored Trump. For example, a fake news story reporting that Pope Francis had endorsed Trump was shared more than one million times on social media feeds. Pope Francis, an advocate for refugees, made no such endorsement. Not only were these fake news sources shared on social media platforms, they were also shared by Trump himself, as well as by members of his campaign [9].

Fake news stories have a real effect offline as well. A shooting took place in a Washington, D.C., pizzeria called Comet Ping Pong after fake news stories and conspiracy theories spread about it being part of a child trafficking ring [21]. Army Lt. Gen. Michael Flynn (Ret.), an early National Security Adviser under Trump, shared fake news stories related to this so-called "Pizza-gate" scandal more than 16 times, according to a Politico review of his Twitter posts [22].

Although fake news may be a problem (though not one unique to social media), its dissemination may not break out of the filter bubble at its point of origin. Several studies have shown that fake news spreads more like an epidemic than real news stories do and that such stories usually stay within the same communities [23]; that is, they tend not to reach or convince outsiders. Likewise, Mark Zuckerberg said in a Facebook post following the U.S. election, "Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other" [3]. He did not, however, provide any data or evidence to back this claim.
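The epidemic analogy can be illustrated with a toy two-community contagion model: exposure spreads readily inside a community but rarely crosses community boundaries. The rates and time horizon below are invented for illustration; the cited work [23] fits richer epidemiological models to real Twitter data.

```python
# Toy two-community contagion model: a story seeded in community A spreads
# via frequent within-community contact (beta_in) and rare cross-community
# contact (beta_out). All parameters are illustrative assumptions.

def spread(steps=20, beta_in=0.5, beta_out=0.0001):
    """Return the fraction of each community exposed after `steps`."""
    a, b = 0.01, 0.0  # exposed fraction in community A (seeded) and B
    for _ in range(steps):
        da = beta_in * a * (1 - a) + beta_out * b * (1 - a)
        db = beta_in * b * (1 - b) + beta_out * a * (1 - b)
        a, b = a + da, b + db
    return a, b

a, b = spread()
print(f"origin community: {a:.2f}, outside community: {b:.2f}")
```

With these rates the story saturates its origin community while reaching only a few percent of the other community over the same period, mirroring the finding that fake stories tend to stay in the communities where they start.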

As the dust continues to settle from the election, research into the role of social media and digital media platforms as key influencers will continue. We acknowledge how unlikely it is that analysts will ever reach a commonly accepted explanation for the election outcomes. However, it is imperative to acknowledge the need to study such potential causes and effects. Having laid out the open research questions, we now turn to possible solutions.

Future Solutions

Whatever the extent to which fake news and filter bubbles affected the U.S. presidential election, social media platforms like Facebook and Google are exploring ways to reduce these influences on their platforms. In November 2016, both Google and Facebook announced they would ban websites that publish fake news from their advertising networks, effectively killing those sites' revenue stream [24]. Facebook has also created new tools to flag fake content and is partnering with third-party fact-checking organizations like Snopes and PolitiFact [25]. It is also developing better automatic fake-news-detection systems to limit the spread of such content.
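At its simplest, one ingredient of automatic fake-news detection is text classification. The sketch below is a bare-bones word-frequency classifier with add-one smoothing; the labels and example headlines are hypothetical, and real systems combine many more signals (source reputation, propagation patterns, fact-check databases).

```python
# Toy fake-news classifier: score each label by smoothed word frequencies
# learned from labeled examples. Everything here (examples, labels, design)
# is illustrative only, not any platform's actual detection system.

from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs with label 'fake' or 'real'."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Multiply add-one-smoothed per-word frequencies for each label.
    scores = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(c)
        score = 1.0
        for word in text.lower().split():
            score *= (c[word] + 1) / total
        scores[label] = score
    return max(scores, key=scores.get)

examples = [
    ("pope endorses candidate shocking secret", "fake"),
    ("you won't believe this shocking endorsement", "fake"),
    ("senate passes budget bill after debate", "real"),
    ("election officials certify state results", "real"),
]
model = train(examples)
print(classify(model, "shocking secret endorsement"))  # → fake
```

Even this crude sketch shows why detection is hard: the classifier only learns surface vocabulary, so fake stories written in sober newsroom language would slip through.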


Researchers and software developers have been looking into tools to help users break out of filter bubbles [26], including filtering algorithms and user interfaces that give users better control and allow more diversity. Other tools (such as the browser plug-in Ghostery and the search engine DuckDuckGo) help anonymize users' actions online, thus disabling personalized recommendations.

Bot and spam detection is another major area of research. Many social-media platforms already use a range of tools, from machine learning to social network analysis, to detect and stop bots. Independent groups and researchers have also developed tools to detect bots; for example, researchers at Indiana University have developed BotOrNot (http://truthy.indiana.edu/botornot/), a service that allows users to check if a particular Twitter user is, in fact, a bot.

Difficult Questions

In addition to technical enhancements and design choices, what other avenues, including public policymaking, are available for combating these issues? This may be a particularly difficult question in the U.S. due to free-speech protections under the First Amendment of the U.S. Constitution. We already see legal and political tension as Twitter implements internal policies for flagging hate speech and closing specific accounts. Others have suggested renewed media- and civic-literacy initiatives to help users discern for themselves which news sources are indeed trustworthy.

The issues of fake news and filter bubbles are vague, nuanced, and predate social media, with no ready solution, but it is vital that researchers continue to explore and investigate them from diverse technical and social perspectives. Their skills, knowledge, and voices are needed more than ever to address them.

References

[1] Isaac, M. Facebook, in cross hairs after election, is said to question its influence. The New York Times (Nov. 12, 2016); https://www.nytimes.com/2016/11/14/technology/facebook-is-said-to-question-its-influence-in-election.html

[2] Isaac, M. and Ember, S. For election day influence, Twitter ruled social media. The New York Times (Nov. 8, 2016); http://www.nytimes.com/2016/11/09/technology/for-election-day-chatter-twitter-ruled-social-media.html

[3] Kokalitcheva, K. Mark Zuckerberg says fake news on Facebook affecting the election is a 'crazy idea.' Fortune (Nov. 11, 2016); http://fortune.com/2016/11/11/facebook-election-fake-news-mark-zuckerberg/?iid=sr-link1

[4] El-Bermawy, M.M. Your filter bubble is destroying democracy. Wired (Nov. 18, 2016); https://www.wired.com/2016/11/filter-bubble-destroying-democracy/

[5] Sigdyal, P. and Wells, N. Twitter users scream 'leave' in Brexit vote, but 'remain' gains ground. CNBC (June 23, 2016); http://www.cnbc.com/2016/06/23/twitter-users-scream-leave-in-brexit-vote-but-remain-gains-ground.html

[6] Mitchell, A., Gottfried, J., and Matsa, K.E. Facebook top source for political news among millennials. Pew Research Center, June 1, 2015; http://www.journalism.org/2015/06/01/facebook-top-source-for-political-news-among-millennials/

[7] Bond, R.M., Fariss, C.J., Jones, J.J., Kramer, A.D.I., Marlow, C., Settle, J.E., and Fowler, J.H. A 61-million-person experiment in social influence and political mobilization. Nature 489, 7415 (Sept. 2012), 295–298.

[8] Taylor, D.G., Lewin, J.E., and Strutton, D. Friends, fans, and followers: Do ads work on social networks? Journal of Advertising Research 51, 1 (2011), 258–275.

[9] Hampton, K. and Hargittai, E. Stop blaming Facebook for Trump's election win. The Hill (Nov. 23, 2016); http://thehill.com/blogs/pundits-blog/presidential-campaign/307438-stop-blaming-facebook-for-trumps-election-win

[10] Fields, J., Sengupta, S., White, J., Spetka, S. et al. Botnet Campaign Detection on Twitter. Master of Science thesis, Department of Computer and Information Sciences, SUNY Polytechnic Institute, Utica, NY, 2016; https://dspace.sunyconnect.suny.edu/handle/1951/68351

[11] Kollanyi, B., Howard, P.N., and Woolley, S.C. Bots and automation over Twitter during the third U.S. presidential debate. Political Bots (Oct. 27, 2016); http://politicalbots.org/wp-content/uploads/2016/10/Data-Memo-Third-Presidential-Debate.pdf

[12] Pariser, E. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin, New York, 2011.

[13] Bakshy, E., Messing, S., and Adamic, L. Exposure to ideologically diverse news and opinion on Facebook. Science 348, 6239 (2015), 1130–1132.

[14] Hosanagar, K., Fleder, D., Lee, D., and Buja, A. Will the global village fracture into tribes? Recommender systems and their effects on consumer fragmentation. Management Science 60, 4 (2013), 805–823.

[15] Weisberg, J. Bubble trouble: Is web personalization turning us into solipsistic twits. Slate (June 10, 2011); http://www.slate.com/articles/news_and_politics/the_big_idea/2011/06/bubble_trouble.html

[16] Hampton, K., Sessions Goulet, L., Rainie, L., and Purcell, K. Social networking sites and our lives. Pew Research, June 16, 2011; http://www.pewinternet.org/2011/06/16/social-networking-sites-and-our-lives/

[17] Tufekci, Z. How Facebook's algorithm suppresses content diversity (modestly) and how the newsfeed rules the clicks. Medium (May 7, 2015); https://medium.com/message/how-facebook-s-algorithm-suppresses-content-diversity-modestly-how-the-newsfeed-rules-the-clicks-b5f8a4bb7bab#.kw4xqeif0

[18] Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H.E., and Quattrociocchi, W. The spreading of misinformation online. Proceedings of the National Academy of Sciences 113, 3 (2016), 554–559.

[19] Maheshwari, S. How fake news goes viral: A case study. The New York Times (Nov. 20, 2016); http://www.nytimes.com/2016/11/20/business/media/how-fake-news-spreads.html

[20] Silverman, C. This analysis shows how viral fake election news stories outperformed real news on Facebook. BuzzFeed (Nov. 16, 2016); https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook?utm_term=.xu3M8zonA#.xqbRqV1D8

[21] Kang, C. Fake news onslaught targets pizzeria as nest of child-trafficking. The New York Times (Nov. 21, 2016); http://www.nytimes.com/2016/11/21/technology/fact-check-this-pizzeria-is-not-a-child-trafficking-site.html

[22] Bender, B. and Hanna, A. Flynn under fire for fake news. Politico (Dec. 5, 2016); http://www.politico.com/story/2016/12/michael-flynn-conspiracy-pizzeria-trump-232227

[23] Jin, F., Dougherty, E., Saraf, P., Cao, Y., and Ramakrishnan, N. Epidemiological modeling of news and rumors on Twitter. In Proceedings of the Seventh Workshop on Social Network Mining and Analysis. ACM Press, New York, 2013, article no. 8.

[24] Kottasova, I. Facebook and Google to stop ads from appearing on fake news sites. CNN (Nov. 15, 2016); http://money.cnn.com/2016/11/15/technology/facebook-google-fake-news-presidential-election/index.html

[25] Heath, A. Facebook is going to use Snopes and other fact-checkers to combat and bury 'fake news.' Business Insider (Dec. 15, 2016); http://www.businessinsider.com/facebook-will-fact-check-label-fake-news-in-news-feed-2016-12

[26] Resnick, P., Kelly Garrett, R., Kriplean, T., Munson, S.A., and Jomini Stroud, N. Bursting your (filter) bubble: Strategies for promoting diverse exposure. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (San Antonio, TX, Feb. 23–27). ACM Press, New York, 2013, 95–100.

Authors

Dominic DiFranzo is a post-doctoral associate in the Social Media Lab at Cornell University, Ithaca, NY. He holds a Ph.D. in computer science from the Rensselaer Polytechnic Institute, Troy, NY, and was a member of the Tetherless World Constellation.

Kristine Gloria-Garcia joined the Aspen Institute Communications and Society Program as a project manager in September 2016; previously, she served as a visiting researcher at the Internet Policy Research Initiative at MIT, Cambridge, MA, and as a privacy research fellow at the Startup Policy Lab. She holds a Ph.D. in cognitive science from Rensselaer Polytechnic Institute, Troy, NY, and a master's in media studies from the University of Texas at Austin.


© 2017 ACM 1528-4972/17/03 $15.00


