Misinformation, Bots, and Algorithms: Dr. Emilio Ferrara on Technology's Impact on Democracy
The rising threat of generative AI in misinformation campaigns may eventually erode trust in democratic institutions.
Dr. Emilio Ferrara discusses the impact of technology, AI, and social media on elections, misinformation, and democracy. He highlights how algorithms shape political discourse, the role of social bots in spreading misinformation, and the risks posed by generative AI (GenAI) in creating hyper-targeted false narratives. Ferrara emphasizes the need for robust detection methods, platform accountability, and balanced regulation to protect electoral integrity while preserving free speech. He also explores potential solutions, such as blockchains for transparency and AI-driven monitoring tools, urging collaboration between researchers, policymakers, and technologists to address these challenges.
Gabriela Pinto (GP): Could you share your journey into the field of computational social science and how it led to your work on elections and technology?
Emilio Ferrara (EF): My journey into computational social science began with an intrinsic fascination for understanding human behavior in complex techno-social systems through data. During my graduate studies, I explored the dynamics of social networks and how information diffuses through these systems. This naturally progressed into investigating the intersection of technology and society, particularly the role of social media in shaping public opinion. Over time, I focused on elections as a critical domain where technology's impact on societal processes could be of concern. My most recent projects involve dissecting how emerging technologies, such as AI, especially GenAI, might influence electoral integrity and democratic processes globally.
Technology's Impact on Elections
GP: In your research, how significant has technology been in shaping modern elections, particularly through social media?
EF: Throughout history, technology has fundamentally reshaped electoral processes, and social media platforms are no exception: they serve both as amplifiers of political discourse and as arenas where narratives compete for people's attention. Although social media enable unprecedented reach and engagement, they also introduce vulnerabilities and risks, such as the rapid spread of misinformation and the manipulation of public opinion through bots or coordinated campaigns, as much of our work has shown over the years. This dual role underscores the need for rigorous analysis and further research.
GP: What role do you think algorithms play in amplifying or suppressing certain political narratives during elections?
EF: Algorithms are central in determining what content users see, thereby shaping their perceptions. For example, we recently showed that when users join X [formerly Twitter], a few signals (likes, follows) are sufficient for the recommendation algorithm to cast them into "political bubbles." Furthermore, by prioritizing engagement, these systems often amplify polarizing or inflammatory content, indirectly favoring divisive narratives. Conversely, algorithms can also suppress dissenting voices or marginalized viewpoints if not designed with fairness in mind. Understanding and mitigating these biases is a critical area of our ongoing research.
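To make the mechanism concrete, the following is a minimal sketch of an engagement-weighted feed ranker in Python. The post fields, weights, and example data are illustrative assumptions, not any platform's actual algorithm; the point is only that optimizing for raw engagement tends to surface the most reaction-provoking content.

```python
# Hypothetical sketch: an engagement-weighted feed ranker.
# Field names, weights, and data are illustrative assumptions,
# not any real platform's ranking algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    reshares: int
    replies: int

def engagement_score(p: Post) -> float:
    # Reshares and replies are weighted above likes because they spawn
    # further interactions; emotionally charged posts tend to attract
    # disproportionately many of both, so they rise in the ranking.
    return p.likes + 2 * p.reshares + 3 * p.replies

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=50, reshares=5, replies=3),
    Post("Inflammatory hot take", likes=40, reshares=30, replies=60),
])
print([p.text for p in feed])  # the inflammatory post ranks first
```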
Misinformation and Social Bots
GP: Could you elaborate on how misinformation spreads through social networks, and what patterns are unique during elections?
EF: Misinformation often spreads from tightly knit communities where trust in the source supersedes factual accuracy. During elections, this dynamic is amplified by heightened emotional intensity due to the salience of the election outcome. Unique patterns include the emergence of "information cascades," where false claims gain traction through rapid sharing, and the use of coordinated bot networks to lend artificial credibility to misleading narratives. We have also recently seen how GenAI can be used to create fictitious imagery portraying certain candidates in more positive (or negative) ways, and even to manufacture voice podcasts that (mis)attribute words or ideas to a given candidate.
GP: What are the most effective strategies for detecting and mitigating the influence of social bots in electoral campaigns?
EF: Effective strategies combine advanced machine learning techniques, similar to those we have developed over the years, with human expertise. Detecting bots involves analyzing behavioral patterns, such as high-frequency posting or the retweeting of specific accounts. AI is required not only to tackle the scale of the problem on large platforms, but also to identify higher-order patterns that might escape the human eye. Mitigation requires platform cooperation to tackle malicious accounts swiftly, alongside policymaking that outlines an effective regulatory landscape encouraging platforms to serve the public good.
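As a rough illustration of behavioral screening, here is a minimal Python sketch that scores accounts on a few simple signals. The field names and thresholds are hypothetical; production detectors are supervised classifiers trained over many more features, and a score like this would only be a first-pass filter for human review.

```python
# Minimal sketch: flagging likely bots from simple behavioral features.
# Thresholds and field names are illustrative assumptions; real
# detectors use supervised models over far richer feature sets.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_day: float
    retweet_ratio: float      # share of posts that are retweets
    distinct_clients: int     # number of apps used to post
    account_age_days: int

def bot_likelihood(a: Account) -> float:
    """Crude heuristic score in [0, 1]: higher means more bot-like."""
    signals = [
        a.posts_per_day > 100,        # inhumanly high posting frequency
        a.retweet_ratio > 0.9,        # almost pure amplification
        a.distinct_clients == 1,      # single (likely automated) client
        a.account_age_days < 30,      # freshly created account
    ]
    return sum(signals) / len(signals)

suspicious = Account("amplifier_42", posts_per_day=450,
                     retweet_ratio=0.97, distinct_clients=1,
                     account_age_days=12)
print(bot_likelihood(suspicious))  # 1.0 -> strong candidate for review
```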
GP: Have you noticed any differences in misinformation tactics across different countries or political systems?
EF: Yes, tactics vary widely. In Western democracies, deliberate misinformation campaigns often exploit free speech protections, focusing on divisive social issues. In more restrictive regimes, state-sponsored campaigns dominate, leveraging propaganda to control narratives and suppress dissent. Cultural nuances and platform mechanics also influence these strategies.
Regulatory and Ethical Concerns
GP: What are the biggest challenges in regulating the use of AI and technology in electoral contexts?
EF: Regulating AI within (but also beyond) elections faces challenges such as defining the scope and intent of harmful applications, and balancing interventions with ethical considerations. The rapid evolution of AI technology often outpaces regulatory frameworks, necessitating new policy-making approaches.
GP: How can policymakers strike a balance between regulating technology to ensure fair elections and protecting freedom of speech?
EF: Policymakers might need to prioritize transparency and accountability. For instance, mandating disclosures for political ads and for the algorithms behind content moderation can foster trust without impinging on free speech. Collaboration between policymakers, researchers, technologists, and civil society is essential to craft balanced policies; for example, a Senate bill codified into California state law in 2019, regulating the use of social bots for election interference, was crafted by policymakers who requested our input.
Future of Technology in Elections
GP: With the rise of GenAI, what potential risks or benefits do you foresee for future elections?
EF: GenAI presents both opportunities and risks. It could enhance voter engagement through personalized outreach, translation, or explanation of complex political notions. However, it also poses risks, such as the creation of hyper-realistic, hyper-targeted misinformation, which could undermine trust in electoral processes.
GP: What role do you think blockchain technology or decentralized platforms could play in ensuring transparency in elections?
EF: Blockchain technology offers promising avenues for enhancing electoral transparency. It can ensure secure, tamper-proof record-keeping for votes and campaign financing. A decade ago, I published an influential paper arguing for blockchain-based proof of identity online; only recently have we seen some of these ideas implemented. Decentralized platforms can enable uncensored access to information, though scalability and accessibility remain challenges.
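As a toy illustration of the tamper-evidence property Ferrara mentions, here is a minimal hash-chain sketch in Python. A real electoral or campaign-finance ledger would add distributed consensus, digital signatures, and privacy protections; this only demonstrates why retroactive edits become detectable.

```python
# Toy hash chain illustrating tamper-evident record-keeping.
# A real electoral ledger would add consensus, signatures, and
# privacy; this shows only the chaining idea.
import hashlib
import json

def block_hash(record: str, prev: str) -> str:
    payload = json.dumps({"record": record, "prev": prev},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain: list[dict], record: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})

def verify(chain: list[dict]) -> bool:
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash(block["record"], prev):
            return False
    return True

ledger: list[dict] = []
append(ledger, "donation: committee A, $500")
append(ledger, "donation: committee B, $1200")
print(verify(ledger))                        # True
ledger[0]["record"] = "donation: committee A, $5"
print(verify(ledger))                        # False -> edit is detected
```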
GP: If you were advising election officials, what tools or strategies would you recommend to ensure a fair and transparent process in the digital age?
EF: I would recommend leveraging tools for real-time monitoring of online discourse to identify misinformation and coordinated campaigns. Implementing robust cybersecurity measures to protect election infrastructure is also critical. Additionally, fostering public-private partnerships can help address technology-related vulnerabilities effectively.
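One common ingredient of such monitoring is flagging clusters of accounts posting near-identical messages. The sketch below uses hypothetical data and a naive pairwise comparison; real pipelines rely on scalable similarity search (for example, MinHash with locality-sensitive hashing) and treat matches as leads for analysts, not as proof of coordination.

```python
# Minimal sketch: surfacing near-duplicate messages as a signal of
# coordinated posting. Data and threshold are hypothetical; real
# pipelines use scalable similarity search, not pairwise comparison.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("user_a", "Candidate X secretly plans to cancel the election!!"),
    ("user_b", "Candidate X secretly plans to CANCEL the election!"),
    ("user_c", "Lovely weather at the rally today."),
]

def similarity(s: str, t: str) -> float:
    # Case-insensitive character-level similarity in [0, 1].
    return SequenceMatcher(None, s.lower(), t.lower()).ratio()

THRESHOLD = 0.9  # illustrative cutoff for "near-identical"
for (u1, p1), (u2, p2) in combinations(posts, 2):
    if similarity(p1, p2) >= THRESHOLD:
        print(f"possible coordination: {u1} and {u2}")
```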
GP: What steps can individuals take to critically evaluate information they encounter online during election cycles?
EF: Individuals should adopt a skeptical approach to online content, verifying sources and cross-referencing information with reputable outlets. Digital literacy programs can equip citizens with the skills to discern credible information from propaganda or misinformation. It's fair to say that in the future we shouldn't blindly trust what our eyes can see (it might be GenAI).
GP: What areas of research in technology and elections do you believe are most pressing for the academic and policy-making communities to address?
EF: Key areas include understanding the socio-technical interplay of algorithms and user behavior, especially the potential effects on voting behavior; developing countermeasures against generative AI-driven misinformation and hyper-targeted influence campaigns; and exploring equitable frameworks for regulating technology globally. Bridging the gap between academic research, policymakers, and technologists is essential to address these pressing challenges.
Dr. Emilio Ferrara is a professor of computer science at the University of Southern California (USC), where his research explores the intersection of artificial intelligence, network science, and computational social science. His work focuses on understanding and modeling complex social phenomena, including misinformation spread, social bots, and influence campaigns on digital platforms. He has made significant contributions to the detection and mitigation of online manipulation, applying machine learning and data-driven approaches to analyze large-scale social networks. Dr. Ferrara has published extensively in high-impact journals and conferences, with research appearing in venues such as Nature Communications, PNAS, and ICWSM. His work has been widely recognized for its societal impact, particularly in areas related to the dynamics of online discourse and the broader implications of algorithmic decision-making on society.
Gabriela Pinto is a Ph.D. student in computer science at the University of Southern California, where her research focuses on social media and natural language processing (NLP). She has published work exploring how language and online platforms intersect, with an emphasis on leveraging NLP techniques to analyze digital communication and its societal impact.
Copyright held by owner/author.