XRDS: Crossroads, The ACM Magazine for Students

Magazine: Features

Understanding Influence Operations: Lessons from Dr. Kathleen Carley

By Jack Thoene

Tags: Collaborative and social computing theory, concepts and paradigms, Security and privacy, Social engineering attacks

I recently had the opportunity to speak with Dr. Kathleen Carley, a leading expert in computational social science, social cybersecurity, and network analysis. A professor at Carnegie Mellon University (CMU) since 2002, Dr. Carley has worked extensively on topics ranging from cybersecurity and misinformation detection to social policy and public health messaging. Her research spans multiple disciplines but is unified by a central theme—understanding how information flows through networks and how it can be manipulated for political, social, or commercial purposes.

Dr. Carley's work in social cybersecurity has evolved significantly over the years. Prior to founding the Center for Informed Democracy & Social Cybersecurity (IDeaS Lab) in 2019, she directed CASOS (Center for Computational Analysis of Social and Organizational Systems), where she focused on simulation and network science. The IDeaS Lab grew out of CASOS as her research expanded into topics like hate speech, online harm, and the spread of misinformation.

Her interest in social cybersecurity was already taking shape before the lab was established. In 2019, she co-authored an article in Army University Press's Military Review [1], which examined how misinformation campaigns and influence operations function, and she had earlier contributed to a National Academies of Sciences volume on dynamic social network modeling [2]. These publications helped set the stage for what would become the IDeaS Lab, a dedicated research hub for understanding online manipulation and developing strategies to counter it.

Since its inception, the IDeaS Lab has expanded rapidly, developing new tools and technologies to assess misinformation globally. The lab's work includes tracking social media manipulation across platforms such as X (formerly Twitter), Reddit, Instagram, and Telegram; developing simulations that model how false narratives spread; and creating automated detection systems to flag cyber-intrusions and coordinated campaigns. At its core, the lab is focused on "social cybersecurity," which Dr. Carley describes as the intersection of the social sciences and cybersecurity. Its goal is not only to detect and analyze threats but also to help organizations prepare for and mitigate these risks through training and simulations.

One of the lab's distinctive approaches involves BEND maneuvers, a framework that enables both the simulation and the detection of influence operations and allows organizations to see how different strategies could play out in a controlled environment. The framework is key both to ongoing research and to commercialized solutions that help corporations and government agencies prepare for social cyber-attacks. One example is Netanomics. Initially developed through CASOS, Netanomics applies network science principles to business, security, and social analytics, helping organizations analyze large-scale data to detect patterns, threats, and opportunities.
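
To make the BEND idea more concrete: the framework characterizes influence campaigns as combinations of narrative maneuvers (such as excite, explain, dismiss, distort, and dismay) and network maneuvers (such as back, bridge, and boost). The Python sketch below shows, purely as an illustration, how an analyst might begin tagging messages with narrative maneuvers using keyword cues. The cue lists and the label_message helper are invented for this example; they are not part of the lab's actual tooling.

```python
# Toy sketch: tagging messages with BEND-style narrative maneuvers.
# The cue words and this helper are illustrative assumptions, not
# part of the actual BEND tooling, which is far more sophisticated.

# A few of BEND's narrative maneuvers, keyed to hypothetical
# lexical cues an analyst might start from.
CUES = {
    "excite":  {"amazing", "incredible", "must-see"},
    "dismay":  {"terrible", "disaster", "shameful"},
    "distort": {"the real truth", "they hide"},
    "dismiss": {"fake", "hoax", "nothing to see"},
}

def label_message(text: str) -> list[str]:
    """Return the maneuvers whose cue words appear in the message."""
    lowered = text.lower()
    return [m for m, words in CUES.items()
            if any(w in lowered for w in words)]

if __name__ == "__main__":
    posts = [
        "This is a hoax, nothing to see here.",
        "Incredible footage - must-see before they hide it!",
    ]
    for post in posts:
        print(label_message(post), "<-", post)
```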

Dr. Carley and I spoke about the IDeaS Lab's ongoing research projects. OMEN takes a "system of systems" approach to modeling information environments and features AESOP, a tool that generates potential storylines based on influence operations. "Media Manipulation" focuses on studying how false information spreads through social networks and on identifying coordinated efforts. The "Automated Early Warning System for Cyber-Intrusion Detection" combines traditional cybersecurity with social network analysis to detect cyber threats faster. "Polarization Projects" investigate how ideological divides are exacerbated by social media algorithms and online echo chambers.
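
As a rough illustration of the kind of diffusion modeling these projects involve, the sketch below simulates an independent cascade, a standard textbook model of how a narrative might propagate through a follower network. The network, the sharing probability, and the function names are assumptions made for this example and are not drawn from OMEN or AESOP.

```python
# Toy independent-cascade simulation of narrative spread.
# The network and probability below are made up for illustration.
import random

# Adjacency list for a small, hypothetical follower network.
NETWORK = {
    "a": ["b", "c"],
    "b": ["c", "d"],
    "c": ["d", "e"],
    "d": ["e"],
    "e": [],
}

def cascade(seeds, p=0.4, seed=0):
    """Each account that newly shares the narrative gets one chance to
    pass it to each neighbor with probability p; returns all sharers."""
    rng = random.Random(seed)
    shared = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for neigh in NETWORK[node]:
                if neigh not in shared and rng.random() < p:
                    shared.add(neigh)
                    nxt.append(neigh)
        frontier = nxt
    return shared

print(cascade(["a"]))  # the set of accounts the narrative reached
```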

Our conversation turned to Dr. Carley's research comparing patterns of online manipulation in Canada, Singapore, and across Southeast Asia. While the methods used in misinformation campaigns tend to be similar, the intent behind them varies based on political, social, or economic goals. In some cases, actors are state-sponsored, while in others, they are commercial entities leveraging the same techniques to influence consumer behavior.

One of the biggest challenges in this field is the rapid advancement of AI-generated media, particularly deepfakes. Over the past few months, her team has observed an increase in synthetic video and audio content designed to deceive audiences. The growing sophistication of bot networks and coordinated disinformation campaigns also remains a pressing concern. Dr. Carley emphasized that tackling these issues will require better technological tools, clearer policy frameworks, and increased public awareness. Success, in her view, would mean a future where misinformation and manipulation campaigns are quickly detected and mitigated, without infringing on free speech.

One disturbing trend Dr. Carley highlighted is the disproportionate targeting of women in deepfake attacks. While deepfakes can be used for various types of misinformation, women—especially those in public roles—are often subjected to non-consensual, manipulated media designed to damage reputations, intimidate, or silence them. This trend has become increasingly prevalent in political and social contexts, making it a growing area of concern for researchers studying online harassment and digital threats.

As our time drew to a close, we discussed the common perception that bots are purely malicious. Dr. Carley pointed out that while many bots are used to amplify misinformation and manipulate public opinion, not all bots are harmful. Some are designed for positive purposes such as distributing emergency alerts, summarizing news, or supporting public health initiatives. The challenge is distinguishing between malicious bot networks and legitimate automation.
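
In practice, telling the two apart is often framed as a classification problem over account-level features such as posting rate, account age, and follower patterns. The sketch below conveys the flavor of feature-based scoring; the features, weights, and thresholds are invented for illustration and do not reflect any production bot-detection system, which would learn such weights from labeled data.

```python
# Toy bot-likeness score from simple account metadata. All features,
# weights, and thresholds here are illustrative assumptions; real
# systems learn these from labeled training data.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float   # sustained posting rate
    account_age_days: int  # very new accounts are weakly suspicious
    follower_ratio: float  # followers divided by accounts followed

def bot_score(acct: Account) -> float:
    """Combine hand-picked cues into a rough 0-to-1 suspicion score."""
    score = 0.0
    if acct.posts_per_day > 50:     # inhumanly high volume
        score += 0.5
    if acct.account_age_days < 30:  # brand-new account
        score += 0.2
    if acct.follower_ratio < 0.1:   # follows many, followed by few
        score += 0.3
    return score

suspect = Account(posts_per_day=120, account_age_days=10, follower_ratio=0.02)
print(bot_score(suspect))  # 1.0 -> worth review, not proof of malice
```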

Speaking with Dr. Carley was incredibly insightful. Her research is at the cutting edge of social cybersecurity, misinformation detection, and influence operations, with direct applications in national security, public policy, and social media governance. As the world grapples with deepfakes, AI-generated propaganda, and increasingly sophisticated manipulation tactics, her work will only become more crucial in shaping how we detect, understand, and respond to these threats.

References

[1] Beskow, D. M. and Carley, K. M. Social cybersecurity: An emerging national security requirement. Military Review (2019); https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/Mar-Apr-2019/117-Cybersecurity/b

[2] National Research Council. Dynamic Social Network Modeling and Analysis: Workshop Summary and Papers. The National Academies Press, Washington, DC, 2003; https://doi.org/10.17226/10735

Author

Jack Thoene is an M.S. student in electrical engineering at Northwestern University, where he is a member of the VAK Sustainable Computing Lab advised by Professor Nivedita Arora. He holds a B.S. in control systems and robotics from the U.S. Naval Academy, and now researches low-power computing strategies and energy-harvesting circuits for battery-free wearables and IoT devices.

Copyright is held by the owner/author(s). Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2025 ACM, Inc.