Carnegie Mellon University

Social media apps on an iPhone

January 08, 2024

The Spread of Misinformation

By Giordana Verrengia


Isabel Murdock and her research collaborators have earned a victory lap.

Murdock, a fourth-year Ph.D. student in electrical and computer engineering, and her advisors, Osman Yagan, a research professor in the department, and Kathleen M. Carley, a professor of societal computing, are continuing to investigate how social media platforms contribute to the spread of misinformation. All three are members of CMU’s Center for Informed Democracy and Social-Cybersecurity (IDeaS), which studies online harms such as misinformation.

Their recent paper, “An Agent-Based Model of Reddit Interactions and Moderation,” was presented at November’s IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2023), where it received the Best Paper Award. 

Social media platforms have been recognized as tools for conducting misinformation campaigns, often with a political or health-related focus. Reddit is an interesting platform to examine because it has a decentralized moderation system and a community-based network structure. 

“Communities on Reddit can have varying standards for acceptable user behavior based on their intended audiences and topics of discussion,” Murdock said. “This moderation approach limits the oversight that the platform can perform and places more power within the hands of the users who are moderators. Consequently, misinformation may spread more widely in certain communities, whose community standards are weaker and whose moderators are less likely to remove misleading or incorrect content.” Reddit is among the most popular social media sites in the US and globally, netting over 1.5 billion monthly visits to its website. 

These concerns motivated the team to produce their key deliverable: an agent-based model of Reddit interactions that simulates the diffusion of misinformation on the site. “We show how the model produces results in alignment with real-world behaviors and can be customized to run specific experiments related to moderation and bad actors,” the authors wrote in the paper. 

This agent-based model differs from prior approaches in that it incorporates a user-to-community network structure, in which posts are shared indirectly between users through the communities they belong to. The model can also simulate the impacts of various types of users and different moderation strategies, rather than being limited to a single approach. 

The model was validated against empirical data that included over 100,000 posts and 800,000 comments taken directly from Reddit. It simulates two types of moderation: content removal and user banning. Murdock and her advisors found that both practices decreased the simulated spread of misinformation and the impact of bad actors. Their focus moving forward is to explore policy recommendations. 
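The two ideas in the paragraphs above, community-mediated spread and the two moderation levers, can be illustrated with a toy agent-based simulation. The sketch below is a simplified assumption of how such a model might be structured; the class names, update rule, and probabilities are illustrative and do not come from Murdock et al.'s actual model.

```python
import random

random.seed(7)

class Community:
    """A community with its own moderation strength (a per-post removal chance)."""
    def __init__(self, removal_prob):
        self.removal_prob = removal_prob
        self.members = []

class User:
    def __init__(self, is_bad_actor=False):
        self.is_bad_actor = is_bad_actor
        self.banned = set()   # communities this user has been banned from
        self.exposed = False  # whether the user has seen misinformation

def step(users, communities, ban_prob=0.1):
    """One simulation step: each bad actor tries to post misinformation to its
    communities. Moderators may remove the post (content removal) or ban the
    poster (user banning); a surviving post exposes every community member,
    so spread is indirect, via the community rather than user-to-user links."""
    for user in users:
        if not user.is_bad_actor:
            continue
        for comm in communities:
            if user not in comm.members or comm in user.banned:
                continue
            if random.random() < comm.removal_prob:
                continue  # content removal: the post never reaches the feed
            if random.random() < ban_prob:
                user.banned.add(comm)  # user banning: no future posts here
                continue
            for member in comm.members:
                member.exposed = True  # community-mediated diffusion

# Toy network: one strictly moderated and one laxly moderated community,
# with a single bad actor belonging to both.
strict, lax = Community(removal_prob=0.9), Community(removal_prob=0.1)
users = [User(is_bad_actor=(i == 0)) for i in range(20)]
strict.members = users[:10]
lax.members = [users[0]] + users[10:]

for _ in range(50):
    step(users, [strict, lax])

exposed = sum(u.exposed for u in users)
```

Even in this toy version, the qualitative behavior the paper reports tends to emerge: raising `removal_prob` or `ban_prob` cuts how many users end up exposed, and weakly moderated communities account for most of the spread.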

“Our next steps are to integrate this model of a community-based platform with models of followership-based and friendship-based platforms. By combining these models, we aim to simulate information diffusion across multiple, diverse platforms,” said Murdock, who is also part of CASOS, an interdisciplinary research center in the School of Computer Science directed by Carley. 

“This will allow us to model the potential cross-platform effects of moderation practices taken on individual platforms and propose policy recommendations for limiting misinformation spread across the social media ecosystem.” 

This research was supported in part by the National Science Foundation (NSF), the Army Research Office, the Defence Science and Technology Group, and the Knight Foundation through the CMU IDeaS Center.