Early Detection and Prevention of Malicious User Behavior on Twitter Using Deep Learning Techniques

Abstract

Organized misinformation campaigns continue to proliferate on Twitter, as the platform itself acknowledges through its transparency center. These campaigns distort public discourse on critical societal issues such as climate change, motivating research on identifying and stopping the malicious actors behind them. Current bot-detection algorithms draw on a range of data from user profiles, tweets, and network structure and achieve strong results. However, they focus on identifying malicious users after the fact, relying on static training datasets that label accounts based on past behavior. In contrast, we propose a proactive approach that uses user data to predict and prevent malicious behavior before it occurs, fostering safer, fairer, and less biased online communities. Our method forecasts malicious activity by extrapolating the future trajectories of user embeddings before any malicious action takes place. For evaluation, we model the evolving interactions between Twitter users as a dynamic directed multigraph. On the same dataset, our method outperforms state-of-the-art approaches by 40.66% in F1 score on the early detection of malicious users. We also evaluate the individual components of the system to assess their contribution to overall performance.
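As a rough illustration of the interaction representation mentioned above, the sketch below builds a dynamic directed multigraph of timestamped Twitter interactions. The graph library (networkx), the interaction types, the attribute names, and the snapshot helper are all assumptions for illustration and do not reflect the paper's actual implementation.

```python
# Minimal sketch (assumption): evolving Twitter interactions stored as a
# dynamic directed multigraph with timestamped edges, using networkx.
import networkx as nx

# Each interaction is (source_user, target_user, interaction_type, timestamp).
interactions = [
    ("user_a", "user_b", "retweet", 1),
    ("user_a", "user_b", "mention", 3),   # parallel edge: same pair, later event
    ("user_c", "user_a", "reply",   4),
]

G = nx.MultiDiGraph()
for src, dst, kind, t in interactions:
    # Parallel edges let the multigraph keep every interaction between a pair.
    G.add_edge(src, dst, kind=kind, time=t)

def snapshot(graph, t_max):
    """Return the subgraph of interactions observed up to time t_max,
    e.g. to learn from early activity before any malicious action occurs."""
    H = nx.MultiDiGraph()
    H.add_nodes_from(graph.nodes(data=True))
    for u, v, key, data in graph.edges(keys=True, data=True):
        if data["time"] <= t_max:
            H.add_edge(u, v, key=key, **data)
    return H

early = snapshot(G, t_max=3)
print(early.number_of_edges())  # 2 interactions observed by time 3
```

Such time-limited snapshots are one simple way to restrict a model to a user's early behavior when predicting their later actions.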