Detecting and responding to hostile disinformation activities on social media using machine learning and deep neural networks

Cartwright, Barry and Frank, Richard and Weir, George and Padda, Karmvir (2022) Detecting and responding to hostile disinformation activities on social media using machine learning and deep neural networks. Neural Computing and Applications, 34 (18). pp. 15141-15163. ISSN 0941-0643 (https://doi.org/10.1007/s00521-022-07296-0)


Abstract

Disinformation attacks that make use of social media platforms, e.g., the attacks orchestrated by the Russian “Internet Research Agency” during the 2016 U.S. Presidential election campaign and the 2016 Brexit referendum in the U.K., have led to increasing demands from governmental agencies for AI tools capable of identifying such attacks in their earliest stages, rather than responding to them in retrospect. This research was undertaken on behalf of the Canadian Armed Forces and Department of National Defence. Our ultimate objective is the development of an integrated set of machine-learning algorithms that will mobilize artificial intelligence to identify hostile disinformation activities in “near-real-time.” Employing The Dark Crawler, the Posit Toolkit, TensorFlow (deep neural networks), the Random Forest classifier, and the short-text classification programs LibShortText and LibLinear, we analyzed a wide sample of social media posts that exemplify the “fake news” disseminated by Russia’s Internet Research Agency, comparing them to “real news” posts in order to develop an automated means of classification.
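To illustrate the kind of supervised classification the abstract describes, the sketch below trains a Random Forest on TF-IDF features of short posts using scikit-learn. This is a minimal, hypothetical example, not the authors' actual pipeline; the toy posts, labels, and parameter choices are invented for illustration.

```python
# Hypothetical sketch of "fake news" vs "real news" post classification,
# using TF-IDF term weights as features and a Random Forest classifier.
# The training posts and labels below are invented toy data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

posts = [
    "BREAKING: secret memo proves the election was rigged, share now!!!",
    "Shocking cure they don't want you to know about, doctors furious",
    "City council approves budget for new public transit line",
    "Researchers publish peer-reviewed study on vaccine efficacy",
]
labels = ["fake", "fake", "real", "real"]

# TfidfVectorizer turns each post into a sparse term-weight vector;
# the Random Forest then votes across decision trees to label it.
pipeline = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
pipeline.fit(posts, labels)

prediction = pipeline.predict(["Leaked documents expose shocking conspiracy!"])
print(prediction[0])
```

In practice, a pipeline like this would be trained on a large labelled corpus (such as the Internet Research Agency posts the paper analyzes) rather than a handful of examples, and its outputs would feed the near-real-time monitoring the project aims for.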