Mirela Silva, University of Florida
The use of influence tactics (e.g., persuasion, emotional appeals, gain/loss framing) is key to many human interactions, including advertisements, written requests, and news articles. However, these same tactics are abused in cyber social engineering and human-targeted attacks, such as phishing, disinformation, and deceptive ads. In this emerging deceptive and abusive online ecosystem, important research questions arise: Does deceptive material online leverage influence disproportionately compared to innocuous, neutral texts? Can machine learning methods accurately expose the influence cues in a text, as part of interventions that trigger users' more analytical thinking mode and thereby help prevent them from being deceived? In this talk, I present my research on Lumen (a learning-based framework that exposes influence cues in texts) and Potentiam (a newly developed dataset of 3K texts comprising disinformation, phishing, hyperpartisan news, and mainstream news). Potentiam was labeled by multiple annotators following a carefully designed qualitative methodology. An evaluation of Lumen against other learning models showed that Lumen and an LSTM achieved the best micro-F1 scores, with Lumen offering better interpretability. Our results highlight the promise of ML to expose influence cues in text, toward the goal of automatic labeling tools that improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.
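To make the evaluation setup concrete, below is a minimal sketch of scoring a multi-label influence-cue classifier with micro-F1, the metric cited above. The cue labels, toy texts, and the TF-IDF plus logistic-regression pipeline are illustrative assumptions for this sketch, not Lumen's actual architecture or the Potentiam data.

# Minimal sketch (not Lumen itself): scoring a multi-label
# influence-cue classifier with micro-F1. Labels, texts, and the
# model are illustrative stand-ins, not the talk's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus: each text may carry several influence cues at once.
texts = [
    "Act now! Your account will be suspended in 24 hours.",
    "Experts agree this is the safest option for your family.",
    "You could lose everything if you ignore this warning.",
    "The committee will meet on Tuesday to review the budget.",
]
cues = [
    {"urgency", "loss_framing"},
    {"authority"},
    {"loss_framing", "fear"},
    set(),  # neutral text with no influence cues
]

# Binarize the cue sets into a multi-label indicator matrix.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(cues)

# One-vs-rest logistic regression over TF-IDF features: one binary
# classifier per cue label.
clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(texts, Y)

# Micro-F1 pools true/false positives and negatives across all cue
# labels, so frequent and rare cues count in proportion to their totals.
pred = clf.predict(texts)
print("micro-F1:", f1_score(Y, pred, average="micro"))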
@inproceedings{silva2022thinking,
  author = {Mirela Silva},
  title = {Thinking Slow: Exposing Influence as a Hallmark of Cyber Social Engineering and {Human-Targeted} Deception},
  booktitle = {Enigma 2022},
  year = {2022},
  address = {Santa Clara, CA},
  publisher = {USENIX Association},
  month = feb
}