Evaluating Changes to Fake Account Verification Systems


Fedor Kozlov, Isabella Yuen, Jakub Kowalczyk, Daniel Bernhardt, and David Freeman, Facebook, Inc.; Paul Pearce, Facebook, Inc. and Georgia Institute of Technology; Ivan Ivanov, Facebook, Inc.


Online social networks (OSNs) such as Facebook, Twitter, and LinkedIn give hundreds of millions of individuals around the world the ability to communicate and build communities. However, the extensive user base of OSNs provides considerable opportunity for malicious actors to abuse the system, with fake accounts generating the vast majority of harmful actions and content. Social networks employ sophisticated detection mechanisms based on machine-learning classifiers and graph analysis to identify and remediate the actions of fake accounts. Disabling or deleting these detected accounts is not tractable when the number of false positives (i.e., real users incorrectly disabled) is significant in absolute terms. Using challenge-based verification systems, such as CAPTCHAs or phone confirmation, as a response to detected fake accounts can enable erroneously detected real users to recover their access, while also making it difficult for attackers to abuse the platform.

In order to maintain a verification system's effectiveness over time, it is important to iterate on the system to improve the real-user experience and to adapt the platform's response to adversarial actions. However, at present there is no established method to evaluate how effective each iteration is at stopping fake accounts while letting real users through. This paper proposes a method of assessing the effectiveness of experimental iterations of OSN verification systems, and presents an evaluation of this method against human-labelled ground truth from production Facebook data. Our method reduces the volume of necessary human-labelled data by 70%, decreases the time necessary for classification by 81%, has suitable precision and recall for making decisions in response to experiments, and enables continuous monitoring of the effectiveness of the applied experimental changes.

Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

@inproceedings{259713,
author = {Fedor Kozlov and Isabella Yuen and Jakub Kowalczyk and Daniel Bernhardt and David Freeman and Paul Pearce and Ivan Ivanov},
title = {Evaluating Changes to Fake Account Verification Systems},
booktitle = {23rd International Symposium on Research in Attacks, Intrusions and Defenses ({RAID} 2020)},
year = {2020},
isbn = {978-1-939133-18-2},
address = {San Sebastian},
pages = {135--148},
url = {https://www.usenix.org/conference/raid2020/presentation/kozlov},
publisher = {{USENIX} Association},
month = oct,
}