All the times listed below are in Pacific Daylight Time (PDT).
Proceedings and Papers
The symposium papers and full proceedings are available to registered attendees now and will be available to everyone beginning Monday, August 9, 2021. Paper abstracts and proceedings front matter are available to everyone now. Copyright to the individual works is retained by the author[s].
Proceedings Front Matter
Proceedings Cover
Title Page, Copyright Page, and List of Organizers
Table of Contents
Message from the Program Co-Chairs
Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)
Friday, August 6–Sunday, August 8
SOUPS 2021 Workshops
Please check each workshop's website for the specific program schedule.
Monday, August 9
7:15 am–8:15 am
Authentication
Session Chair: Rick Wash, Michigan State University
Towards Usable and Secure Location-based Smartphone Authentication
Geumhwan Cho, Sungkyunkwan University, South Korea; Sungsu Kwag and Jun Ho Huh, Samsung Research, South Korea; Bedeuro Kim, Sungkyunkwan University, South Korea; Choong-Hoon Lee, Samsung Research, South Korea; Hyoungshick Kim, Sungkyunkwan University, South Korea
The concept of using location information to unlock smartphones is widely available on Android phones. To date, however, little research has investigated the security and usability requirements for designing such location-based authentication services. To bridge this gap, we interviewed 18 participants, studying users' perceptions and identifying key design requirements such as the need to support fine-grained indoor location registration and location (unlock coverage) size adjustment. We then conducted a field study with 29 participants and a fully functioning application to study real-world usage behaviors. On average, participants reduced their manual unlock attempts by about 36% by using our application for three weeks. 28 participants consistently used registered locations to unlock their phones despite being able to delete them during the study and unlock manually instead. Worryingly, however, 23 participants registered at least one insecure location (defined as a location where an unwanted adversary can physically access their phones) as a trusted location, mainly due to convenience or the low (perceived) likelihood of their phones being attacked. 52 out of 65 total registered locations were classified as insecure by this definition. Interestingly, regardless of whether locations were considered secure or insecure, participants preferred to select large phone unlock coverage areas.
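The unlock-coverage mechanic the paper studies is easy to picture in code. The sketch below is our illustration, not the authors' implementation; the trusted-location list and all names are hypothetical. A registered location is a coordinate plus a user-adjustable radius, and the phone may skip manual unlock whenever it sits inside any registered circle:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two coordinates, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Each trusted location pairs a coordinate with a user-adjustable unlock radius
# (the "unlock coverage" size studied in the paper). Values are placeholders.
trusted_locations = [
    {"name": "home office", "lat": 37.4275, "lon": -122.1697, "radius_m": 50},
]

def can_skip_manual_unlock(cur_lat, cur_lon):
    """True if the phone is inside any registered unlock-coverage circle."""
    return any(
        distance_m(cur_lat, cur_lon, loc["lat"], loc["lon"]) <= loc["radius_m"]
        for loc in trusted_locations
    )
```

The radius check also makes the paper's security concern concrete: the larger the registered circle, the more physical space from which an adversary can pick up an unlocked phone.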
Please do not use !?_ or your License Plate Number: Analyzing Password Policies in German Companies
Eva Gerlitz, Fraunhofer FKIE; Maximilian Häring, University of Bonn; Matthew Smith, University of Bonn, Fraunhofer FKIE
Password composition policies (PCPs) set rules that are intended to increase the security of user-chosen passwords. We conducted an online survey investigating the employee-facing authentication methods of 83 German companies and extracted 64 PCPs. We compared the password policies to recommendations proposed by institutions and related work. We found that many companies still require several character classes and mandate regular password changes. Short and complex passwords are more often enforced than alternative mechanisms, such as minimum-strength requirements, that related work found more usable. Many of the policies were in line with recommendations given by the German Federal Office for Information Security (BSI). At the same time, there is high heterogeneity in the reported elements. Based on a selection of the main elements (password age, complexity, minimum length), at most seven of the 64 PCPs are identical. Company size does not seem to play a significant role in the configuration of the PCPs.
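As a concrete, hypothetical illustration of the kind of rules such PCPs encode, here is a minimal checker for a length floor, character-class counts, and a maximum password age. The thresholds are invented for illustration, not taken from the surveyed policies:

```python
import string
from datetime import date, timedelta

# One hypothetical PCP of the kind the paper catalogs: minimum length,
# required character classes, and a mandated change interval.
POLICY = {"min_length": 8, "min_classes": 3, "max_age_days": 90}

CLASSES = [string.ascii_lowercase, string.ascii_uppercase,
           string.digits, string.punctuation]

def policy_violations(password: str, last_changed: date) -> list[str]:
    """Return the list of policy rules a password currently violates."""
    problems = []
    if len(password) < POLICY["min_length"]:
        problems.append("too short")
    classes_used = sum(any(c in cls for c in password) for cls in CLASSES)
    if classes_used < POLICY["min_classes"]:
        problems.append("too few character classes")
    if date.today() - last_changed > timedelta(days=POLICY["max_age_days"]):
        problems.append("password expired")
    return problems
```

The minimum-strength alternatives the paper mentions would replace the character-class and expiry rules with a single estimated-guessability check.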
Using a Blocklist to Improve the Security of User Selection of Android Patterns
Collins W. Munyendo and Miles Grant, The George Washington University; Philipp Markert, Ruhr University Bochum; Timothy J. Forman, United States Navy; Adam J. Aviv, The George Washington University
Android patterns remain a popular method for unlocking smartphones, despite evidence suggesting that many users choose easily guessable patterns. In this paper, we explore the usage of blocklists to improve the security of user-chosen patterns by disallowing common patterns, a feature currently unavailable on Android but used by Apple during PIN selection. In a user study run on participants' smartphones (n = 1006), we tested 5 different blocklist sizes and compared them to a control treatment. We find that even the smallest blocklist (12 patterns) had benefits, reducing a simulated attacker's success rate after 30 guesses from 24% to 20%. The largest blocklist (581 patterns) reduced the percentage of correctly guessed patterns after 30 attempts down to only 2%. In terms of usability, blocklists had limited negative impact on short-term recall rates and entry times, with reported SUS values indicating reasonable usability when selecting patterns in the presence of a blocklist. Based on our simulated attacker performance results for different blocklist sizes, we recommend blocking 100 patterns for a good balance between usability and security.
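The mechanism under study is simple to sketch. The following is our illustration, with placeholder entries rather than the study's actual blocklist; a pattern is rejected at selection time if it appears on the list:

```python
# A pattern is a sequence of distinct dots on the 3x3 grid, numbered 1-9.
# These entries are placeholders, not the patterns blocked in the study.
BLOCKLIST = {
    (1, 2, 3, 6, 9),
    (1, 4, 7, 8, 9),
    (1, 2, 3, 5, 7, 8, 9),
    # ... remaining commonly chosen patterns ...
}

def accept_pattern(pattern: tuple[int, ...]) -> bool:
    """Reject any chosen pattern that appears verbatim on the blocklist."""
    if pattern in BLOCKLIST:
        print("This pattern is too common; please choose another.")
        return False
    return True
```

The study's design question is then just how large BLOCKLIST should be before the re-selection burden outweighs the guessing-resistance gain.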
User Perceptions of the Usability and Security of Smartphones as FIDO2 Roaming Authenticators
Kentrell Owens, University of Washington; Olabode Anise, Google; Amanda Krauss, Duo Security; Blase Ur, University of Chicago
The FIDO2 standard aims to replace passwords with public-key cryptography for user authentication on the web. Doing so has benefits for both usability (e.g., not needing to remember passwords) and security (e.g., eliminating phishing). Users can authenticate with FIDO2 in one of two ways. With platform authenticators, users authenticate to trusted hardware on the same device on which they are accessing a website. However, they must re-register for each website separately on each device. With roaming authenticators, such as USB security keys, they only need to register once, transferring the security key across devices. However, users might not be willing to pay for a USB security key, carry it around, or figure out how to plug it into different devices. These drawbacks have driven recent efforts to enable smartphones to serve as roaming authenticators. We conducted the first user study of FIDO2 passwordless authentication using smartphones as roaming authenticators. In a between-subjects design, 97 participants used either their smartphone as a FIDO2 roaming authenticator (via a prototype called Neo) or a password to log into a fictitious bank for two weeks. We found that participants accurately recognized Neo's strong security benefits over passwords. However, despite Neo's conceptual usability benefits, participants found Neo substantially less usable than passwords both in objective measures (e.g., timing to accomplish tasks) and in perception. Their critiques of Neo included concerns about phone availability, account recovery/backup, and setup difficulties. Our results highlight key challenges and opportunities for spurring adoption of smartphones as FIDO2 roaming authenticators.
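For readers unfamiliar with FIDO2's core idea: the authenticator holds a per-site private key and answers login challenges with signatures, so no phishable shared secret ever leaves the device. Below is a stripped-down sketch of that exchange, using a generic EC keypair for illustration rather than the actual WebAuthn/CTAP message formats:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator mints a keypair scoped to one website
# and hands back only the public key.
private_key = ec.generate_private_key(ec.SECP256R1())  # stays on the authenticator
public_key = private_key.public_key()                  # stored by the website

# Login: the website sends a fresh random challenge; the authenticator
# signs it. A stolen signature is useless for any other challenge or site.
challenge = os.urandom(32)
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises on failure
```

The platform-versus-roaming distinction in the abstract is about where private_key lives: bound to one device's trusted hardware, or on a token (or phone) that travels with the user.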
8:15 am–8:30 am
Break
8:30 am–9:30 am
User Attitudes and (Mis)understandings
Session Chair: Blase Ur, University of Chicago
Never ever or no matter what: Investigating Adoption Intentions and Misconceptions about the Corona-Warn-App in Germany
Maximilian Häring, University of Bonn; Eva Gerlitz, Fraunhofer FKIE; Christian Tiefenau, University of Bonn; Matthew Smith, University of Bonn, Fraunhofer FKIE; Dominik Wermke and Sascha Fahl, CISPA, Leibniz University Hannover; Yasemin Acar, Max Planck Institute for Security and Privacy
To help tackle the COVID-19 pandemic, the tech community has put forward proximity detection apps to help warn people who might have been exposed to the coronavirus. The privacy implications of such apps have been discussed both in academic circles and the general population. The discussion in Germany focused on the trade-off between a centralized and a decentralized approach to data collection and processing, and their implications. Specifically, privacy dominated the public debate about the proposed "Corona-Warn-App." This paper presents a study with a quota sample of the German population (n=744) to assess what the population knew about the soon-to-be-released app and their willingness to use it. We also presented participants with potential properties the app could have and asked them how these would affect their usage intention. Based on our findings, we discuss our participants' views on privacy and functionality, including their perception of selected centralized and decentralized features. We also examine a wide range of false beliefs and information that was not communicated successfully. Technical details in particular, such as the fact that the app would use Bluetooth rather than location services, were unknown to many participants. Our results give insights into the complicated relationship between trust in the government, public communication, and the population's willingness to adopt the app.
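For context on the Bluetooth detail many participants missed: decentralized proximity detection broadcasts short-lived pseudonymous identifiers over Bluetooth and never collects location. The sketch below is a heavily simplified illustration of that idea, not the actual Exposure Notification cryptography:

```python
import hashlib
import os

# Each phone keeps a daily key locally; raw locations are never collected.
daily_key = os.urandom(16)

def rolling_identifier(key: bytes, interval: int) -> bytes:
    """Short-lived pseudonym broadcast over Bluetooth for one ~15-minute interval."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Phones record identifiers heard nearby. After a positive test, the
# patient's daily keys are published; every other phone re-derives the
# identifiers locally and checks for matches on-device.
heard = {rolling_identifier(daily_key, 12)}   # recorded from a nearby phone
published_keys = [daily_key]                  # uploaded after a positive test
exposed = any(
    rolling_identifier(k, i) in heard
    for k in published_keys
    for i in range(96)                        # 96 fifteen-minute intervals per day
)
print("possible exposure" if exposed else "no match")
```

Nothing in this exchange reveals where the contact happened, which is exactly the point participants' "location tracking" misconception missed.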
Understanding Users' Knowledge about the Privacy and Security of Browser Extensions
Ankit Kariryaa, University of Copenhagen & University of Bremen; Gian-Luca Savino and Carolin Stellmacher, University of Bremen; Johannes Schöning, University of Bremen & University of St. Gallen
Browser extensions enrich users' browsing experience, e.g., by blocking unwanted advertisements on websites. To perform these functions, users must grant certain permissions during the installation process. These permissions, however, convey only very limited information about the fact that they allow the extension to access users' personal data and browsing behaviour, posing security and privacy risks. To understand users' awareness of these privileges and the associated threats, we conducted an online survey with 353 participants, focusing on users' attitudes, knowledge, and preferences towards extensions' permission requests. We found that users report interest in seeking information and trust the developers, but do little to protect their data. They have limited knowledge about the technical abilities of browser extensions and prefer permission statements that evoke a clear mental model. Based on our findings, we derive recommendations for improving browser extension permission dialogues through clear language, technical improvements, and distinct responsibilities.
Replication: Effects of Media on the Mental Models of Technical Users
Khadija Baig, Elisa Kazan, Kalpana Hundlani, Sana Maqsood, and Sonia Chiasson, Carleton University
Media has been observed to influence users' mental models in several domains. It was recently demonstrated that fictional television and movies have a strong influence on non-technical end users' mental models of security. We extended this study to explore its effect on 23 participants with technical backgrounds, given that misconceptions amongst this group could have important organisational impacts or could influence other non-technical end users. Our qualitative analysis reveals that technical participants sourced their mental models both from their academic or professional lives and from different forms of media (like news, cinema, forums, and social media). They were capable of identifying unrealistic depictions of hacking in the provided video clips, and most could offer simplistic explanations of why these were problematic. We found that they generally had a more nuanced understanding of the issues than non-technical end users, but they were not immune to misinformation from mass media.
Comparing Security and Privacy Attitudes Among U.S. Users of Different Smartphone and Smart-Speaker Platforms
Desiree Abrokwa, Shruti Das, Omer Akgul, and Michelle L. Mazurek, University of Maryland
Many studies of mobile security and privacy are, for simplicity, limited to either only Android users or only iOS users. However, it is not clear whether there are systematic differences in the privacy and security knowledge or preferences of users who select these two platforms. Understanding these differences could provide important context about the generalizability of research results. This paper reports on a survey (n=493) with a demographically diverse sample of U.S. Android and iOS users. We compare users of these platforms using validated privacy and security scales (IUIPC-8 and SA-6) as well as previously deployed attitudinal and knowledge questions from Pew Research Center. As a secondary analysis, we also investigate potential differences among users of different smart-speaker platforms, including Amazon Echo and Google Home. We find no significant differences in privacy attitudes of different platform users, but we do find that Android users have more technology knowledge than iOS users. In addition, we find evidence (via comparison with Pew data) that Prolific participants have more technology knowledge than the general U.S. population.
9:30 am–10:15 am
Social, Mentorship, and Networking Events
Mentoring Program: See the Mentoring Program page for more information.
10:15 am–11:15 am
Keynote Address
Mitigating Technology Abuse in Intimate Partner Violence
Thomas Ristenpart, Cornell Tech
Our research group studies the role technology plays in intimate partner violence (IPV). Via interviews with survivors and support professionals, online measurement studies, and investigation of malicious software tools purpose-built for abuse, we have documented how abusers exploit technology to control, harass, stalk, and otherwise harm their current or former partner. To help survivors, we work with technology companies and lawmakers to effect positive changes and, more directly, we have put into practice a new interventional approach that we call clinical computer security. Our Clinic to End Tech Abuse works in partnership with the New York City Mayor's Office to End Domestic and Gender-Based Violence to help survivors navigate technology abuse and, ultimately, empower their use of technology.
In this talk I will provide a brief overview of our work on IPV technology abuse, and use it as a case study for "advocate-scientist" models that blend basic research with direct advocacy work in close collaboration with a community.
This talk will cover joint work with a large number of collaborators and clinic volunteers. See https://ipvtechresearch.org and https://ceta.tech.cornell.edu for more information.
Thomas Ristenpart, Cornell Tech
Thomas Ristenpart is an Associate Professor at Cornell Tech and a member of the Computer Science department at Cornell University. His research spans a wide range of computer security topics, with recent focuses including digital privacy and safety in intimate partner violence, mitigating abuse and harassment online, cloud computing security, improvements to authentication mechanisms including passwords, confidentiality and privacy in machine learning, and topics in applied and theoretical cryptography.
11:15 am–11:30 am
Break
11:30 am–12:30 pm
Perspectives and Policy
Session Chair: Josephine Wolff, Tufts University
"How I Know For Sure": People's Perspectives on Solely Automated Decision-Making (SADM)
Smirity Kaushik, University of Illinois at Urbana-Champaign; Yaxing Yao, University of Maryland, Baltimore County; Pierre Dewitte, Katholieke Universiteit Leuven Centre for IT & IP; Yang Wang, University of Illinois at Urbana-Champaign
IAPP SOUPS Privacy Award
Algorithms are used to make automated decisions that can affect individuals in numerous domains. The General Data Protection Regulation (GDPR) of the European Union (EU) has granted citizens certain rights regarding solely automated decision-making (SADM), including obtaining an explanation of such processing. It is unclear, however, how organizations should support people in effectively exercising such rights. We conducted an online survey to understand people's perspectives on SADM. We found that our respondents had several misunderstandings about the SADM right, such as believing that they can opt out of SADM ahead of time. We also identified various attributes of SADM that our respondents desired to understand, including new attributes (e.g., actionable information about what they can practically do to improve future decision outcomes) not covered by implementation guidelines of the GDPR. Our respondents also anticipated many challenges with SADM, including not knowing when SADM is applied to them. We discuss the design implications of our results for how to support people in coping with SADM, for instance the design of icons to represent SADM processing, as well as explanation templates that cover a common set of attributes and can be personalized to explain a specific SADM decision about a person.
A Fait Accompli? An Empirical Study into the Absence of Consent to Third-Party Tracking in Android Apps
Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max Van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford
Third-party tracking allows companies to collect users' behavioural data and track their activity across digital devices. This can put deep insights into users' private lives into the hands of strangers, and often happens without users' awareness or explicit consent. EU and UK data protection law, however, requires consent, both 1) to access and store information on users' devices and 2) to legitimate the processing of personal data as part of third-party tracking, as we analyse in this paper.
This paper further investigates whether and to what extent consent is implemented in mobile apps. First, we analyse a representative sample of apps from the Google Play Store. We find that most apps engage in third-party tracking, but few obtained consent before doing so, indicating potentially widespread violations of EU and UK privacy law. Second, we examine the most common third-party tracking libraries in detail. While most acknowledge that they rely on app developers to obtain consent on their behalf, they typically fail to put in place robust measures to ensure this: disclosure of consent requirements is limited; default consent implementations are lacking; and compliance guidance is difficult to find, hard to read, and poorly maintained.
"Whether it's moral is a whole other story": Consumer perspectives on privacy regulations and corporate data practices
Leah Zhang-Kennedy, University of Waterloo; Sonia Chiasson, Carleton University
Privacy laws govern the collection, use, and disclosure of personal information by businesses. Through an online survey with 300 participants and follow-up interviews with 32 participants, we investigate Canadians' awareness of their privacy rights and of how businesses manage their personal information. Further, we explore how Canadians respond to hypothetical privacy violations using ten scenarios adapted from real cases. Our participants are generally aware of having privacy rights but have insufficient knowledge and resources to exercise those rights properly. Participants did not necessarily view compliance with the law as sufficient for ethical conduct. Through our analysis, we identified a "moral code" that consumers rely on to assess privacy violations, based on the core moral values of trust, transparency, control, and access.
Pursuing Usable and Useful Data Downloads Under GDPR/CCPA Access Rights via Co-Design
Sophie Veys, Daniel Serrano, Madison Stamos, and Margot Herman, University of Chicago; Nathan Reitinger and Michelle L. Mazurek, University of Maryland; Blase Ur, University of Chicago
Data privacy regulations like GDPR and CCPA define a right of access empowering consumers to view the data companies store about them. Companies satisfy these requirements in part via data downloads, or downloadable archives containing this information. Data downloads vary in format, organization, comprehensiveness, and content. It is unknown, however, whether current data downloads actually achieve the transparency goals embodied by the right of access. In this paper, we report on the first exploration of the design of data downloads. Through 12 focus groups involving 42 participants, we gathered reactions to six companies' data downloads. Using co-design techniques, we solicited ideas for future data download designs, formats, and tools. Most participants indicated that current offerings need improvement to be useful, emphasizing the need for better filtration, visualization, and summarization to help them home in on key information.
12:30 pm–12:45 pm
Break
12:45 pm–1:45 pm
Inclusive Privacy and Security
Session Chair: Lorrie Cranor, Carnegie Mellon University
Facial Recognition: Understanding Privacy Concerns and Attitudes Across Increasingly Diverse Deployment Scenarios
Shikun Zhang, Yuanyuan Feng, and Norman Sadeh, Carnegie Mellon University
The rapid growth of facial recognition technology across ever more diverse contexts calls for a better understanding of how people feel about these deployments — whether they see value in them or are concerned about their privacy, and to what extent they have generally grown accustomed to them. We present a qualitative analysis of data gathered as part of a 10-day experience sampling study with 123 participants who were presented with realistic deployment scenarios of facial recognition as they went about their daily lives. Responses capturing their attitudes towards these deployments were collected both in situ and through daily evening surveys, in which participants were asked to reflect on their experiences and reactions. Ten follow-up interviews were conducted to further triangulate the data from the study. Our results highlight both the perceived benefits and concerns people express when faced with different facial recognition deployment scenarios. Participants reported concerns about the accuracy of the technology, including possible bias in its analysis, privacy concerns about the type of information being collected or inferred, and more generally, the dragnet effect resulting from the widespread deployment. Based on our findings, we discuss strategies and guidelines for informing the deployment of facial recognition, particularly focusing on ensuring that people are given adequate levels of transparency and control.
"I'm Literally Just Hoping This Will Work:'' Obstacles Blocking the Online Security and Privacy of Users with Visual Disabilities
Daniela Napoli, Khadija Baig, Sana Maqsood, and Sonia Chiasson, Carleton University
To successfully manage security and privacy threats, users must be able to perceive the relevant information. However, a number of accessibility obstacles impede access to such information for users with visual disabilities and could mislead them into incorrectly assessing their security and privacy. We explore how these users protect their online security and privacy. We observed their behaviours when navigating Gmail, Amazon, and a phishing site imitating CNIB, an organization well known to our participants. We further investigated their real-world concerns through semi-structured interviews. Our analysis uncovered severe usability issues that led users to engage in risky behaviours or to compromise between accessibility and security. Our work confirms findings from the related literature and provides novel insights, such as how security and accessibility software (e.g., JAWS) can hinder users' ability to identify risks. We organize our main findings around four states of security and privacy experienced by users while completing sensitive tasks, and provide design recommendations for communicating security and privacy information to users with visual disabilities.
WebAlly: Making Visual Task-based CAPTCHAs Transferable for People with Visual Impairments
Zhuohao Zhang and Zhilin Zhang, University of Illinois at Urbana-Champaign; Haolin Yuan, Johns Hopkins University; Natã M. Barbosa, University of Illinois at Urbana-Champaign; Sauvik Das, Georgia Tech; Yang Wang, University of Illinois at Urbana-Champaign
Task-based visual CAPTCHAs are a significant accessibility hurdle for people with visual impairments (PVIs). What if PVIs could transfer task-based visual CAPTCHAs to a helper to solve? How might PVIs want such a system configured in terms of from whom they would solicit help and how they would compensate this help? To answer these questions, we implemented and evaluated a proof-of-concept assistive transfer system — WebAlly — that makes task-based CAPTCHAs transferable by allowing PVIs to source just-in-time, remote control help from a trusted contact. In an exploratory, role-play study with 10 pairs of participants — a PVI and a friend or a family member — we asked participants to use WebAlly in four different configurations that varied in source of help (friend vs. stranger) and compensation (paid vs. volunteer). We found that PVIs liked having WebAlly as an additional option for solving visual CAPTCHAs, when other options that preserve their independence fail. In addition, many PVIs and their friends felt that using the system would bring their relationship closer. We discuss design implications for transferable CAPTCHAs and assistive transfer systems more broadly, e.g., the importance of complementing rather than replacing PVIs' existing workflows.
Designing Toxic Content Classification for a Diversity of Perspectives
Deepak Kumar, Stanford University; Patrick Gage Kelley and Sunny Consolvo, Google; Joshua Mason, University of Illinois at Urbana-Champaign; Elie Bursztein, Google; Zakir Durumeric, Stanford University; Kurt Thomas, Google; Michael Bailey, University of Illinois at Urbana-Champaign
In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment—such as people who identify as LGBTQ+ or young adults—are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.
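One way to picture the personalized-tuning result: keep the model's toxicity score but let the decision threshold vary by user or cohort instead of being global. The sketch below is our illustration with invented threshold values; it is not the paper's method or the Perspective API's actual interface:

```python
# Scores in [0, 1] from any toxicity model. Thresholds are invented values
# standing in for cutoffs tuned on each group's own labels.
GLOBAL_THRESHOLD = 0.8

# Groups that flag more content as toxic get a lower (more sensitive) cutoff.
COHORT_THRESHOLDS = {
    "default": GLOBAL_THRESHOLD,
    "harassment_survivors": 0.6,
}

def is_toxic_for(user_cohort: str, score: float) -> bool:
    """Personalized decision: same model score, per-cohort cutoff."""
    return score >= COHORT_THRESHOLDS.get(user_cohort, GLOBAL_THRESHOLD)

# The same comment can be shown to one user and hidden from another.
print(is_toxic_for("default", 0.7))                # False
print(is_toxic_for("harassment_survivors", 0.7))   # True
```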
1:45 pm–2:00 pm
Break
2:00 pm–2:45 pm
Poster Session
Check out the cool new ideas and the latest preliminary research on display at the SOUPS 2021 Poster Session. View the list of accepted posters.
Tuesday, August 10
7:00 am–8:00 am
Phishing and Account Compromise
Session Chair: Adam J. Aviv, The George Washington University
Why They Ignore English Emails: The Challenges of Non-Native Speakers in Identifying Phishing Emails
Ayako A. Hasegawa, Naomi Yamashita, and Mitsuaki Akiyama, NTT; Tatsuya Mori, Waseda University / NICT / RIKEN AIP
Prior work in cybersecurity and risk management has shown that non-native speakers of the language used in phishing emails are more susceptible to such attacks. Despite much research on behaviors English speakers use to avoid phishing attacks, little is known about behaviors of non-native speakers. Therefore, we conducted an online survey with 862 non-native English speakers (284 Germans, 276 South Koreans, and 302 Japanese). Our findings show that participants, especially those who lacked confidence in English, had a higher tendency to ignore English emails without careful inspection than emails in their native languages. Furthermore, both the German and South Korean participants generally followed the instructions in the email in their native languages without careful inspection. Finally, our qualitative analysis revealed five main factors that formed the participants' concerns in identifying English phishing emails. These findings highlight the importance of providing non-native speakers with specific anti-phishing interventions that differ from those for native speakers.
SoK: Still Plenty of Phish in the Sea — A Taxonomy of User-Oriented Phishing Interventions and Avenues for Future Research
Anjuli Franz, Verena Zimmermann, Gregor Albrecht, Katrin Hartwig, Christian Reuter, Alexander Benlian, and Joachim Vogt, Technical University of Darmstadt
Phishing is a prevalent cyber threat, targeting individuals and organizations alike. Previous work on anti-phishing measures has started to recognize the role of the user, who, as the target, forms the last line of defense. However, user-oriented phishing interventions are fragmented across a diverse research landscape, which has not been systematized to date. This makes it challenging to gain an overview of the various approaches taken by prior works. In this paper, we present a taxonomy of phishing interventions based on a systematic literature analysis. We shed light on the diversity of existing approaches by analyzing them with respect to the intervention type, the addressed phishing attack vector, the time at which the intervention takes place, and the required user interaction. Furthermore, we highlight shortcomings and challenges emerging from both our literature sample and prior meta-analyses, and discuss them in light of current movements in the field of usable security. With this article, we hope to provide useful directions for future work on phishing interventions.
Investigating Web Service Account Remediation Advice
Lorenzo Neil, Elijah Bouma-Sims, and Evan Lafontaine, North Carolina State University; Yasemin Acar, Max Planck Institute for Security and Privacy; Bradley Reaves, North Carolina State University
Online web services are susceptible to account compromises where adversaries gain access to a user's account. Once compromised, an account must be restored to its pre-compromise state in a process we term "account remediation." Account remediation is a technically complex process that in most cases is left to the user, though some web services provide guidance to users through help documentation. The quality of this account remediation advice is of paramount importance in assisting victims of account compromise, yet it is unclear if this advice is complete or suitable. In this paper, we analyze account remediation advice from 57 popular U.S.-based web services. We identify five key phases of account remediation, use this five-phase model to develop a codebook of account remediation advice, then analyze topic coverage. We find that only 39% of the web services studied provided advice for all phases of account remediation. We also find that highly-ranked websites and sites with a previously disclosed data breach have more complete coverage than other sites. Our findings show that account remediation should be more carefully and systematically considered by service providers, security researchers, and consumer advocates, and our detailed analysis will aid in creating better guidelines for users and services.
Knowledge and Capabilities that Non-Expert Users Bring to Phishing Detection
Rick Wash, Norbert Nthala, and Emilee Rader, Michigan State University
Phishing emails are scam communications that pretend to be something they are not in order to get people to take actions they otherwise would not. We surveyed a demographically matched sample of 297 people from across the United States and asked them to share their descriptions of a specific experience with a phishing email. Analyzing these experiences, we found that email users' experiences detecting phishing messages have many properties in common with how IT experts identify phishing. We also found that email users bring unique knowledge and valuable capabilities to this identification process that neither technical controls nor IT experts have. We suggest that targeting training toward how to use this uniqueness is likely to improve phishing prevention.
8:00 am–8:15 am
Lightning Talks 1
8:15 am–8:30 am
Break
8:30 am–9:30 am
Security in Practice
Session Chair: Yasemin Acar, Max Planck Institute for Security and Privacy
Code Reviewing as Methodology for Online Security Studies with Developers - A Case Study with Freelancers on Password Storage
Anastasia Danilova, Alena Naiakshina, and Anna Rasgauski, University of Bonn; Matthew Smith, University of Bonn, Fraunhofer FKIE
While ample experience with end-user studies exists, little is known about studies with software developers in a security context. In past research investigating the security behavior of software developers, participants often had to complete programming tasks. However, programming tasks require a large amount of participants' time and effort, which often results in high costs and small sample sizes. We therefore tested a new methodology for security developer studies. In an online study, we asked freelance developers to write code reviews for password-storage code snippets. Since developers, like end users, often tend to focus on functionality first and security later, we prompted half the participants for security. Although the freelancers indicated that they feel responsible for security, our results showed that they did not focus on security in their code reviews, even in a security-critical task such as password storage. Almost half the participants wanted to release the insecure code snippets. However, we found that security prompting had a significant effect on security awareness. To provide further insight into this line of work, we compared our results with similar password-storage studies containing programming tasks, and discuss code reviewing as a new methodology for future security research with developers.
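For context on what distinguishes a secure from an insecure snippet in such a review: storing plaintext or a fast unsalted hash is the classic failure, while a salted, deliberately slow key-derivation function is the accepted pattern. A minimal sketch using Python's standard library (illustrative only; the study's snippets are not reproduced here):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Salted scrypt hash: slow by design, so offline guessing is expensive."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

A reviewer prompted for security would flag the absence of exactly these elements (a unique salt and a slow KDF) in a snippet that, say, stores an unsalted MD5 digest.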
"I have no idea what they're trying to accomplish:" Enthusiastic and Casual Signal Users' Understanding of Signal PINs
Daniel V. Bailey and Philipp Markert, Ruhr University Bochum, Germany; Adam J. Aviv, The George Washington University
We conducted an online study with n = 235 Signal users on their understanding and usage of PINs in Signal. In our study, we observe a split in PIN management and composition strategies between users who can explain the purpose of Signal PINs (56%; enthusiasts) and users who cannot (44%; casual users). Signal's encouragement of PIN adoption appears quite successful: only 14% opted out of setting a PIN entirely. Among those who did set a PIN, most enthusiasts had long, complex alphanumeric PINs generated by and saved in a password manager, while more casual Signal users mostly relied on short, numeric-only PINs. Our results suggest that better communication about the purpose of the Signal PIN could help more casual users understand the features PINs enable (such as that it is not simply a personal identification number). This communication could encourage a stronger security posture.
On the Limited Impact of Visualizing Encryption: Perceptions of E2E Messaging Security
Christian Stransky, Leibniz University Hannover; Dominik Wermke, CISPA Helmholtz Center for Information Security; Johanna Schrader, Leibniz University Hannover; Nicolas Huaman, CISPA Helmholtz Center for Information Security; Yasemin Acar, Max Planck Institute for Security and Privacy; Anna Lena Fehlhaber, Leibniz University Hannover; Miranda Wei, University of Washington; Blase Ur, University of Chicago; Sascha Fahl, CISPA, Leibniz University Hannover
Communication tools with end-to-end (E2E) encryption help users maintain their privacy. Although messengers like WhatsApp and Signal bring E2E encryption to a broad audience, past work has documented misconceptions of their security and privacy properties. Through a series of five online studies with 683 total participants, we investigated whether making an app's E2E encryption more visible improves perceptions of trust, security, and privacy. We first investigated why participants use particular messaging tools, validating a prior finding that many users mistakenly think SMS and e-mail are more secure than E2E-encrypted messengers. We then studied the effect of making E2E encryption more visible in a messaging app. We compared six different text disclosures, three different icons, and three different animations of the encryption process. We found that simple text disclosures that messages are "encrypted" are sufficient. Surprisingly, the icons negatively impacted perceptions. While qualitative responses to the animations showed they successfully conveyed and emphasized "security" and "encryption," the animations did not significantly impact participants' quantitative perceptions of the overall trustworthiness, security, and privacy of E2E-encrypted messaging. We confirmed and unpacked this result through a validation study, finding that user perceptions depend more on preconceived expectations and an app's reputation than visualizations of security mechanisms.
Concerned but Ineffective: User Perceptions, Methods, and Challenges when Sanitizing Old Devices for Disposal
Jason Ceci and Hassan Khan, University of Guelph; Urs Hengartner and Daniel Vogel, University of Waterloo
Consumers are upgrading their devices more often due to continuous advances in hardware. Old devices need to be sanitized (i.e., personal data removed with low recovery probability) before selling, donating, throwing away, or recycling the device ("disposal"), but previous work has shown that users frequently fail to do so. We aim to understand the sources of misconceptions that result in risks to personal data. Through a survey (n=131), we measure where old devices end up and how they are sanitized. Our survey shows that while most users dispose of their devices, a large proportion of participants (73%) kept at least one old device, often due to data-leakage concerns. Among disposed-of devices, 25% of participants reported using insecure methods to erase their data. To further explore the processes undertaken to sanitize devices and the sources of misconception, we invited a subset of respondents (n=35) for interviews. Our interviews uncover the reasons for poor device-sanitizing practices—misleading data deletion interfaces and prompts, lack of knowledge, and complex and slow disk wiping procedures. We provide suggestions for device manufacturers and retailers on how to improve privacy, trust, and convenience when sanitizing old devices.
9:30 am–10:15 am
Social, Mentorship, and Networking Events
Mentoring Program: See the Mentoring Program page for more information.
10:15 am–11:15 am
Ubiquitous Computing
Session Chair: Jose M. Such, King's College London
Exploring Authentication for Security-Sensitive Tasks on Smart Home Voice Assistants
Alexander Ponticello, Matthias Fassl, and Katharina Krombholz, CISPA Helmholtz Center for Information Security
Smart home assistants such as Amazon Alexa and Google Home are primarily used for day-to-day tasks like checking the weather or controlling other IoT devices. Security-sensitive use cases such as online banking and voice-controlled door locks are already available and are expected to become more popular in the future.
However, the current state-of-the-art authentication for smart home assistants consists of users saying low-security PINs aloud, which does not meet the security requirements of security-sensitive tasks. Therefore, we explore the design space for future authentication mechanisms.
We conducted semi-structured interviews with N = 16 Alexa users incorporating four high-risk scenarios. Using these scenarios, we explored perceived risks, mitigation strategies, and design aspects for creating secure experiences. Among other things, we found that participants are primarily concerned about eavesdropping bystanders, do not trust voice-based PINs, and would prefer trustworthy voice recognition. Our results also suggest that they have context-dependent (location and bystanders) requirements for smart home assistant authentication. Based on our findings, we construct design recommendations to inform the design of future authentication mechanisms.
"The Thing Doesn't Have a Name": Learning from Emergent Real-World Interventions in Smart Home Security
Brennen Bouwmeester, Elsa Rebeca Turcios Rodríguez, Carlos H. Gañán, Michel van Eeten, and Simon Parkin, TU Delft
Many consumer Internet-of-Things (IoT) devices are, and will remain, subject to compromise, often without the owner's knowledge. Internet Service Providers (ISPs) are among the actors best placed to coordinate the remediation of these problems. They receive infection data and can notify customers of recommended remediation actions. There is insufficient understanding of what happens in people's homes and businesses during attempts to remediate infected IoT devices. We coordinate with an ISP and conduct remote think-aloud observations with 17 customers who have an infected device, capturing their initial efforts to follow best-practice remediation steps. We identify real, personal consequences from wide-scale interventions that lack situated guidance for applying advice. Combining observations and thematic analysis, we synthesize the personal stories of the successes and struggles of these customers. Most participants think they were able to pinpoint the infected device; however, there were common issues such as not knowing how to comply with the recommended actions, remediations regarded as requiring excessive effort, a lack of feedback on success, and a perceived lack of support from device manufacturers. Only 4 of 17 participants were able to successfully complete all remediation steps. We provide recommendations relevant to various stakeholders, focusing on where emergent interventions can be improved.
Evaluating and Redefining Smartphone Permissions with Contextualized Justifications for Mobile Augmented Reality Apps
David Harborth, Goethe University Frankfurt am Main; Alisa Frik, ICSI, University of California Berkeley
Augmented reality (AR), and specifically mobile augmented reality (MAR), gained much public attention after the success of Pokémon Go in 2016, and has since found application in online games, social media, entertainment, real estate, interior design, and other services. MAR apps are highly dependent on real-time, context-specific information provided by the different sensors and data processing capabilities of smartphones (e.g., LiDAR, gyroscope, or object recognition). This dependency raises crucial privacy issues for end users. We evaluate whether existing access permission systems, initially developed for non-AR apps, as well as proposed new permissions relevant for MAR apps, provide sufficient and clear information to users. We address this research goal in two online survey-based experiments with a total of 581 participants. Based on our results, we argue that it is necessary to increase transparency about MAR apps' data practices by requesting users' permission to access certain novel and privacy-invasive resources and functionalities commonly used in MAR apps, such as speech and face recognition. We also find that adding justifications, contextualized to the data collection practices of the app, improves transparency and can mitigate privacy concerns, at least in the context of data utilized to the users' benefit. Better understanding of an app's practices and lower concerns, in turn, increase intentions to grant permissions. We provide recommendations for better transparency in MAR apps.
PowerCut and Obfuscator: An Exploration of the Design Space for Privacy-Preserving Interventions for Smart Speakers
Varun Chandrasekaran, Suman Banerjee, Bilge Mutlu, and Kassem Fawaz, UW-Madison
The pervasive use of smart speakers has raised numerous privacy concerns. While work to date provides an understanding of user perceptions of these threats, limited research focuses on how we can mitigate these concerns, either through redesigning the smart speaker or through dedicated privacy-preserving interventions. In this paper, we present the design and prototyping of two privacy-preserving interventions: 'Obfuscator', targeted at disabling recording at the microphones, and 'PowerCut', targeted at disabling power to the smart speaker. We present our findings from a technology probe study involving 24 households that interacted with our prototypes; the primary objective was to gain a better understanding of the design space for technological interventions that might address these concerns. Our data and findings reveal complex trade-offs among utility, privacy, and usability and stress the importance of multi-functionality, aesthetics, ease-of-use, and form factor. We discuss the implications of our findings for the development of subsequent interventions and the future design of smart speakers.
11:15 am–11:30 am
Lightning Talks 2
11:30 am–11:45 am
Break
11:45 am–12:45 pm
Developers
Session Chair: Mary Ellen Zurko, MIT Lincoln Laboratory
A Qualitative Usability Evaluation of the Clang Static Analyzer and libFuzzer with CS Students and CTF Players
Stephan Plöger, Fraunhofer FKIE; Mischa Meier, University of Bonn; Matthew Smith, University of Bonn, Fraunhofer FKIE
Testing software for bugs and vulnerabilities is an essential aspect of secure software development. Two paradigms are particularly prevalent in this domain: static and dynamic software testing. Static analysis has seen widespread adoption across the industry, while dynamic analysis, in particular fuzzing, has recently received much attention in academic circles as well as being used very successfully by large corporations such as Google, where, for instance, over 20,000 bugs have been found and fixed in the Chrome project alone. Despite these kinds of success stories, fuzzing has not yet seen the kind of industry adoption static analysis has.
To gain initial insights, we examine the usability of the static analyzer Clang Static Analyzer and the fuzzer libFuzzer. To this end, we conducted the first qualitative usability evaluation of the two tools, in a mixed factorial design study with 32 CS master's students and six competitive Capture the Flag (CTF) players. Our results show that our participants encountered severe usability issues trying to get libFuzzer to run at all.
In contrast, most of our participants were able to run the Clang Static Analyzer without significant problems. This shows that, at least in this case, the usability of libFuzzer was worse than that of the Clang Static Analyzer. We make suggestions for how libFuzzer could be improved and discuss how the two tools compare.
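Part of the usability gap is structural: the Clang Static Analyzer runs over an existing build, while libFuzzer requires writing a harness and compiling with instrumentation before anything runs. libFuzzer itself targets C/C++; for consistency with the other examples in this document, here is a sketch of the same harness pattern using Atheris, Google's libFuzzer-style fuzzer for Python (the json target is a stand-in for code under test):

```python
import sys
import atheris

# Instrument the code under test so the fuzzer gets coverage feedback.
with atheris.instrument_imports():
    import json  # stand-in target; a real harness imports the library under test

def test_one_input(data: bytes):
    """Entry point the fuzzer calls with each generated input,
    mirroring libFuzzer's LLVMFuzzerTestOneInput convention."""
    try:
        json.loads(data.decode("utf-8", errors="replace"))
    except json.JSONDecodeError:
        pass  # expected for malformed inputs; crashes and uncaught errors are bugs

atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```

Even in this minimal form, the developer must decide what to feed the target and which exceptions are expected, which is exactly the kind of setup work the study's participants struggled with.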
Deciding on Personalized Ads: Nudging Developers About User Privacy
Mohammad Tahaei, University of Edinburgh; Alisa Frik, ICSI and University of California, Berkeley; Kami Vaniea, University of Edinburgh
Mobile advertising networks present personalized advertisements to developers as a way to increase revenue. These types of ads use data about users to select potentially more relevant content. However, choice framing also impacts app developers' decisions, which in turn impact their users' privacy. Currently, ad networks provide choices in developer-facing dashboards that control both the types of information collected by the ad network and how users will be asked for consent. Framing and nudging have been shown to impact users' choices about privacy; we anticipate that they have a similar impact on choices made by developers. We conducted a survey-based online experiment with 400 participants with experience in mobile app development. Across six conditions, we varied the choice framing of options around ad personalization. Participants in the condition where the privacy consequences of ad personalization are highlighted in the options are significantly (11.06 times) more likely to choose non-personalized ads than participants in the Control condition with no information about privacy. Participants' choice of ad type is driven by impact on revenue, user privacy, and relevance to users. Our findings suggest that developers are impacted by interfaces and need transparent options.
Benefits and Drawbacks of Adopting a Secure Programming Language: Rust as a Case Study
Kelsey R. Fulton and Anna Chan, University of Maryland; Daniel Votipka, Tufts University; Michael Hicks and Michelle L. Mazurek, University of Maryland
Programming languages such as Rust and Go were developed to combat common and potentially devastating memory safety-related vulnerabilities. But adoption of new, more secure languages can be fraught and complex. To better understand the benefits and challenges of adopting Rust in particular, we conducted semi-structured interviews with professional, primarily senior software developers who have worked with Rust on their teams or tried to introduce it (n = 16), and we deployed a survey to the Rust development community (n = 178). We asked participants about their personal experiences using Rust, as well as experiences using Rust at their companies. We find a range of positive features, including good tooling and documentation, benefits for the development lifecycle, and improvement of overall secure coding skills, as well as drawbacks including a steep learning curve, limited library support, and concerns about the ability to hire additional Rust developers in the future. Our results have implications for promoting the adoption of Rust specifically and secure programming languages and tools more generally.
An Analysis of the Role of Situated Learning in Starting a Security Culture in a Software Company
Anwesh Tuladhar, Daniel Lende, Jay Ligatti, and Xinming Ou, University of South Florida
Awarded Distinguished Paper!
We conducted an ethnographic study of a software development company to explore if and how a development team adopts security practices into the development lifecycle. A PhD student in computer science with prior training in qualitative research methods was embedded in the company for eight months. The researcher joined the company as a software engineer and participated in all development activities as a new hire would, while also making observations on the development practices. During the fieldwork, we observed a positive shift in the development team's practices regarding secure development. Our analysis of the data indicates that the shift can be attributed to enabling all software engineers to see how security knowledge could be applied to the specific software products they worked on. We also observed that by working with other developers to apply security knowledge under the concrete context where the software products were built, developers who possessed security expertise and wanted to push for more secure development practices (security advocates) could be effective in achieving this goal. Our data point to an interactive learning process where software engineers in a development team acquire knowledge, apply it in practice, and contribute to the team, leading to the creation of a set of preferred practices, or "culture," of the team. This learning process can be understood through the lens of the situated learning framework, where it is recognized that knowledge transfer happens within a community of practice, and applying the knowledge is key to individuals (software engineers) acquiring it and to the community (development team) embodying such knowledge in its practice. Our data show that enabling a situated learning environment for security gives rise to security-aware software engineers. We discuss the roles of management and security advocates in driving the learning process to start a security culture in a software company.
12:45 pm–1:00 pm
Break
1:00 pm–2:00 pm
Work (and Learn) from Home
Session Chair: Lujo Bauer, Carnegie Mellon University
Examining the Examiners: Students' Privacy and Security Perceptions of Online Proctoring Services
David G. Balash, Dongkun Kim, and Darika Shaibekova, The George Washington University; Rahel A. Fainchtein and Micah Sherr, Georgetown University; Adam J. Aviv, The George Washington University
In response to the COVID-19 pandemic, educational institutions quickly transitioned to remote learning. The problem of how to perform student assessment in an online environment has become increasingly relevant, leading many institutions and educators to turn to online proctoring services to administer remote exams. These services employ various student monitoring methods to curb cheating, including restricted ("lockdown") browser modes, video/screen monitoring, local network traffic analysis, and eye tracking. In this paper, we explore the security and privacy perceptions of the student test-takers being proctored. We analyze user reviews of proctoring services' browser extensions and subsequently perform an online survey (n=102). Our findings indicate that participants are concerned about both the amount and the personal nature of the information shared with the exam proctoring companies. However, many participants also recognize a trade-off between pandemic safety concerns and the arguably invasive means by which proctoring services ensure exam integrity. Our findings also suggest that institutional power dynamics and students' trust in their institutions may discourage students from opposing remote proctoring.
Virtual Classrooms and Real Harms: Remote Learning at U.S. Universities
Shaanan Cohney, Princeton University / University of Melbourne; Ross Teixeira, Anne Kohlbrenner, Arvind Narayanan, and Mihir Kshirsagar, Princeton University; Yan Shvartzshnaider, Princeton University / York University; Madelyn Sanfilippo, Princeton University / University of Illinois at Urbana-Champaign
Universities have been forced to rely on remote educational technology to facilitate the rapid shift to online learning. In doing so, they acquire new risks of security vulnerabilities and privacy violations. To help universities navigate this landscape, we develop a model that describes the actors, incentives, and risks, informed by surveying 49 educators and 14 administrators at U.S. universities. Next, we develop a methodology for administrators to assess security and privacy risks of these products. We then conduct a privacy and security analysis of 23 popular platforms using a combination of sociological analyses of privacy policies and 129 state laws, alongside a technical assessment of platform software. Based on our findings, we develop recommendations for universities to mitigate the risks to their stakeholders.
Challenges and Threats of Mass Telecommuting: A Qualitative Study of Workers
Borke Obada-Obieh, Yue Huang, and Konstantin Beznosov, University of British Columbia
This paper reports the security and privacy challenges and threats that people experience while working from home. We conducted semi-structured interviews with 24 participants working from home in the three weeks preceding the study. We asked questions related to participants' challenges with telecommuting. Our results suggest that participants experienced challenges, threats, and potential outcomes of threats associated with the technological, human, organizational, and environmental dimensions. We also discovered two threat models: one in which the employer's asset is at stake and another in which the employee's privacy is compromised. We believe these insights can lead to better support for employees and possibly reduce cyber-attacks associated with telecommuting during the pandemic and beyond.
Understanding Privacy Attitudes and Concerns Towards Remote Communications During the COVID-19 Pandemic
Pardis Emami-Naeini, Tiona Francisco, Tadayoshi Kohno, and Franziska Roesner, University of Washington
Since December 2019, the COVID-19 pandemic has caused people around the world to exercise social distancing, which has led to an abrupt rise in the adoption of remote communications for working, socializing, and learning from home. As remote communications will outlast the pandemic, it is crucial to protect users' security and respect their privacy in this unprecedented setting, and that requires a thorough understanding of their behaviors, attitudes, and concerns toward various aspects of remote communications. To this end, we conducted an online study with 220 worldwide Prolific participants. We found that privacy and security are among the most frequently mentioned factors impacting participants' attitude and comfort level with conferencing tools and meeting locations. Open-ended responses revealed that most participants lacked autonomy when choosing conferencing tools or using microphone/webcam in their remote meetings, which in several cases contradicted their personal privacy and security preferences. Based on our findings, we distill several recommendations on how employers, educators, and tool developers can inform and empower users to make privacy-protective decisions when engaging in remote communications.
2:00 pm–2:15 pm
2:15 pm–2:30 pm
Closing Remarks
General Chair: Sonia Chiasson, Carleton University