All the times listed below are in Pacific Daylight Time (PDT).
View a welcome message from SOUPS 2020 General Chair Heather Lipford which includes an overview of how the SOUPS 2020 virtual event is structured.
Proceedings and Papers
The symposium papers and full proceedings are available to registered attendees now and will be available to everyone beginning Monday, August 10, 2020. Paper abstracts and proceedings front matter are available to everyone now. Copyright to the individual works is retained by the author[s].
Proceedings Front Matter
Proceedings Cover | Title Page, Copyright Page, and List of Organizers | Table of Contents | Message from the Program Co-Chairs
Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)
Monday, August 10, 2020
7:00 am–7:15 am
Opening Remarks and Awards
General Chair: Heather Richter Lipford, University of North Carolina at Charlotte, and Vice General Chair: Sonia Chiasson, Carleton University
7:15 am–8:15 am
Session Chair: Blase Ur, University of Chicago
Gokul Chettoor Jayakrishnan, Gangadhara Reddy Sirigireddy, Sukanya Vaddepalli, Vijayanand Banahatti, and Sachin Premsukh Lodha, TCS Research, Tata Consultancy Services Limited, Pune, India; Sankalp Suneel Pandit, Former employee of TCS Research, Tata Consultancy Services Limited, Pune, India
Usage of weak passwords for authentication within an organization can be exploited during cyberattacks, leading to unauthorized account access, denial of service, data and identity theft, sabotage, etc. Such attacks could bring financial and reputational losses, apart from legal consequences. Organizational password policies came into being in an attempt to encourage users to create more complex and diverse passwords. However, it has been observed that people show similar behavior in adopting those policies and end up creating passwords with similar patterns. Security training has been found to be a popular mechanism in an enterprise setting, of which game-based trainings have shown positive impact with the added advantage of being immersive. In this paper, we present a serious game-based training on creating password security awareness among enterprise users. The training promotes understanding among users of various common password heuristics during password creation. This study focuses on two research questions: 1) Can a game-based password awareness training teach participants about the various password heuristics? 2) Can such a training improve organizational password diversity? With the participation of 4,906 employees from our enterprise in the study, we were able to observe the effects of game-based training on password awareness. We also found insights during the study showing that users created diverse passwords.
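One simple way to make the notion of "password diversity" concrete is to map each password to a structural template and count distinct templates in a sample. The sketch below is a hypothetical illustration of such a metric, not the measure used in the paper.

```python
# Illustrative password-diversity metric (hypothetical, for exposition only):
# map each password to a structural template -- L = lowercase, U = uppercase,
# D = digit, S = symbol -- and measure what fraction of templates is distinct.

def template(password: str) -> str:
    """Return the structural template of a password, e.g. 'Ab1!' -> 'ULDS'."""
    def cls(ch: str) -> str:
        if ch.islower():
            return "L"
        if ch.isupper():
            return "U"
        if ch.isdigit():
            return "D"
        return "S"
    return "".join(cls(ch) for ch in password)

def diversity(passwords: list[str]) -> float:
    """Fraction of distinct structural templates in the sample (0..1]."""
    return len({template(p) for p in passwords}) / len(passwords)

sample = ["Password1!", "Welcome1!", "correct horse", "Tr0ub4dor&3"]
print(diversity(sample))  # 1.0 -- all four follow different structures
```

Under a metric like this, a population that mostly produces "capital letter, word, digit, symbol" passwords would score low, which matches the similar-pattern behavior the abstract describes.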
Florian M. Farke, Ruhr University Bochum; Lennart Lorenz, tracekey solutions GmbH; Theodor Schnitzler, Philipp Markert, and Markus Dürmuth, Ruhr University Bochum
The goal of the FIDO2 project is to provide secure and usable alternatives to password-based authentication on the Web. It relies on public-key credentials, which a user can provide via security tokens, biometrics, knowledge-based factors, or combinations. In this work, we report the results of a qualitative study accompanying the deployment of FIDO2-enabled security tokens for primary authentication in a web application of a small software company operating in the life sciences industry. We assisted the company in implementing and setting up FIDO2-enabled authentication on its public test and evaluation server. Over four weeks, we observed the authentication routine of 8 employees out of 10 employees regularly using the web application, including sales representatives, software developers, project managers, and account managers. We gathered data through login diaries, server logs, and semi-structured interviews to assess themes regarding usability, perceived security, and deployability. We found that participants had several concerns, like losing the security token and longer authentication times, while the security benefits were largely intangible or perceived as unnecessary.
Raina Samuel, New Jersey Institute of Technology; Philipp Markert, Ruhr University Bochum; Adam J. Aviv, The George Washington University; Iulian Neamtiu, New Jersey Institute of Technology
Knock Codes are a knowledge-based unlock authentication scheme used on LG smartphones, where a user enters a code by tapping or "knocking" a sequence on a 2x2 grid. While a lesser-used authentication method compared to PINs or Android patterns, there is likely a large number of Knock Code users; we estimate 700,000–2,500,000 in the US alone. In this paper, we studied Knock Code security by asking participants in an online study to select codes on mobile devices in three settings: a control treatment, a blocklist treatment, and a treatment with a larger, 2x3 grid. We find that Knock Codes are significantly weaker than other deployed authentication schemes, e.g., PINs or Android patterns. In a simulated attacker setting, 2x3 grids offered no additional security. Blocklisting, on the other hand, was more beneficial, making Knock Codes' security similar to Android patterns. Participants expressed positive perceptions of Knock Codes, yet usability was a challenge: SUS values were "marginal" or "ok" across treatments. Based on these findings, we recommend deploying blocklists for selecting a Knock Code because they improve security but have a limited impact on usability perceptions.
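The grid sizes compared in the study translate directly into theoretical keyspace sizes: with p tap positions and code lengths from lo through hi taps, the number of possible codes is the sum of p^L over allowed lengths. The length bounds below are assumed for the example, not taken from the paper.

```python
# Theoretical keyspace of grid-based "knock" codes: p tap positions,
# code lengths lo..hi taps. (Length bounds here are illustrative.)

def keyspace(positions: int, lo: int, hi: int) -> int:
    """Total number of distinct codes of length lo..hi inclusive."""
    return sum(positions ** length for length in range(lo, hi + 1))

if __name__ == "__main__":
    grid_2x2 = keyspace(4, 6, 10)  # 2x2 grid: 4 tap positions
    grid_2x3 = keyspace(6, 6, 10)  # 2x3 grid: 6 tap positions
    print(f"2x2 grid, lengths 6-10: {grid_2x2:,} codes")
    print(f"2x3 grid, lengths 6-10: {grid_2x3:,} codes")
```

A larger grid enlarges the theoretical keyspace substantially, which underlines the study's point: the 2x3 grid's lack of a security gain in practice comes from how people actually choose codes, not from the size of the space.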
Kevin Lee, Benjamin Kaiser, Jonathan Mayer, and Arvind Narayanan, Princeton University
We examined the authentication procedures used by five prepaid wireless carriers when a customer attempted to change their SIM card. These procedures are an important line of defense against attackers who seek to hijack victims’ phone numbers by posing as the victim and calling the carrier to request that service be transferred to a SIM card the attacker possesses. We found that all five carriers used insecure authentication challenges that could be easily subverted by attackers. We also found that attackers generally only needed to target the most vulnerable authentication challenges, because the rest could be bypassed. Authentication of SIM swap requests presents a classic usability-security trade-off, with carriers underemphasizing security. In an anecdotal evaluation of postpaid accounts at three carriers, presented in Appendix A, we also found—very tentatively—that some carriers may have implemented stronger authentication for postpaid accounts than for prepaid accounts.
To quantify the downstream effects of these vulnerabilities, we reverse-engineered the authentication policies of over 140 websites that offer phone-based authentication. We rated the level of vulnerability of users of each website to a SIM swap attack, and have released our findings as an annotated dataset on issms2fasecure.com. Notably, we found 17 websites on which user accounts can be compromised based on a SIM swap alone, i.e., without a password compromise. We encountered failures in vulnerability disclosure processes that resulted in these vulnerabilities remaining unfixed by nine of the 17 companies despite our responsible disclosure. Finally, we analyzed enterprise MFA solutions from three vendors, finding that two of them give users inadequate control over the security-usability tradeoff.
8:15 am–8:30 am
8:30 am–9:45 am
Appreciating User Needs and Concerns
Session Chair: Lujo Bauer, Carnegie Mellon University
Jayati Dev, Indiana University; Pablo Moriano, Oak Ridge National Laboratory; L. Jean Camp, Indiana University
The purpose of this study is to understand the privacy concerns and behavior of non-WEIRD populations in online messaging platforms. Analysis of surveys (n=674) of WhatsApp users in Saudi Arabia and India revealed that Saudis had significantly higher concerns about being contacted by strangers. In contrast, Indians showed significantly higher concerns with respect to social contact from professional colleagues. Demographics influence privacy preferences in both populations, but in different ways. Results from regression analysis show that there are statistically significant differences between the privacy behaviors of Saudis and Indians. In both cases, privacy concerns were strongly correlated with reported privacy behaviors. Despite the differences, we identified technical solutions that could address the concerns of both populations of participants. We close by discussing the applicability of our recommendations, specifically those on transparency and consent, to other applications and domains.
Nora McDonald, Alison Larsen, and Allison Battisti, University of Maryland, Baltimore County; Galina Madjaroff, University of Maryland; Aaron Massey and Helena Mentis, University of Maryland, Baltimore County
This paper investigates qualitatively what happens when couples facing a spectrum of options must arrive at consensual choices together. We conducted an observational study of couples experiencing memory concerns (one or both partners) while they engaged in the process of reviewing and selecting “Safety Setting” options for online activities. Couples’ choices tended to be influenced by a desire to secure shared assets through mutual surveillance and a desire to preserve autonomy by granting freedom in social and personal activities. The availability of choice suits the uneven and unpredictable process of memory loss and couples’ acknowledged uncertainty about its trajectory, leading them to anticipate changing Safety Settings as one or both of them experience further cognitive decline. Reflecting these three decision drivers, we conclude with implications for a design system that offers flexibility and adaptability in a variety of settings, accommodates the uncertainty of memory loss, preserves autonomy, and supports collaborative management of shared assets.
Valerie Fanelle, Sepideh Karimi, Aditi Shah, Bharath Subramanian, and Sauvik Das, Georgia Institute of Technology
For people with visual impairments (PVIs), audio CAPTCHAs are accessible alternatives to standard visual CAPTCHAs. However, current audio CAPTCHA designs are slower to complete and less accurate than their visual counterparts. We designed and evaluated four novel audio CAPTCHAs that we hypothesized would increase accuracy and speed. To evaluate our designs along these measures, we ran a three-session, within-subjects experiment with 67 PVIs from around the world, the majority being from the U.S. and India. Thirty-three participants completed all three sessions, each separated by one week. These participants completed a total of 39 distinct audio CAPTCHA challenges across our prototype designs and the control, all presented in random order. Most importantly, all four of our new designs were significantly more accurate and faster than the control condition, and were rated as preferable over the control. A post-hoc security evaluation suggested that our designs had different strengths and weaknesses vis-à-vis two adversaries: a random-guessing adversary and an NLP adversary. Ultimately, our results suggest that the best design to use depends on the use context.
Christine Geeng, Jevan Hutson, and Franziska Roesner, University of Washington
Sexting, while becoming commonplace in the age of mobile phones, is still not well-studied outside of the context of youth or risk prevention. Taking the perspective that sexting is a normal intimacy-building behavior rather than a deviant practice that should be prevented, this work studies the computer security and privacy mental models and technology use of adults who sext. We conducted an online survey of adults who have sexted (N=247), asking participants about which platforms they use to sext, their general sexting behaviors, what security and privacy concerns they have around sexting, and how they mitigate those concerns. We find, for example, significant concerns around sexts participants send "getting out" and being misused, as well as concerns around receiving unsolicited sexts. We also find that while participants use some technical strategies (e.g., using platforms with disappearing messages), they commonly rely on non-technical strategies like trust in their partner to mitigate concerns. We ground our findings in Citron's legal framework of sexual privacy to support individual autonomy, intimacy, and equality, and we make design recommendations for communication platforms to support usable security and privacy for sexting.
Borke Obada-Obieh, University of British Columbia; Lucrezia Spagnolo, Vesta Social Innovation Technologies; Konstantin Beznosov, University of British Columbia
According to the United States Department of Justice, every 73 seconds, an American is sexually assaulted. However, sexual assault is under-reported. Globally, 95% of sexual assault cases are unreported, and at most, 5 out of every 1,000 perpetrators end up in prison. Online anonymous third-party reporting systems (O-TPRSs) are being developed to encourage reporting of sexual assaults and to apprehend serial offenders. This paper reports survivors’ concerns with trusting and using an O-TPRS. We conducted focus groups and interviews with 35 participants who are sexual assault survivors, support workers, or both. We asked questions related to participants’ concerns with trusting an O-TPRS. Our results suggest that participants had technological and emotional concerns that are related to survivors’ security and privacy. We provide insights into the challenges of designing O-TPRSs to increase the reporting of sexual assault.
10:30 am–11:30 am
Implementing Security and Privacy: Design and Development
Session Chair: Katharina Krombholz, CISPA Helmholtz Center for Information Security
Replication: On the Ecological Validity of Online Security Developer Studies: Exploring Deception in a Password-Storage Study with Freelancers
Anastasia Danilova, Alena Naiakshina, and Johanna Deuter, University of Bonn; Matthew Smith, University of Bonn and Fraunhofer FKIE
Recruiting professional developers for studies can be challenging, and one major concern for studies examining security development issues is their ecological validity: does the study adequately reflect the real world? Naiakshina et al. [CHI'19] examined the ecological validity of a password-storage study conducted with students [CCS'17, SOUPS'18] by hiring freelancers from Freelancer.com. In the hope of increasing ecological validity, Naiakshina et al. used a deception study design wherein freelance developers were hired for a regular job using a company front created for the study, instead of being openly told that they were taking part in a study. Based on their results, Naiakshina et al. proposed that the use of online freelancers be examined further to supplement other recruitment channels such as CS students and GitHub users. The deception in their study was used with the aim that results would reflect the real work of online freelancers. However, deception needs to be used with careful consideration, which can entail additional study design work and negotiations with ethical oversight bodies. In this paper, we take a closer look at the deception used in Naiakshina et al.’s study. To that end, we replicate Naiakshina et al.’s work but announce and run it openly as a study on Freelancer.com. Our findings suggest that for this password-storage study, deception did not have a large effect, and that open recruitment without deception was a viable recruitment method.
Innovation Inaction or In Action? The Role of User Experience in the Security and Privacy Design of Smart Home Cameras
George Chalhoub and Ivan Flechais, University of Oxford; Norbert Nthala, Michigan State University; Ruba Abu-Salma, University College London (UCL) & Inria
Smart homes are under attack. Threats can harm both the security of these homes and the privacy of their inhabitants. As a result, in addition to delivering pleasing and aesthetic devices, smart home product designers need to factor security and privacy into the design of their devices. Further, the need for user-centered security and privacy design is particularly important for such an environment, given that inhabitants are demographically diverse (e.g., age, gender, educational level) and have different skills and (dis)abilities.
Prior work has explored different usable security and privacy solutions for smart homes; however, the applicability of user experience (UX) principles to security and privacy design is under-explored. In this paper, we present a qualitative study to explore the development of smart home cameras manufactured by three companies. We conduct semi-structured interviews with 20 designers and their collaborators, and analyze these interviews using Grounded Theory. We find that UX was seen as helpful by our participants in fostering innovation in the design of privacy solutions. However, UX was not used or considered in the design of security solutions due to an explicit need for established, tried-and-tested solutions (i.e., previous traditional security solutions that were seen as effective and reliable to fix certain design problems). Drawing from the findings of our study, we propose a model of UX factors influencing security and privacy design of smart home cameras. We also extract a set of recommendations to improve the security and privacy design of smart cameras. We finally outline several areas for future investigation.
An Ethnographic Understanding of Software (In)Security and a Co-Creation Model to Improve Secure Software Development
Hernan Palombo, Armin Ziaie Tabari, Daniel Lende, Jay Ligatti, and Xinming Ou, University of South Florida
We present an ethnographic study of secure software development processes in a software company, using the anthropological research method of participant observation. Two PhD students in computer science trained in qualitative methods were embedded in a software company for 1.5 years of total research time. The researchers participated in everyday work activities such as coding and meetings, and observed software (in)security phenomena both by investigating historical data (code repositories and ticketing system records) and by pen-testing the developed software and observing developers’ and management’s reactions to the discovered vulnerabilities. Our study found that 1) security vulnerabilities are sometimes intentionally introduced and/or overlooked due to the difficulty of managing the various stakeholders' responsibilities in an economic ecosystem, and cannot simply be blamed on developers’ lack of knowledge or skills; and 2) accidental vulnerabilities discovered in the pen-testing process produce different reactions in the development team, oftentimes contrary to what a security researcher would predict. These findings highlight the nuanced nature of the root causes of software vulnerabilities and indicate the need to take into account a significant amount of contextual information to understand how and why software vulnerabilities emerge during software development. Rather than simply addressing deficits in developer knowledge or practice, this research sheds light on sometimes-forgotten human factors that significantly impact the security of software developed by actual companies. Our analysis also shows that improving software security in the development process can benefit from a co-creation model, where security experts work side by side with software developers to better identify security concerns and provide tools that are readily applicable within the specific context of the software development workflow.
Justin Smith, Lafayette College; Lisa Nguyen Quang Do and Emerson Murphy-Hill, Google
Static analysis tools can help prevent security incidents, but to do so, they must enable developers to resolve the defects they detect. Unfortunately, developers often struggle to interact with the interfaces of these tools, leading to tool abandonment, and consequently the proliferation of preventable vulnerabilities. Simply put, the usability of static analysis tools is crucial. The usable security community has successfully identified and remedied usability issues in end user security applications, like PGP and Tor browsers, by conducting usability evaluations. Inspired by the success of these studies, we conducted a heuristic walkthrough evaluation and user study focused on four security-oriented static analysis tools. Through the lens of these evaluations, we identify several issues that detract from the usability of static analysis tools. The issues we identified range from workflows that do not support developers to interface features that do not scale. We make these findings actionable by outlining how our results can be used to improve the state-of-the-art in static analysis tool interfaces.
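To make the kind of defect detection discussed above concrete, here is a toy security check in the spirit of such tools (an illustrative sketch, not one of the four tools the paper evaluates): it walks a Python abstract syntax tree and flags calls to eval or exec, a classic injection risk.

```python
# Toy security-oriented static check: flag calls to eval/exec in Python
# source. Illustrative only; real tools track data flow, sanitizers, etc.
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, function name) for each risky call in `source`."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # [(1, 'eval')]
```

Even in a toy like this, the usability questions the paper raises appear immediately: how the finding is worded, whether the report points to a fix, and how the tool behaves when findings number in the thousands rather than one.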
11:30 am–11:45 am
11:45 am–12:45 pm
Session Chair: Lorrie Cranor, Carnegie Mellon University
Matt Mitchell, The Ford Foundation
Matt will speak on his experience providing usable, practical security and privacy to those on the margins. This presentation will focus on the unique privacy and security needs of oppressed and marginalized communities. Matt will discuss ways we can bring the outcomes of our work closer to our goals by including these "personas" in the process. Matt will highlight examples in application development, user research, user experience, and research papers.
Matt Mitchell is a hacker, and tech fellow to the BUILD program at the Ford Foundation. In his work there Matt develops digital security strategy for the foundation's grantee partners.
Matt is a well-known security researcher, operational security trainer, and data journalist who founded and leads CryptoHarlem, a series of impromptu workshops teaching basic cryptography tools to the predominantly African American community in upper Manhattan. Matt has trained people in digital safety and security as an independent trainer for Global Journalist Security, and has also trained activists in operational and information security. His personal work focuses on marginalized, aggressively monitored, over-policed populations in the United States. He is formerly a tech advisor to the Human Rights Foundation. Matt was also a member of the advisory boards of the Open Technology Fund, the Internet Freedom Festival, the Digital Security Exchange, the Citizen Clinic at the UC Berkeley Center for Long-Term Cybersecurity, the 4th Amendment Center at the National Association of Criminal Defense Lawyers, and TurnOut.
Tuesday, August 11, 2020
7:00 am–8:15 am
Implementing Security and Privacy: Operation, Administration, Vulnerability Analysis
Session Chair: Mary Ellen Zurko, MIT Lincoln Laboratory
Security, Availability, and Multiple Information Sources: Exploring Update Behavior of System Administrators
Christian Tiefenau and Maximilian Häring, University of Bonn; Katharina Krombholz, CISPA Helmholtz Center for Information Security; Emanuel von Zezschwitz, University of Bonn, Fraunhofer FKIE
Experts agree that keeping systems up to date is a powerful security measure. Previous work found that users sometimes explicitly refrain from performing timely updates, e.g., due to bad experiences, which has a negative impact on end-user security. Another important user group has been investigated less extensively: system administrators, who are responsible for keeping complex and heterogeneous system landscapes available and secure.
In this paper, we sought to understand administrators' behavior, experiences, and attitudes regarding updates in a corporate environment. Based on the results of an interview study, we developed an online survey and quantified common practices and obstacles (e.g., downtime or lack of information about updates). The findings indicate that even experienced administrators struggle with update processes as the consequences of an update are sometimes hard to assess. Therefore, we argue that more usable monitoring and update processes are essential to guarantee IT security at scale.
Benjamin Reinheimer, Lukas Aldag, Peter Mayer, Mattia Mossano, and Reyhan Duezguen, SECUSO - Security, Usability, Society, Karlsruhe Institute of Technology; Bettina Lofthouse, Landesamt für Geoinformation und Landesvermessung Niedersachsen; Tatiana von Landesberger, Interactive Graphics Systems Group, Technische Universität Darmstadt; Melanie Volkamer, SECUSO - Security, Usability, Society, Karlsruhe Institute of Technology
Security awareness and education programmes are rolled out in more and more organisations. However, their effectiveness over time and, correspondingly, the appropriate intervals at which to refresh users’ awareness and knowledge remain open questions. In an attempt to address this open question, we present a field investigation in a German organisation from the public administration sector. With 409 employees overall, we evaluated (a) the effectiveness of their newly deployed security awareness and education programme in the phishing context over time and (b) the effectiveness of four different reminder measures, administered after the initial effect had worn off to the degree that no significant improvement over the pre-deployment baseline was detected anymore. We find significantly improved performance in correctly identifying phishing and legitimate emails directly after and four months after the programme’s deployment. This was no longer the case after six months, indicating that reminding users after half a year is recommended. The investigation of the reminder measures indicates that measures based on videos and interactive examples perform best, lasting for at least another six months.
Steffen Becker, Carina Wiesen, and Nils Albartus, Ruhr University Bochum, Max Planck Institute for Cybersecurity and Privacy; Nikol Rummel, Ruhr University Bochum; Christof Paar, Max Planck Institute for Cybersecurity and Privacy
Understanding the internals of Integrated Circuits (ICs), referred to as Hardware Reverse Engineering (HRE), is of interest to both legitimate and malicious parties. HRE is a complex process in which semi-automated steps are interwoven with human sense-making processes. Currently, little is known about the technical and cognitive processes that determine the success of HRE. This paper presents an initial investigation of how reverse engineers solve problems, how manual and automated analysis methods interact, and which cognitive factors play a role. We present the results of an exploratory behavioral study with eight participants, conducted after they had completed a 14-week training. We explored the validity of our findings by comparing them with the behavior (strategies applied and solution time) of an HRE expert. The participants were observed while solving a realistic HRE task. We tested the cognitive abilities of our participants and collected large sets of behavioral data from log files. By comparing the least and most efficient reverse engineers, we were able to observe successful strategies. Moreover, our analyses suggest a phase model for reverse engineering, consisting of three phases. Our results further indicate that the cognitive factor Working Memory (WM) plays a role in efficiently solving HRE problems. Our exploratory study lays the foundation for future research on this topic and outlines ideas for designing cognitively difficult countermeasures (“cognitive obfuscation”) against HRE.
Industry Responses to the European Directive on Security of Network and Information Systems (NIS): Understanding policy implementation practices across critical infrastructures
Ola Aleksandra Michalec, Dirk van der Linden, Sveta Milyaeva, and Awais Rashid, University of Bristol
As the traditional legacy systems that run critical national infrastructures (CNI) are increasingly digitized for performance monitoring and efficiency, significant attention has been brought to improving their cyber security. The Network and Information Systems Security (NIS) Directive is the first European-scale attempt to establish a high standard of cyber security among CNIs. NIS raises questions about defining scope, providing evidence, and mobilizing funding. Most importantly, there is the fundamental question of whether it will become a tick-box exercise or lead to long-term improvements in security practices. We interviewed 30 cyber security practitioners in the UK to gather an in-depth understanding of NIS implementation and its probable futures. Our analysis found that the emerging field of Operational Technology Security has yet to formulate norms, standards, and career trajectories. We are, therefore, at a critical junction, where the scope of the profession is taking shape together with the need for evidence-based policy advice. Our findings are twofold: (1) a number of security tropes (e.g., “security solutions are the same across the sectors”) that may drive implementation of regulations such as NIS; and (2) a classification of cyber security practices mapping the diversity of policy interpretations. We conclude with recommendations for policymakers and CNI operators.
"You've Got Your Nice List of Bugs, Now What?" Vulnerability Discovery and Management Processes in the Wild
Noura Alomar, University of California, Berkeley; Primal Wijesekera, University of California, Berkeley, and International Computer Science Institute (ICSI); Edward Qiu, University of California, Berkeley; Serge Egelman, University of California, Berkeley, and International Computer Science Institute (ICSI)
Organizational security teams have begun to specialize, and as a result, the existence of red, blue, and purple teams has come to be used as a signal of an organization's security maturity. There is also now a rise in the use of third-party contractors who offer services such as incident response or penetration testing. Additionally, bug bounty programs are not only gaining popularity but are also perceived as cost-effective replacements for internal security teams. Given the many strategies for securing organizations, determining which strategy is best suited to a given situation can be a difficult task. To understand how these varying strategies are applied in practice and to understand the non-technical challenges faced by professionals, we conducted 53 interviews with security practitioners in technical and managerial roles tasked with vulnerability discovery or management. We found that organizations often struggle with vulnerability remediation and that vulnerability discovery efforts are hindered by significant trust, communication, funding, and staffing issues. Based on our findings, we offer recommendations for how organizations can better apply these strategies.
8:15 am–8:30 am
8:30 am–9:30 am
Applications and User Perceptions
Session Chair: Adam Aviv, The George Washington University
Alexandra Mai and Katharina Pfeffer, SBA Research; Matthias Gusenbauer, Tokyo Institute of Technology, SBA Research; Edgar Weippl, University of Vienna; Katharina Krombholz, CISPA Helmholtz Center for Information Security
Frequent reports of monetary loss, fraud, and user-caused security incidents in the context of cryptocurrencies emphasize the need for human-centered research in this domain. We contribute the first qualitative user study (N=29) on user mental models of cryptocurrency systems and the associated threat landscape. Using Grounded Theory, we reveal misconceptions affecting users' security and privacy.
Our results suggest that current cryptocurrency tools (e.g., wallets and exchanges) are not capable of counteracting threats caused by these misconceptions. Hence, users frequently fail to securely manage their private keys or assume to be anonymous when they are not. Based on our findings, we contribute actionable advice, grounded in the mental models of users, to improve the usability and secure usage of cryptocurrency systems.
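One misconception the study highlights, the assumption of anonymity, can be illustrated with a toy address derivation: because an address is derived deterministically from a public key, every transaction made under that key is linkable to the same identifier. The sketch below is a simplified stand-in (real schemes such as Bitcoin add RIPEMD-160 hashing, version bytes, and Base58Check encoding), not any particular cryptocurrency's actual format.

```python
# Toy illustration of cryptocurrency pseudonymity: a deterministic
# address derived from a public key links all of that key's activity.
# Simplified stand-in, not a real address scheme.
import hashlib

def toy_address(pubkey: bytes) -> str:
    """Derive a short deterministic identifier: double SHA-256, truncated."""
    digest = hashlib.sha256(hashlib.sha256(pubkey).digest()).hexdigest()
    return digest[:40]

key = b"alice-public-key"  # hypothetical key material, for illustration
# The same key always yields the same address, so an observer of the
# public ledger can link every transaction made under this key:
print(toy_address(key) == toy_address(key))  # True
```

Pseudonymity of this kind, rather than true anonymity, is exactly the distinction the study found users missing when they "assume to be anonymous when they are not."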
Cloudy with a Chance of Misconceptions: Exploring Users' Perceptions and Expectations of Security and Privacy in Cloud Office Suites
Dominik Wermke, Nicolas Huaman, Christian Stransky, Niklas Busch, Yasemin Acar, and Sascha Fahl, Leibniz University Hannover
Cloud Office suites such as Google Docs or Microsoft Office 365 are widely used and introduce security and privacy risks to documents and sensitive user information. Users may not know how, where, and by whom their documents are accessible and stored, and it is currently unclear how they understand and mitigate risks. We conduct surveys with 200 cloud office users from the U.S. and Germany to investigate their experiences and behaviors with cloud office suites. We explore their security and privacy perceptions and expectations, as well as their intuitions for how cloud office suites should ideally handle security and privacy. We find that our participants seem to be aware of basic general security implications, storage models, and access by others, although some of their threat models seem underdeveloped, often due to lacking technical knowledge. Our participants have strong opinions on how comfortable they are with the access of certain parties, but are somewhat unsure about who actually has access to their documents. Based on our findings, we distill recommendations for different groups associated with cloud office suites, which can help inform future standards, regulations, implementations, and configuration options.
Peter Story, Daniel Smullen, Alessandro Acquisti, Lorrie Faith Cranor, and Norman Sadeh, Carnegie Mellon University; Florian Schaub, University of Michigan
Despite experts agreeing on many security best practices, there remains a gap between their advice and users' behavior. One example is the low adoption of secure mobile payments in the United States, despite widespread prevalence of credit and debit card fraud. Prior work has proposed nudging interventions to help users adopt security experts' recommendations. We designed and tested nudging interventions based on protection motivation theory (PMT) and implementation intentions (II) to encourage participants to use secure mobile payments. We designed the interventions using an interview study with 20 participants, and then tested them in a longitudinal, between-subjects field experiment with 411 participants. In one condition, drawing on PMT, we informed participants about the threat of card fraud and the protection offered by mobile payments. In a second condition, we combined the PMT intervention with an II-based intervention, and asked participants to formulate a plan to make a mobile payment in the week ahead. A third condition acted as a control. Both PMT-only and PMT+II interventions made participants more likely to make mobile payments relative to the control group. The results suggest that PMT and implementation intention-based nudges can help people translate their desire to behave securely into actual behavior.
Do Privacy and Security Matter to Everyone? Quantifying and Clustering User-Centric Considerations About Smart Home Device Adoption
Natã M. Barbosa, Zhuohao Zhang, and Yang Wang, University of Illinois at Urbana-Champaign
While consumer adoption of smart home devices continues to grow, privacy concerns reportedly remain a roadblock to mass adoption. However, it is unclear exactly how the interplay between privacy and other factors affect consumers' purchase decisions, and whether such considerations are held only by certain consumer groups but not others. In order to unpack the decision-making process of smart home device adoption, we conducted a mixed-method analysis using online survey data collected from 631 US participants. Our analysis uncovered motivators and blockers of purchase decisions, along with their relative importance. We found that consumers can be segmented based on their considerations into three clusters: affordability-oriented, privacy-oriented, and reliability-oriented. We present an in-depth quantification of consumer considerations on smart home device adoption along with desired privacy and security features consumers wish to use to protect their privacy in the smart home.
9:30 am–10:15 am
Social, Mentorship, and Networking Events
10:15 am–11:15 am
Privacy, Tracking, and Measurement
Session Chair: Robert Reeder, Google
Enhancing Privacy through the Visual Design of Privacy Notices: Exploring the Interplay of Curiosity, Control and Affect
Agnieszka Kitkowska, Karlstad University; Mark Warner, Northumbria University; Yefim Shulman, Tel Aviv University; Erik Wästlund and Leonardo A. Martucci, Karlstad University
When people sign up for new online services, privacy notices are the initial means by which data handling practices are communicated. Yet, their design seldom ensures users' privacy comprehension or provides people with privacy choices, resulting in negative feelings associated with the sign-up process. In this paper, we investigate how to improve privacy notice design to enhance privacy comprehension and control, while inducing more positive feelings towards these notices. In an online experiment (N=620), we examine the factors of curiosity, privacy concerns, trust, and time. We study how these factors and visual designs of notices (framing and control) influence privacy comprehension, intention to disclose, and affect (negative-positive). Our results show that, depending on an individual's level of curiosity, control can influence privacy comprehension, disclosure, and affect. We demonstrate that affect moderates the relationship between privacy concerns and disclosure. We elaborate on our results, highlighting how privacy notices that activate curiosity and provide control could enhance usability and strengthen privacy-conscious behaviors.
"I Have a Narrow Thought Process": Constraints on Explanations Connecting Inferences and Self-Perceptions
Emilee Rader, Samantha Hautea, and Anjali Munasinghe, Michigan State University
Most people are unfamiliar with the kinds of inferences that platforms like Facebook and Google can automatically associate with them, despite the existence of interfaces designed to provide transparency to end users. We conducted a study to investigate people's reactions upon being exposed to these inferences, to learn if and how they perceived the inferences to be connected to themselves. Through qualitative analysis, we found that the evidence participants used to relate the inferences with their self-perceptions was bounded by what they remembered about their own past behaviors in connection with the platform. Inferences that participants felt were implausible given their own behavior were rationalized as being related to family members, outdated, or could fit anyone with similar demographic characteristics. Participants also identified some inferences they believed had no connection with themselves whatsoever. We discuss implications for how participants' reasoning might lead to expectations about what kinds of inferences are possible, and what this means for people's ability to make informed privacy decisions regarding consent and disclosure.
Replication: Why We Still Can't Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories
Sarah Bird, Ilana Segall, and Martin Lopatka, Mozilla
We examine the threat to individuals' privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper's core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users who browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.
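The uniqueness metric at the heart of this replication can be illustrated with a short sketch. The function name, the toy domain names, and the representation of a profile as a set of visited domains are all illustrative assumptions for this example, not code or data from the paper:

```python
from collections import Counter

def profile_uniqueness(profiles):
    """Return (number of distinct profiles, fraction of distinct
    profiles that belong to exactly one user).

    Each profile is modeled as a frozenset of visited domains,
    a simplified stand-in for a browsing-history profile.
    """
    counts = Counter(profiles)
    distinct = len(counts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return distinct, singletons / distinct

# Toy data: three users, two of whom share the same profile.
profiles = [
    frozenset({"news.example", "mail.example"}),
    frozenset({"news.example", "mail.example"}),
    frozenset({"video.example", "shop.example", "mail.example"}),
]
distinct, frac = profile_uniqueness(profiles)
# distinct == 2 profiles, of which 1 (50%) is unique to a single user
```

In the paper's terms, a high unique fraction (99% in their data) means that almost every observed profile identifies a single user, which is what makes aggregated browsing histories usable as an identifier.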
Camille Cobb and Milijana Surbatovich, Carnegie Mellon University; Anna Kawakami, Wellesley College; Mahmood Sharif, NortonLifeLock; Lujo Bauer, Carnegie Mellon University; Anupam Das, North Carolina State University; Limin Jia, Carnegie Mellon University
Smart-home devices are becoming increasingly ubiquitous and interconnected with other devices and services, such as phones, fitness trackers, cars, and social media accounts. Built-in connections between these services are still emerging, but end-user-programming tools such as If-This-Then-That (IFTTT) have existed for almost a decade, allowing users to create rules (called applets in IFTTT) that dictate interactions between devices and services. Previous work found potential secrecy or integrity violations in many applets, but did so without examining how individual users interact with the service. In this work, we study the risks of real-world use of IFTTT by collecting and analyzing 732 applets installed by 28 participants and participants' responses to several survey questions. We found that significantly fewer applets than previously thought pose realistic secrecy or integrity risks to the users who install them. Consistent with this, participants were generally not concerned about potential harms, even when these were explained to them. However, examining participants' applets led us to identify several new types of privacy risks, which challenge some assumptions inherent in previous analyses that focus on secrecy and integrity risks. For example, we found that many applets involve monitoring incidental users: family, friends, and neighbors who may interact with someone else's smart-home devices, possibly without realizing it. We discuss what our findings imply for automatically identifying potentially harmful applets.
11:15 am–11:30 am
11:30 am–12:30 pm
SOUPS 2020 Poster Session and Lightning Talks
Johanna Johansen, Reliable Systems Group; Simone Fischer-Hübner, University of Oslo, Norway
Stuart Schechter, DiceKeys / UC Berkeley
Giselle Mendonça and Nathan Malkin, UC Berkeley
Aakanksha Saha and Katie Chuzie, Microsoft
Khir Henderson, Morgan State University
Mike Farrell, Transcend
Users Are Still Not the Enemy: Applying Security Usability Principles to Covid-19 Return-to-Work Security
Ann-Marie Horcher, Central Michigan University
12:30 pm–12:45 pm
General Chair: Heather Richter Lipford, University of North Carolina at Charlotte, and Vice General Chair: Sonia Chiasson, Carleton University