The papers below have been accepted for publication at SOUPS 2022. The full program will be available soon.
Improving Password Generation Through the Design of a Password Composition Policy Description Language
Anuj Gautam, Shan Lalani, and Scott Ruoti, The University of Tennessee
Password managers help users more effectively manage their passwords, yet the adoption of password generation is minimal. One explanation for this problem is that websites' password composition policies (PCPs) can reject generated passwords, creating a usability impediment. To address this issue, we design a PCP language that websites use to describe their PCP and that managers use to generate compliant passwords. We develop this language using an iterative process involving an extensive collection of PCPs scraped from the Web. We provide libraries for adopting our PCP language into websites and password managers and build proof-of-concept prototypes to verify the real-world feasibility of our PCP language. Using a 25-person user study, we demonstrate that our language and libraries are easy for novice developers to pick up and use correctly. Finally, we replicate and extend past research evaluating Web PCPs, showing that half of PCPs fail to require passwords that resist offline attacks when considering that users prefer certain character classes when selecting their passwords.
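The abstract does not reproduce the language's concrete syntax. Purely as an illustration of the idea, a PCP could be published as structured data that a password manager consumes when generating a password. The Python sketch below is our own hypothetical rendering; the field names and generation logic are assumptions, not the authors' language:

```python
import secrets
import string

# Hypothetical policy description -- the field names are illustrative,
# not the syntax defined in the paper.
POLICY = {
    "min_length": 12,
    "max_length": 64,
    "require": {"lower": 1, "upper": 1, "digit": 1, "symbol": 1},
    "allowed_symbols": "!@#$%^&*",
}

def generate(policy, length=16):
    """Generate a random password that complies with the policy."""
    classes = {
        "lower": string.ascii_lowercase,
        "upper": string.ascii_uppercase,
        "digit": string.digits,
        "symbol": policy["allowed_symbols"],
    }
    length = max(policy["min_length"], min(length, policy["max_length"]))
    # First satisfy each minimum-count requirement ...
    chars = [
        secrets.choice(classes[name])
        for name, count in policy["require"].items()
        for _ in range(count)
    ]
    # ... then fill the remainder from the union of allowed characters.
    alphabet = "".join(classes.values())
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate(POLICY))
```

Drawing the required characters first and then shuffling ensures the result meets every class minimum without leaking positional structure.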
Industrial practitioners' mental models of adversarial machine learning
Lukas Bieringer, QuantPi; Kathrin Grosse, University of Cagliari; Michael Backes, CISPA Helmholtz Center for Information Security; Battista Biggio, University of Cagliari, Pluribus One; Katharina Krombholz, CISPA Helmholtz Center for Information Security
Although machine learning is widely used in practice, little is known about practitioners' understanding of potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models of the machine learning pipeline and potentially vulnerable components. Similar studies in other security fields have helped to discover root causes and improve risk communication. Our study reveals two facets of practitioners' mental models of machine learning security. Firstly, practitioners often confuse machine learning security with threats and defences that are not directly related to machine learning. Secondly, in contrast to most academic research, our participants perceive the security of machine learning not solely in relation to individual models, but rather in the context of entire workflows that consist of multiple components. Jointly with our additional findings, these two facets provide a foundation to substantiate mental models for machine learning security and have implications for the integration of adversarial machine learning into corporate workflows, for decreasing practitioners' reported uncertainty, and for appropriate regulatory frameworks for machine learning security.
Being Hacked: Understanding Victims' Experiences of IoT Hacking
Asreen Rostami, RISE Research Institutes of Sweden & Stockholm University; Minna Vigren, Stockholm University; Shahid Raza, RISE Research Institutes of Sweden; Barry Brown, Stockholm University & Department of Computer Science, University of Copenhagen
From light bulbs to smart locks, IoT is increasingly embedded into our homes and lives. This opens up new vulnerabilities as IoT devices can be hacked and manipulated to cause harm or discomfort. In this paper, we document users' experiences of having their IoT systems hacked through 210 self-reports from Reddit, device support forums, and Amazon review pages. These reports and the discussion around them show how uncertainty is at the heart of 'being hacked'. Hacks are sometimes difficult to detect, and users can mistake unusual IoT behaviour for evidence of a hack, yet this can still cause considerable emotional hurt and harm. In discussion, we shift from seeing hacks as technical system failings to be repaired, to seeing them as sites for care and user support. Such a shift in perspective opens a new front in designing for hacking -- not just prevention but alleviating harm.
Password policies of most top websites fail to follow best practices
Kevin Lee, Sten Sjöberg, and Arvind Narayanan, Department of Computer Science and Center for Information Technology Policy, Princeton University
We examined the policies of 120 of the most popular websites for when a user creates a new password for their account. Despite well-established advice that has emerged from the research community, we found that only 13% of websites followed all relevant best practices in their password policies. Specifically, 75% of websites do not stop users from choosing the most common passwords—like "abc123456" and "P@$$w0rd"—while 45% burden users by requiring specific character classes in their passwords for minimal security benefit. We found low adoption of password strength meters—a widely touted intervention to encourage stronger passwords—which appeared on only 19% of websites. Even among those sites, we found nearly half misusing them to steer users to include certain character classes, rather than for their intended purpose of encouraging freely constructed strong passwords.
Aunties, Strangers, and the FBI: Online Privacy Concerns and Experiences of Muslim-American Women
Tanisha Afnan and Yixin Zou, University of Michigan School of Information; Maryam Mustafa, Lahore University of Management Sciences; Mustafa Naseem and Florian Schaub, University of Michigan School of Information
Women who identify with Islam in the United States come from many different racial, class, and cultural communities. They are also more likely to be first- or second-generation immigrants. This combination of marginalized identities (religious affiliation, gender, immigration status, and race) exposes Muslim-American women to unique online privacy risks and consequences. We conducted 21 semi-structured interviews to understand how Muslim-American women perceive digital privacy risks related to three contexts: government surveillance, Islamophobia, and social surveillance. We find that privacy concerns held by Muslim-American women unfolded with respect to three dimensions of identity: as a result of their identity as Muslim-Americans broadly (e.g., Islamophobic online harassment), as Muslim-American women more specifically (e.g., reputational harms within one's cultural community for posting taboo content), and as a product of their own individual practices of Islam (e.g., constructing female-only spaces to share photos of oneself without a hijab). We discuss how these intersectional privacy concerns add to and expand on existing pro-privacy design principles, and lessons learned from our participants' privacy-protective strategies for improving the digital experiences of this community.
Runtime Permissions for Privacy in Proactive Intelligent Assistants
Nathan Malkin and David Wagner, University of California, Berkeley; Serge Egelman, University of California, Berkeley & International Computer Science Institute
Intelligent voice assistants may soon become proactive, offering suggestions without being directly invoked. Such behavior increases privacy risks, since proactive operation requires continuous monitoring of conversations. To mitigate this problem, our study proposes and evaluates one potential privacy control, in which the assistant requests permission for the information it wishes to use immediately after hearing it.
To find out how people would react to runtime permission requests, we recruited 23 pairs of participants to hold conversations while receiving ambient suggestions from a proactive assistant, which we simulated in real time using the Wizard of Oz technique. The interactive sessions featured different modes and designs of runtime permission requests and were followed by in-depth interviews about people's preferences and concerns. Most participants were excited about the devices despite their continuous listening, but wanted control over the assistant's actions and their own data. They generally prioritized an interruption-free experience above more fine-grained control over what the device would hear.
If You Can’t Get Them to the Lab: Evaluating a Virtual Study Environment with Security Information Workers
Nicolas Huaman, Alexander Krause, and Dominik Wermke, CISPA Helmholtz Center for Information Security; Jan H. Klemmer and Christian Stransky, Leibniz University Hannover; Yasemin Acar, George Washington University; Sascha Fahl, CISPA Helmholtz Center for Information Security
Usable security and privacy researchers use many study methodologies, including interviews, surveys, and laboratory studies. Of those, lab studies allow for particularly flexible setups, including programming experiments or usability evaluations of software. However, lab studies also come with challenges: it is often difficult to recruit enough skilled participants for in-person studies. Researchers studying security information workers in particular have reported similar recruitment challenges in the past. Additionally, situations like the COVID-19 pandemic can make in-person lab studies even more challenging. Finally, institutions with limited resources may not be able to conduct lab studies. Therefore, we present and evaluate a novel virtual study environment prototype, called OLab, that allows researchers to conduct lab-like studies remotely using a commodity browser. Our environment overcomes lab-like study challenges and supports flexible setups and comprehensive data collection. In an iterative engineering process, we design and implement a prototype based on requirements we identified in previous work and conduct a comprehensive evaluation, including a cognitive walkthrough with usable security experts, a guided and supervised online study with DevOps professionals, and an unguided and unsupervised online study with computer science students. We can confirm that our prototype supports a wide variety of lab-like study setups and received positive feedback from all study participants.
An open door may tempt a saint: Examining situational and individual determinants of privacy-invading behavior
Markus Langer, Saarland University, Industrial and Organizational Psychology; Rudolf Siegel and Michael Schilling, CISPA Helmholtz Center for Information Security; Tim Hunsicker and Cornelius J. König, Saarland University, Industrial and Organizational Psychology
Digital life enables situations where people invade others' privacy -- sometimes with harmful intentions, but often without. Given the negative effects on victims of privacy invasions, research has examined technical options to prevent privacy-invading behavior (PIB). However, little is known about the sociotechnical environment in which PIB occurs. Therefore, our study (N=95) examined possible situational (effort necessary to invade privacy) and individual determinants (e.g., personality) of PIB in a three-phase experiment: 1) a laboratory phase, in which participants were immersed in the scenario; 2) a privacy-invasion phase at home, automatically and covertly capturing participants' PIB; and 3) a debriefing phase at home, capturing whether participants admitted PIB. Our results contribute to understanding the sociotechnical environment in which PIB occurs, showing that most participants engaged in PIB, that the likelihood of PIB increased when it required less effort, that participants were less likely to admit PIB involving more sensitive information, and that individual characteristics affected whether participants admitted PIB. We discuss implications for privacy research and design.
Replication: Stories as Informal Lessons about Security
Katharina Pfeffer and Alexandra Mai, SBA Research; Edgar Weippl, University of Vienna; Emilee Rader, Michigan State University; Katharina Krombholz, CISPA Helmholtz Center for Information Security
Anecdotal stories about security threats told to non-experts by friends, peers, or the media have been shown to be important in forming mental models and secure behaviors. In 2012, Rader et al. conducted a survey (n=301) of security stories with a student sample to determine factors that influence security perceptions and behavior. We replicated this survey with a more diverse sample (n=299), including different age groups and educational backgrounds. We were able to confirm many of the original findings, providing further evidence that certain characteristics of stories increase the likelihood of learning and retelling. Moreover, we contribute new insights into how people learn from stories, such as the finding that younger and more highly educated people are less likely to change their thinking or be emotionally influenced by stories. We (re)discovered all of the threat themes found by Rader et al., suggesting that these threats have not been eliminated in the last decade, and found new ones such as ransomware and data breaches. Our findings help to improve the design of security advice and education for non-experts.
Let’s Hash: Helping Developers with Password Security
Lisa Geierhaas and Anna-Marie Ortloff, University of Bonn; Matthew Smith, University of Bonn, FKIE Fraunhofer; Alena Naiakshina, Ruhr University Bochum
Awarded Distinguished Paper!
Software developers are rarely security experts and often struggle with security-related programming tasks. The resources developers use to work on these tasks, such as Stack Overflow or documentation, have a significant impact on the security of the code they produce. However, work by Acar et al. (SP'16) has shown that these resources are often either easy to use but insecure, or secure but hard to use. A study by Naiakshina et al. (SOUPS'18) showed that developers who did not use resources to copy and paste code did not produce any secure solutions at all. This highlights how essential programming resources are for security. Inspired by Let's Encrypt and Certbot, which support admins in configuring TLS, we created a programming aid called Let's Hash to help developers create secure password authentication code easily. We created two versions. The first is a collection of code snippets developers can use; the second adds a wizard interface on top that guides developers through the decisions that need to be made and creates the complete code for them. To evaluate the security and usability of Let's Hash, we conducted a study with 179 freelance developers, asking them to solve three password programming tasks. Both versions of Let's Hash significantly outperformed the baseline condition, in which developers used their regular resources. On average, Let's Hash users were between 5 and 32 times as likely to create secure code as those in the control condition.
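The abstract does not show Let's Hash's actual snippets; as a flavor of the secure-by-default code such an aid might hand a developer, here is a minimal Python sketch using the standard library's scrypt key-derivation function. The cost parameters are illustrative assumptions, not the tool's output:

```python
import hashlib
import hmac
import os

# Illustrative cost parameters; a real programming aid would pin
# currently recommended values and document them for the developer.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, dklen=32)

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt; store both values."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```

The point of such an aid is that the salting, memory-hard hashing, and constant-time comparison come bundled, so a developer cannot easily omit any one of them.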
Evaluating the Usability of Privacy Choice Mechanisms
Hana Habib and Lorrie Faith Cranor, Carnegie Mellon University
Privacy choice interfaces commonly take the form of cookie consent banners, advertising choices, sharing settings, and prompts to enable location and other system services. However, a growing body of research has repeatedly demonstrated that existing consent and privacy choice mechanisms are difficult for people to use. Our work synthesizes the approaches used in prior usability evaluations of privacy choice interactions and contributes a framework for conducting future evaluations. We first identify a comprehensive definition of usability for the privacy-choice context consisting of seven aspects: user needs, ability & effort, awareness, comprehension, sentiment, decision reversal, and nudging patterns. We then classify research methods and study designs for performing privacy choice usability evaluations. Next, we draw on classic approaches to usability testing and prior work in this space to identify a framework that can be applied to evaluations of different types of privacy choice interactions. Usability evaluations applying this framework can yield design recommendations that would improve the usability of these choice mechanisms, ameliorating some of the considerable user burden involved in privacy management.
Normative and Non-Social Beliefs about Sensor Data: Implications for Collective Privacy Management
Emilee Rader, Michigan State University
Sensors embedded in wearable and smart home devices collect data that can be used to infer sensitive, private details about people's lives. Privacy norms have been proposed as a foundation upon which people might coordinate to set and enforce preferences for acceptable or unacceptable data practices. Through a qualitative study, this research explored whether normative beliefs influenced participants' reactions to plausible but unexpected inferences that could be made from sensor data collected by everyday wearable and smart home devices. Some reactions were grounded in normative beliefs involving existing disclosure taboos, while others stigmatized the choice to limit one's use of technologies to preserve one's privacy. The visible nature of others' technology use contradicts individual concern about sensor data privacy, which may lead to an incorrect assumption that privacy is not important to other people. Findings suggest that this is a barrier to collective privacy management, and that awareness interventions focused on information about the beliefs of other users may be helpful for collective action related to data privacy.
DualCheck: Exploiting Human Verification Tasks for Opportunistic Online Safety Microlearning
Ryo Yoshikawa, Hideya Ochiai, and Koji Yatani, The University of Tokyo
Learning about online safety and ethics is becoming more critical for the general user population. However, general users do not receive such learning opportunities regularly and are often left behind. We were therefore motivated to design an interactive system that provides more frequent learning opportunities to the general user population. This paper presents our explorations of integrating opportunistic microlearning about online safety and ethics into human verification. Our instantiation of this concept, called DualCheck, asks users to respond to questions related to online safety and ethics while performing human verification in a manner similar to reCAPTCHA v2. In this way, DualCheck offers users microlearning opportunities when they use online services. Our 15-day user study confirmed the positive learning effect of DualCheck. The quantitative and qualitative results revealed participants' positive experience with and attitude toward DualCheck, and showed its significantly higher perceived usability compared to text-based CAPTCHA and picture-based reCAPTCHA.
"As soon as it's a risk, I want to require MFA": How Administrators Configure Risk-based Authentication
Philipp Markert and Theodor Schnitzler, Ruhr University Bochum; Maximilian Golla, Max Planck Institute for Security and Privacy; Markus Dürmuth, Leibniz University Hannover
Risk-based authentication (RBA) complements standard password-based logins by using knowledge about previously observed user behavior to prevent malicious login attempts. Correctly configured, RBA can increase overall security without burdening the user, since unnecessary security prompts are kept to a minimum. Thus, it is crucial to understand how administrators interact with off-the-shelf RBA systems that assign a risk score to a login and require administrators to configure adequate responses.
In this paper, we let n=28 system administrators configure RBA using a mock-up system modeled after Amazon Cognito. In subsequent semi-structured interviews, we asked them about the intentions behind their configurations and their experiences with the RBA system. We find that administrators want to have a thorough understanding of the system they configure, show the importance of default settings, as they are either adopted directly or serve as an important point of orientation, and identify several confusing wordings. Based on our findings, we give recommendations for service providers who offer risk-based authentication to ensure both usable and secure logins for everyone.
Presenting Suspicious Details in User-Facing E-mail Headers Does Not Improve Phishing Detection
Sarah Zheng and Ingolf Becker, UCL
Phishing requires humans to fall for impersonated sources. Sender authenticity can often be inferred from e-mail header information commonly displayed by e-mail clients, such as sender and recipient details. People may be biased by convincing e-mail content and overlook these details, and subsequently fall for phishing. This study tests whether people are better at detecting phishing e-mails when they are only presented with user-facing e-mail headers, instead of full e-mails. Results from a representative sample show that most phishing e-mails were detected by fewer than 30% of the participants, regardless of which e-mail part was displayed. In fact, phishing detection was worst when only e-mail headers were provided. Thus, people still fall for phishing, because they do not recognize online impersonation tactics. No personal traits, e-mail characteristics, or URL interactions reliably predicted phishing detection abilities. These findings highlight the need for novel approaches to help users evaluate e-mail authenticity.
Replication: The Effect of Differential Privacy Communication on German Users' Comprehension and Data Sharing Attitudes
Patrick Kühtreiber, Viktoriya Pak, and Delphine Reinhardt, University of Göttingen
Differential privacy (DP) has become a standard for privacy-preserving data collection. However, there is little understanding of users' comprehension of this privacy technique, even though such comprehension could increase users' willingness to share personal data. Xiong et al.'s 2020 study tackled this problem by investigating the effect of communicating differential privacy to laypeople, with an average of 466 participants per study, primarily from the USA and India. Since privacy decisions have been shown in multiple past studies to depend on participants' culture, we replicated this study with German participants to compare the results with the original study and to gain further insights into differential privacy communication in a different cultural context. After translating the original questionnaire into German, we conducted two studies with an average of 728 participants. While we could confirm that participants did not fully understand differential privacy and that a new method to communicate the effects of differential privacy is needed, participants in our study were more willing to share data than the participants from the USA and India. This finding is surprising, as Germans have been shown to be more worried about their privacy than people in other cultures.
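As background for readers unfamiliar with the technique (our addition, not part of the abstract): a randomized mechanism M is ε-differentially private if, for every pair of datasets D and D' that differ in one person's record and every set S of possible outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S]
```

Smaller ε means stronger privacy but noisier results; this privacy-accuracy trade-off is exactly what such communication efforts must convey to laypeople.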
Investigating How University Students in the United States Encounter and Deal With Misinformation in Private WhatsApp Chats During COVID-19
K. J. Kevin Feng, Princeton University; Kevin Song, Kejing Li, Oishee Chakrabarti, and Marshini Chetty, University of Chicago
Misinformation can spread easily in end-to-end encrypted messaging platforms such as WhatsApp where many groups of people are communicating with each other. Approaches to combat misinformation may also differ amongst younger and older adults. In this paper, we investigate how young adults encountered and dealt with misinformation on WhatsApp in private group chats during the first year of the COVID-19 pandemic. To do so, we conducted a qualitative interview study with 16 WhatsApp users who were university students based in the United States. We uncovered three main findings. First, all participants encountered misinformation multiple times a week in group chats, often attributing the source of misinformation to be well-intentioned family members. Second, although participants were able to identify misinformation and fact-check using diverse methods, they often remained passive to avoid negatively impacting family relations. Third, participants agreed that WhatsApp bears a responsibility to curb misinformation on the platform but expressed concerns about its ability to do so given the platform's steadfast commitment to content privacy. Our findings suggest that conventional content moderation techniques used by open platforms such as Twitter and Facebook are unfit to tackle misinformation on WhatsApp. We offer alternative design suggestions that take into consideration the social nuances and privacy commitments of end-to-end encrypted group chats. Our paper also contributes to discussions between platform designers, researchers, and end users on misinformation in privacy-preserving environments more broadly.
Understanding Non-Experts’ Security- and Privacy-Related Questions on a Q&A Site
Ayako A. Hasegawa, NICT; Naomi Yamashita, NTT / Kyoto University; Tatsuya Mori, Waseda University / NICT / RIKEN AIP; Daisuke Inoue, NICT; Mitsuaki Akiyama, NTT
Non-expert users are often forced to make decisions about security and privacy in their daily lives. Prior research has shown that non-expert users ask strangers for advice about digital media use online. In this study, to clarify the security and privacy concerns of non-expert users in their daily lives, we investigated security- and privacy-related question posts on a Question-and-Answer (Q&A) site for non-expert users. We conducted a thematic analysis of 445 question posts. We identified seven themes among the questions and found that users asked about cyberattacks the most, followed by authentication and security software. We also found that there was a strong demand for answers, especially for questions related to privacy abuse and account/device management. Our findings provide key insights into what non-experts are struggling with when it comes to privacy and security and will help service providers and researchers make improvements to address these concerns.
Anti-Privacy and Anti-Security Advice on TikTok: Case Studies of Technology-Enabled Surveillance and Control in Intimate Partner and Parent-Child Relationships
Miranda Wei, Eric Zeng, Tadayoshi Kohno, and Franziska Roesner, Paul G. Allen School of Computer Science & Engineering, University of Washington
Modern technologies including smartphones, AirTags, and tracking apps enable surveillance and control in interpersonal relationships. In this work, we study videos posted on TikTok that give advice for how to surveil or control others through technology, focusing on two interpersonal contexts: intimate partner relationships and parent-child relationships. We collected 98 videos across both contexts and investigate (a) what types of surveillance or control techniques the videos describe, (b) what assets are being targeted, (c) the reasons that TikTok creators give for using these techniques, and (d) defensive techniques discussed. Additionally, we make observations about how social factors -- including social acceptability, gender, and TikTok culture -- are critical context for the existence of this anti-privacy and anti-security advice. We discuss the use of TikTok as a rich source of qualitative data for future studies and make recommendations for technology designers around interpersonal surveillance and control.
Comparing User Perceptions of Anti-Stalkerware Apps with the Technical Reality
Matthias Fassl and Simon Anell, CISPA Helmholtz Center for Information Security; Sabine Houy, Umeå University; Martina Lindorfer, TU Wien; Katharina Krombholz, CISPA Helmholtz Center for Information Security
Every year an increasing number of users face stalkerware on their phones. Many of them are victims of intimate partner surveillance (IPS) who are unsure how to identify or remove stalkerware from their phones. An intuitive approach would be to choose anti-stalkerware from the app store. However, a mismatch between user expectations and the apps' technical capabilities can produce an illusion of security and risk compensation behavior (i.e., the Peltzman effect).
We compare users' perceptions of anti-stalkerware with the technical reality. First, we applied thematic analysis to app reviews to analyze user perceptions. Then, we performed a cognitive walkthrough of two prominent anti-stalkerware apps available on the Google Play Store and reverse-engineered them to understand their detection features.
Our results suggest that users base their trust on the look and feel of the app, the number and type of alerts, and the apps' affordances. We also found that app capabilities do not correspond to the users' perceptions and expectations, impacting their practical effectiveness. We discuss different stakeholders' options to remedy these challenges and better align user perceptions with the technical reality.
Is it a concern or a preference? An investigation into the ability of privacy scales to capture and distinguish granular privacy constructs
Jessica Colnago, Google; Lorrie Faith Cranor and Alessandro Acquisti, Carnegie Mellon University; Kate Hazel Stanton, University of Pittsburgh
IAPP SOUPS Privacy Award
Privacy scales are frequently used to capture survey participants' perspectives on privacy, but their utility hangs on their ability to reliably measure constructs associated with privacy. We investigate a set of common constructs (the intended objects of measurement by privacy scales) used in privacy surveys: privacy attitude, privacy preference, privacy concern, privacy expectation, privacy decision, and privacy behavior. First, we explore expert understanding of these constructs. Next, we investigate survey participants' understanding of statements used in privacy scales aimed at measuring them. We ask a balanced sample of Prolific participants in the United States to identify the extent to which different constructs describe each of a set of 30 statements drawn from scales used commonly in the privacy literature and 39 that we developed. Our analysis reveals considerable misalignment between the constructs associated with the statements and participant understanding. Many statements, whether drawn from existing scales or developed by us with the intention of measuring constructs such as privacy concern, are seen by survey participants as describing other constructs, such as privacy preferences. We also find that no statement uniquely measured any one construct, though some track their target construct more reliably than others. Our findings constitute an epistemological problem for the use of scales in the existing literature (are they capturing what we think they capture?) and a practical problem for the construction of new scales (how can we ensure construct validity in the face of ill-defined constructs and an evolving privacy landscape?). We use methods from corpus linguistics to identify characteristics of those statements most reliably associated with their target construct, and provide a set of provisional suggestions for future statement construction. Finally, we discuss the implications of our results for the privacy research community.
Sharing without Scaring: Enabling Smartphones to Become Aware of Temporary Sharing
Jiayi Chen and Urs Hengartner, University of Waterloo; Hassan Khan, University of Guelph
Smartphone owners often hand over their device to another person for temporary sharing, such as showing pictures to a friend or entertaining a child with games. This device sharing can result in privacy concerns since the owner's personal data may become vulnerable to unauthorized access. Existing solutions have usability problems and neglect the human factors of sharing practices. For example, since device sharing implies trust between people, explicitly hiding data may signal mistrust. Moreover, an owner may fail to enable a sharing-protection mechanism due to forgetfulness or lack of risk perception. Therefore, we propose device sharing awareness (DSA), a new sharing-protection approach for temporarily shared devices, which detects a sharing event proactively and enables sharing protection subtly. DSA exploits natural handover gestures and behavioral biometrics for proactive sharing detection to transparently enable and disable a device's sharing mode without requiring explicit input. It also supports various access control strategies to fulfill sharing requirements imposed by an app. Our user study evaluates handover detection over 3,700 data clips (n=18) and comprehensive device sharing processing over 50 sessions (n=10). The evaluation results show that DSA can accurately detect handover gestures and automatically process sharing events to provide a secure sharing environment.
Exploring User Authentication with Windows Hello in a Small Business Environment
Florian M. Farke, Leona Lassak, and Jannis Pinter, Ruhr University Bochum; Markus Dürmuth, Leibniz University Hannover
Windows Hello for Business, introduced with Windows 10, is Microsoft's latest attempt to replace passwords in Windows enterprise environments. It addresses some of the common password problems, like password leaks or phishing attacks, and comes with built-in support for biometric authentication methods, like fingerprint or facial recognition, and a new user interface. We conducted a qualitative study with 13 employees accompanying the introduction of Windows Hello in a small business, studying its usability and deployability. Over five weeks, we measured authentication times, let participants rate their user experience, and conducted interviews at the end. In general, participants liked Windows Hello and found it more usable than the traditional Windows sign-in scheme. Windows Hello was faster and perceived as more responsive than the traditional Windows login. However, participants tended to use PINs as a replacement for their (longer) passwords instead of using biometrics. Lack of hardware support (no biometric hardware available), the form factor of the device, or the setup of their workplace (e.g., biometric hardware on the other side of the table) were some reasons not to use biometrics and to stick with a well-known authentication method like a PIN.
“Fast, Easy, Convenient.” Studying Adoption and Perception of Digital Covid Certificates
Franziska Herbert, Marvin Kowalewski, Theodor Schnitzler, and Leona Lassak, Ruhr University Bochum; Markus Dürmuth, Leibniz University Hannover
Digital vaccination, recovery, and test certificates play an important role in enforcing access restrictions to certain parts of public life in Europe during the current phase of the COVID-19 pandemic. Such certificates represent an interesting showcase for digital security and privacy in the context of sensitive personal data.
In this paper, we take a look at which types of certificates and related apps people in Germany use for which purposes, which factors influence their adoption, and which misconceptions exist concerning the security and use of certificates. To this end, we report the results of a census-representative online survey in Germany (n=800) conducted in December 2021, complemented with 30 qualitative street interviews.
Most participants favor digital certificates over paper-based variants due to their ease of use and seamless integration into dedicated smartphone apps -- more than 75% of participants have installed one or more eligible app(s) on their phone. We find that older age, higher privacy concerns related to apps, and not being vaccinated are factors hindering the adoption of digital certificates.
Increasing security without decreasing usability: A comparison of various verifiable voting systems
Melanie Volkamer, Karlsruhe Institute of Technology; Oksana Kulyk, IT University of Copenhagen; Jonas Ludwig and Niklas Fuhrberg, Karlsruhe Institute of Technology
Electronic voting researchers advocate for verifiable voting schemes to maximise election integrity. In order to maximise vote secrecy, so-called code-voting approaches were proposed. Both verifiability and code voting require voters to expend additional effort during vote casting. Verifiability has been used in actual elections, but this is not the case for code voting due to usability concerns; there is little evidence from empirical studies attesting to its usability. Our main contribution is to extend an existing verifiable voting system (used for real-world elections) with a code-voting approach to improve the system's security properties. We minimise voter effort by having voters scan QR codes instead of manually entering codes. We conducted a user study to evaluate the general usability of this proposal as well as its manipulation-detection efficacy. In particular, we found that extending the considered verifiable voting system with a code-voting approach to enhance vote secrecy is feasible: we observed no significant decrease in general usability, while manipulation detection improved significantly.
Detecting iPhone Security Compromise in Simulated Stalking Scenarios: Strategies and Obstacles
Andrea Gallardo, Hanseul Kim, Tianying Li, Lujo Bauer, and Lorrie Cranor, Carnegie Mellon University
Mobile phones can be abused for stalking, through methods such as location tracking, account compromise, and remote surveillance. We conducted eighteen remote semi-structured interviews in which we presented four hypothetical iPhone compromise scenarios that simulated technology-enabled abuse. We asked participants to provide advice for detecting and resolving each type of compromise. Using qualitative coding, we analyzed the interview data and identified the strategies of non-expert participants and the difficulties they faced in each scenario. We found that participants could readily delete an app and search in iOS settings or the home screen, but they were generally unable to identify or turn off location sharing in Google Maps or determine whether the iCloud account was improperly accessed. When following online advice for jailbreak detection, participants had difficulty finding a root checker app and resetting the phone. We identify underlying factors contributing to these difficulties and recommend improvements to iOS, Google Maps, and online advice to reduce the difficulties we identified.
Usability and Security of Trusted Platform Module (TPM) Library APIs
Siddharth Prakash Rao and Gabriela Limonta, Nokia Bell Labs; Janne Lindqvist, Aalto University
Trusted Platform Modules (TPMs) provide a hardware-based root of trust and secure storage and help verify their host's integrity. Software developers can interact with a TPM and utilize its functionalities using standardized APIs that various libraries have implemented. We present a qualitative study (n=9) involving task analysis and cognitive interviews that uncovered several usability and security issues with tpm2-tools, a widely used TPM library API. To this end, we implemented a study environment that we will release as open source to support further studies.
Our results support two major conclusions: 1) tpm2-tools APIs, as currently designed, are not developer-friendly, and 2) one of the major causes of these usability issues lies in the TPM specifications. Since other libraries also mirror the specifications and provide no significant usability improvements, our results are likely to indicate similar issues with all current TPM library APIs. We provide recommendations for improving TPM library API documentation and software, and we highlight the need for HCI experts to review TPM specifications to preemptively address usability pitfalls.
Users' Perceptions of Chrome Compromised Credential Notification
Yue Huang, Borke Obada-Obieh, and Konstantin Beznosov, University of British Columbia
This paper reports the challenges that users experienced and their concerns regarding the Chrome compromised credentials notification. We adopted a two-step approach to uncover the issues of the notification, including qualitatively analyzing users' online comments and conducting semi-structured interviews with participants who had received the notification. We found that users' issues with the notification are associated with five core aspects of the notification: the authenticity of the notification, data breach incidents, Google's knowledge of users' compromised credentials, multiple accounts being associated with one notification, and actions recommended by the notification. We also identified the detailed challenges and concerns users had regarding each aspect of the notification. Based on the results, we offer suggestions to improve the design of browser-based compromised credential notifications to support users in better protecting their online accounts.
The Nerd Factor: The Potential of S&P Adepts to Serve as a Social Resource in the User's Quest for More Secure and Privacy-Preserving Behavior
Nina Gerber, Technical University of Darmstadt; Karola Marky, Leibniz University Hannover and University of Glasgow
There are several ways to inform individuals about secure and privacy-preserving behavior in private social environments. Experts who are versed in security and privacy (S&P), who might be social peers, such as family members or friends, can provide advice or give recommendations. In this paper, we specifically investigate how S&P adepts inform peers in their private social environment about security and privacy. For this, we first conducted thirteen in-depth interviews with S&P adepts, revealing 1) their own S&P behavior and strategies in their personal lives, 2) obstacles in S&P conversations with peers, 3) situations in which S&P adepts intervene in the behavior of others, and 4) the perception of S&P adepts and stereotypes. Based on the interview results, we conducted three co-design workshop sessions with S&P adepts to explore options to better support S&P adepts informing their peers about secure and privacy-preserving behavior.
On recruiting and retaining users for security-sensitive longitudinal measurement panels
Akira Yamada, KDDI Research, Inc. and National Institute of Information and Communications Technology; Kyle Crichton, Carnegie Mellon University; Yukiko Sawaya, KDDI Research, Inc.; Jin-Dong Dong and Sarah Pearman, Carnegie Mellon University; Ayumu Kubota, KDDI Research, Inc.; Nicolas Christin, Carnegie Mellon University
Many recent studies have turned to longitudinal measurement panels to characterize how people use their computing devices under realistic conditions. In these studies, participants' devices are instrumented, and their behavior is closely monitored over long time intervals. Because such monitoring can be highly intrusive, researchers face substantial challenges recruiting and retaining participants.
We present three case studies using medium- to large-scale longitudinal panels, which all collect privacy- and security-sensitive data. In evaluating factors related to recruitment, retention, and data collection, we provide a foundation to inform the design of future long-term panel studies.
Through these studies, we observe that monetary and non-monetary incentives can be effective in recruiting panel participants, although each presents trade-offs and potential biases. Contrary to our initial expectations, we find that users do not behave any differently in their first few weeks of participation than in the remainder of their time in the study. In terms of retention, we note that personalized enrollment follow-ups can lower initial dropout rates, but they are challenging and costly to scale. Communication, including following up with inactive users, is vital to retention. However, finding the right balance of communication is equally important. Interfering with a participant's everyday device use is a sure way to lose users. Finally, we present several findings, based on practical experience, to help inform the design of the data collection process in observational panels.
"I don’t know why I check this...'' - Investigating Expert Users' Strategies to Detect Email Signature Spoofing Attacks
Peter Mayer, SECUSO - Security, Usability, Society, Karlsruhe Institute of Technology; Damian Poddebniak, Münster University of Applied Sciences; Konstantin Fischer and Marcus Brinkmann, Ruhr University Bochum; Juraj Somorovsky, Paderborn University; Angela Sasse, Ruhr University Bochum; Sebastian Schinzel, Münster University of Applied Sciences; Melanie Volkamer, SECUSO - Security, Usability, Society, Karlsruhe Institute of Technology
OpenPGP is one of the two major standards for end-to-end email security. Several studies have shown that serious usability issues exist with tools implementing this standard. However, a widespread assumption is that expert users can handle these tools and detect signature spoofing attacks. We present a user study investigating expert users' strategies to detect signature spoofing attacks in Thunderbird. We observed 25 expert users while they classified eight emails as either having a legitimate signature or not. Studying expert users explicitly gives us an upper bound on the attack detection rates of all users dealing with PGP signatures. 52% of participants fell for at least one out of four signature spoofing attacks. Overall, participants did not have an established strategy for evaluating email signature legitimacy. We observed our participants apply 23 different types of checks when inspecting signed emails, but only 8 of these checks tended to be useful in identifying the spoofed or invalid signatures. In performing their checks, participants were frequently startled, confused, or annoyed with the user interface, which they felt offered them little support. All these results paint a clear picture: Even expert users struggle to verify email signatures, usability issues in email security are not limited to novice users, and developers may need proper guidance on implementing email signature GUIs correctly.
Exploring User-Suitable Metaphors for Differentially Private Data Analyses
Farzaneh Karegar and Ala Sarah Alaqra, Karlstad University; Simone Fischer-Hübner, Karlstad University and Chalmers University of Technology
Despite recent enhancements in the deployment of differential privacy (DP), little has been done to address the human aspects of DP-enabled systems. Comprehending the complex concept of DP and the privacy protection it provides can be challenging for lay users, who must nonetheless make informed decisions when sharing their data. Metaphors could be a suitable way to convey DP's key protection functionalities to them. Based on a three-phase framework, we extracted and generated metaphors for differentially private data analysis models (local and central). We analytically evaluated the metaphors based on experts' feedback and then empirically evaluated them in online interviews with 30 participants. Our results showed that the metaphorical explanations can successfully convey that perturbation protects privacy and that there is a privacy-accuracy trade-off. Nonetheless, conveying information at a high level leads to incorrect expectations that negatively affect users' understanding and limits the ability to apply the concept to different contexts. In this paper, we argue for the plausible suitability of metaphors and discuss the challenges of using them to facilitate informed decisions about sharing data with DP-enabled systems.
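To make concrete what such metaphors must convey (a sketch of our own, not material from the paper): randomized response, the classic mechanism of the local DP model, shows both that perturbation protects individual answers and that accuracy falls as privacy rises.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the true answer with probability p, otherwise flip it.
    Satisfies local differential privacy with epsilon = ln(p / (1 - p))."""
    return truth if random.random() < p else not truth

def estimate_rate(reports: list[bool], p: float = 0.75) -> float:
    """Unbiased estimate of the true 'yes' rate from perturbed reports."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# 30% of a simulated population truthfully answers "yes".
population = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(answer) for answer in population]

# Lowering p strengthens privacy but widens the estimation error.
print(round(estimate_rate(reports), 3))  # close to 0.30
```

Each individual report is plausibly deniable, yet the aggregate rate remains recoverable -- the perturbation and the privacy-accuracy trade-off in miniature.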
Do Password Managers Nudge Secure (Random) Passwords?
Samira Zibaei, Dinah Rinoa Malapaya, Benjamin Mercier, Amirali Salehi-Abari, and Julie Thorpe, Ontario Tech University
Passwords are the most popular authentication method due to their simplicity and widespread adoption. However, the prevalence of password reuse undermines their security. A promising strategy to mitigate the risks of password reuse is to use random passwords generated and stored by password managers, yet many users do not use them. Many web browsers have built-in password managers that employ nudges at the time of password creation. These nudges aim to persuade the selection of more secure random passwords; however, little is known about which designs are most effective. We study (n = 558) the efficacy of nudges used by three popular web browsers: Chrome, Firefox, and Safari. Our results suggest Safari’s nudge implementation is significantly more effective than the others at nudging users to adopt a randomly generated password. We examine factors that may contribute to the adoption of randomly generated passwords, reasons that people adopt a randomly generated password (or not), as well as discuss elements of Safari’s nudge design that may contribute to its success. Our findings can be useful in informing both future password manager nudge designs and interventions to encourage password manager use.
An Empirical Study of a Decentralized Identity Wallet: Usability, Security, and Perspectives on User Control
Maina Korir, University of Bedfordshire; Simon Parkin, TU Delft; Paul Dunphy, OneSpan
User-centric digital identity initiatives are emerging with a mission to shift control over online identity disclosures to the individual. However, there is little representation of prospective users in discussions of the merits of empowering users with new data management responsibilities and the acceptability of new technologies. We conducted a user study comprising a contextual inquiry and semi-structured interviews using a prototype decentralized identity wallet app with 30 online participants. Our usability analysis uncovered misunderstandings about decentralized identifiers (DIDs) and pain points relating to using QR codes and following the signposting of cross-device user journeys. In addition, the technology did not readily resolve questions about whether the user, identity provider, or relying party was in control of data at crucial moments. We also learned that users' judgments of data minimization encompass a broader scope of issues than simply the technical provision of the identity wallet. Our results contribute to understanding future user-centric identity technologies from the view of privacy and user acceptance.
Let The Right One In: Attestation as a Usable CAPTCHA Alternative
Tara Whalen, Thibault Meunier, and Mrudula Kodali, Cloudflare Inc.; Alex Davidson, Brave; Marwan Fayed and Armando Faz-Hernández, Cloudflare Inc.; Watson Ladd, Sealance Corp.; Deepak Maram, Cornell Tech; Nick Sullivan, Benedikt Christoph Wolters, Maxime Guerreiro, and Andrew Galloni, Cloudflare Inc.
CAPTCHAs are necessary to protect websites from bots and malicious crawlers, yet are increasingly solvable by automated systems. This has led to more challenging tests that require greater human effort and cultural knowledge; they may prevent bots effectively but sacrifice usability and discourage the human users they are meant to admit. We propose a new class of challenge: a Cryptographic Attestation of Personhood (CAP) as the foundation of a usable, pro-privacy alternative. Our challenge is constructed using the open Web Authentication API (WebAuthn) that is supported in most browsers. We evaluated the CAP challenge through a public demo, with an accompanying user survey. Our evaluation indicates that CAP has a strong likelihood of adoption by users who possess the necessary hardware, showing good results for effectiveness and efficiency as well as a strong expressed preference for using CAP over traditional CAPTCHA solutions. In addition to demonstrating a mechanism for more usable challenge tests, we identify some areas for improvement for the WebAuthn user experience, and reflect on the difficult usable privacy problems in this domain and how they might be mitigated.
Replication: How Well Do My Results Generalize Now? The External Validity of Online Privacy and Security Surveys
Jenny Tang, Wellesley College; Eleanor Birrell, Pomona College; Ada Lerner, Northeastern University
Privacy and security researchers often rely on data collected through online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) and Prolific. Prior work---which used data collected in the United States between 2013 and 2017---found that MTurk responses regarding security and privacy were generally representative for people under 50 or with some college education. However, the landscape of online crowdsourcing has changed significantly over the last five years, with the rise of Prolific as a major platform and the increasing presence of bots. This work attempts to replicate the prior results about the external validity of online privacy and security surveys. We conduct an online survey on MTurk (n=800), a gender-balanced survey on Prolific (n=800), and a representative survey on Prolific (n=800) and compare the responses to a probabilistic survey conducted by the Pew Research Center (n=4272). We find that MTurk response quality has degraded over the last five years, and our results do not replicate the earlier finding about the generalizability of MTurk responses. By contrast, we find that data collected through Prolific is generally representative for questions about user perceptions and experiences, but not for questions about security and privacy knowledge. We also evaluate the impact of Prolific settings, attention check questions, and statistical methods on the external validity of online surveys, and we develop recommendations about best practices for conducting online privacy and security surveys.
Balancing Power Dynamics in Smart Homes: Nannies' Perspectives on How Cameras Reflect and Affect Relationships
Julia Bernd, International Computer Science Institute; Ruba Abu-Salma, King’s College London; Junghyun Choy and Alisa Frik, International Computer Science Institute
Smart home cameras raise privacy concerns in part because they frequently collect data not only about the primary users who deployed them but also other parties -- who may be targets of intentional surveillance or incidental bystanders. Domestic employees working in smart homes must navigate a complex situation that blends privacy and social norms for homes, workplaces, and caregiving.
This paper presents findings from 25 semi-structured interviews with domestic childcare workers in the U.S. about smart home cameras, focusing on how privacy considerations interact with the dynamics of their employer-employee relationships. We show how participants’ views on camera data collection, and their desire and ability to set conditions on data use and sharing, were affected by power differentials and norms about who should control information flows in a given context. Participants’ attitudes about employers’ cameras often hinged on how employers used the data; whether participants viewed camera use as likely to reinforce negative tendencies in the employer-employee relationship; and how camera use and disclosure might reflect existing relationship tendencies.
We also suggest technical and social interventions to mitigate the adverse effects of power imbalances on domestic employees’ privacy and individual agency.