SOUPS 2023 Technical Sessions

All sessions will be held in Platinum Salon 5 unless otherwise noted. All the times listed below are in Pacific Daylight Time (PDT).

Papers are available for download below to registered attendees now. The papers and the full proceedings will be available to everyone beginning Monday, August 7, 2023. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].

Proceedings Front Matter
Proceedings Cover | Title Page, Copyright Page, and List of Organizers | Message from the Program Co-Chairs | Table of Contents

Attendee Files 
SOUPS 2023 Attendee List (PDF)
SOUPS 2023 Proceedings Web Archive (78 MB ZIP)

Monday, August 7

8:00 am–9:00 am

Continental Breakfast

Platinum 5-6 Foyer

9:00 am–9:15 am

Opening Remarks and Awards

General Chairs: Patrick Gage Kelley, Google, and Apu Kapadia, Indiana University Bloomington

9:15 am–10:30 am

Cybercrimes and Misinformation

Session Chair: Kent Seamons, Brigham Young University

An Investigation of Teenager Experiences in Social Virtual Reality from Teenagers', Parents', and Bystanders' Perspectives

Elmira Deldari, University of Maryland, Baltimore County; Diana Freed, Cornell Tech; Julio Poveda, University of Maryland; Yaxing Yao, University of Maryland, Baltimore County

The recent rise of social virtual reality (VR) platforms has introduced new technology characteristics and user experiences, which may lead to new forms of online harassment, particularly among teenagers (individuals aged 13-17). In this paper, we took a multi-stakeholder approach and investigated teenagers' experiences and safety threats in social VR from three complementary perspectives: teenagers, parents, and bystanders. Through an interview study with 24 participants (8 teenagers, 7 parents, and 9 bystanders), we found several safety threats that teenagers may face, such as virtual grooming, ability-based discrimination, and unforeseeable threats in privacy rooms. We highlight new forms of harassment in the social VR context, such as erotic role-play and abuse through phantom sense, as well as the discrepancies among teenagers, parents, and bystanders regarding their perceptions of such threats. We draw design implications to better support safer social VR environments for teenagers.

Fight Fire with Fire: Hacktivists' Take on Social Media Misinformation

Filipo Sharevski and Benjamin Kessell, DePaul University

In this study, we interviewed 22 prominent hacktivists to learn their take on the increased proliferation of misinformation on social media. We found that none of them welcomes the nefarious appropriation of trolling and memes for the purpose of political (counter)argumentation and dissemination of propaganda. True to the original hacker ethos, misinformation is seen as a threat to the democratic vision of the Internet, and as such, it must be confronted head-on with tried-and-tested hacktivist methods: deplatforming the "misinformers" and doxing their funding and recruitment. The majority of the hacktivists we interviewed recommended interventions for promoting misinformation literacy in addition to targeted hacking campaigns. We discuss the implications of these findings relative to the emergent recasting of hacktivism as a defense of a constructive and factual social media discourse.

"Stalking is immoral but not illegal": Understanding Security, Cyber Crimes and Threats in Pakistan

Afaq Ashraf and Taha, Lahore University of Management Sciences; Nida ul Habib Bajwa and Cornelius J. König, Universität des Saarlandes; Mobin Javed and Maryam Mustafa, Lahore University of Management Sciences

IAPP SOUPS Privacy Award

We explore the experiences, understandings, and perceptions of cyber-threats and crimes amongst young adults in Pakistan, focusing on their mechanisms for protecting themselves, for reporting cyber threats, and for managing their digital identities. Relying on data from a qualitative study with 34 participants in combination with a repertory grid analysis with 18 participants, we map users' mental models and constructs of cyber crimes and threats, their understanding of digital vulnerabilities, their own personal boundaries, and their moral compasses on what constitutes an invasion of the privacy of other users in a country with little legislation governing cyberspace and cyber crimes. Our findings highlight the importance of platform adaptation to accommodate the unique context of countries with limited legal mandates and reporting outlets, the ways in which digital vulnerabilities impact diverse populations, and how security and privacy design can be more inclusive.

Checking, nudging or scoring? Evaluating e-mail user security tools

Sarah Y. Zheng and Ingolf Becker, UCL

Phishing e-mail threats are increasing in sophistication. Technical measures alone do not fully prevent users from falling for them, and common e-mail interfaces provide little support for users to check an e-mail's legitimacy. We designed three e-mail user security tools to improve phishing detection within a common e-mail interface and provide a formative evaluation of the usability of these features: two psychological nudges to alert users of suspicious e-mails and a "check" button to enable users to verify an e-mail's legitimacy. Professional e-mail users (N=27) found the "suspicion score" nudge and "check" button the most useful. These alerted users to suspicious e-mails without harming their productivity and helped users place trust in legitimate ones. The other nudge was too easily ignored or too disruptive to be effective. We also found that users arrive at erroneous judgements due to differing interpretations of e-mail details, even though two-thirds of them had completed cybersecurity training before. These findings show that usable and therefore effective e-mail user security tools can be developed by leveraging cues of legitimacy that augment existing user behaviour, instead of emphasising technical security training.

Understanding the Viability of Gmail's Origin Indicator for Identifying the Sender

Enze Liu, Lu Sun, and Alex Bellon, UC San Diego; Grant Ho, University of Chicago; Geoffrey M. Voelker, Stefan Savage, and Imani N. S. Munyaka, UC San Diego

The current design of email authentication mechanisms has made it challenging for email providers to establish the authenticity of email messages with complicated provenance, such as in the case of forwarding or third-party sending services, where the purported sender of an email is different from the actual originator. Email service providers such as Gmail have tried to address this issue by deploying sender identity indicators (SIIs), which seek to raise users' awareness about where a message originated and encourage safe behavior from users. However, the success of such indicators depends heavily on user interpretation and behavior, and there exists no work that empirically investigates these aspects. In this work, we conducted an interactive survey (n=180) that examined user comprehension of and behavior changes prompted by Gmail's passive SII, the 'via' indicator. Our quantitative analysis shows that although most participants (89%) noticed the indicator, it did not have a significant impact on whether users would adopt safe behaviors. Additionally, our qualitative analysis suggests that once prompted to consider why 'via' is presented, the domain name displayed after 'via' heavily influenced participants' interpretation of the message 'via' is communicating. Our work highlights the limitations of using passive indicators to assist users in making decisions about email messages with complicated provenance.
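
As a rough illustration of the provenance mismatch discussed above (our sketch, not from the paper): a common reason Gmail renders 'via' is that the From: domain differs from the domain that actually signed the message, e.g., the DKIM d= domain of a third-party sending service. The Python sketch below checks for that mismatch with the standard email module; the message and domain names are hypothetical.

```python
# Hypothetical example: detect when a message's From: domain differs from
# its DKIM signing domain, the situation in which a provider might show
# "via <signing domain>".
from email import message_from_string
from email.utils import parseaddr

RAW_MESSAGE = """\
From: Alice <alice@example.com>
DKIM-Signature: v=1; a=rsa-sha256; d=bulkmailer.example.net; s=sel1; h=from;
Subject: Hello

Hi there!
"""

def sender_mismatch(raw: str) -> bool:
    """Return True when the From: domain differs from the DKIM d= domain."""
    msg = message_from_string(raw)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    dkim = msg.get("DKIM-Signature", "")
    # Parse the semicolon-separated tag=value pairs of the DKIM header.
    tags = dict(tag.strip().split("=", 1) for tag in dkim.split(";") if "=" in tag)
    signing_domain = tags.get("d", "").strip().lower()
    return bool(signing_domain) and signing_domain != from_domain

if __name__ == "__main__":
    # The mail claims to be from example.com but was signed by
    # bulkmailer.example.net, so a provider might render "via bulkmailer...".
    print(sender_mismatch(RAW_MESSAGE))  # True
```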

10:30 am–11:00 am

Break with Refreshments

Platinum 5-6 Foyer

11:00 am–12:30 pm

Security and Privacy in Organizations

Session Chair: Emilee Rader, Michigan State University

'Give Me Structure': Synthesis and Evaluation of a (Network) Threat Analysis Process Supporting Tier 1 Investigations in a Security Operation Center

Leon Kersten, Tom Mulders, Emmanuele Zambon, Chris Snijders, and Luca Allodi, Eindhoven University of Technology

Current threat analysis processes followed by tier-1 (T1) analysts in a Security Operations Center (SOC) rely mainly on tacit knowledge and can differ greatly across analysts. The lack of structure and clear objectives in T1 analyses contributes to analyst burnout, makes operative inefficiencies hard to spot and SOC performance hard to measure (and therefore improve), and results in overall lower security for the monitored environment(s). In this work, we collaborate with a commercial SOC to devise a 4-stage (network) threat analysis process to support the collection and analysis of relevant information for threat analysis. We conduct an experiment with ten T1 analysts employed in the SOC and show that analysts following the proposed process are 2.5 times more likely to produce an accurate assessment than analysts who do not. We qualitatively evaluate the effects of the process on analysts' decisions, and discuss implications for practice and research.

Exploring the Security Culture of Operational Technology (OT) Organisations: the Role of External Consultancy in Overcoming Organisational Barriers

Stefanos Evripidou, University College London; Uchenna D Ani, University of Keele; Stephen Hailes and Jeremy D McK. Watson, University College London

Operational Technology (OT) refers to systems that control and monitor industrial processes. Organisations that use OT can be found in many sectors, including water and energy, and often operate a nation's critical infrastructure. These organisations have been undergoing digitalisation, which, along with increasing regulatory pressures, has necessitated changes in their cybersecurity practices. The lack of internal resources has often compelled these organisations to turn to external consultancy to enhance their security. Given the differences between OT and Information Technology (IT) security practices and the fact that OT cybersecurity is still in its infancy, developing a security culture in OT environments remains a challenge, with little research investigating this topic. We conducted 33 interviews with professionals in security-related roles working in various OT sectors in the UK on the subject of security culture development. Our analysis indicates three key organisational barriers to the development of a security culture: governance structures, lack of communication between functions, and the lack of OT cybersecurity expertise. Subsequently, we demonstrate the role of consultants and security solution vendors in overcoming these barriers. We therefore argue that these stakeholders play a crucial part in the development of security culture in OT and conclude with recommendations for these organisations.

Lacking the Tools and Support to Fix Friction: Results from an Interview Study with Security Managers

Jonas Hielscher, Markus Schöps, Uta Menges, Marco Gutfleisch, Mirko Helbling, and M. Angela Sasse, Ruhr University Bochum

Security managers often perceive employees as the key vulnerability in organizations when it comes to security threats, and complain that employees do not follow secure behaviors defined by their security policies and mechanisms. Research has shown, however, that security often interferes with employees' primary job function, causing friction and reducing productivity -- so when employees circumvent security measures, it is to protect their own productivity, and that of the organization. In this study, we explore to what extent security managers are aware of the friction their security measures cause, whether they are aware of usable security methods and tools they could apply to reduce friction, and whether they have tried to apply them. We conducted 14 semi-structured interviews with experienced security managers (CISOs and security consultants with an average of 20 years' experience) to investigate how security friction is dealt with in organizations. The results of the interviews show that security managers are aware that security friction is a significant problem that often reduces productivity and increases the organization's vulnerability. They are also able to identify underlying causes, but are unable to tackle them because their organizations prioritize compliance with relevant external standards, which leaves no place for friction considerations. Given these blockers to reducing security friction in organizations, we identify a number of possible ways forward, such as embedding usable security in regulations and norms, developing positive key performance indicators (KPIs) for usable security measures, training security managers, and incorporating usability aspects into daily processes to ensure frictionless, secure work routines for everyone.

What can central bank digital currency designers learn from asking potential users?

Svetlana Abramova and Rainer Böhme, Universität Innsbruck; Helmut Elsinger, Helmut Stix, and Martin Summer, Oesterreichische Nationalbank

The ongoing initiatives to offer central bank money to consumers in the form of retail central bank digital currency (CBDC) have triggered discussions on its optimal design. So far, the perspective of potential users has not been considered widely. To strengthen this perspective, we survey 2,006 Austrian residents using a tailored questionnaire on attitudes towards a digital euro, selected technical features, and potential security and privacy concerns. Only about half of the surveyed respondents express at least some interest in a digital euro. This subsample tends to attribute more importance to security aspects than to transaction data privacy. Similarly, offline functionality is preferred over a feature to make direct payments between persons. Our findings suggest that central banks should embrace a more user-centric design of CBDC. This effort should include communicating the key concepts and benefits to potential users.

"Would You Give the Same Priority to the Bank and a Game? I Do Not!" Exploring Credential Management Strategies and Obstacles during Password Manager Setup

Sabrina Amft, CISPA Helmholtz Center for Information Security; Sandra Höltervennhoff and Nicolas Huaman, Leibniz University Hannover; Yasemin Acar, George Washington University and Paderborn University; Sascha Fahl, CISPA Helmholtz Center for Information Security and Leibniz University Hannover

Password managers allow users to improve password security by handling large numbers of strong and unique passwords without the burden of memorizing them. While users are encouraged to add all credentials to their password manager and update weak credentials, this task can require significant effort and thus jeopardize security benefits if not completed thoroughly. However, user strategies to add credentials, related obstacles, and their security implications are not well understood. To address this gap in security research, we performed a mixed-methods study, including expert reviews of 14 popular password managers and an online survey with 279 users of built-in and third-party password managers. We extend previous work by examining the status quo of password manager setup features and investigating password manager users' setup strategies. We confirm previous research and find that many participants utilize password managers for convenience, not as a security tool. They most commonly add credentials whenever a website is visited, and prioritize what they add. Similarly, passwords are often only updated when they are considered insecure. Additionally, we observe a severe distrust towards password managers, leading to users not adding important passwords. We conclude our work by giving recommendations for password manager developers to help users overcome the obstacles we identified.

Evolution of Password Expiry in Companies: Measuring the Adoption of Recommendations by the German Federal Office for Information Security

Eva Gerlitz, Fraunhofer FKIE; Maximilian Häring, University of Bonn; Matthew Smith, University of Bonn and Fraunhofer FKIE; Christian Tiefenau, University of Bonn

In 2020, the German Federal Office for Information Security (BSI) updated its password composition policy (PCP) guidelines for companies. This included the removal of password expiry, which research scholars have been discussing for at least 13 years. To analyze how the usage of password expiry in companies evolved, we surveyed German companies three times: eight months (n = 52), two years (n = 63), and three years (n = 80) after these changed recommendations. We compared our results to data gathered shortly before the change in 2019. We recruited participants via the BSI newsletter and found that 45% of the participants said their companies still used password expiry in 2023. The two main arguments were a) to increase security and b) that some stakeholders still required these regular changes. We discuss the given reasons and offer suggestions for research and guiding institutions.

12:30 pm–1:45 pm

Monday Luncheon and Mentoring Tables

Platinum Salon 6

See the Mentoring Program page for more information.

1:45 pm–2:45 pm

Authentication

Session Chair: Peter Mayer, University of Southern Denmark

Dissecting Nudges in Password Managers: Simple Defaults are Powerful

Samira Zibaei, Amirali Salehi-Abari, and Julie Thorpe, Ontario Tech University

Password managers offer a feature to randomly generate a new password for the user. Despite improving account security, randomly generated passwords (RGPs) are underutilized. Many password managers employ nudges to encourage users to select a randomly generated password, but the most effective nudge design is unclear. Recent work has suggested that Safari's built-in password manager nudge might be more effective in encouraging RGP adoption than that of other browsers. However, it remains unclear what makes it more effective, and even whether this result can be attributed to Safari's nudge design or simply its demographics. We report on a detailed large-scale study (n=853) aimed at clarifying these issues. Our results support that Safari's nudge design is indeed more effective than Chrome's. By dissecting the elements of Safari's nudge, we find that its most important element is its default nudge. We additionally examine whether a social influence nudge can further enhance Safari's RGP adoption rate. Finally, we analyze and discuss the importance of a nudge being noticed by users, and its ethical considerations. Our results inform RGP nudge designs in password managers and should also be of interest to practitioners and researchers working on other types of security nudges.

Adventures in Recovery Land: Testing the Account Recovery of Popular Websites When the Second Factor is Lost

Eva Gerlitz, Fraunhofer FKIE; Maximilian Häring and Charlotte Theresa Mädler, University of Bonn; Matthew Smith, University of Bonn and Fraunhofer FKIE; Christian Tiefenau, University of Bonn

Literature on two-factor authentication (2FA) lists users' fear of losing the second factor as one major constraint on acceptability. Nonetheless, more and more services offer or even enforce 2FA. Yet, little is published about what services do to prevent users from losing access to their accounts and how well users are guided through the process of regaining access in case they lose their second factor. To fill this gap, we set up 2FA on 78 popular online services and apps and analyzed their user interfaces during 2FA setup and recovery. Although there is no straightforward solution for account recovery when using a second factor, we identified easily fixable usability flaws. For example, in the setup phase, 28 services do not mention the possibility of losing the second factor at all. Furthermore, while it is common for services to provide a clearly visible “forgotten password” link beneath the login field, an equivalent for 2FA is often missing, and users are left alone with the problem. Our study provides insights for website designers and security practitioners seeking to enhance the usability of 2FA. We also discuss further directions for research.

Tangible 2FA – An In-the-Wild Investigation of User-Defined Tangibles for Two-Factor Authentication

Mark Turner, University of Glasgow; Martin Schmitz, Saarland University Saarbrücken; Morgan Masichi Bierey and Mohamed Khamis, University of Glasgow; Karola Marky, University of Glasgow and Ruhr-University Bochum

Although two-factor authentication (2FA) mechanisms can be usable, they integrate poorly into users' daily routines, especially during mobile use. Using tangibles for 2FA is a promising alternative that beneficially combines customisable authentication routines and object geometries, personalisable to each user. Yet, it remains unclear how they integrate into daily routines. In this paper, we first let 226 participants design 2FA tangibles to understand user preferences. Second, we prototyped the most common shapes and performed a week-long in-the-wild study (N=15) to investigate how 2FA tangibles perform in different environments. We show that most users prefer objects that a) fit in wallets, b) connect to daily items, or c) are standalone. Users enjoyed interacting with 2FA tangibles and considered them a viable and more secure alternative. Yet, they voiced concerns about portability. We conclude with an outlook on the real-world implementation and distribution of 2FA tangibles that addresses user concerns.

Prospects for Improving Password Selection

Joram Amador, Yiran Ma, Summer Hasama, Eshaan Lumba, Gloria Lee, and Eleanor Birrell, Pomona College

User-chosen passwords remain essential to online security, and yet users continue to choose weak, insecure passwords. In this work, we investigate whether prospect theory, a behavioral model of how people evaluate risk, can provide insights into how users choose passwords and whether it can motivate new designs for password selection mechanisms that will nudge users to select stronger passwords. We run a pair of online user studies, and we find that an intervention guided by prospect theory---which leverages the reference-dependence effect by framing a choice of a weak password as a loss relative to choosing a stronger password---causes approximately 25% of users to improve the strength of their password (significantly more than alternative interventions) and improves the strength of passwords users select. We also evaluate the relation between feedback provided and password decisions and between users' mental models and password decisions. These results provide guidance for designing and implementing password selection interfaces that will significantly improve the strength of user-chosen passwords, thereby leveraging insights from prospect theory to improve the security of systems that use password-based authentication.
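
As a toy illustration of the loss-framing idea described above (our sketch; the strength heuristic and message wording are hypothetical stand-ins, not the study's actual meter or interventions):

```python
# Hypothetical loss-framed password nudge: frame a weak choice as giving up
# protection relative to a strong password (reference dependence), rather
# than as a gain from improving it.
import string

def toy_strength(password: str) -> int:
    """Crude 0-4 score from character-class variety plus length."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    score = sum(classes) - 1 if password else 0
    return max(0, min(4, score + (len(password) >= 12)))

def framed_feedback(password: str) -> str:
    if toy_strength(password) >= 3:
        return "Strong password."
    # Loss framing: the weak choice LOSES protection the user could have had.
    return ("Compared to a strong password, this choice loses most of the "
            "protection your account could have against guessing attacks.")

print(framed_feedback("hunter2"))
```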

2:45 pm–3:15 pm

Lightning Talks

3:15 pm–3:45 pm

Break with Refreshments

Platinum 5-6 Foyer

3:45 pm–4:45 pm

Keynote Address

Weaponizing Technology: Examining the Importance of Privacy in an Era of Unprecedented Digital Surveillance

Sapna Khatri, University of California, Los Angeles, School of Law

The 2022 Supreme Court decision in Dobbs v. Jackson Women's Health Organization sent shockwaves across the country. From forcing people to travel across state lines to access abortion care, to carving a path for additional rights to be challenged, the decision's impact is far-reaching. Join us for this keynote presentation, which will explore the impact of Dobbs on the technology and privacy landscape. Specifically, we will examine how our digital footprint can paint a comprehensive picture of our daily lives—one that can easily be weaponized against us when accessing a suite of sexual and reproductive health care. This keynote will pay special attention to how our digital surveillance economy preys on marginalized communities and the need for thoughtful, privacy-protective measures as technology advances.

Sapna Khatri, University of California, Los Angeles, School of Law

Sapna Khatri, J.D., is the Sears Clinical Teaching Fellow at UCLA Law School. Her policy work spans a range of privacy, gender, and reproductive justice issues at the Center on Reproductive Health, Law, and Policy and the Williams Institute. She recently helped launch the nation's first Medical Legal Partnership at a local Planned Parenthood, and UCLA Law's inaugural Reproductive Justice Externship Seminar. Her scholarship is rooted in reproductive justice and examines technology as a weapon of reproductive oppression. Before joining UCLA Law, she worked as a Staff Attorney with the Women's & Reproductive Rights Project at the ACLU of Illinois, and later as an Advocacy & Policy Counsel with the organization. She led amicus efforts on a religious refusals case before the Illinois Human Rights Commission and lobbied for the successful passage of the Reproductive Health Act and Protecting Household Privacy Act. Sapna has a J.D. from Washington University, as well as a B.A. in International Studies and a B.J. in Strategic Communication from the University of Missouri-Columbia. Her publications have appeared in the Washington University Global Studies Law Review and the Chicago Sun-Times, among others. She currently also serves as a Fellow with the Internet Law & Policy Foundry.

5:30 pm–6:45 pm

Poster Session and Reception

Check out the cool new ideas and the latest preliminary research on display at the SOUPS Poster Session and Reception. View the list of accepted posters.

Tuesday, August 8

8:00 am–9:00 am

Continental Breakfast

Platinum 5-6 Foyer

9:00 am–10:00 am

Beyond End Users/Developers and Experts

Session Chair: Daniel Zappala, Brigham Young University

Who Comes Up with this Stuff? Interviewing Authors to Understand How They Produce Security Advice

Lorenzo Neil, North Carolina State University; Harshini Sri Ramulu, The George Washington University; Yasemin Acar, Paderborn University & The George Washington University; Bradley Reaves, North Carolina State University

Users have a wealth of available security advice --- far too much, according to prior work. Experts and users alike struggle to prioritize and practice advised behaviours, negating both the advice's purpose and potentially their security. While the problem is clear, no rigorous studies have established the root causes of overproduction, lack of prioritization, or other problems with security advice. Without understanding the causes, we cannot hope to remedy their effects.

In this paper, we investigate the processes that authors follow to develop published security advice. In a semi-structured interview study with 21 advice writers, we asked about the authors' backgrounds, advice creation processes in their organizations, the parties involved, and how they decide to review, update, or publish new content. Among the 17 themes we identified from our interviews, we learned that authors seek to cover as much content as possible, leverage multiple diverse external sources for content, typically only review or update content after major security events, and make few, if any, conscious attempts to deprioritize or curate less essential content. We recommend that researchers develop methods for curating security advice and guidance on messaging for technically diverse user bases, and that authors judiciously identify key messaging ideas and schedule periodic proactive content reviews. If implemented, these actionable recommendations would help both authors and users reduce the burden of advice overproduction while improving compliance with secure computing practices.

Towards Usable Security Analysis Tools for Trigger-Action Programming

McKenna McCall and Eric Zeng, Carnegie Mellon University; Faysal Hossain Shezan, University of Virginia; Mitchell Yang and Lujo Bauer, Carnegie Mellon University; Abhishek Bichhawat, IIT Gandhinagar; Camille Cobb, University of Illinois Urbana-Champaign; Limin Jia, Carnegie Mellon University; Yuan Tian, University of California, Los Angeles

Research has shown that trigger-action programming (TAP) is an intuitive way to automate smart home IoT devices, but can also lead to undesirable behaviors. For instance, if two TAP rules have the same trigger condition, but one locks a door while the other unlocks it, the user may believe the door is locked when it is not. Researchers have developed tools to identify buggy or undesirable TAP programs, but little work investigates the usability of the different user-interaction approaches implemented by the various tools.

This paper describes an exploratory study of the usability and utility of techniques proposed by TAP security analysis tools. We surveyed 447 Prolific users to evaluate their ability to write declarative policies, identify undesirable patterns in TAP rules (anti-patterns), and correct TAP program errors, as well as to understand whether proposed tools align with users' needs. We find considerable variation in participants' success rates writing policies and identifying anti-patterns. For some scenarios over 90% of participants wrote an appropriate policy, while for others nobody was successful. We also find that participants did not necessarily perceive the TAP anti-patterns flagged by tools as undesirable. Our work provides insight into real smart-home users' goals, highlights the importance of more rigorous evaluation of users' needs and usability issues when designing TAP security tools, and provides guidance to future tool development and TAP research.
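
As a concrete illustration of the lock/unlock conflict mentioned above, here is a hypothetical sketch (ours, not one of the surveyed analysis tools) that flags the same-trigger/opposite-action anti-pattern in a list of TAP rules:

```python
# Hypothetical TAP anti-pattern check: two rules fire on the same trigger
# but drive the same device to opposite states (e.g., lock vs. unlock).
from collections import defaultdict
from itertools import combinations

# Each rule: (trigger, device, action); all names here are made up.
RULES = [
    ("arrive_home", "front_door", "unlock"),
    ("arrive_home", "front_door", "lock"),   # conflicts with the rule above
    ("arrive_home", "hall_light", "on"),
]
# Action pairs declared to be mutually exclusive, stored in sorted order.
OPPOSITES = {("lock", "unlock"), ("off", "on")}

def find_conflicts(rules):
    """Yield pairs of rules sharing a trigger/device with opposing actions."""
    by_key = defaultdict(list)
    for rule in rules:
        trigger, device, _action = rule
        by_key[(trigger, device)].append(rule)
    for group in by_key.values():
        for a, b in combinations(group, 2):
            if tuple(sorted((a[2], b[2]))) in OPPOSITES:
                yield a, b

for a, b in find_conflicts(RULES):
    print("conflict:", a, "vs", b)
```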

On the Recruitment of Company Developers for Security Studies: Results from a Qualitative Interview Study

Raphael Serafini, Marco Gutfleisch, Stefan Albert Horstmann, and Alena Naiakshina, Ruhr University Bochum

To address the issue of participant recruitment for security developer studies, researchers have proposed using freelance online platforms or recruiting computer science (CS) students as proxies. However, recent studies showed that company developers performed better than freelancers or CS students in security developer studies. Additionally, studies on factors influencing usable security and privacy in companies make recruiting professionals indispensable. We therefore investigated what influences the motivation of software developers regularly employed in companies to participate in security studies. We conducted 30 semi-structured interviews on their perceptions of study factors concerning study design, recruitment methods, and data collection. We found that study duration, topic, monetary compensation, and trust are influential factors for participation in developer studies. However, participants were concerned about high effort and weak performance in security tasks. Based on our findings, we provide recruitment and study design recommendations for future security research with company developers.

SoK: I Have the (Developer) Power! Sample Size Estimation for Fisher's Exact, Chi-Squared, McNemar's, Wilcoxon Rank-Sum, Wilcoxon Signed-Rank and t-tests in Developer-Centered Usable Security

Anna-Marie Ortloff and Christian Tiefenau, University of Bonn; Matthew Smith, University of Bonn and Fraunhofer FKIE

Awarded Distinguished Paper!

A priori power analysis would be very beneficial for researchers in the field of developer-centered usable security since recruiting developers for studies is challenging. Power analysis allows researchers to know how many participants they need to test their null hypotheses. However, most studies in this field do not report having conducted power analysis. We conducted a meta-analysis of 54 top-tier developer study papers and found that many are indeed underpowered even to detect large effects. To aid researchers in conducting a priori power analysis in this challenging field, we conducted a systematization of knowledge to extract and condense the needed information. We extracted information from 467 tests and 413 variables and developed a data structure to systematically represent information about hypothesis tests, involved variables, and study methodology. We then systematized the information for tests with categorical independent variables with two groups, i.e., Fisher's exact, chi-squared, McNemar's, Wilcoxon rank-sum, Wilcoxon signed-rank, and paired and independent t-tests to aid researchers with power analysis for these tests. Additionally, we present overview information on the field of developer-centered usable security and list recommendations for suitable reporting practices to make statistical information for power analysis and interpretation more accessible for researchers.
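
To make the paper's central recommendation concrete, the following sketch (our illustration, using conventional Cohen effect-size benchmarks rather than the paper's extracted data) runs an a priori power analysis for two of the covered tests with statsmodels:

```python
# Illustrative a priori power analysis: how many participants are needed to
# detect a "large" effect at alpha = 0.05 with 80% power?
from statsmodels.stats.power import TTestIndPower, GofChisquarePower

alpha, power = 0.05, 0.80

# Independent-samples t-test at a large effect (Cohen's d = 0.8):
# roughly 26 participants per group.
n_ttest = TTestIndPower().solve_power(effect_size=0.8, alpha=alpha, power=power)

# Chi-squared goodness-of-fit test with 2 bins at a large effect
# (Cohen's w = 0.5): roughly 32 participants in total.
n_chi2 = GofChisquarePower().solve_power(effect_size=0.5, n_bins=2,
                                         alpha=alpha, power=power)

print(f"t-test: {n_ttest:.1f} per group; chi-squared: {n_chi2:.1f} total")
```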

10:00 am–10:30 am

Lightning Talks

10:30 am–11:00 am

Break with Refreshments

Platinum 5-6 Foyer

11:00 am–12:15 pm

Accessibility and Allies

Session Chair: Yaxing Yao, Virginia Tech

GuardLens: Supporting Safer Online Browsing for People with Visual Impairments

Smirity Kaushik, Natã M. Barbosa, Yaman Yu, Tanusree Sharma, Zachary Kilhoffer, and JooYoung Seo, University of Illinois at Urbana-Champaign; Sauvik Das, Carnegie Mellon University; Yang Wang, University of Illinois at Urbana-Champaign

Visual cues play a key role in how users assess the privacy/security of a website but often remain inaccessible to people with visual impairments (PVIs), disproportionately exposing them to privacy and security risks. We employed an iterative, user-centered design process with 25 PVIs to design and evaluate GuardLens, a browser extension that improves the accessibility of privacy/security cues and helps PVIs assess a website's legitimacy (i.e., whether it is a spoof/phish). We started with a formative study to understand what privacy/security cues PVIs find helpful, and then improved GuardLens based on the results. Next, we further refined GuardLens based on a pilot study, and lastly conducted our main study to evaluate GuardLens' efficacy. The results suggest that GuardLens, by extracting and listing pertinent privacy/security cues in one place for faster and easier access, helps PVIs quickly and accurately determine whether websites are legitimate or spoofs. PVIs found cues such as domain age, search result ranking, and the presence or absence of HTTPS encryption especially helpful. We conclude with design implications for tools to support PVIs with safe web browsing.

Iterative Design of An Accessible Crypto Wallet for Blind Users

Zhixuan Zhou, Tanusree Sharma, and Luke Emano, University of Illinois at Urbana-Champaign; Sauvik Das, Carnegie Mellon University; Yang Wang, University of Illinois at Urbana-Champaign

Crypto wallets are a key touch-point for cryptocurrency use. People use crypto wallets to make transactions, manage crypto assets, and interact with decentralized apps (dApps). However, as is often the case with emergent technologies, little attention has been paid to understanding and improving accessibility barriers in crypto wallet software. We present a series of user studies that explored how both blind and sighted individuals use MetaMask, one of the most popular non-custodial crypto wallets. We uncovered inter-related accessibility, learnability, and security issues with MetaMask. We also report on an iterative redesign of MetaMask to make it more accessible for blind users. This process involved multiple evaluations with 44 novice crypto wallet users, including 20 sighted users, 23 blind users, and one user with low vision. Our study results show notable improvements for accessibility after two rounds of design iterations. Based on the results, we discuss design implications for creating more accessible and secure crypto wallets for blind users.

Youth understandings of online privacy and security: A dyadic study of children and their parents

Olivia Williams, University of Maryland; Yee-Yin Choong and Kerrianne Buchanan, National Institute of Standards and Technology

With youth increasingly accessing and using the internet, it is important to understand what they know about online privacy and security (OPS), and from where they gain this knowledge in order to best support their learning and online practices. Currently, the field of literature surrounding such youth understandings has gaps in depth and breadth that we aimed to address in this study. We conducted semi-structured interviews with 40 youth/parent dyads with youth in 3rd-12th grades in the United States to understand more about what youth know about OPS and how their parents attempt to influence this knowledge. We found that youth of all ages in the study could provide at least basic descriptions of both online privacy and online security and could give relevant examples of good and bad OPS choices. We also found that parents took a variety of approaches to influencing youth understandings and behavior, with most of those approaches relying on device monitoring and limiting use. However, parents who attempted to influence their children's knowledge through conversations had children who demonstrated the most nuanced understandings. Our findings offer promising suggestions for parents, technology providers, and future research.

ImageAlly: A Human-AI Hybrid Approach to Support Blind People in Detecting and Redacting Private Image Content

Zhuohao (Jerry) Zhang, University of Washington, Seattle; Smirity Kaushik and JooYoung Seo, University of Illinois at Urbana-Champaign; Haolin Yuan, Johns Hopkins University; Sauvik Das, Carnegie Mellon University; Leah Findlater, University of Washington, Seattle; Danna Gurari, University of Colorado Boulder; Abigale Stangl, University of Washington, Seattle; Yang Wang, University of Illinois at Urbana-Champaign

Many people who are blind take and post photos to share about their lives and connect with others. Yet, current technology does not provide blind people with accessible ways to handle private information that is unintentionally captured in their images. To explore how technology can be designed to support them with this task, we developed a design probe for blind people — ImageAlly — that employs a human-AI hybrid approach to detect and redact private image content. ImageAlly notifies users when potential private information is detected in their images, using computer vision, and enables them to transfer those images to trusted sighted allies to edit the private content. In an exploratory study with pairs of blind participants and their sighted allies, we found that blind people felt empowered by ImageAlly to prevent privacy leakage when sharing images on social media. They also found other benefits from using ImageAlly, such as potentially improving their relationship with allies and giving allies awareness of the accessibility challenges they face.

Evaluating the Impact of Community Oversight for Managing Mobile Privacy and Security

Mamtaj Akter, Vanderbilt University; Madiha Tabassum and Nazmus Sakib Miazi, Northeastern University; Leena Alghamdi, University of Central Florida; Jess Kropczynski, University of Cincinnati; Pamela J. Wisniewski, Vanderbilt University; Heather Lipford, University of North Carolina, Charlotte

Mobile privacy and security can be a collaborative process where individuals seek advice and help from their trusted communities. To support such collective privacy and security management, we developed a mobile app for Community Oversight of Privacy and Security ("CO-oPS") that allows community members to review one another's apps installed and permissions granted to provide feedback. We conducted a four-week-long field study with 22 communities (101 participants) comprised of friends, families, or co-workers who installed the CO-oPS app on their phones. We found that community members reported significantly higher levels of transparency, trust, and awareness of one another's mobile privacy and security behaviors, along with increased individual and community participation in mobile privacy and security co-management, after using the CO-oPS app. Interview results show that the app features supported collective considerations of apps and permissions. However, participants expressed a range of concerns regarding having community members with different levels of technical expertise and knowledge regarding mobile privacy and security that can impact motivation to participate and perform oversight. Our study demonstrates the potential and challenges of community oversight mechanisms to support communities to work together to manage their mobile privacy and security.

12:15 pm–1:30 pm

Tuesday Luncheon and Speed Mentoring Tables

Platinum Salon 6

See the Mentoring Program page for more information.

1:30 pm–2:45 pm

Beliefs and Behavior

Session Chair: Alexandru Bardas, University of Kansas

Data Privacy and Pluralistic Ignorance

Emilee Rader, Michigan State University

This paper presents the results of an online survey experiment with 746 participants that investigated whether social norms influence people's choices about using technologies that can infer information they might not want to disclose. The results show both correlational and causal evidence that empirical expectations (beliefs about what others do) and normative expectations (beliefs about what others believe) influence choices to use mobile devices in ways that generate data that could be used to make sensitive inferences. However, participants also reported concern about data privacy, and lower behavioral intentions for vignettes involving more invasive inferences. Pluralistic ignorance is a phenomenon where individuals behave in ways they privately disagree with, because they see others around them behaving the same way and assume this is evidence most people approve of the behavior. These results are consistent with the existence of pluralistic ignorance related to data privacy, and suggest that interventions focused on transparency about data practices are not enough to encourage people to make different privacy choices.

Distrust of big tech and a desire for privacy: Understanding the motivations of people who have voluntarily adopted secure email

Warda Usman, Jackie Hu, McKynlee Wilson, and Daniel Zappala, Brigham Young University

Secure email systems that use end-to-end encryption are the best method we have for ensuring user privacy and security in email communication. However, the adoption of secure email remains low, with previous studies suggesting mainly that secure email is too complex or inconvenient to use. Yet the perspectives of those who have, in fact, chosen to use an encrypted email system are largely overlooked. To understand these perspectives, we conducted a semi-structured interview study that aims to provide a comprehensive understanding of the mindsets underlying the adoption and use of secure email services. Our participants come from a variety of countries and vary in the amount of time they have been using secure email, how often they use it, and whether they use it as their primary account. Our results uncover that a defining reason for adopting a secure email system is to avoid surveillance from big tech companies. However, regardless of the complexity and accuracy of a person's mental model, our participants rarely send and receive encrypted emails, thus not making full use of the privacy they could obtain. These findings indicate that secure email systems could find greater adoption by appealing to their privacy advantages, but privacy gains will be limited until a critical mass of users is able to join these systems and easily send encrypted emails to each other.

"Is Reporting Worth the Sacrifice of Revealing What I've Sent?": Privacy Considerations When Reporting on End-to-End Encrypted Platforms

Leijie Wang and Ruotong Wang, University of Washington; Sterling Williams-Ceci, Cornell University; Sanketh Menda, Cornell Tech; Amy X. Zhang, University of Washington

User reporting is an essential component of content moderation on many online platforms---in particular, on end-to-end encrypted (E2EE) messaging platforms where platform operators cannot proactively inspect message contents. However, users' privacy concerns when considering reporting may impede the effectiveness of this strategy in regulating online harassment. In this paper, we conduct interviews with 16 users of E2EE platforms to understand users' mental models of how reporting works and their resultant privacy concerns and considerations surrounding reporting. We find that users expect platforms to store rich longitudinal reporting datasets, recognizing both their promise for better abuse mitigation and the privacy risk that platforms may exploit or fail to protect them. We also find that users have preconceptions about the respective capabilities and risks of moderators at the platform versus community level---for instance, users trust platform moderators more to not abuse their power but think community moderators have more time to attend to reports. These considerations, along with perceived effectiveness of reporting and how to provide sufficient evidence while maintaining privacy, shape how users decide whether, to whom, and how much to report. We conclude with design implications for a more privacy-preserving reporting system on E2EE messaging platforms.

Evaluating User Behavior in Smartphone Security: A Psychometric Perspective

Hsiao-Ying Huang, University of Illinois at Urbana Champaign; Soteris Demetriou, Imperial College London; Muhammad Hassan, University of Illinois at Urbana Champaign; Güliz Seray Tuncay, Google; Carl A. Gunter and Masooda Bashir, University of Illinois at Urbana Champaign

Smartphones have become an essential part of our modern society. Their popularity and ever-increasing relevance in our daily lives make these devices an integral part of our computing ecosystem. Yet, we know little about smartphone users and their security behaviors. In this paper, we report our development and testing of a new 14-item Smartphone Security Behavioral Scale (SSBS), which provides a measurement of users' smartphone security behavior considering both technical and social strategies. For example, a technical strategy would be resetting the advertising ID, while a social strategy would be downloading mobile applications only from an official source. Initial evaluations of the scale also exhibit high reliability and good fit to a two-component behavioral model based on technical versus social protection strategies. This newly developed measure of smartphone security behavior is inspired by the theory of planned behavior and draws inspiration from a well-known scale of cybersecurity behavioral intention, the Security Behavior Intention Scale (SeBIS). The psychometrics of the scale were established by surveying 1,011 participants. We believe this new scale and its measures can enhance the understanding of human security behavior for security researchers and HCI designers.
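
For context on the reliability claim above, the sketch below (illustrative toy data, not the authors' analysis) computes Cronbach's alpha, the standard internal-consistency statistic for a Likert-style scale like the SSBS:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 6 respondents answering a 4-item scale on a 1-5 Likert range.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high for this toy data
```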

Privacy Mental Models of Electronic Health Records: A German Case Study

Rebecca Panskus, Ruhr-University Bochum; Max Ninow, Leibniz University Hannover; Sascha Fahl, CISPA Helmholtz Center for Information Security; Karola Marky, Ruhr-University Bochum and Leibniz University Hannover

Central digitization of health records bears the potential for better patient care, e.g., by having more accurate diagnoses or placing less burden on patients to inform doctors about their medical history. On the flip side, having electronic health records (EHRs) has privacy implications. Hence, the data management infrastructure needs to be designed and used with care. Otherwise, patients might reject the digitization of their records, or the data might be misused. Germany, in particular, is currently introducing centralized EHRs nationwide. We took this effort as a case study and captured privacy mental models of EHRs. We present and discuss findings of an interview study where we investigated expectations towards EHRs and perceptions of the German infrastructure. Most participants were positive but skeptical, yet expressed a variety of misconceptions, especially regarding data exchange with health insurance providers and read-write access to their EHRs. Based on our results, we make recommendations for digital infrastructure providers, such as developers, system designers, and healthcare providers.

2:45 pm–3:15 pm

Break with Refreshments: Ice Cream Social

Platinum 5-6 Foyer

3:15 pm–4:15 pm

Future Internet/Smart Home, the Metaverse, and AI

Session Chair: Scott Ruoti, The University of Tennessee

"Nobody's Happy": Design Insights from Privacy-Conscious Smart Home Power Users on Enhancing Data Transparency, Visibility, and Control

Sunyup Park and Anna Lenhart, University of Maryland, College Park; Michael Zimmer, Marquette University; Jessica Vitak, University of Maryland, College Park

As smart home technologies continue to grow in popularity and diversity, they raise important questions regarding ways to increase awareness about data collection practices and empower users to better manage data flows. In this paper, we share insights from 32 privacy-conscious smart home power users—individuals who have invested significant time, money, and technological prowess in customizing their smart home setup to maximize utility and meet privacy and security needs. We explore the drawbacks and limitations power users experience when balancing privacy goals with interoperability, customizability, and usability considerations, and we detail their design ideas to enhance and extend data transparency, visibility, and control. We conclude by discussing the importance of designing smart home technologies that both address these considerations and empower a wide range of users to make more informed decisions about whether and how to implement smart technologies in their homes, as well as the wider need for greater regulation of technologies that collect significant user data.

Exploring the Usability, Security, and Privacy of Smart Locks from the Perspective of the End User

Hussein Hazazi and Mohamed Shehab, University of North Carolina at Charlotte

Smart home devices have recently become a sought-after commodity among homeowners worldwide. Among these, smart locks have experienced a marked surge in market share, largely due to their role as a primary safeguard for homes and personal possessions. Various studies have delved into users' apprehensions regarding the usability, security, and privacy aspects of smart homes. However, research specifically addressing these facets concerning smart locks has been limited. To bridge this research gap, we undertook a semi-structured interview study with 29 participants, each of whom had been using smart locks for a minimum of two months. Our aim was to uncover insights regarding any usability, security, or privacy concerns related to smart locks, drawing from participants' firsthand experiences. Our findings were multifaceted, shedding light on the mitigation strategies users employ to address their security and privacy concerns. Moreover, we investigated the lack of concern exhibited by some participants regarding certain security or privacy risks associated with the use of smart locks, and delved into the reasons underpinning such indifference.

"There will be less privacy, of course": How and why people in 10 countries expect AI will affect privacy in the future

Patrick Gage Kelley, Google; Celestina Cornejo and Lisa Hayes, Ipsos; Ellie Shuo Jin, Aaron Sedley, Kurt Thomas, Yongwei Yang, and Allison Woodruff, Google

The public has many concerns and fears regarding artificial intelligence (AI). Some are general or existential, while others are more specific with personal repercussions, like weakened human relationships, job loss, and further erosion of privacy. In this work, we provide a deeper understanding of how AI privacy concerns are taking shape. We surveyed public opinion of AI's expected effects on privacy with 10,011 respondents spanning ten countries and six continents. We identify four main themes regarding how the public believes AI impacts privacy: vulnerability of data, highly personal data and inference, lack of consent, and surveillance and government use. Unlike many aspects of AI and algorithmic literacy, for which public perception is often reported to be riddled with inconsistency and misconceptions, these privacy concerns are well-reasoned and broadly aligned with expert narratives. Based on our findings, we provide a roadmap of public priorities to help guide researchers and the broader community in exploring solutions that ameliorate AI's impact on privacy, and to inform efforts related to civic participation.

Investigating Security Indicators for Hyperlinking Within the Metaverse

Maximiliane Windl, LMU Munich & Munich Center for Machine Learning (MCML); Anna Scheidle, LMU Munich; Ceenu George, University of Augsburg & TU Berlin; Sven Mayer, LMU Munich & Munich Center for Machine Learning (MCML)

Security indicators, such as the padlock icon indicating SSL encryption in browsers, are established mechanisms to convey secure connections. Currently, such indicators mainly exist for browsers and mobile environments. With the rise of the metaverse, we investigate how to mark secure transitions between applications in virtual reality to so-called sub-metaverses. For this, we first conducted in-depth interviews with domain experts (N=8) to understand the general design dimensions for security indicators in virtual reality (VR). Using these insights and considering additional design constraints, we implemented the five most promising indicators and evaluated them in a user study (N=25). While the visual blinking indicator placed in the periphery performed best regarding accuracy and task completion time, participants subjectively preferred the static visual indicator above the portal. Moreover, the latter received high scores regarding understandability while still being rated low regarding intrusiveness and disturbance. Our findings contribute to a more secure and enjoyable metaverse experience.

4:15 pm–4:30 pm

Closing Remarks

General Chairs: Patrick Gage Kelley, Google, and Apu Kapadia, Indiana University Bloomington