SOUPS 2019 Technical Sessions

View the Full Schedule

The full SOUPS 2019 schedule, including the workshops, is available on the Program at a Glance page.
All sessions will be held in Grand Ballroom V unless otherwise noted.

Proceedings Front Matter
Proceedings Cover | Title Page and List of Organizers | Table of Contents | Message from the Program Co-Chairs

Full Proceedings PDFs
 SOUPS 2019 Full Proceedings (PDF)
 SOUPS 2019 Proceedings Interior (PDF, best for mobile devices)
 SOUPS 2019 Errata Slip (PDF)

Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)

Attendee Files 
SOUPS 2019 Attendee List (PDF)
SOUPS 2019 Proceedings Web Archive (ZIP)

Sunday, August 11

6:00 pm–7:00 pm

SOUPS 2019 Poster Session

Grand Ballroom Foyer

Check out the cool new ideas and the latest preliminary research on display at the SOUPS 2019 Poster Session. View the list of accepted posters.

Monday, August 12

8:00 am–9:00 am

Continental Breakfast

Grand Ballroom Foyer

9:00 am–9:30 am

Opening Remarks and Awards

SOUPS 2019 General Chair: Heather Richter Lipford, University of North Carolina at Charlotte

9:30 am–10:30 am

Keynote Address

Investigative Journalism in the Digital Age: Privacy, Security and the Media

Jennifer Valentino-DeVries, The New York Times

Available Media

The ability to effectively interrogate our society’s power structure — a key function of a free press — increasingly requires technological understanding. Major digital platforms have deeply affected our economy and our public discourse. Government and corporate surveillance has transformed our sense of privacy and our system of justice. Security flaws put businesses, personal data and even our democratic process at risk.

Journalists and technologists have traditionally operated in separate spheres, but collaborative work can enable a more accurate public understanding of these problems and can bring more accountability for privacy and security problems. Technologists have a crucial role to play in advancing journalistic investigations in the digital age, particularly by contributing empirical evidence to a narrative structure.

This talk will draw on examples of successful joint work in explaining how to work with reporters to bring privacy and security issues to the public’s attention.

Jennifer Valentino-DeVries, The New York Times

Jennifer Valentino-DeVries is a reporter in the investigative group at The New York Times, specializing in technology coverage. She was part of a team that recently won a Polk award and was a finalist for the Pulitzer Prize in National Reporting, for coverage of major technology companies and the hidden costs they present to users and society.

Before joining The Times, she worked at The Wall Street Journal for more than a decade, as an interactive producer, reporter and member of the investigative unit. Her reporting has focused on technology, privacy, computer security and the law. She was a key reporter behind the Journal’s long-running series on digital privacy, “What They Know,” which won a Gerald Loeb award and was a Pulitzer finalist in 2012. She shared an Overseas Press Club award for coverage of companies that enable censorship and surveillance by repressive regimes.

After leaving The Journal at the end of 2016, Ms. Valentino-DeVries helped launch the Knight First Amendment Institute at Columbia University. She also has reported on Facebook for ProPublica.

Ms. Valentino-DeVries graduated from the University of Texas at Austin and has a master’s degree from the Woodrow Wilson School of Public and International Affairs at Princeton University.

10:30 am–11:00 am

Break with Refreshments

Grand Ballroom Foyer

11:00 am–12:30 pm

Populations and Scales

Session Chair: Jose Such, King's College London

Cooperative Privacy and Security: Learning from People with Visual Impairments and Their Allies

Jordan Hayes, Smirity Kaushik, Charlotte Emily Price, and Yang Wang, Syracuse University

Available Media

To better inform privacy/security designs for people with disabilities, we “shadowed” people with visual impairments and their allies (e.g., friends, family members, and professional helpers) for two days, followed by an exit interview. Our study results provide rich and nuanced accounts of how people with visual impairments enact their privacy/security in daily life, influenced both by their interactions with their allies and by multiple (marginalized) dimensions of their identities, such as different disabilities. We also found that people with visual impairments often work closely with their allies to protect their privacy and security in a cooperative manner. However, they were also thoughtful about whom they would ask for help, in part for privacy reasons, even when the potential helpers were trusted family members. We discuss ideas for future research and design, particularly the need to design mechanisms or tools that facilitate cooperative privacy management (e.g., between people with visual impairments and their allies).

Privacy and Security Threat Models and Mitigation Strategies of Older Adults

Alisa Frik, International Computer Science Institute (ICSI) and University of California, Berkeley; Leysan Nurgalieva, University of Trento; Julia Bernd, International Computer Science Institute (ICSI); Joyce Lee, University of California, Berkeley; Florian Schaub, University of Michigan; Serge Egelman, International Computer Science Institute (ICSI) and University of California, Berkeley

Available Media

Older adults (65+) are becoming primary users of emerging smart systems, especially in health care. However, these technologies are often not designed for older users and can pose serious privacy and security concerns due to their novelty, complexity, and propensity to collect and communicate vast amounts of sensitive information. Efforts to address such concerns must build on an in-depth understanding of older adults' perceptions and preferences about data privacy and security for these technologies, and must account for variance in physical and cognitive abilities. In semi-structured interviews with 46 older adults, we identified a range of complex privacy and security attitudes and needs specific to this population, along with common threat models, misconceptions, and mitigation strategies. Our work adds depth to current models of how older adults' limited technical knowledge, experience, and age-related declines in ability amplify vulnerability to certain risks; we found that health, living situation, and finances play a notable role as well. We also found that older adults often experience usability issues or technical uncertainties in mitigating those risks, and that managing privacy and security concerns frequently consists of limiting or avoiding technology use. We recommend educational approaches and usable technical protections that build on seniors' preferences.

Evaluating Users’ Perceptions about a System’s Privacy: Differentiating Social and Institutional Aspects

Oshrat Ayalon and Eran Toch, Tel Aviv University

Available Media

System design has a crucial effect on users’ privacy, but privacy-by-design processes in organizations rarely involve end-users. To bridge this gap, we investigate how User-Centered Design (UCD) concepts can be used to test how users perceive their privacy in system designs. We describe a series of three online experiments, with 1,313 participants overall, in which we attempt to develop and validate the reliability of a scale for Users’ Perceived Systems’ Privacy (UPSP). We found that users’ privacy perceptions of information systems consist of three distinctive aspects: institutional, social and risk. We combined our scale with A/B testing methodology to compare different privacy design variants for given background scenarios. Our results show that the methodology and the scale are mostly applicable for evaluating the social aspects of privacy designs.

A Self-Report Measure of End-User Security Attitudes (SA-6)

Cori Faklaris, Laura Dabbish, and Jason I. Hong, Carnegie Mellon University

Available Media

We present SA-6, a six-item scale for assessing people’s security attitudes that we developed by following standardized processes for scale development. We identify six scale items based on theoretical and empirical research with sufficient response variance, reliability, and validity in a combined sample (N = 478) from Amazon Mechanical Turk and a university-based study pool. We validate the resulting measure with a U.S. Census-tailored Qualtrics panel (N = 209). SA-6 significantly associates with self-report measures of behavior intention and recent secure behaviors. Our work contributes a lightweight method for (1) quantifying and comparing people’s attitudes toward using recommended security tools and practices, and (2) improving predictive modeling of who will adopt security behaviors.
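
For readers unfamiliar with scale scoring, the sketch below shows how a six-item measure like SA-6 is commonly scored and sanity-checked for reliability. It is a generic illustration, not the paper's analysis code: the response data are invented, and the mean-scoring and Cronbach's-alpha choices are assumptions.

    import statistics as st

    # Hypothetical responses: rows = participants, columns = six Likert items (1-5).
    responses = [
        [4, 5, 4, 3, 4, 4],
        [2, 2, 3, 2, 1, 2],
        [5, 4, 4, 5, 5, 4],
        [3, 3, 2, 3, 3, 3],
    ]

    # A participant's scale score is commonly the mean of the item responses.
    scores = [st.mean(row) for row in responses]

    # Cronbach's alpha, a standard internal-consistency (reliability) check:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    k = len(responses[0])
    item_vars = [st.variance(list(col)) for col in zip(*responses)]
    total_var = st.variance([sum(row) for row in responses])
    alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    print([round(s, 2) for s in scores], round(alpha, 2))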

12:30 pm–1:45 pm

Monday Luncheon

Terra Courtyard and Mezzanine

1:45 pm–3:45 pm

Security Behaviors and Experiences

Session Chair: Lynne Coventry, Northumbria University

The Effect of Entertainment Media on Mental Models of Computer Security

Kelsey R. Fulton, Rebecca Gelles, Alexandra McKay, Richard Roberts, Yasmin Abdi, and Michelle L. Mazurek, University of Maryland

Available Media

When people inevitably need to make decisions about their computer-security posture, they rely on their mental models of threats and potential targets. Research has demonstrated that these mental models, which are often incomplete or incorrect, are informed in part by fictional portrayals in television and film. Inspired by prior research in public health demonstrating that efforts to ensure accuracy in the portrayal of medical situations have had an overall positive effect on public medical knowledge, we explore the relationship between computer security and fictional television and film. We report on a semi-structured interview study (n=19) investigating what users have learned about computer security from mass media and how they evaluate what is and is not realistic within fictional portrayals. In addition to confirming prior findings that television and film shape users' mental models of security, we identify specific misconceptions that appear to align directly with common fictional tropes. We identify specific proxies that people use to evaluate realism and examine how they influence these misconceptions. We conclude with recommendations for security researchers as well as creators of fictional media when considering how to improve people's understanding of computer-security concepts and behaviors.

A Typology of Perceived Triggers for End-User Security and Privacy Behaviors

Sauvik Das, Georgia Institute of Technology; Laura A. Dabbish and Jason I. Hong, Carnegie Mellon University

Available Media

What triggers end-user security and privacy (S&P) behaviors? How do those triggers vary across individuals? When and how do people share their S&P behavior changes? Prior work in usable security and persuasive design suggests that answering these questions is critical if we are to design systems that encourage pro-S&P behaviors. Accordingly, we asked 852 online survey respondents about their most recent S&P behaviors (n = 1947), what led up to those behaviors, and whether they shared those behaviors. We found that social “triggers”, where people interacted with or observed others, were most common, followed by proactive triggers, where people acted in the absence of an external stimulus, and lastly by forced triggers, where people were forced to act. People from different age groups, nationalities, and levels of security behavioral intention (SBI) all varied in which triggers were dominant. Most importantly, people with low-to-medium SBI most commonly reported social triggers. Furthermore, participants were four times more likely to share their behavior changes with others when they, themselves, reported a social trigger.

Replication: "No One Can Hack My Mind": Revisiting a Study on Expert and Non-Expert Security Practices and Advice

Karoline Busse and Julia Schäfer, University of Bonn; Matthew Smith, University of Bonn/Fraunhofer FKIE

Available Media

A 2015 study by Iulia Ion, Rob Reeder, and Sunny Consolvo examined the self-reported security behavior of security experts and non-experts. They also analyzed what kind of security advice experts gave to non-experts and how realistic and effective the experts thought typical advice was.

Now, roughly four years later, we aimed to replicate and extend this study with a similar set of non-experts and a different set of experts. For the non-experts, we recruited 288 MTurk participants, just as Ion et al. did. We also recruited 75 mostly European security experts, in contrast to the mostly US sample from Ion et al. Our findings show that despite the different samples and the four years that have passed, the most common pieces of expert advice are mostly unchanged, with one notable exception. In addition, we did see a fair amount of fluctuation in the long tail of advice. Non-expert self-reported behavior, however, is unchanged, meaning that the gap between experts and non-experts seen in Ion et al.'s work is still just as prominent in our study. To extend the work, we also conducted an A/B study to get a better understanding of one of the key questions concerning experts' recommendations, and we identified types of advice where research by the usable security community is most sorely needed.

"Something isn't secure, but I'm not sure how that translates into a problem": Promoting autonomy by designing for understanding in Signal

Justin Wu, Cyrus Gattrell, Devon Howard, and Jake Tyler, Brigham Young University; Elham Vaziripour, Utah Valley University; Kent Seamons and Daniel Zappala, Brigham Young University

Available Media

Security designs that presume enacting secure behaviors to be beneficial in all circumstances discount the impact of response cost on users' lives and assume that all data is equally worth protecting. However, this has the effect of reducing user autonomy by diminishing the role personal values and priorities play in the decision-making process. In this study, we demonstrate an alternative approach that emphasizes users' comprehension over compliance, with the goal of helping users to make more informed decisions regarding their own security. To this end, we conducted a three-phase redesign of the warning notifications surrounding the authentication ceremony in Signal. Our results show how improved comprehension can be achieved while still promoting favorable privacy outcomes among users. Our experience reaffirms existing arguments that users should be empowered to make personal trade-offs between perceived risk and response cost. We also find that system trust is a major factor in users' interpretation of system determinations of risk, and that properly communicating risk requires an understanding of user perceptions of the larger security ecosystem as a whole.

"I was told to buy a software or lose my computer. I ignored it": A study of ransomware

Camelia Simoiu, Stanford University; Christopher Gates, Symantec; Joseph Bonneau, New York University; Sharad Goel, Stanford University

Available Media

Ransomware has received considerable news coverage in recent years, in part due to several attacks against high-profile corporate targets. Little is known, however, about the prevalence and characteristics of ransomware attacks on the general population, what proportion of users pay, or how users perceive risks and respond to attacks. Using a detailed survey of a representative sample of 1,180 American adults, we estimate that 2–3% of respondents were affected over a 1-year period between 2016 and 2017. The average payment amount demanded was $530, and only a small fraction (about 4%) of affected users reported paying. Perhaps surprisingly, cryptocurrencies were typically only one of several payment options, suggesting that they may not be a primary driver of ransomware attacks. We conclude our analysis by developing a simple proof-of-concept method for risk-assessment based on self-reported security habits.

3:45 pm–4:15 pm

Break with Refreshments

Grand Ballroom Foyer

4:15 pm–5:45 pm

New Paradigms

Session Chair: Adam Aviv, United States Naval Academy

Enhancing Privacy through an Interactive On-demand Incremental Information Disclosure Interface: Applying Privacy-by-Design to Record Linkage

Hye-Chung Kum, Population Informatics Lab, Texas A&M University; Eric D. Ragan, INDIE Lab, University of Florida; Gurudev Ilangovan, Mahin Ramezani, Qinbo Li, and Cason Schmit, Population Informatics Lab, Texas A&M University

Available Media

Achieving the benefits of data science in cases involving personal data requires the use of that data, which results in some privacy risk. Our research investigates approaches to enhance privacy while supporting legitimate access for human decision making by capitalizing on the fact that in most human-computer hybrid systems, only a small fraction of the full data is required for human judgment. We present an interactive visual system for record linkage – a task that requires human decision-making about whether different but similar data records refer to the same person. The system employs an on-demand interactive interface that incrementally discloses partial information only when needed and other feedback mechanisms to promote ethical behavior. We evaluate our approach with a controlled experiment of how different types of feedback and access restrictions affect human decision-making quality, speed, and access behavior. The on-demand interactive interface reduced privacy risk to only 7.85%, compared to 100% when all data is disclosed, with little to no impact on decision quality or completion time. In addition, feedback from an expert review supports the notion that an intermediate level of access other than “all or nothing” can provide better accuracy than no access but more protection than full access.
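
To make the on-demand idea concrete, here is a toy sketch of incremental disclosure during a record-linkage judgment. It is not the authors' system or risk metric; the records, the peek helper, and the fraction-of-cells risk proxy are all invented for illustration.

    # Two candidate records for a linkage decision; all fields start masked.
    record_a = {"name": "Jon Smyth",  "dob": "1984-03-02", "zip": "77843"}
    record_b = {"name": "John Smith", "dob": "1984-03-02", "zip": "77845"}

    disclosed = set()

    def peek(field):
        """Disclose one field pair on demand and log the access."""
        disclosed.add(field)
        return record_a[field], record_b[field]

    # The reviewer requests only what the judgment requires...
    print(peek("dob"))  # exact match: strong evidence of a link
    print(peek("zip"))  # near match: a decision is possible without the names

    # ...so disclosure (a crude proxy for privacy risk) stays well below 100%.
    risk = len(disclosed) / len(record_a)
    print(f"disclosed {risk:.0%} of fields, versus 100% under full disclosure")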

From Usability to Secure Computing and Back Again

Lucy Qin, Andrei Lapets, Frederick Jansen, Peter Flockhart, Kinan Dak Albab, and Ira Globus-Harris, Boston University; Shannon Roberts, University of Massachusetts Amherst; Mayank Varia, Boston University

Available Media

Secure multi-party computation (MPC) allows multiple parties to jointly compute the output of a function while preserving the privacy of any individual party's inputs to that function. As MPC protocols transition from research prototypes to real-world applications, the usability of MPC-enabled applications is increasingly critical to their successful deployment and wide adoption.

Our Web-MPC platform, designed with a focus on usability, has been deployed for privacy-preserving data aggregation initiatives with the City of Boston and the Greater Boston Chamber of Commerce. After building and deploying an initial version of this platform, we conducted a heuristic evaluation to identify additional usability improvements and implemented corresponding application enhancements. However, it is difficult to gauge the effectiveness of these changes within the context of real-world deployments using traditional web analytics tools without compromising the security guarantees of the platform. This work consists of two contributions that address this challenge: (1) the Web-MPC platform has been extended with the capability to collect web analytics using existing MPC protocols, and (2) this capability has been leveraged to conduct a usability study comparing the two versions of Web-MPC (before and after the heuristic evaluation and associated improvements).

While many efforts have focused on ways to enhance the usability of privacy-preserving technologies, this study can serve as a model for using a privacy-preserving data-driven approach in evaluating or enhancing the usability of privacy-preserving websites and applications deployed in real-world scenarios. The data collected in this study yields insights about the interplay between usability and security that can help inform future implementations of applications that employ MPC.
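
For readers new to MPC, the toy sketch below shows the core idea behind one classic building block, additive secret sharing, in which parties jointly compute a sum without revealing their individual inputs (assuming semi-honest parties and no collusion). It is only a didactic illustration, not the protocol Web-MPC deploys.

    import secrets

    P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

    def share(value, n_parties):
        """Split `value` into n random additive shares that sum to it mod P."""
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    # Three parties each hold a private input (e.g., a wage figure).
    inputs = [52_000, 61_000, 48_000]

    # Each party splits its input and hands one share to every party;
    # column j then holds the shares in party j's possession.
    all_shares = [share(x, 3) for x in inputs]
    held_by_party = list(zip(*all_shares))

    # Each party publishes only the sum of the shares it holds; any single
    # column is uniformly random and reveals nothing about any one input.
    partial_sums = [sum(col) % P for col in held_by_party]

    # The published partial sums reconstruct the aggregate, and nothing more.
    print(sum(partial_sums) % P)  # 161000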

Certified Phishing: Taking a Look at Public Key Certificates of Phishing Websites

Vincent Drury and Ulrike Meyer, Department of Computer Science, RWTH Aachen University

Available Media

The share of phishing websites using HTTPS has been increasing constantly in recent years. As a consequence, the simple user advice to check whether a website is HTTPS-protected is no longer effective against phishing. At the same time, the use of certificates in the context of phishing raises the question of whether the information contained in them could be used to detect phishing websites. In this paper, we take a first step towards answering this question. To this end, we analyze almost 10,000 valid certificates queried from phishing websites and compare them to almost 40,000 certificates collected from benign sites. Our analysis shows that it is generally impossible to differentiate between benign sites and phishing sites based on the content of their certificates alone. However, we present empirical evidence that current phishing websites for popular targets typically do not replicate the issuer and subject information of the legitimate sites they imitate.
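
As a hedged sketch of the kind of data the paper examines, the snippet below fetches a site's certificate and flattens its subject and issuer fields using Python's standard ssl module. The get_cert_fields helper is hypothetical, and this is not the authors' measurement pipeline.

    import socket
    import ssl

    def get_cert_fields(hostname, port=443, timeout=5):
        """Fetch a site's validated certificate; return subject and issuer dicts."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()  # parsed dict for the validated cert
        flatten = lambda rdns: {k: v for rdn in rdns for (k, v) in rdn}
        return flatten(cert["subject"]), flatten(cert["issuer"])

    subject, issuer = get_cert_fields("example.org")
    # Domain-validated certificates, common on phishing sites, usually carry
    # only a commonName; an impersonated organization's name rarely appears.
    print(subject.get("commonName"), "|", issuer.get("organizationName"))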

Lightning Talks

  • Everything You Need to Know About Secured Usable Design can be explained with Star Wars
    Ann-Marie Horcher, Central Michigan University
  • Preliminary Findings on Behavioral Aspects of Socially Engineered Payment Diversion Fraud
    Alana Maurushat, Western Sydney University, Australia
  • On the Road to Digital Inclusion: Designing Inclusive Security and Privacy Alerts
    Pavithren V.S. Pakianathan, Singapore University of Technology and Design
  • An Early UDS Evaluation of the Android Security Key Scheme
    Robbie MacGregor, Dalhousie University
Available Media

5:45 pm–6:45 pm

Monday Happy Hour

Terra Courtyard

Sponsored by Carnegie Mellon University Privacy Engineering
Mingle with other attendees while enjoying snacks and beverages. Attendees of all co-located events taking place on Monday are welcome.

Tuesday, August 13

8:00 am–9:00 am

Continental Breakfast

Grand Ballroom Foyer

9:00 am–10:30 am

Developers and Sysadmins

Session Chair: Joe Calandrino, Federal Trade Commission

"We Can't Live Without Them!" App Developers' Adoption of Ad Networks and Their Considerations of Consumer Risks

Abraham H. Mhaidli, Yixin Zou, and Florian Schaub, University of Michigan School of Information

Available Media

Mobile ads pose privacy and security risks to consumers, including behavior tracking, malware, and inappropriate or biased content. Advertising networks connect mobile app developers with advertisers, enabling in-app advertising. We conducted a mixed-methods study with mobile app developers, consisting of a survey and semi-structured interviews, to better understand why and how they partner with advertising networks, and their considerations of consumer risks in those interactions. Our findings focus on app developers who work independently or in smaller companies. We find that developers use advertising because they see it as the only viable way to monetize their app. Developers mostly choose an advertising network based on perceptions of which ad networks are popular rather than a holistic assessment. Despite claims of optimizing for profitability or consumer well-being, developers largely keep ad networks' default configurations. Developers are resigned to ad-related consumer risks, seeing themselves as unable to and not responsible for addressing the risks. Based on our findings, we discuss recommendations for mitigating consumer risks of mobile advertising.

Usability Smells: An Analysis of Developers’ Struggle With Crypto Libraries

Nikhil Patnaik, Joseph Hallett, and Awais Rashid, University of Bristol

Available Media

Green and Smith propose ten principles to make cryptography libraries more usable, but to what extent do the libraries implement these principles? We undertook a thematic analysis of over 2,400 questions and responses from developers seeking help with seven cryptography libraries on Stack Overflow, identifying 17 underlying usability issues and studying how prevalent they were across the seven libraries. Mapping our usability issues to Green and Smith's usability principles, we identify four usability smells where the principles are not being observed. We suggest improvements that library developers can make to reduce these smells and make cryptography libraries more usable for all.
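
To illustrate the kind of struggle such Stack Overflow threads revolve around, here is a hedged Python example using the cryptography package (chosen for familiarity; it is not necessarily one of the seven libraries studied). The first block shows a common low-level misuse; the second shows the misuse-resistant high-level interface the same library offers.

    # A recurring question pattern: hand-rolled AES with a hard-coded key and
    # a static IV, and no authentication -- easy to write, easy to get wrong.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = b"0123456789abcdef"  # hard-coded 128-bit key (a smell in itself)
    iv = b"\x00" * 16          # static IV: equal plaintexts give equal ciphertexts
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = enc.update(b"attack at dawn!!") + enc.finalize()  # unauthenticated

    # The safer alternative: Fernet bundles key generation, a random IV,
    # and authentication behind one small interface.
    from cryptography.fernet import Fernet

    f = Fernet(Fernet.generate_key())
    token = f.encrypt(b"attack at dawn")
    assert f.decrypt(token) == b"attack at dawn"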

System Administrators Prefer Command Line Interfaces, Don't They? An Exploratory Study of Firewall Interfaces

Artem Voronkov, Leonardo A. Martucci, and Stefan Lindskog, Karlstad University

Available Media

A graphical user interface (GUI) represents the most common option for interacting with computer systems. However, according to the literature, system administrators often favor command line interfaces (CLIs). The goal of our work is to investigate which interfaces system administrators prefer and which they actually utilize in their daily tasks. We collected experiences and opinions from 300 system administrators with the help of an online survey. All our respondents are system administrators who work or have worked with firewalls. Our results show that only 32% of the respondents prefer CLIs for managing firewalls, while the corresponding figure is 60% for GUIs. We report the mentioned strengths and limitations of each interface and the tasks for which the system administrators utilize them. Based on these results, we provide design recommendations for firewall interfaces.

Keepers of the Machines: Examining How System Administrators Manage Software Updates For Multiple Machines

Frank Li, University of California, Berkeley; Lisa Rogers, University of Maryland; Arunesh Mathur, Princeton University; Nathan Malkin, University of California, Berkeley; Marshini Chetty, Princeton University

Awarded Distinguished Paper!

Available Media

Keeping machines updated is crucial for maintaining system security. While recent studies have investigated the software updating practices of end users, system administrators have received less attention. Yet, system administrators manage numerous machines for their organizations, and security lapses at these hosts can lead to damaging attacks. To improve security at scale, we therefore also need to understand how this specific population behaves and how to help administrators keep machines up-to-date.

In this paper, we study how system administrators manage software updates. We surveyed 102 administrators and interviewed 17 in depth to understand their processes and how their methods impact updating effectiveness. We find that system administrators move through five main stages when performing software updates; while these stages are similar to those of end users, they involve significantly different considerations and actions, highlighting the value of focusing specifically on the administrator population. By gathering evidence on how administrators conduct updates, we identify challenges that they encountered and limitations of existing procedures at all stages of the updating process. We observe issues with comprehensively acquiring meaningful information about available updates, effectively testing and deploying updates in a timely manner, recovering from update-induced problems, and interacting with organizational and management influences. Moving forward, we propose directions for future research and community actions that may help system administrators perform updates more effectively.

10:30 am–11:00 am

Break with Refreshments

Grand Ballroom Foyer

11:00 am–1:00 pm

Authentication

Session Chair: Patrick Gage Kelley, Google

Communicating Device Confidence Level and Upcoming Re-Authentications in Continuous Authentication Systems on Mobile Devices

Lukas Mecke, University of Applied Sciences Munich, Munich, Germany and LMU Munich, Munich, Germany; Sarah Delgado Rodriguez and Daniel Buschek, LMU Munich, Munich, Germany; Sarah Prange, University of Applied Sciences Munich, Munich, Germany and Bundeswehr University Munich, Munich, Germany and LMU Munich, Munich, Germany; Florian Alt, Bundeswehr University Munich, Munich, Germany

Available Media

Continuous implicit authentication mechanisms verify users over time. If the device's confidence level (DCL) drops too low, the user is prompted with a re-authentication request, which has been shown to annoy many users due to its unpredictable nature. We address this with a novel approach that enables users to anticipate the need for re-authentication through two indicators: (1) a long-term indicator shows the current DCL and its development over time, and (2) a short-term indicator announces that re-authentication is imminent. In both cases, voluntary re-authentication allows the DCL to be raised and a device lock to be avoided. We tested the indicators in a four-week field study (N=32). Our results show that both indicators were preferred over giving no indication and that the importance and sensitivity of the interrupted task have a strong impact on user annoyance. Voluntary re-authentications were perceived as positive.
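
A minimal sketch of the indicator logic follows, assuming an exponentially weighted confidence score and illustrative thresholds; the paper does not prescribe a particular scoring rule, so all numbers here are invented.

    def update_dcl(dcl, match_score, weight=0.5):
        """Blend the newest behavioural match score (0..1) into the DCL."""
        return (1 - weight) * dcl + weight * match_score

    WARN_AT, LOCK_AT = 0.5, 0.3  # assumed thresholds, for illustration only

    dcl = 0.9
    for score in [0.8, 0.6, 0.4, 0.2, 0.1]:  # increasingly atypical behaviour
        dcl = update_dcl(dcl, score)
        if dcl < LOCK_AT:
            print(f"DCL={dcl:.2f}: device locks; re-authentication required")
        elif dcl < WARN_AT:
            print(f"DCL={dcl:.2f}: short-term indicator fires; the user can "
                  "re-authenticate voluntarily to raise the DCL")
        else:
            print(f"DCL={dcl:.2f}: long-term indicator just shows the trend")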

Exploring Intentional Behaviour Modifications for Password Typing on Mobile Touchscreen Devices

Lukas Mecke, University of Applied Sciences Munich, Munich, Germany and LMU Munich, Munich, Germany; Daniel Buschek and Mathias Kiermeier, LMU Munich, Munich, Germany; Sarah Prange, University of Applied Sciences Munich, Munich, Germany and Bundeswehr University Munich, Munich, Germany and LMU Munich, Munich, Germany; Florian Alt, Bundeswehr University Munich, Munich, Germany

Available Media

Behavioural biometric systems are based on the premise that human behaviour is hard to intentionally change and imitate. So far, changing input behaviour has been studied with the goal of supporting mimicry attacks. Going beyond attacks, this paper presents the first study on understanding users’ ability to modify their typing behaviour when entering passwords on smartphones. In a prestudy (N=114), we developed visual text annotations to communicate modifications of typing behaviour (for example, the gap between letters indicates how quickly to move between keys). In a lab study (N=24), participants entered given passwords with such modification instructions on a smartphone in two sessions a week apart. Our results show that users successfully control and modify typing features (flight time, hold time, touch area, touch-to-key offset), yet certain combinations are challenging. We discuss implications for usability and security of mobile passwords, such as informing behavioural biometrics for password entry, and extending the password space through explicit modifications.
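
The timing features named above are standard in keystroke dynamics. As a hedged illustration (invented timestamps; the paper's exact feature definitions may differ in detail), they can be computed like this:

    # Hypothetical key events for typing "pass": (key, press_ms, release_ms).
    events = [
        ("p", 0,   95),
        ("a", 180, 260),
        ("s", 420, 505),
        ("s", 700, 790),
    ]

    # Hold time: how long each key stays pressed.
    hold_times = [release - press for _, press, release in events]

    # Flight time: gap between releasing one key and pressing the next --
    # the feature the study's "gap between letters" annotation targets.
    flight_times = [events[i + 1][1] - events[i][2]
                    for i in range(len(events) - 1)]

    print(hold_times)    # [95, 80, 85, 90]
    print(flight_times)  # [85, 160, 195]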

Why people (don't) use password managers effectively

Sarah Pearman, Shikun Aerin Zhang, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University

Available Media

Security experts often recommend using password-management tools that both store passwords and generate random passwords. However, research indicates that only a small fraction of users use password managers with password generators. Past studies have explored factors in the adoption of password managers using surveys and online store reviews. Here we describe a semi-structured interview study with 30 participants that allows us to provide a more comprehensive picture of the mindsets underlying adoption and effective use of password managers and password-generation features. Our participants include users who use no password-specific tools at all, those who use password managers built into browsers or operating systems, and those who use separately installed password managers. Furthermore, past field data has indicated that users of built-in, browser-based password managers more often use weak and reused passwords than users of separate password managers that have password generation available by default. Our interviews suggest that users of built-in password managers may be driven more by convenience, while users of separately installed tools appear more driven by security. We advocate tailored designs for these two mentalities and provide actionable suggestions to induce effective password manager usage.

Of Two Minds about Two-Factor: Understanding Everyday FIDO U2F Usability through Device Comparison and Experience Sampling

Stéphane Ciolino, OneSpan Innovation Centre & University College London; Simon Parkin, University College London; Paul Dunphy, OneSpan Innovation Centre

Available Media

Security keys are phishing-resistant two-factor authentication (2FA) tokens based upon the FIDO Universal 2nd Factor (U2F) standard. Prior research on security keys has revealed intuitive usability concerns, but there are open challenges to better understand user experiences with heterogeneous devices and to determine an optimal user experience for everyday Web browsing. In this paper we contribute to the growing usable security literature on security keys through two user studies: (i) a lab-based study evaluating the first-time user experience of a cross-vendor set of security keys and SMS-based one-time passcodes; (ii) a diary study, where we collected 643 entries detailing how participants accessed accounts and experienced one particular security key over the period of one week. In the former, we discovered that user sentiment towards SMS codes was typically higher than towards security keys generally. In the latter, we discovered that only 28% of accesses to security key-enabled online accounts actually involved a button press on a security key. Our findings confirm prior work reporting user uncertainty about the benefits and security purpose of security keys. We conclude that this can be partly explained by users' experience with the online services that support security keys, which may itself nudge users away from regular use of those keys.

A Usability Study of Five Two-Factor Authentication Methods

Ken Reese, Trevor Smith, Jonathan Dutson, Jonathan Armknecht, Jacob Cameron, and Kent Seamons, Brigham Young University

Available Media

Two-factor authentication (2FA) defends against account compromise. An account secured with 2FA typically requires an individual to authenticate using something they know—typically a password—as well as something they have, such as a cell phone or hardware token. Many 2FA methods in widespread use today have not been subjected to adequate usability testing. Furthermore, previous 2FA usability research is difficult to compare due to widely-varying contexts across different studies. We conducted a two-week, between-subjects usability study of five common 2FA methods with 72 participants, collecting both quantitative and qualitative data. Participants logged into a simulated banking website nearly every day using 2FA and completed an assigned task. Participants generally gave high marks to the methods studied, and many expressed an interest in using 2FA to provide more security for their sensitive online accounts. We also conducted a within-subjects laboratory study with 30 participants to assess the general usability of the setup procedure for the five methods. While a few participants experienced difficulty setting up a hardware token and a one-time password, in general, users found the methods easy to set up.

1:00 pm–2:15 pm

Tuesday Luncheon

Terra Courtyard and Mezzanine

2:15 pm–3:45 pm

Personal Privacy

Session Chair: Manya Sleeper, Google

Personal Information Leakage by Abusing the GDPR 'Right of Access'

Mariano Di Martino and Pieter Robyns, Hasselt University/tUL, Expertise Centre For Digital Media; Winnie Weyts, Hasselt University - Law Faculty; Peter Quax, Hasselt University/tUL, Expertise Centre For Digital Media, Flanders Make; Wim Lamotte, Hasselt University/tUL, Expertise Centre For Digital Media; Ken Andries, Hasselt University - Law Faculty, Attorney at the Brussels Bar

Available Media

The General Data Protection Regulation (GDPR) 'Right of Access' grants (European) natural persons the right to request and access all their personal data that is being processed by a given organization. Verifying the identity of the requester is an important aspect of this process, since it is essential to prevent data leaks to unauthorized third parties (e.g. criminals). In this paper, we evaluate the verification process as implemented by 55 organizations from the domains of finance, entertainment, retail, and others. To this end, we attempt to impersonate targeted individuals who have their data processed by these organizations, using only forged or publicly available information extracted from social media and the like. We show that policies and practices regarding the handling of GDPR data requests vary significantly between organizations and can often be manipulated using social engineering techniques. For 15 of the 55 organizations, we were able to successfully impersonate a subject and obtain full access to their personal data. The leaked personal data contained a wide variety of sensitive information, including financial transactions, website visits, and physical location history. Finally, we also suggest a number of practical policy improvements that organizations can implement to minimize the risk of personal information leakage to unauthorized third parties.

An Empirical Analysis of Data Deletion and Opt-Out Choices on 150 Websites

Hana Habib, Carnegie Mellon University; Yixin Zou, University of Michigan; Aditi Jannu, Neha Sridhar, Chelse Swoopes, Alessandro Acquisti, Lorrie Faith Cranor, and Norman Sadeh, Carnegie Mellon University; Florian Schaub, University of Michigan

Available Media

Many websites offer visitors privacy controls and opt-out choices, either to comply with legal requirements or to address consumer privacy concerns. The way these control mechanisms are implemented can significantly affect individuals’ choices and their privacy outcomes. We present an extensive content analysis of a stratified sample of 150 English-language websites, assessing the usability and interaction paths of their data deletion options and opt-outs for email communications and targeted advertising. This heuristic evaluation identified substantial issues that likely make exercising these privacy choices on many websites difficult and confusing for US-based consumers. Even though the majority of analyzed websites offered privacy choices, they were located inconsistently across websites. Furthermore, some privacy choices were rendered unusable by missing or unhelpful information, or by links that did not lead to the stated choice. Based on our findings, we provide insights for addressing usability issues in the end-to-end interaction required to effectively exercise privacy choices and controls.

The Fog of Warnings: How Non-essential Notifications Blur with Security Warnings

Anthony Vance, Temple University; David Eargle, University of Colorado Boulder; Jeffrey L. Jenkins, C. Brock Kirwan, and Bonnie Brinton Anderson, Brigham Young University

Available Media

Adherence to security warnings continues to be an important problem in information security. Although users may fail to heed a security warning for a variety of reasons, a major contributor is habituation, a decreased response to repeated stimulation. However, the scope of this problem may actually be much broader than previously thought because of the neurobiological phenomenon of stimulus generalization. Whereas habituation describes a diminished response to repetitions of the same stimulus, generalization occurs when habituation to one stimulus carries over to other novel stimuli that are similar in appearance.

Generalization has important implications for the domains of usable security and human–computer interaction. Because a basic principle of user interface design is visual consistency, generalization suggests that through exposure to frequent non-security-related notifications (e.g., dialogs, alerts, confirmations, etc.) that share a similar look and feel, users may become deeply habituated to critical security warnings that they have never seen before. Further, with the increasing number of notifications in our lives across a range of mobile, Internet of Things, and computing devices, the accumulated effect of generalization may be substantial. However, this problem has not been empirically examined before.

This paper contributes by measuring the impacts of generalization in terms of (1) diminished attention via mouse cursor tracking and (2) users’ ability to behaviorally adhere to security warnings. Through an online experiment, we find that:

  • Habituation to a frequent non-security-related notification does carry over to a one-time security warning.
  • Generalization of habituation is manifest both in (1) decreased attention to warnings and (2) lower warning adherence behavior.
  • Most importantly, the carry-over effect is due to generalization, not fatigue.
  • The degree to which generalization occurs depends on the similarity in look and feel between a notification and a warning.

These findings open new avenues of research and provide guidance to software developers for creating warnings that are more resistant to the effects of generalization of habituation, thereby improving users’ security warning adherence.

Lightning Talks

  • Warrants are the new black: location data and the Fourth Amendment
    Naomi Shiffman, Mozilla Corporation/University of California, Berkeley
  • A privacy- and security-focused notification system
    Jim Fenton, Altmode Networks
  • Growing a UX Team in a Privacy & Security Organization
    Jim Lane, Virtru
  • Social interventions to reduce the privacy paradox
    Isha Ghosh, Rutgers University
Available Media

3:45 pm–4:15 pm

Break with Refreshments

Grand Ballroom Foyer

4:15 pm–5:45 pm

Wearables and Smart Homes

Session Chair: Florian Schaub, University of Michigan

“There is nothing that I need to keep secret”: Sharing Practices and Concerns of Wearable Fitness Data

Abdulmajeed Alqhatani and Heather Richter Lipford, University of North Carolina at Charlotte

Available Media

There has been increasing use of commercial wearable devices for tracking fitness-related activities in the past few years. These devices sense and collect a variety of personal health and fitness data, which can be shared by users with different audiences. Yet, little is known about users’ practices for sharing information collected by these devices, and the concerns they have when disclosing this information across a variety of platforms. In this study, we conducted 30 semi-structured interviews with wearable fitness device users to understand their sharing intentions and practices, and to examine what they do to manage their privacy. We describe a set of common goals for sharing health and fitness information, which then influence users’ choices of the recipients and the specific practices they employ to share that information. Our findings indicate that participants were primarily concerned about acceptable norms and self-presentation rather than the sensitivity of the information. Our results provide a set of common goals and practices which can inspire new applications and help improve existing platforms for sharing sensed fitness information.

"I don't own the data": End User Perceptions of Smart Home Device Data Practices and Risks

Madiha Tabassum, University of North Carolina at Charlotte; Tomasz Kosinski, Chalmers University of Technology; Heather Lipford, University of North Carolina at Charlotte

Available Media

Smart homes are more connected than ever before, with a variety of commercial devices available. The use of these devices introduces new security and privacy risks in the home and creates a need to help users understand and mitigate those risks. However, we still know little about how everyday users understand the data practices of smart home devices, and about their concerns and behaviors regarding those practices. To bridge this gap, we conducted a semi-structured interview study with 23 smart home users to explore what people think about smart home device data collection, sharing, and usage practices; how that knowledge affects their perceived security and privacy risks; and the actions they take to resolve those risks. Our results reveal that while people are uncertain about manufacturers' data practices, users' knowledge of their smart home does not strongly influence their threat models and protection behaviors. Instead, users' perceptions and concerns are largely shaped by their experiences in other computing contexts and with organizations. Based on our findings, we provide several recommendations for policymakers, researchers, and designers to improve users' risk awareness and security and privacy practices in the smart home.

More than Smart Speakers: Security and Privacy Perceptions of Smart Home Personal Assistants

Noura Abdi, King's College London; Kopo M. Ramokapane, University of Bristol; Jose M. Such, King's College London

Available Media

Smart Home Personal Assistants (SPA) such as Amazon Echo/Alexa and Google Home/Assistant have made our daily routines much more convenient, allowing us to complete tasks quickly and efficiently using natural language. It is believed that around 10% of consumers around the world already own an SPA, and predictions are that ownership will keep rising. It is therefore paramount to make SPA secure and privacy-preserving. Despite the growing research on SPA security and privacy, little is known about users' security and privacy perceptions concerning the complex SPA ecosystem, which involves several elements and stakeholders. To explore this, we considered the four main use case scenarios, each with distinctive architectural elements and stakeholders involved: using built-in skills, using third-party skills, managing other smart devices, and shopping. We studied these scenarios through semi-structured interviews with SPA users. Using a grounded theory approach, we found that users have incomplete mental models of SPA, leading to different perceptions of where data is being stored, processed, and shared. Users' understanding of the SPA ecosystem is often limited to their household and, at most, the SPA vendor, even when they use third-party skills or manage other smart home devices. This leads to incomplete threat models (few threat agents and types of attacks) and to non-technical coping strategies for protecting themselves. We also found that users are not making the most of the shopping capabilities of SPA due to security and privacy concerns, and that while users perceive SPA as intelligent and capable of learning, they would not want SPA to learn everything about them. Based on these findings, we discuss design recommendations.

Lightning Talks

  • Respect for Norms and Expectations: A Privacy Paradigm for IT Businesses
    Amin Rabinia, University of Maine
  • Wait, that site supports 2FA?! Helping average internet users enable 2FA on sites they visit
    Conor Gilsenan, University of California, Berkeley
  • Improving usability of password authentication using keystroke dynamics
    Walter Berggren, Aalto University
  • Against Trust (in Technology)
    Maggie Oates, Carnegie Mellon University
Available Media

6:00 pm–7:00 pm

Tuesday Happy Hour

Terra Courtyard

Sponsored by Intel
Mingle with other attendees while enjoying snacks and beverages. Attendees of USENIX Security '19 and all co-located events are welcome.

7:00 pm–8:00 pm

SOUPS Town Hall Meeting

The SOUPS Town Hall Meeting is a time for the organizing and steering committees to interact with attendees, listen to concerns, and gather feedback regarding the future of the SOUPS conference and community. Everyone is welcome to attend and participate.