SOUPS 2017 Program

All sessions will be held in Grand Ballroom E–H unless otherwise noted.

The full Proceedings published by USENIX for the conference are available for download below. Individual papers can also be downloaded from the presentation page. Copyright to the individual works is retained by the author[s].

Proceedings Front Matter
Proceedings Cover | Title Page and List of Organizers | Table of Contents | Message from the Program Co-Chairs

Full Proceedings PDFs
 SOUPS 2017 Full Proceedings (PDF)
 SOUPS 2017 Proceedings Interior (PDF, best for mobile devices)
 SOUPS 2017 Errata

Full Proceedings ePub (for iPad and most eReaders)
 SOUPS 2017 Full Proceedings (ePub)

Full Proceedings Mobi (for Kindle)
 SOUPS 2017 Full Proceedings (Mobi)

Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)

Attendee Files 
SOUPS 2017 Attendee List (PDF)
SOUPS 2017 Proceedings Web Archive (ZIP)

Wednesday, July 12, 2017

7:30 am–9:00 am

Continental Breakfast

Grand Ballroom DE Foyer

9:00 am–5:30 pm

9:00 am–12:30 pm

Tutorial: Managing Password Managers: Usability Challenges Introduced by a Usable Security Solution

Grand Ballroom H

Jeffery Goldberg and Pilar Garcia, Agilebits

This tutorial takes the form of a number of guided discussions surrounding the usability challenges presented by password managers. All participants are encouraged to join the discussion.

We start from the position that well-designed password managers are a clear win in terms of usable security for the large majority of people who use numerous websites and services for which they need to authenticate. Although we may look forward to the day when password managers are no longer necessary, they remain among the best practical and usable solutions to the password problem for most people.

But even as password managers serve to mitigate some large usability and security issues, they create their own usability challenges.

Each discussion will take the form of a short presentation by the organizers about the problem, followed by lightning talks from participants and then a guided discussion.

Introductions

9:00 am–9:15 am

  • General Introduction
  • Who's who and what we want out of this tutorial

Session 1: Forgotten Master Passwords

9:15 am–10:00 am

Given the nature of the secrets managed by a password manager, there is a strong requirement for both confidentiality and availability of the data. How can we achieve both in a system that is to be used by non-specialists?

  • Overview: The design choices of 1Password
  • Overview: How well 1Password is working
  • Overview: Recovery mechanisms in 1Password

Discussion Topics:

  1. Recovery
  2. Memorability
  3. User-maintained backups

Session 2: (Mis)communicating Security Properties

10:30 am–11:15 am

Users should be able to use a system effectively and safely without needing to understand the nuances of how it works under the covers. The implicit mental models users associate with system functionality should not encourage behaviors that may harm their security and privacy interests.

  • Presentation: Weakly enforced versus strongly enforced security settings
  • Presentation: The semantics of a master password change aren't what you might think
  • Presentation: When 2FA (2-factor authentication) is not 2FA
  • Discussion followup

Session 3: Words, Words, Words

11:20 am–12:05 pm

Master passwords generated from words or meaningful units chosen at random from a sufficiently long list may be one of the better approaches to strong passwords that users must actually remember and use. But they are not a perfect solution.
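
For readers unfamiliar with the approach, the minimal sketch below shows diceware-style passphrase generation in Python; the short word list and four-word length are placeholder choices for illustration, and real generators draw from much larger curated lists.

    # Minimal sketch: build a master-password candidate from words chosen
    # uniformly at random from a word list (diceware-style). The short list
    # below is a placeholder; real generators use curated lists of thousands
    # of words so each word contributes on the order of 10 or more bits of
    # entropy.
    import math
    import secrets

    WORDS = ["correct", "horse", "battery", "staple", "orchid", "pelican",
             "granite", "velvet", "tundra", "mosaic", "lantern", "quiver"]

    def generate_passphrase(num_words=4, separator="-"):
        return separator.join(secrets.choice(WORDS) for _ in range(num_words))

    if __name__ == "__main__":
        phrase = generate_passphrase()
        bits = 4 * math.log2(len(WORDS))  # entropy of 4 words from this toy list
        print(phrase, f"(~{bits:.1f} bits with this toy list)")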

  • Presentation: From $h!t to scatology: Taboo language isn't the only problem.
  • Presentation: Length, complexity requirements, and other annoyances
  • Discussion followup

Session 4: Continuing Discussions

12:05 pm–12:30 pm

Where do we go from here? What new features should we ask of our password managers? Collaboration with academia? Here we discuss future work and future directions.

10:30 am–11:00 am

Break with Refreshments

Mezzanine

12:30 pm–2:00 pm

Lunch (on your own)

Check out a list of places to eat lunch nearby!

1:00 pm–5:15 pm

Tutorial: An Overview of Usable Privacy Technologies, Tools and Findings coming out of Recent Research at CMU

Grand Ballroom H

Norman Sadeh, Anupam Das, Martin Degeling, and Sebastian Zimmeck, Carnegie Mellon University

This tutorial will provide an overview of technologies, tools and findings coming out of recent research in two projects at Carnegie Mellon University: the Usable Privacy Policy project and the Personalized Privacy Assistant project. The tutorial will combine overviews of research findings and tools with hands-on exercises and group discussions. Beyond sharing recent research findings, the tutorial aims to showcase different ways in which technologies and tools made available by the projects could be used by others in the SOUPS community—be it in the context of their own research and education activities, deployments in corporate or university settings, or regulatory activities.

Session 1: Semi-Automated Extraction of Data Practice Statements from Natural Language Privacy Policies

1:00 pm–2:15 pm

Privacy policies are known to be long and difficult to read and understand. This session will provide an overview of crowdsourcing, machine learning and natural language processing techniques developed to extract data practice statements from privacy policies. It will include a discussion of major findings and also introduce several large-scale data sets and interactive web-based tools released or soon-to-be released to the research community on our Explore website (https://explore.usableprivacy.org/). One of these tools relies on automated annotation techniques to interactively generate privacy reports for any of the Alexa top 10,000 websites, including questions about opt-out choices available to users. The session will also feature some group exercises around the use of the tools and a discussion of opportunities to conduct large-scale analyses of privacy policies.
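
The session abstract does not prescribe a particular pipeline; purely as an illustration of the kind of sentence-level annotation involved, the sketch below trains a bag-of-words classifier to label policy sentences with data-practice categories. The training sentences, labels, and category names are invented placeholders, not the project's data or taxonomy.

    # Rough illustration (not the project's actual pipeline): label
    # privacy-policy sentences with data-practice categories using TF-IDF
    # features and logistic regression. Examples and labels are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    sentences = [
        "We collect your location to provide nearby results.",
        "We share your email address with marketing partners.",
        "You may opt out of interest-based advertising at any time.",
        "We retain usage logs for twelve months.",
    ]
    labels = ["first-party-collection", "third-party-sharing",
              "user-choice", "data-retention"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(sentences, labels)

    print(model.predict(["Advertisers may receive your contact information."]))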

Session 2: Mobile App Privacy Compliance

2:30 pm–3:45 pm

Mobile apps are known to collect a wide variety of information about their users. Research has shown that many apps also fail to comply with basic privacy requirements (Federal, State or international). This session will provide an overview of techniques developed to automatically analyze mobile apps for potential privacy compliance violations. This will include an overview of natural language and machine learning techniques to analyze the text of privacy policies as well as static code analysis techniques to analyze what apps actually do. The tutorial will include a discussion of recent findings using this technology as well as a discussion of a mobile app privacy compliance tool to analyze mobile apps at scale. Participants will be given a chance to play with the tool and vet results it produces. This will include a discussion of possible uses for the tool in research, education, industry and regulatory contexts as well as opportunities for further extensions.
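
As a toy illustration of the compliance-checking idea (comparing what an app requests against what its policy discloses), the sketch below flags Android permissions from a manifest whose associated data types are never mentioned in the policy text. The permission-to-term mapping and sample inputs are simplified placeholders rather than the tool's actual analysis, which relies on static code analysis and machine learning.

    # Toy sketch of a compliance gap check: flag requested permissions whose
    # data types are never mentioned in the privacy policy. Mapping and
    # inputs are simplified placeholders.
    import re
    import xml.etree.ElementTree as ET

    MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
      <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
      <uses-permission android:name="android.permission.READ_CONTACTS"/>
    </manifest>"""

    POLICY = "We collect your precise location to personalize results."

    PERMISSION_TO_TERMS = {
        "android.permission.ACCESS_FINE_LOCATION": ["location"],
        "android.permission.READ_CONTACTS": ["contacts", "address book"],
    }

    ANDROID_NAME = "{http://schemas.android.com/apk/res/android}name"
    requested = [p.attrib[ANDROID_NAME]
                 for p in ET.fromstring(MANIFEST).iter("uses-permission")]

    for perm in requested:
        terms = PERMISSION_TO_TERMS.get(perm, [])
        if not any(re.search(t, POLICY, re.IGNORECASE) for t in terms):
            print(f"Potential gap: {perm} requested but not disclosed in policy")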

Session 3: Personalized Privacy Assistants and Infrastructure for IoT

4:00 pm–5:15 pm

Even on the wired Web people do not read privacy policies and can’t find time to configure their privacy settings. In the Internet of Things (IoT), this challenge is compounded by the fact that users may not even know what technologies they are interacting with and what settings might be available to them (e.g., opt-in/opt-out). Privacy Assistants are intended to help users manage their privacy, selectively informing them about data practices they would likely want to know about and helping configure any available privacy settings. This session will provide an overview of privacy preference modeling techniques and machine learning techniques designed to drive privacy assistants in the context of mobile and IoT scenarios, including a privacy assistant released on the Google Play Store. It will also provide an overview of an IoT infrastructure developed to help resource owners declare the presence of IoT resources and their privacy policies, help IoT Privacy Assistants discover relevant resources and selectively inform users about their data practices. The infrastructure has been deployed at CMU and UC Irvine. The session will include group exercises to populate IoT resource registries and discuss how this infrastructure can now be deployed by others.
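
To make the registry idea concrete, the entry below is a purely hypothetical example of the kind of declaration a resource owner might publish; the field names and values are illustrative and do not reflect the deployed infrastructure's actual schema.

    # Purely hypothetical IoT resource-registry entry; field names are
    # illustrative, not the deployed infrastructure's actual schema.
    resource_entry = {
        "resource_id": "example-bldg-7f-camera-03",
        "owner": "Example Facilities Office",
        "location": {"building": "Example Hall", "floor": 7},
        "data_collected": ["video", "presence"],
        "purpose": "space utilization analytics",
        "retention_days": 30,
        "shared_with_third_parties": False,
        "user_settings": ["request_erasure", "opt_out_of_analytics"],
        "policy_url": "https://example.org/iot/camera-03/policy",
    }

    def summarize(entry):
        """The kind of short notice a privacy assistant might surface."""
        return (f"{entry['resource_id']} collects "
                f"{', '.join(entry['data_collected'])} for {entry['purpose']}; "
                f"retained {entry['retention_days']} days.")

    print(summarize(resource_entry))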

3:30 pm–4:00 pm

Break with Refreshments

Mezzanine

6:00 pm–7:30 pm

SOUPS 2017 Poster Session and Happy Hour

Magnolia Room

Check out the cool new ideas and the latest preliminary research on display at the Poster Session and Reception. Take part in discussions with your colleagues over complimentary food and drinks. View the list of accepted posters.

Thursday, July 13, 2017

7:30 am–8:30 am

Continental Breakfast

Grand Ballroom DE Foyer

8:30 am–9:00 am

Welcome and Awards Presentations

General Chair: Mary Ellen Zurko

  • Distinguished Poster Award
    Presented by Michelle Mazurek, University of Maryland, and Kent Seamons, Brigham Young University
  • Distinguished Paper Award
    Presented by Matthew Smith, University of Bonn, and Sonia Chiasson, Carleton University
  • IAPP SOUPS Privacy Award
    Presented by Matthew Smith, University of Bonn, and Sonia Chiasson, Carleton University
  • John Karat Usable Privacy and Security Student Research Award
    Presented by Dave Crocker, Brandenburg InternetWorking
  • SOUPS 2017 Impact Award
    Presented by Heather Lipford, University of North Carolina at Charlotte

9:00 am–10:00 am

Keynote Address

Session Chair: Mary Ellen Zurko

Better Security and Privacy through Transparency: Lessons from HTTPS and Beyond

Eric Mill, GSA

Available Media

Usable security and privacy isn't just about technical and design work—creating good outcomes on a global scale means getting involved in the politics. We'll examine the relationships and roller coaster politics behind HTTPS, and how transparency and communication are at the core of stabilizing the world's most successful mainstream security protocol. We'll look at how these lessons might apply to other areas of online privacy, and at how the US government is applying them to its own security.

Eric Mill, GSA

Eric is a senior advisor to GSA's Technology Transformation Service, where he helps guide the organization's strategy and policy impact. As part of his work, Eric helps implement the US government's HTTPS-only policy for public web services. Before GSA, Eric worked as an engineer at the Sunlight Foundation, a nonprofit foundation dedicated to government transparency through technology, where he helped develop infrastructure and policy in support of open government.

10:00 am–10:30 am

Break with Refreshments

Grand Ballroom DE Foyer

10:30 am–12:00 pm

Authentication

Session Chair: Serge Egelman, University of California, Berkeley and International Computer Science Institute

Diversify to Survive: Making Passwords Stronger with Adaptive Policies

Sean Segreti, William Melicher, Saranga Komanduri, Darya Melicher, and Richard Shay, Carnegie Mellon University; Blase Ur, University of Chicago; Lujo Bauer, Nicolas Christin, and Lorrie Cranor, Carnegie Mellon University; Michelle Mazurek, University of Maryland

Available Media

Password-composition policies are intended to increase resistance to guessing attacks by requiring certain features (e.g., a minimum length and the inclusion of a digit). Sadly, they often result in users' passwords exhibiting new, yet still predictable, patterns. In this paper, we investigate the usability and security of adaptive password-composition policies, which dynamically change password requirements over time as users create new passwords. We conduct a 2,619-participant between-subjects online experiment to evaluate the strength and usability of passwords created with two adaptive password policies. We also design and test a feedback system that guides users to successfully create a password conforming to these policies. We find that a well-configured, structure-based adaptive password policy can significantly increase password strength with little to no decrease in usability. We discuss how system administrators can use these results to improve password diversity.
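
The abstract leaves the policy mechanics at a high level; one way to picture a structure-based adaptive policy is sketched below, assuming the policy reduces each password to its character-class structure and rejects new passwords whose structure has already become popular. This is an illustrative simplification, not the authors' exact implementation.

    # Illustrative simplification of a structure-based adaptive policy:
    # map each password to its character-class structure (e.g. "Password1!"
    # -> "ULLLLLLLDS") and reject new passwords whose structure is already
    # used by too large a share of existing accounts.
    from collections import Counter

    def structure(password):
        out = []
        for ch in password:
            if ch.islower():
                out.append("L")
            elif ch.isupper():
                out.append("U")
            elif ch.isdigit():
                out.append("D")
            else:
                out.append("S")
        return "".join(out)

    class AdaptivePolicy:
        def __init__(self, max_share=0.01):
            self.counts = Counter()
            self.total = 0
            self.max_share = max_share  # no structure may exceed 1% of accounts

        def allows(self, password):
            if self.total == 0:
                return True
            return self.counts[structure(password)] / self.total < self.max_share

        def register(self, password):
            self.counts[structure(password)] += 1
            self.total += 1

    policy = AdaptivePolicy()
    policy.register("Password1!")       # structure "ULLLLLLLDS" is now taken
    print(policy.allows("Summer99!!"))  # different structure -> True (allowed)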

A Second Look at Password Composition Policies in the Wild: Comparing Samples from 2010 and 2016

Peter Mayer and Jan Kirchner, Technische Universität Darmstadt; Melanie Volkamer, Technische Universität Darmstadt, Karlstad University

Available Media

In this paper we present a replication and extension of the study performed by Florêncio and Herley published at SOUPS 2010. They investigated a sample of US websites, examining different website features' effects on the strength of the website's password composition policy (PCP). Using the same methodology as in the original study, we re-investigated the same US websites to identify differences over time. We then extended the initial study by investigating a corresponding sample of German websites in order to identify differences across countries. Our findings indicate that while the website features mostly retain their predictive power for the US sample, only one feature affecting PCP strength translates to the German sample: whether users can choose among multiple alternative websites providing the same service. Moreover, German websites generally use weaker PCPs and, in particular, German banking websites stand out for having especially weak PCPs.

Is that you, Alice? A Usability Study of the Authentication Ceremony of Secure Messaging Applications

Elham Vaziripour, Justin Wu, Mark O’Neill, Ray Clinton, Jordan Whitehead, Scott Heidbrink, Kent Seamons, and Daniel Zappala, Brigham Young University

Available Media

The effective security provided by secure messaging applications depends heavily on users completing an authentication ceremony, a sequence of manual operations enabling users to verify they are indeed communicating with one another. Unfortunately, evidence to date suggests users are unable to do this. Accordingly, we study in detail how well users can locate and complete the authentication ceremony when they are aware of the need for authentication. We execute a two-phase study involving 36 pairs of participants, using three popular messaging applications with support for secure messaging functionality: WhatsApp, Viber, and Facebook Messenger. The first phase included instruction about potential threats, while the second phase also included instructions about the importance of the authentication ceremony. We find that, across the three apps, the average success rate of finding and completing the authentication ceremony increases from 14% to 79% from the first to second phase, with second-phase success rates as high as 96% for Viber. However, the time required to find and complete the ceremony is undesirably long from a usability standpoint, and our data is inconclusive on whether users make the connection between this ceremony and the security guarantees it brings. We discuss in detail the success rates, task timings, and user feedback for each application, as well as common mistakes and user grievances. We conclude by exploring user threat models, finding significant gaps in user awareness and understanding.

“...better to use a lock screen than to worry about saving a few seconds of time”: Effect of Fear Appeal in the Context of Smartphone Locking Behavior

Yusuf Albayram, Mohammad Maifi Hasan Khan, Theodore Jensen, and Nhan Nguyen, University of Connecticut

Available Media

Using a secure screen lock mechanism is important to prevent unauthorized access and compromise of valuable data stored on smartphones. However, many users still choose not to use any such mechanism and often state inconvenience as the main reason. In this paper, we argue that lack of risk awareness plays an important role behind many users' insecure behavior rather than inconvenience, which can be addressed by communicating risks more effectively. To test this hypothesis, we designed an informational video explaining the risks of unauthorized access to smartphones with no screen lock. We compared a control group (n = 114) to a treatment group that viewed the video (n = 114) in terms of risk perceptions, concerns, and attitudes towards secure screen lock methods. Subsequently, a follow-up study was conducted to see whether the video was effective in changing participants' behavior or not (i.e., enabling a secure screen lock). We found statistically significant differences between treatment and control group participants in terms of perceived severity, risk awareness, response cost, and privacy and security concerns. Furthermore, the follow-up study revealed that significantly more participants in the treatment group enabled a secure screen lock than those in the control group (48/96 (50%) vs. 21/98 (21%), respectively). Finally, we present our analysis of qualitative data to further explain participants' behavior.

12:00 pm–1:30 pm

Lunch (on your own)

Check out a list of places to eat lunch nearby!

1:30 pm–3:00 pm

Specific User Populations

Session Chair: Sascha Fahl, CISPA

End User Security and Privacy Concerns with Smart Homes

Eric Zeng, Shrirang Mare, and Franziska Roesner, University of Washington

Available Media

The Internet of Things is becoming increasingly widespread in home environments. Consumers are transforming their homes into smart homes, with internet-connected sensors, lights, appliances, and locks, controlled by voice or other user-defined automations. Security experts have identified concerns with IoT and smart homes, including privacy risks as well as vulnerable and unreliable devices. These concerns are supported by recent high profile attacks, such as the Mirai DDoS attacks. However, little work has studied the security and privacy concerns of end users who actually set up and interact with today's smart homes. To bridge this gap, we conduct semi-structured interviews with fifteen people living in smart homes (twelve smart home administrators and three other residents) to learn about how they use their smart homes, and to understand their security and privacy related attitudes, expectations, and actions. Among other findings, we identify gaps in threat models arising from limited technical understanding of smart homes, awareness of some security issues but limited concern, ad hoc mitigation strategies, and a mismatch between the concerns and power of the smart home administrator and other people in the home. From these and other findings, we distill recommendations for smart home technology designers and future research.

Security Developer Studies with GitHub Users: Exploring a Convenience Sample

Yasemin Acar, Leibniz University Hannover; Christian Stransky, CISPA, Saarland University; Dominik Wermke, Leibniz University Hannover; Michelle Mazurek, University of Maryland, College Park; Sascha Fahl, CISPA, Saarland University

Available Media

The usable security community is increasingly considering how to improve security decision-making not only for end users, but also for information technology professionals, including system administrators and software developers. Recruiting these professionals for user studies can prove challenging, as, relative to end users more generally, they are limited in numbers, geographically concentrated, and accustomed to higher compensation. One potential approach is to recruit active GitHub users, who are (in some ways) conveniently available for online studies. However, it is not well understood how GitHub users perform when working on security-related tasks. As a first step in addressing this question, we conducted an experiment in which we recruited 307 active GitHub users to each complete the same security-relevant programming tasks. We compared the results in terms of functional correctness as well as security, finding differences in performance for both security and functionality related to the participant's self-reported years of experience, but no statistically significant differences related to the participant's self-reported status as a student, status as a professional developer, or security background. These results provide initial evidence for how to think about validity when recruiting convenience samples as substitutes for professional developers in security developer studies.

Regulators, Mount Up! Analysis of Privacy Policies for Mobile Money Services

Jasmine Bowers, Bradley Reaves, Imani N. Sherman, Patrick Traynor, and Kevin Butler, University of Florida

Available Media

Emerging digital financial services use mobile phones to provide access to populations traditionally excluded from the global economy. These "mobile money" services have proven extremely successful in their first ten years of deployment, and provide a powerful means of raising people out of poverty. Such services have access to a wealth of customer information, potentially including entire purchase histories, geolocation, and social network information. In this paper, we perform the first study of privacy policies in mobile money services, evaluating policies from 54 services and comparing them to 50 policies from traditional financial institutions. Because mobile money services are developed under a wide range of regulatory environments, we compare policies to the industry standard (the GSMA's Mobile Privacy Principles) and to a traditional national standard (the FDIC's Privacy Rule Handbook). Our analysis shows that almost half (44%) of these mobile money services do not have any privacy policy whatsoever. Of the services that do have privacy policies, roughly one-third (33%) fail to provide them in either of the two most common languages of their market. Furthermore, 50% of these policies do not ever identify to the user what data is actually being collected and stored. Finally, we find that where policies do exist, they are often incomplete and difficult for their target customers to read. These findings show that more work is needed to protect consumer privacy within these mobile money services.

A Qualitative Investigation of Bank Employee Experiences of Information Security and Phishing

Dan Conway, Ronnie Taib, Mitch Harris, Kun Yu, Shlomo Berkovsky, and Fang Chen, Data61 - CSIRO

Available Media

Staff behaviour is increasingly understood to be an important determinant of an organisation's vulnerability to information security breaches. In parallel to the HCI and CSCW literature, models drawn from cognitive and health psychology have suggested a number of mental variables that predict staff response to security threats. This study began with these models, but engaged in a broader, discovery-orientated, qualitative investigation of how these variables were experienced, interacted subjectively, and what further variables might be of relevance. We conducted in-depth, semi-structured interviews consisting of open and closed questions with staff from a financial services institution under conditions of strict anonymity. Results include a number of findings such as a possible association between highly visible security procedures and low perceptions of vulnerability leading to poor security practices. We also found self-efficacy was a strong determinant of staff sharing stories of negative experiences and variances in the number of non-relevant emails that they process. These findings lead to a richer, deeper understanding of staff experiences in relation to information security and phishing.

3:00 pm–3:30 pm

Lightning Talks

Session Chair: Heather Crawford, Florida Institute of Technology

  • (Demo) TD-CHAIN: A System to Enhance Transparency in Data Flows
    David Sidi, University of Arizona
  • Privacy-preserving Information Mediation for Enterprises
    Karen Myers, SRI International
  • Security Metrics for the Electric Sector
    Candace Suh-Lee, Electric Power Research Institute
  • The Keepers of PII: Roles, Rules, and Responsibilities in the Workplace
    Brian Stanton, National Institute of Standards and Technology

3:30 pm–4:00 pm

Break with Refreshments

Grand Ballroom DE Foyer

4:00 pm–5:30 pm

Mobile

Session Chair: Robert Reeder, Google

DigiTally: Piloting Offline Payments for Phones

Khaled Baqer and Ross Anderson, Computer Laboratory, University of Cambridge; Lorna Mutegi, Strathmore University; Jeunese Adrienne Payne, Independent Consultant; Joseph Sevilla, Strathmore University

Available Media

Mobile payments support a range of services in many less developed countries including everyday payments, migrant remittances, credit, tax collection, and welfare benefits. These services depend entirely on the mobile phone network as their carrier, so they stop where the network does. This leaves millions of the very poorest people stranded — people living in remote areas where there is little to no network service. It also leaves urban users at the mercy of network congestion.

We developed a prototype system, DigiTally, which lets users make offline payments by copying short strings of digits from one mobile handset to another. Offline payments are already used for electricity (both in prepayment meters and pay-as-you-go solar); can we extend them into a general-purpose payment system, to increase service resilience in the face of network congestion or outage, and provide service to currently excluded areas?

We report the results of a preliminary study with an early prototype of DigiTally, tested on participants from a university in Nairobi (Kenya). The code-sharing process presented a possible usability challenge. To explore this and other aspects of an early prototype, DigiTally was introduced to Kenyan participants in order to resolve any major issues before a later field trial.

We discuss the lessons learned from our field visits and initial evaluation; we hope that this contribution is helpful for researchers and policy makers interested in mobile payments and financial inclusion. We also present our findings and observations. We found that, although offline payments involve copying codes in both directions between the payer's phone and the payee's, the extra workload was acceptable to most users.
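
The abstract only says that users copy short digit strings between handsets; purely as an illustration of how such codes could be derived in general (this is not DigiTally's actual protocol), the sketch below truncates an HMAC over the transaction details to an eight-digit code that both phones could compute and compare offline.

    # Illustration only -- not DigiTally's actual protocol. A generic way to
    # derive a short numeric code two devices can compute offline: truncate
    # an HMAC over the transaction details, keyed with a pre-shared secret.
    import hashlib
    import hmac

    def offline_code(shared_key: bytes, payer: str, payee: str,
                     amount: int, counter: int, digits: int = 8) -> str:
        message = f"{payer}|{payee}|{amount}|{counter}".encode()
        digest = hmac.new(shared_key, message, hashlib.sha256).digest()
        return str(int.from_bytes(digest[:8], "big") % 10**digits).zfill(digits)

    key = b"pre-shared-demo-key"  # placeholder; real keys would live in secure hardware
    code = offline_code(key, payer="+254700000001", payee="+254700000002",
                        amount=250, counter=17)
    print(code)  # the payer reads these digits aloud; the payee's phone recomputes them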

Turtle Guard: Helping Android Users Apply Contextual Privacy Preferences

Lynn Tsai, University of California, Berkeley; Primal Wijesekera, University of British Columbia; Joel Reardon, Irwin Reyes, Serge Egelman, and David Wagner, University of California, Berkeley; Nathan Good and Jung-Wei Chen, Good Research

Available Media

Current mobile platforms provide privacy management interfaces to regulate how applications access sensitive data. Prior research has shown how these interfaces are insufficient from a usability standpoint: they do not account for context. In allowing for more contextual decisions, machine-learning techniques have shown great promise for designing systems that automatically make privacy decisions on behalf of the user. However, if such decisions are made automatically, then feedback mechanisms are needed to empower users to both audit those decisions and correct any errors.

In this paper, we describe our user-centered approach towards designing a fully functional privacy feedback interface for the Android platform. We performed two large-scale user studies to research the usability of our design. Our second, 580-person validation study showed that users of our new interface were significantly more likely to both understand and control the selected set of circumstances under which applications could access sensitive data when compared to the default Android privacy settings interface.

Authentication on the Go: Assessing the Effect of Movement on Mobile Device Keystroke Dynamics

Heather Crawford and Ebad Ahmadzadeh, Florida Institute of Technology

Available Media

Transparent authentication based on behavioral biometrics has the potential to improve the usability of mobile authentication due to the lack of a possibly intrusive user interface. Keystroke dynamics, or typing behavior, is a potentially rich source of biometric information for those that type frequently, and thus has been studied widely as an authenticator on touch-based mobile devices. However, the typing-while-moving scenario that characterizes mobile device use may change keystroke-based patterns sufficiently that typing biometrics-based authentication may not be viable. This paper presents a user study on the effects of user movement while typing on the effectiveness of keystroke dynamics as an authenticator. Using the dynamic text-based keystroke timings of 36 study participants, we first show that naïvely measuring patterns without considering position (e.g., sitting, standing or walking while typing) results in generic patterns that are little better than chance. We show that our two-phased approach, which first determines the user's position before classifying their typing behavior, inferred the user's position with an AUC above 90% and classified the user's typing pattern with an AUC of over 93%. Our results show that user typing patterns are a viable secondary or continuous post-PIN authentication method, even when movement changes a user's typing pattern.
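
The two-phased approach is described only at a high level in the abstract; the sketch below shows the general shape such a pipeline could take, with synthetic timing features and placeholder models rather than the authors' implementation.

    # Sketch of a two-phase pipeline: phase 1 infers the user's position
    # (sitting/standing/walking) from keystroke-timing features; phase 2
    # applies a position-specific classifier to decide whether the typing
    # pattern belongs to the device owner. All data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 300
    X = rng.normal(size=(n, 6))              # e.g. key hold times, flight times
    positions = rng.integers(0, 3, size=n)   # 0=sitting, 1=standing, 2=walking
    is_owner = rng.integers(0, 2, size=n)    # 1 = legitimate device owner

    phase1 = RandomForestClassifier(random_state=0).fit(X, positions)

    phase2 = {}                              # one owner-vs-other model per position
    for p in range(3):
        mask = positions == p
        phase2[p] = RandomForestClassifier(random_state=0).fit(X[mask], is_owner[mask])

    def authenticate(sample):
        p = int(phase1.predict([sample])[0])         # first infer position...
        return bool(phase2[p].predict([sample])[0])  # ...then classify typing pattern

    print(authenticate(X[0]))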

Impact of User Characteristics on Attitudes Towards Automatic Mobile Application Updates

Arunesh Mathur, University of Maryland, College Park; Marshini Chetty, Princeton University

Available Media

To keep mobile devices secure, experts recommend turning on auto-updates for applications, but recent research has suggested that users often avoid auto-updating because updates can lead to undesirable consequences such as user interface changes or compatibility issues. Understanding whether there are commonalities amongst users who avoid auto-updates can help us create better mobile application updating interfaces. However, little is known about how users' characteristics associate with their attitudes towards auto-updating their mobile applications, or how we can leverage these characteristics to encourage users to auto-update these applications to improve security. In this paper, by surveying Android users, we establish how users' past experiences with software updating and their psychometric traits differentiate users who avoid application auto-updates from those who allow them, as well as users' preferences towards auto-updating their applications. Our findings reveal that users who avoid application auto-updates are more likely to have had past negative experiences with software updating, tend to take fewer risks, and display greater proactive security awareness. Users' perceived level of trust with mobile applications also determined how comfortable they are auto-updating these applications. Based on these findings, we recommend how Android can improve the design of application update systems to encourage users to auto-update and keep their devices secure.

Exploring decision making with Android’s runtime permission dialogs using in-context surveys

Bram Bonné, Hasselt University; Sai Teja Peddinti, Igor Bilogrevic, and Nina Taft, Google Inc.
IAPP SOUPS Privacy Award!

Available Media

A great deal of research on the management of user data on smartphones via permission systems has revealed significant levels of user discomfort, lack of understanding, and lack of attention. The majority of these studies were conducted on Android devices before runtime permission dialogs were widely deployed. In this paper we explore how users make decisions with runtime dialogs on smartphones with Android 6.0 or higher. We employ an experience sampling methodology in order to ask users about the reasons influencing their decisions immediately after they decide. We conducted a longitudinal survey with 157 participants over a 6-week period.

We explore the grant and denial rates of permissions, overall and on a per-permission-type basis. Overall, our participants accepted 84% of the permission requests. We observe differences in the denial rates across permission types; these vary from 23% (for microphone) to 10% (for calendar). We find that one of the main factors in granting or denying a permission request is whether users expect the app to need that permission. A common reason for denying permissions is that users know they can change them later. Among the permissions granted, our participants said they were comfortable with 90% of those decisions — indicating that for 10% of grant decisions users may be consenting reluctantly. Interestingly, we found that women deny permissions twice as often as men.

6:00 pm–8:00 pm

SOUPS 2017 Symposium Reception

Terra Courtyard

Friday, July 14, 2017

7:30 am–8:30 am

Continental Breakfast

Grand Ballroom DE Foyer

8:30 am–10:00 am

Behaviour

Session Chair: Yasemin Acar, CISPA, Saarland University

Weighing Context and Trade-offs: How Suburban Adults Selected Their Online Security Posture

Scott Ruoti, Tyler Monson, Justin Wu, Daniel Zappala, and Kent Seamons, Brigham Young University

Available Media

Understanding how people behave when faced with complex security situations is essential to designing usable security tools. To better understand users' perceptions of their digital lives and how they managed their online security posture, we conducted a series of 23 semi-structured interviews with mostly middle-aged parents from suburban Washington state. Using a grounded theory methodology, we analyzed the interview data and found that participants chose their security posture based on the immense value the Internet provides and their belief that no combination of technology could make them perfectly safe. Within this context, users have a four-stage process for determining which security measures to adopt: learning, evaluation of risks, estimation of impact, and weighing trade-offs to various coping strategies. Our results also revealed that a majority of participants understand the basic principles of symmetric encryption. We found that participants' misconceptions related to browser-based TLS indicators lead to insecure behavior, and it is the permanence of encrypted email that causes participants to doubt that it is secure. We conclude with a discussion of possible responses to this research and avenues for future research.

How Effective is Anti-Phishing Training for Children?

Elmer Lastdrager and Inés Carvajal Gallardo, University of Twente; Pieter Hartel, University of Twente; Delft University of Technology; Marianne Junger, University of Twente
Distinguished Paper Award!

Available Media

User training is a commonly used method for preventing victimization from phishing attacks. In this study, we focus on training children, since they are active online but often overlooked in interventions. We present an experiment in which children at Dutch primary schools received an anti-phishing training. The subjects were subsequently tested for their ability to distinguish phishing from non-phishing. A control group was used to control for external effects. Furthermore, the subjects received a re-test after several weeks to measure how well the children retained the training. The training improved the children's overall score by 14%. The improvement was mostly caused by an increased score on the questions where they had to detect phishing. The score on recognizing legitimate emails was not affected by the training. We found that the improved phishing score returned to pre-training levels after four weeks. Conversely, the score of recognition of legitimate emails increased over time. After four weeks, trained pupils scored significantly better in recognizing legitimate emails than their untrained counterparts. Age had a positive effect on the score (i.e., older children scored higher than younger ones); but sex had no significant influence. In conclusion, educating children to improve their ability to detect phishing works in the short term only. However, children go to school regularly, making it easier to educate them than adults. An increased focus on the cybersecurity of children is essential to improve overall cybersecurity in the future.

“I feel stupid I can’t delete...”: A Study of Users’ Cloud Deletion Practices and Coping Strategies

Kopo Marvin Ramokapane and Awais Rashid, Lancaster University; Jose Miguel Such, King’s College London

Available Media

Deletion of data from cloud storage and services is an important aspect of privacy and security. But how easy or simple a task is it for users to complete? Cloud users' deletion practices, challenges and coping strategies have not been well studied to date. We undertook an exploratory study to better understand this issue. Through in-depth semi-structured interviews and use of deletion scenarios with 26 subjects, we explored several key questions: why and when cloud users would like to delete, why cloud users cannot delete, what causes such failures, what users do to work around these problems, and finally what do users want in terms of usable deletion in the cloud. We found that users' failure to delete arises from lack of information about deletion, incomplete mental models of the cloud and deletion within the cloud, and poorly designed user interfaces for deletion functions. Our results also show that users develop different coping strategies such as deleting from certain devices only, seeking help and changing service providers, to overcome such challenges. However, these strategies may not always produce desired results. We also discuss potential ways to improve the usability of deletion in the cloud.

The importance of visibility for folk theories of sensor data

Emilee Rader and Janine Slaker, Michigan State University

Available Media

Sensor-enabled wearable devices and smartphones collect data about users' movements, location, and private spaces and activities. As with many ubiquitous computing technologies, this data collection happens in the background and appears "seamless" or invisible to the user. Despite this, users are still expected to make informed choices regarding consent to data collection. Folk theories are sets of beliefs and understandings that arise informally and guide decision-making. To investigate folk theories regarding sensor data collection that might guide privacy self-management decisions, we conducted qualitative free list activities with 30 activity tracker users in which we asked them to list "information that an activity tracker knows". We found that folk theories regarding the data that activity trackers collect depend on interactions between the users and their trackers that provide visibility into dependencies among data types, evidence about what trackers are able to record, and feedback that inspires speculation about how trackers work. Our findings suggest opportunities for designing interfaces that intentionally support the development of folk theories about how sensor data are produced and how they might be used, which may enable users to make more informed privacy self-management decisions.

10:00 am–10:30 am

More Lightning Talks

Session Chair: Heather Crawford, Florida Institute of Technology

  • (Demo) Improving Second Factor Authentication Challenges to Help Protect Facebook account owners
    Oleg Iaroshevych, Facebook
  • Apples vs. Mangos: How an Activation Code Turned into an Identifier
    Joy Marie Forsythe, Mango Health
  • Rethinking & Redesigning the Windows 10 Privacy Settings
    Janice Tsai, Formerly Microsoft, Windows Privacy Manager (now Research Scientist @ Mozilla)
  • Measuring Privacy Interest with Search Queries
    Andrew McNamara, North Carolina State University
  • And More!

10:30 am–11:00 am

Break with Refreshments

Grand Ballroom DE Foyer

11:00 am–12:30 pm

Attacks and Defense

Session Chair: Mary Theofanos, National Institute of Standards and Technology (NIST)

Replication: Challenges in Using Data Logs to Validate Phishing Detection Ability Metrics

Casey Canfield, Alex Davis, and Baruch Fischhoff, Carnegie Mellon University; Alain Forget, Google; Sarah Pearman and Jeremy Thomas, Carnegie Mellon University

Available Media

The Security Behavior Observatory (SBO) is a longitudinal field study of computer security habits that provides a novel dataset for validating computer security metrics. This paper demonstrates a new strategy for validating phishing detection ability metrics by comparing performance on a phishing signal detection task with data logs found in the SBO. We report: (1) a test of the robustness of performance on the signal detection task by replicating Canfield, Fischhoff, and Davis (2016), (2) an assessment of the task's construct validity, and (3) evaluation of its predictive validity using data logs. We find that members of the SBO sample had similar signal detection ability compared to members of the previous mTurk sample and that performance on the task correlated with the Security Behavior Intentions Scale (SeBIS). However, there was no evidence of predictive validity, as the signal detection task performance was unrelated to computer security outcomes in the SBO, including the presence of malicious software, URLs, and files. We discuss the implications of these findings and the challenges of comparing behavior on structured experimental tasks to behavior in complex real-world settings.

Can we fight social engineering attacks by social means? Assessing social salience as a means to improve phish detection

James Nicholson, Lynne Coventry, and Pam Briggs, Northumbria University

Available Media

Phishing continues to be a problem for both individuals and organisations, with billions of dollars lost every year. We propose the use of nudges – more specifically social saliency nudges – that aim to highlight important information to the user when evaluating emails. We used Signal Detection Theory to assess the effects of both sender saliency (highlighting important fields from the sender) and receiver saliency (showing numbers of other users in receipt of the same email). Sender saliency improved phish detection but did not introduce any unwanted response bias. Users were asked to rate their confidence in their own judgements and these confidence scores were poorly calibrated with actual performance, particularly for phishing (as opposed to genuine) emails. We also examined the role of impulsive behaviour on phish detection, concluding that those who score highly on dysfunctional impulsivity are less likely to detect the presence of phishing emails.
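
For readers less familiar with Signal Detection Theory, the sensitivity and response-bias measures this kind of study relies on can be computed from hit and false-alarm rates as sketched below; the rates shown are illustrative values, not the study's data.

    # d' (sensitivity) and c (criterion / response bias) from hit and
    # false-alarm rates; the rates below are illustrative, not study data.
    from statistics import NormalDist

    def sdt_measures(hit_rate, false_alarm_rate):
        z = NormalDist().inv_cdf         # inverse of the standard normal CDF
        d_prime = z(hit_rate) - z(false_alarm_rate)
        criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
        return d_prime, criterion

    d, c = sdt_measures(hit_rate=0.80, false_alarm_rate=0.25)
    print(f"d' = {d:.2f}, c = {c:.2f}")  # higher d' means better phish discrimination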

Raise the Curtains: The Effect of Awareness About Targeting on Consumer Attitudes and Purchase Intentions

Sonam Samat, Alessandro Acquisti, and Linda Babcock, Carnegie Mellon University

Available Media

We investigate the effect of awareness about targeting on users' attitudes towards a targeted ad and behavioral intentions towards the advertised product. Specifically, we study the effect of a notice that makes individuals aware that a particular advertisement has been targeted to them on their attitudes about the product and intentions to purchase the product. We find that, among individuals who have negative opinions about the practice of targeted advertising, awareness about targeting significantly worsens attitudes towards the targeted product and reduces the likelihood of purchasing the targeted product. Among individuals who have positive and neutral opinions about targeted advertising, awareness about targeting does not impact attitudes or purchase intentions towards the targeted product. We develop a scale to measure opinions about targeted ads and find that a substantial proportion (at least 33%) of our participants have negative opinions about targeted ads. This suggests that the self-regulated advertising industry is not incentivized to follow recommendations from the U.S. Federal Trade Commission to make consumers aware about their targeted advertising practices.

Using chatbots against voice spam: Analyzing Lenny’s effectiveness

Merve Sahin, EURECOM; Marc Relieu, I3-SES, CNRS, Télécom ParisTech; Aurélien Francillon, EURECOM

Available Media

A new countermeasure recently appeared to fight back against unwanted phone calls (such as telemarketing, survey, or scam calls), which consists of connecting the telemarketer to a phone bot ("robocallee") that mimics a real persona. Lenny is such a bot (a computer program) that plays a set of pre-recorded voice messages to interact with the spammers. Although not based on any sophisticated artificial intelligence, Lenny is surprisingly effective in keeping the conversation going for tens of minutes. Moreover, it is clearly recognized as a bot in only 5% of the calls recorded in our dataset. In this paper, we try to understand why Lenny is so successful in dealing with spam calls. To this end, we analyze the recorded conversations of Lenny with various types of spammers. Among 487 publicly available call recordings, we select 200 calls and transcribe them using a commercial service. With this dataset, we first explore the spam ecosystem captured by this chatbot, presenting several statistics on Lenny's interaction with spammers. Then, we use conversation analysis to understand how Lenny's responses fit the sequential context of such spam calls, keeping a natural flow of conversation. Finally, we discuss a range of research and design issues to gain a better understanding of chatbot conversations and to improve their efficiency.
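
Since Lenny is described as nothing more than a player of pre-recorded messages, its turn-taking can be mimicked in a few lines; the sketch below is a toy text-based reconstruction under that assumption, not the bot's actual implementation, and the canned lines are paraphrases.

    # Toy reconstruction of Lenny-style turn-taking (not the actual bot):
    # play the next pre-recorded response whenever the caller finishes a
    # turn, looping over the later prompts once the script runs out.
    from itertools import chain, cycle

    RESPONSES = [
        "Hello, this is Lenny.",
        "Sorry, I can barely hear you there.",
        "Yes, yes, yes...",
        "Could you say that again, please?",
        "Someone did call last week about the same thing, was that you?",
    ]

    def lenny_turns():
        # Opening lines play once; the remaining prompts repeat indefinitely.
        return chain(RESPONSES[:2], cycle(RESPONSES[2:]))

    bot = lenny_turns()
    for caller_turn in ["Hi, I'm calling about your car warranty.",
                        "Sir, your warranty is about to expire.",
                        "Can you confirm your details?"]:
        print("Caller:", caller_turn)
        print("Lenny :", next(bot))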

12:30 pm–2:00 pm

SOUPS 2017 Symposium Luncheon

Terra Courtyard

2:00 pm–3:50 pm

Privacy

Session Chair: Rick Wash, Michigan State University

Valuating Friends’ Privacy: Does Anonymity of Sharing Personal Data Matter?

Yu Pu, The Pennsylvania State University; Jens Grossklags, Technical University of Munich

Available Media

Through their third-party app installation decisions, users are frequently triggering interdependent privacy consequences by sharing personal information of their friends who are unable to control these information flows. With our study, we aim to quantify the value which app users attribute to their friends' information (i.e., value of interdependent privacy) and to understand how this valuation is affected by two factors: sharing anonymity (i.e., whether disclosure of friends' information is anonymous), and context relevance (i.e., whether friends' information is necessary for apps' functionality). Specifically, we conduct a between-subject, choice-based conjoint analysis study with 4 treatment conditions (2 sharing anonymity × 2 context relevance). Our study confirms the important roles that sharing anonymity and context relevance play in the process of interdependent privacy valuation. In addition, we also investigate how other factors, e.g., individuals' personal attributes and experiences, affect interdependent privacy valuations by applying structural equation modeling analysis. Our research findings yield design implications as well as contribute to policy discussions to better account for the problem of interdependent privacy.

Self-driving cars and data collection: Privacy perceptions of networked autonomous vehicles

Cara Bloom, Joshua Tan, Javed Ramjohn, and Lujo Bauer, Carnegie Mellon University

Available Media

Self-driving vehicles and other networked autonomous robots use sophisticated sensors to capture continuous data about the surrounding environment. In the public spaces where autonomous vehicles operate there is little reasonable expectation of privacy and no notice or choice given, raising privacy questions. To improve the acceptance of networked autonomous vehicles and to facilitate the development of technological and policy mechanisms to protect privacy, public expectations and concerns must first be investigated. In a study (n=302) of residents in cities with and without Uber autonomous vehicle fleets, we explore people's conceptions of the sensing and analysis capabilities of self-driving vehicles; their comfort with the different capabilities; and the effort, if any, to which they would be willing to go to opt out of data collection. We find that 54% of participants would spend more than five minutes using an online system to opt out of identifiable data collection. In addition, secondary use scenarios such as recognition, identification, and tracking of individuals and their vehicles were associated with low likelihood ratings and high discomfort. Surprisingly, those who thought secondary use scenarios were more likely were more comfortable with those scenarios. We discuss the implications of our results for understanding the unique challenges of this new technology and recommend industry guidelines to protect privacy.

Format vs. Content: The Impact of Risk and Presentation on Disclosure Decisions

Sonam Samat and Alessandro Acquisti, Carnegie Mellon University

Available Media

Although the importance of format and presentation of privacy notices has been extensively studied in the privacy literature, less explored is the interplay of presentation and content in influencing users' disclosure decisions. In two experiments, we manipulate the content as well as the format of privacy notices shown to participants who were asked to choose whether they would like to disclose personal information. We manipulate content by changing the objective privacy risk that participants face from disclosing personal information. We manipulate format by changing the manner in which these notices are presented. We find that participants are significantly less likely to share their personal information when the privacy notice is presented under a 'Prohibit [disclosure]' frame, as compared to an 'Allow [disclosure]' frame. However, and importantly, we find that the effect of changes in framing on disclosure decisions is small when the objective privacy risk from disclosure is low, but the effect of framing becomes larger when the risk is increased—that is, for potentially more sensitive decisions. Our results highlight the nuanced interaction effects between the objective content of privacy notices and the manner in which they are presented, on disclosure behavior.

New Me: Understanding Expert and Non-Expert Perceptions and Usage of the Tor Anonymity Network

Kevin Gallagher, New York University; Sameer Patil, Indiana University; Nasir Memon, New York University

Available Media

Proper use of an anonymity system requires adequate understanding of how it functions. Yet, there is surprisingly little research that looks into user understanding and usage of anonymity software. Improper use stemming from a lack of sufficient knowledge of the system has the potential to lead to deanonymization, which may hold severe personal consequences for the user. We report on the understanding and the use of the Tor anonymity system. Via semi-structured interviews with 17 individuals (6 experts and 11 non-experts) we found that experts and non-experts view, understand, and use Tor in notably different ways. Moreover, both groups exhibit behavior as well as gaps in understanding that could potentially compromise anonymity. Based on these findings, we provide several suggestions for improving the user experience of Tor to facilitate better user understanding of its operation, threat model, and limitations.

Privacy Expectations and Preferences in an IoT World

Pardis Emami Naeini, Sruti Bhagavatula, Hana Habib, Martin Degeling, Lujo Bauer, Lorrie Cranor, and Norman Sadeh, Carnegie Mellon University

Available Media

With the rapid deployment of Internet of Things (IoT) technologies and the variety of ways in which IoT-connected sensors collect and use personal data, there is a need for transparency, control, and new tools to ensure that individual privacy requirements are met. To develop these tools, it is important to better understand how people feel about the privacy implications of IoT and the situations in which they prefer to be notified about data collection. We report on a 1,007-participant vignette study focusing on privacy expectations and preferences as they pertain to a set of 380 IoT data collection and use scenarios. Participants were presented with 14 scenarios that varied across eight categorical factors, including the type of data collected (e.g. location, biometrics, temperature), how the data is used (e.g., whether it is shared, and for what purpose), and other attributes such as the data retention period. Our findings show that privacy preferences are diverse and context dependent; participants were more comfortable with data being collected in public settings rather than in private places, and are more likely to consent to data being collected for uses they find beneficial. They are less comfortable with the collection of biometrics (e.g. fingerprints) than environmental data (e.g. room temperature, physical presence). We also find that participants are more likely to want to be notified about data practices that they are uncomfortable with. Finally, our study suggests that after observing individual decisions in just three data-collection scenarios, it is possible to predict their preferences for the remaining scenarios, with our model achieving an average accuracy of up to 86%.

3:50 pm–4:00 pm

Closing Remarks

4:00 pm–4:30 pm

SOUPS 2017 Ice Cream Social

TusCA Courtyard