SOUPS 2021 Technical Sessions

All the times listed below are in Pacific Daylight Time (PDT).

Papers are available for download below to registered attendees now and to everyone beginning Monday, August 9, 2021. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].

Monday, August 9

7:00 am–7:15 am

Opening Remarks and Awards

General Chair: Sonia Chiasson, Carleton University

7:15 am–8:15 am

Authentication

Session Chair: Rick Wash, Michigan State University

Towards Usable and Secure Location-based Smartphone Authentication

Geumhwan Cho, Sungkyunkwan University, South Korea; Sungsu Kwag and Jun Ho Huh, Samsung Research, South Korea; Bedeuro Kim, Sungkyunkwan University, South Korea; Choong-Hoon Lee, Samsung Research, South Korea; Hyoungshick Kim, Sungkyunkwan University, South Korea

Available Media

Using a Blocklist to Improve the Security of User Selection of Android Patterns

Collins W. Munyendo and Miles Grant, The George Washington University; Philipp Markert, Ruhr University Bochum; Timothy J. Forman, United States Navy; Adam J. Aviv, The George Washington University

Available Media

Android patterns remain a popular method for unlocking smartphones, despite evidence suggesting that many users choose easily guessable patterns. In this paper, we explore the usage of blocklists to improve the security of user-chosen patterns by disallowing common patterns, a feature currently unavailable on Android but used by Apple during PIN selection. In a user study run on participants' smartphones (n = 1006), we tested 5 different blocklist sizes and compared them to a control treatment. We find that even the smallest blocklist (12 patterns) had benefits, reducing a simulated attacker's success rate after 30 guesses from 24% to 20%. The largest blocklist (581 patterns) reduced the percentage of correctly guessed patterns after 30 attempts down to only 2%. In terms of usability, blocklists had limited negative impact on short-term recall rates and entry times, with reported SUS values indicating reasonable usability when selecting patterns in the presence of a blocklist. Based on our simulated attacker performance results for different blocklist sizes, we recommend blocking 100 patterns for a good balance between usability and security.
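The mechanism evaluated here is straightforward to sketch: at pattern-selection time, the chosen pattern is checked against a list of the most common patterns and rejected if it appears there. Below is a minimal illustration in C++, where patterns are encoded as strings of 3x3 grid-cell indices; the encoding and the blocklist entries are assumptions made for illustration, not the study's actual data.

    #include <iostream>
    #include <string>
    #include <unordered_set>

    // Illustrative sketch only. Patterns are encoded as strings of grid-cell
    // indices (0-8); the entries below are invented, not the study's blocklist.
    static const std::unordered_set<std::string> kBlocklist = {
        "01258",   // a simple "L"-like shape
        "0123678", // another commonly cited weak shape
        "0486",
    };

    bool IsBlocked(const std::string& pattern) {
        return kBlocklist.count(pattern) > 0;
    }

    int main() {
        const std::string chosen = "01258";
        if (IsBlocked(chosen)) {
            std::cout << "This pattern is too common; please pick another.\n";
        } else {
            std::cout << "Pattern accepted.\n";
        }
    }

Intuitively, larger lists cover more of a simulated attacker's most likely guesses, which is why attacker success drops as the blocklist grows.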

8:15 am–8:30 am

Break

8:30 am–9:30 am

User Attitudes and (Mis)understandings

Session Chair: Blase Ur, University of Chicago

Never ever or no matter what: Investigating Adoption Intentions and Misconceptions about the Corona-Warn-App in Germany

Maximilian Häring, University of Bonn; Eva Gerlitz, Fraunhofer FKIE; Christian Tiefenau, University of Bonn; Matthew Smith, University of Bonn, Fraunhofer FKIE; Dominik Wermke and Sascha Fahl, CISPA, University of Hannover; Yasemin Acar, Max Planck Institute for Security and Privacy

Available Media

To help tackle the COVID-19 pandemic, the tech community has put forward proximity detection apps to help warn people who might have been exposed to the coronavirus. The privacy implications of such apps have been discussed both in academic circles and the general population. The discussion in Germany focused on the trade-off between a centralized and a decentralized approach to data collection and processing and their implications. Specifically, privacy dominated the public debate about the proposed "Corona-Warn-App." This paper presents a study with a quota sample of the German population (n=744) to assess what the population knew about the soon-to-be-released app and their willingness to use it. We also presented participants with potential properties the app could have and asked how these would affect their intention to use it. Based on our findings, we discuss our participants' views on privacy and functionality, including their perception of selected centralized and decentralized features. We also examine a wide range of false beliefs and information that was not communicated successfully. Technical details in particular, such as the fact that the app would use Bluetooth rather than location services, were unknown to many participants. Our results offer insights into how trust in the government and public communication shape the population's willingness to adopt the app.

Understanding Users' Knowledge about the Privacy and Security of Browser Extensions

Ankit Kariryaa, University of Copenhagen & University of Bremen; Gian-Luca Savino and Carolin Stellmacher, University of Bremen; Johannes Schöning, University of Bremen & University of St. Gallen

Available Media

Browser extensions enrich users' browsing experience, e.g., by blocking unwanted advertisements on websites. To perform these functions, users must grant certain permissions during the installation process. These permissions, however, reveal very little about the fact that they allow the extension to access users' personal data and browsing behaviour, posing security and privacy risks. To understand users' awareness of these privileges and the associated threats, we conducted an online survey with 353 participants, focusing on users' attitudes, knowledge, and preferences towards extensions' permission requests. We found that users report interest in seeking information and trust the developers, but do little to protect their data. They have limited knowledge about the technical abilities of browser extensions and prefer permission statements that evoke a clear mental model. Based on our findings, we derive recommendations for improving browser extension permission dialogues through clear language, technical improvements, and distinct responsibilities.

Replication: Effects of Media on the Mental Models of Technical Users

Khadija Baig, Elisa Kazan, Kalpana Hundlani, Sana Maqsood, and Sonia Chiasson, Carleton University

Available Media

Media has been observed to influence users' mental models in several domains. It was recently demonstrated that fictional television and movies have a strong influence on non-technical end users' mental models of security. We extended this study to explore its effect on 23 participants with technical backgrounds, given that misconceptions amongst this group could have important organisational impacts or could influence other non-technical end users. Our qualitative analysis reveals that technical participants sourced their mental models both from their academic or professional lives and from different forms of media (such as news, cinema, forums, and social media). They were capable of identifying unrealistic depictions of hacking in the provided video clips, and most could offer simplistic explanations of why these were problematic. We found that they generally had a more nuanced understanding of the issues than non-technical end users, but they were not immune to misinformation from mass media.

Comparing Security and Privacy Attitudes Among U.S. Users of Different Smartphone and Smart-Speaker Platforms

Desiree Abrokwa, Shruti Das, Omer Akgul, and Michelle L. Mazurek, University of Maryland

Available Media

Many studies of mobile security and privacy are, for simplicity, limited to either only Android users or only iOS users. However, it is not clear whether there are systematic differences in the privacy and security knowledge or preferences of users who select these two platforms. Understanding these differences could provide important context about the generalizability of research results. This paper reports on a survey (n=493) with a demographically diverse sample of U.S. Android and iOS users. We compare users of these platforms using validated privacy and security scales (IUIPC-8 and SA-6) as well as previously deployed attitudinal and knowledge questions from Pew Research Center. As a secondary analysis, we also investigate potential differences among users of different smart-speaker platforms, including Amazon Echo and Google Home. We find no significant differences in privacy attitudes of different platform users, but we do find that Android users have more technology knowledge than iOS users. In addition, we find evidence (via comparison with Pew data) that Prolific participants have more technology knowledge than the general U.S. population.

9:30 am–10:15 am

Social, Mentorship, and Networking Events

Mentoring Program: See the Mentoring Program page for more information.

10:15 am–11:15 am

Keynote Address

Mitigating Technology Abuse in Intimate Partner Violence

Thomas Ristenpart, Cornell Tech

Our research group studies the role technology plays in intimate partner violence (IPV). Via interviews with survivors and support professionals, online measurement studies, and investigation of malicious software tools purpose-built for abuse, we have documented how abusers exploit technology to control, harass, stalk, and otherwise harm their current or former partners. To help survivors, we work with technology companies and lawmakers to effect positive changes and, more directly, we have put into practice a new interventional approach that we call clinical computer security. Our Clinic to End Tech Abuse works in partnership with the New York City Mayor's Office to End Domestic and Gender-Based Violence to help survivors navigate technology abuse and, ultimately, empower their use of technology.

In this talk I will provide a brief overview of our work on IPV technology abuse, and use it as a case study for "advocate-scientist" models that blend basic research with direct advocacy work in close collaboration with a community.

This talk will cover joint work with a large number of collaborators and clinic volunteers. See https://ipvtechresearch.org and https://ceta.tech.cornell.edu for more information.

Thomas Ristenpart, Cornell Tech

Thomas Ristenpart is an Associate Professor at Cornell Tech and a member of the Computer Science department at Cornell University. His research spans a wide range of computer security topics, with recent focuses including digital privacy and safety in intimate partner violence, mitigating abuse and harassment online, cloud computing security, improvements to authentication mechanisms including passwords, confidentiality and privacy in machine learning, and topics in applied and theoretical cryptography.

11:15 am–11:30 am

Break

11:30 am–12:30 pm

Perspectives and Policy

Session Chair: Josephine Wolff, Tufts University

"How I Know For Sure": People's Perspectives on Solely Automated Decision-Making (SADM)

Smirity Kaushik, University of Illinois at Urbana-Champaign; Yaxing Yao, University of Maryland, Baltimore County; Pierre Dewitte, Katholieke Universiteit Leuven Centre for IT & IP; Yang Wang, University of Illinois at Urbana-Champaign

Available Media

Algorithms are used to make automated decisions that can affect individuals in numerous domains. The General Data Protection Regulation (GDPR) of the European Union (EU) has granted citizens certain rights regarding solely automated decision-making (SADM), including obtaining an explanation of such processing. It is unclear, however, how organizations should support people in effectively exercising such rights. We conducted an online survey to understand people's perspectives on SADM. We found that our respondents had several misunderstandings about the SADM right, such as believing that they could opt out of SADM ahead of time. We also identified various attributes of SADM that our respondents desired to understand, including new attributes (e.g., actionable information about what they can practically do to improve future decision outcomes) not covered by implementation guidelines of the GDPR. Our respondents also anticipated many challenges with SADM, including not knowing when SADM is applied to them. We discuss design implications of our results for how to support people in coping with SADM, for instance, the design of icons to represent SADM processing as well as explanation templates that cover a common set of attributes and can be personalized to explain a specific SADM decision about a person.

A Fait Accompli? An Empirical Study into the Absence of Consent to Third-Party Tracking in Android Apps

Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max Van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford

Available Media

Third-party tracking allows companies to collect users' behavioural data and track their activity across digital devices. This can put deep insights into users' private lives into the hands of strangers, and often happens without users' awareness or explicit consent. EU and UK data protection law, however, requires consent, both 1) to access and store information on users' devices and 2) to legitimate the processing of personal data as part of third-party tracking, as we analyse in this paper.

This paper further investigates whether and to what extent consent is implemented in mobile apps. First, we analyse a representative sample of apps from the Google Play Store. We find that most apps engage in third-party tracking, but few obtained consent before doing so, indicating potentially widespread violations of EU and UK privacy law. Second, we examine the most common third-party tracking libraries in detail. While most acknowledge that they rely on app developers to obtain consent on their behalf, they typically fail to put in place robust measures to ensure this: disclosure of consent requirements is limited; default consent implementations are lacking; and compliance guidance is difficult to find, hard to read, and poorly maintained.
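The consent requirement analyzed above implies a simple ordering constraint in app code: no tracking library may be initialized, and no data may flow to it, before the user affirmatively agrees. A minimal sketch of such gating in C++ follows; the SDK namespace and functions are hypothetical, invented purely for illustration.

    #include <iostream>

    // Hypothetical third-party tracking SDK, for illustration only.
    namespace analytics_sdk {
        void Init() { std::cout << "tracking initialized\n"; }
    }

    enum class Consent { kUnknown, kGranted, kDenied };

    // Start tracking only after explicit consent; treat "unknown" like
    // "denied", i.e., no tracking by default.
    void MaybeStartTracking(Consent consent) {
        if (consent == Consent::kGranted) {
            analytics_sdk::Init();
        }
    }

    int main() {
        MaybeStartTracking(Consent::kUnknown); // nothing happens
        MaybeStartTracking(Consent::kGranted); // tracking starts
    }

The paper's finding is that this ordering is rarely implemented in practice, and that tracking libraries' default configurations and guidance do little to enforce it.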

Pursuing Usable and Useful Data Downloads Under GDPR/CCPA Access Rights via Co-Design

Sophie Veys, Daniel Serrano, Madison Stamos, and Margot Herman, University of Chicago; Nathan Reitinger and Michelle L. Mazurek, University of Maryland; Blase Ur, University of Chicago

Available Media

Data privacy regulations like GDPR and CCPA define a right of access empowering consumers to view the data companies store about them. Companies satisfy these requirements in part via data downloads, or downloadable archives containing this information. Data downloads vary in format, organization, comprehensiveness, and content. It is unknown, however, whether current data downloads actually achieve the transparency goals embodied by the right of access. In this paper, we report on the first exploration of the design of data downloads. Through 12 focus groups involving 42 participants, we gathered reactions to six companies' data downloads. Using co-design techniques, we solicited ideas for future data download designs, formats, and tools. Most participants indicated that current offerings need improvement to be useful, emphasizing the need for better filtration, visualization, and summarization to help them home in on key information.

12:30 pm–12:45 pm

Break

12:45 pm–1:45 pm

Inclusive Privacy and Security

Session Chair: Lorrie Cranor, Carnegie Mellon University

"I'm Literally Just Hoping This Will Work:'' Obstacles Blocking the Online Security and Privacy of Users with Visual Disabilities

Daniela Napoli, Khadija Baig, Sana Maqsood, and Sonia Chiasson, Carleton University

Available Media

To successfully manage security and privacy threats, users must be able to perceive the relevant information. However, a number of accessibility obstacles impede access to such information for users with visual disabilities, and could mislead them into incorrectly assessing their security and privacy. We explore how these users protect their online security and privacy. We observed their behaviours when navigating Gmail, Amazon, and a phishing site imitating CNIB, an organization well known to our participants. We further investigated their real-world concerns through semi-structured interviews. Our analysis uncovered severe usability issues that led users to engage in risky behaviours or to compromise between accessibility and security. Our work confirms the findings of related literature and provides novel insights, such as how security and accessibility software (e.g., JAWS) can hinder users' abilities to identify risks. We organize our main findings around four states of security and privacy experienced by users while completing sensitive tasks, and provide design recommendations for communicating security and privacy information to users with visual disabilities.

WebAlly: Making Visual Task-based CAPTCHAs Transferable for People with Visual Impairments

Zhuohao Zhang and Zhilin Zhang, University of Illinois at Urbana-Champaign; Haolin Yuan, Johns Hopkins University; Natã M. Barbosa, University of Illinois at Urbana-Champaign; Sauvik Das, Georgia Tech; Yang Wang, University of Illinois at Urbana-Champaign

Available Media

Task-based visual CAPTCHAs are a significant accessibility hurdle for people with visual impairments (PVIs). What if PVIs could transfer task-based visual CAPTCHAs to a helper to solve? How might PVIs want such a system configured in terms of from whom they would solicit help and how they would compensate this help? To answer these questions, we implemented and evaluated a proof-of-concept assistive transfer system — WebAlly — that makes task-based CAPTCHAs transferable by allowing PVIs to source just-in-time, remote-control help from a trusted contact. In an exploratory, role-play study with 10 pairs of participants — a PVI and a friend or family member — we asked participants to use WebAlly in four different configurations that varied in source of help (friend vs. stranger) and compensation (paid vs. volunteer). We found that PVIs liked having WebAlly as an additional option for solving visual CAPTCHAs when other options that preserve their independence fail. In addition, many PVIs and their friends felt that using the system would bring their relationship closer. We discuss design implications for transferable CAPTCHAs and assistive transfer systems more broadly, e.g., the importance of complementing rather than replacing PVIs' existing workflows.

Designing Toxic Content Classification for a Diversity of Perspectives

Deepak Kumar, Stanford University; Patrick Gage Kelley and Sunny Consolvo, Google; Joshua Mason, University of Illinois at Urbana-Champaign; Elie Bursztein, Google; Zakir Durumeric, Stanford University; Kurt Thomas, Google; Michael Bailey, University of Illinois at Urbana-Champaign

Available Media

In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment, such as people who identify as LGBTQ+ or young adults, are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.
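As a deliberately simplified stand-in for the personalized model tuning described above, the idea of adapting a one-size-fits-all classifier to an individual can be illustrated by choosing a per-user decision threshold over the classifier's toxicity scores instead of a single global cutoff. In the C++ sketch below, the scores and labels are invented for illustration, and the threshold search is far cruder than what the paper does.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Toy personalization: pick the score threshold that best matches one
    // user's own toxic/non-toxic judgments. Scores and labels are invented.
    double BestThreshold(const std::vector<double>& scores,
                         const std::vector<bool>& user_says_toxic) {
        double best_threshold = 0.5;
        int best_correct = -1;
        for (int step = 0; step <= 20; ++step) {
            const double t = step * 0.05;
            int correct = 0;
            for (std::size_t i = 0; i < scores.size(); ++i) {
                const bool predicted_toxic = scores[i] >= t;
                if (predicted_toxic == user_says_toxic[i]) ++correct;
            }
            if (correct > best_correct) {
                best_correct = correct;
                best_threshold = t;
            }
        }
        return best_threshold;
    }

    int main() {
        // One user's judgments of five comments, with model toxicity scores.
        const std::vector<double> scores = {0.92, 0.55, 0.40, 0.75, 0.10};
        const std::vector<bool> labels   = {true, true, false, true, false};
        std::cout << "personal threshold: " << BestThreshold(scores, labels) << "\n";
    }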

1:45 pm–2:00 pm

Break

2:00 pm–2:45 pm

Poster Session

Check out the cool new ideas and the latest preliminary research on display at the SOUPS 2021 Poster Session. View the list of accepted posters.

Tuesday, August 10

7:00 am–8:00 am

Phishing and Account Compromise

Session Chair: Adam J. Aviv, The George Washington University

Why They Ignore English Emails: The Challenges of Non-Native Speakers in Identifying Phishing Emails

Ayako A. Hasegawa, Naomi Yamashita, and Mitsuaki Akiyama, NTT; Tatsuya Mori, Waseda University / NICT / RIKEN AIP

Available Media

Prior work in cybersecurity and risk management has shown that non-native speakers of the language used in phishing emails are more susceptible to such attacks. Despite much research on the behaviors English speakers use to avoid phishing attacks, little is known about the behaviors of non-native speakers. Therefore, we conducted an online survey with 862 non-native English speakers (284 Germans, 276 South Koreans, and 302 Japanese). Our findings show that participants, especially those who lacked confidence in English, were more likely to ignore English emails without careful inspection than emails in their native languages. Furthermore, both the German and South Korean participants generally followed the instructions in emails in their native languages without careful inspection. Finally, our qualitative analysis revealed five main factors underlying the participants' concerns about identifying English phishing emails. These findings highlight the importance of providing non-native speakers with specific anti-phishing interventions that differ from those for native speakers.

SoK: Still Plenty of Phish in the Sea — A Taxonomy of User-Oriented Phishing Interventions and Avenues for Future Research

Anjuli Franz, Verena Zimmermann, Gregor Albrecht, Katrin Hartwig, Christian Reuter, Alexander Benlian, and Joachim Vogt, Technical University of Darmstadt

Available Media

Phishing is a prevalent cyber threat, targeting individuals and organizations alike. Previous approaches to anti-phishing measures have started to recognize the role of the user, who, as the target, forms the last line of defense. However, user-oriented phishing interventions are fragmented across a diverse research landscape that has not been systematized to date. This makes it challenging to gain an overview of the various approaches taken by prior works. In this paper, we present a taxonomy of phishing interventions based on a systematic literature analysis. We shed light on the diversity of existing approaches by analyzing them with respect to the intervention type, the addressed phishing attack vector, the time at which the intervention takes place, and the required user interaction. Furthermore, we highlight shortcomings and challenges emerging from both our literature sample and prior meta-analyses, and discuss them in light of current movements in the field of usable security. With this article, we hope to provide useful directions for future work on phishing interventions.

Investigating Web Service Account Remediation Advice

Lorenzo Neil, Elijah Bouma-Sims, and Evan Lafontaine, North Carolina State University; Yasemin Acar, Max Planck Institute for Security and Privacy; Bradley Reaves, North Carolina State University

Available Media

Knowledge and Capabilities that Non-Expert Users Bring to Phishing Detection

Rick Wash, Norbert Nthala, and Emilee Rader, Michigan State University

Available Media

Phishing emails are scam communications that pretend to be something they are not in order to get people to take actions they otherwise would not. We surveyed a demographically matched sample of 297 people from across the United States and asked them to share their descriptions of a specific experience with a phishing email. Analyzing these experiences, we found that email users' experiences detecting phishing messages have many properties in common with how IT experts identify phishing. We also found that email users bring unique knowledge and valuable capabilities to this identification process that neither technical controls nor IT experts have. We suggest that training targeted at how to use this unique knowledge is likely to improve phishing prevention.

8:00 am–8:15 am

Lightning Talks 1

The explainability of machine learning-based systems as a key consideration for usable security
Thomas Y. Chen, Academy for Mathematics, Science, and Engineering

Decision-Making Errors in Mobile Phishing
Rasha Ibrahim, Newcastle University

Solitude: A privacy analysis tool
Dan Hastings, NCC Group

8:15 am–8:30 am

Break

8:30 am–9:30 am

Security in Practice

Session Chair: Yasemin Acar, Max Planck Institute for Security and Privacy

On the Limited Impact of Visualizing Encryption: Perceptions of E2E Messaging Security

Christian Stransky, Leibniz University Hannover; Dominik Wermke, CISPA Helmholtz Center for Information Security; Johanna Schrader, Leibniz University Hannover; Nicolas Huaman, CISPA Helmholtz Center for Information Security; Yasemin Acar, Max Planck Institute for Security and Privacy; Anna Lena Fehlhaber, Leibniz University Hannover; Miranda Wei, University of Washington; Blase Ur, University of Chicago; Sascha Fahl, CISPA, Leibniz University Hannover

Available Media

9:30 am–10:15 am

Social, Mentorship, and Networking Events

Mentoring Program: See the Mentoring Program page for more information.

10:15 am–11:15 am

Ubiquitous Computing

Session Chair: Jose M. Such, King's College London

Evaluating and Redefining Smartphone Permissions with Contextualized Justifications for Mobile Augmented Reality Apps

David Harborth, Goethe University Frankfurt am Main; Alisa Frik, ICSI, University of California Berkeley

Available Media

Augmented reality (AR), and specifically mobile augmented reality (MAR), gained much public attention after the success of Pokémon Go in 2016, and has since found application in online games, social media, entertainment, real estate, interior design, and other services. MAR apps are highly dependent on real-time, context-specific information provided by the different sensors and data processing capabilities of smartphones (e.g., LiDAR, gyroscope, or object recognition). This dependency raises crucial privacy issues for end users. We evaluate whether the existing access permission systems, initially developed for non-AR apps, as well as proposed new permissions relevant for MAR apps, provide sufficient and clear information to users. We address this research goal in two online survey-based experiments with a total of 581 participants. Based on our results, we argue that it is necessary to increase transparency about MAR apps' data practices by requesting users' permission to access certain novel and privacy-invasive resources and functionalities commonly used in MAR apps, such as speech and face recognition. We also find that adding justifications contextualized to the data collection practices of the app improves transparency and can mitigate privacy concerns, at least when the data is used for the users' benefit. A better understanding of the app's practices and lower concerns, in turn, increase the intention to grant permissions. We provide recommendations for better transparency in MAR apps.

11:15 am–11:30 am

Lightning Talks 2

Gallery of Latin American Malware
Sofia Celi, Cloudflare

Trust by design: a new old requirement
Ana Ferreira, CINTESIS - Center for Health Technology and Services Research, Faculty of Medicine, University of Porto, Portugal

11:30 am–11:45 am

Break

11:45 am–12:45 pm

Developers

Session Chair: Mary Ellen Zurko, MIT Lincoln Laboratory

A Qualitative Usability Evaluation of the Clang Static Analyzer and libFuzzer with CS Students and CTF Players

Stephan Plöger, Fraunhofer FKIE; Mischa Meier, University of Bonn; Matthew Smith, University of Bonn, Fraunhofer FKIE

Available Media

Testing software for bugs and vulnerabilities is an essential aspect of secure software development. Two paradigms are particularly prevalent in this domain: static and dynamic software testing. Static analysis has seen widespread adoption across the industry, while dynamic analysis, in particular fuzzing, has recently received much attention in academic circles and has been used very successfully by large corporations such as Google, where, for instance, over 20,000 bugs have been found and fixed in the Chrome project alone. Despite these kinds of success stories, fuzzing has not yet seen the kind of industry adoption static analysis has.

To gain initial insights, we examined the usability of the Clang Static Analyzer and the fuzzer libFuzzer, conducting the first qualitative usability evaluation of the two tools. We ran a mixed factorial design study with 32 CS master's students and six competitive Capture the Flag (CTF) players. Our results show that our participants encountered severe usability issues when trying to get libFuzzer to run at all.

In contrast, most of our participants were able to run the Clang Static Analyzer without significant problems. This shows that, at least in this case, the usability of libFuzzer was worse than that of the Clang Static Analyzer. We make suggestions on how libFuzzer could be improved and compare the two tools.
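For readers unfamiliar with the setup barrier this study probes: using libFuzzer requires writing a fuzz target, a function the fuzzer calls repeatedly with generated inputs, and compiling it with Clang's fuzzer instrumentation. A minimal target looks roughly like this (the bug is contrived for illustration):

    #include <cstddef>
    #include <cstdint>

    // Minimal libFuzzer fuzz target. The fuzzer calls this function
    // repeatedly with generated inputs and reports inputs that crash it.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
        // Contrived bug: crash on any input starting with "FUZ".
        if (size >= 3 && data[0] == 'F' && data[1] == 'U' && data[2] == 'Z') {
            __builtin_trap();
        }
        return 0;
    }

    // Build and run (libFuzzer supplies main()):
    //   clang++ -g -fsanitize=fuzzer,address fuzz_target.cc -o fuzz_target
    //   ./fuzz_target

The Clang Static Analyzer, by contrast, can be pointed at an existing build without any harness code (e.g., via scan-build make), which aligns with the usability gap the study reports.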

Deciding on Personalized Ads: Nudging Developers About User Privacy

Mohammad Tahaei, University of Edinburgh; Alisa Frik, ICSI and University of California, Berkeley; Kami Vaniea, University of Edinburgh

Available Media

Mobile advertising networks present personalized advertisements to developers as a way to increase revenue. These types of ads use data about users to select potentially more relevant content. However, choice framing also impacts app developers' decisions, which in turn affect their users' privacy. Currently, ad networks provide choices in developer-facing dashboards that control the types of information collected by the ad network as well as how users will be asked for consent. Framing and nudging have been shown to impact users' choices about privacy; we anticipated that they have a similar impact on choices made by developers. We conducted a survey-based online experiment with 400 participants with experience in mobile app development. Across six conditions, we varied the choice framing of options around ad personalization. Participants in the condition where the privacy consequences of ad personalization were highlighted in the options were significantly (11.06 times) more likely to choose non-personalized ads than participants in the Control condition with no information about privacy. Participants' choice of ad type is driven by impact on revenue, user privacy, and relevance to users. Our findings suggest that developers are impacted by interfaces and need transparent options.

Benefits and Drawbacks of Adopting a Secure Programming Language: Rust as a Case Study

Kelsey R. Fulton and Anna Chan, University of Maryland; Daniel Votipka, Tufts University; Michael Hicks and Michelle L. Mazurek, University of Maryland

Available Media

Programming languages such as Rust and Go were developed to combat common and potentially devastating memory safety-related vulnerabilities. But adoption of new, more secure languages can be fraught and complex. To better understand the benefits and challenges of adopting Rust in particular, we conducted semi-structured interviews with professional, primarily senior software developers who have worked with Rust on their teams or tried to introduce it (n = 16), and we deployed a survey to the Rust development community (n = 178). We asked participants about their personal experiences using Rust, as well as experiences using Rust at their companies. We find a range of positive features, including good tooling and documentation, benefits for the development lifecycle, and improvement of overall secure coding skills, as well as drawbacks including a steep learning curve, limited library support, and concerns about the ability to hire additional Rust developers in the future. Our results have implications for promoting the adoption of Rust specifically and secure programming languages and tools more generally.

12:45 pm–1:00 pm

Break

1:00 pm–2:00 pm

Work (and Learn) from Home

Session Chair: Lujo Bauer, Carnegie Mellon University

Examining the Examiners: Students' Privacy and Security Perceptions of Online Proctoring Services

David G. Balash, Dongkun Kim, and Darika Shaibekova, The George Washington University; Rahel A. Fainchtein and Micah Sherr, Georgetown University; Adam J. Aviv, The George Washington University

Available Media

In response to the COVID-19 pandemic, educational institutions quickly transitioned to remote learning. The problem of how to perform student assessment in an online environment has become increasingly relevant, leading many institutions and educators to turn to online proctoring services to administer remote exams. These services employ various student monitoring methods to curb cheating, including restricted ("lockdown") browser modes, video/screen monitoring, local network traffic analysis, and eye tracking. In this paper, we explore the security and privacy perceptions of the student test-takers being proctored. We analyze user reviews of proctoring services' browser extensions and subsequently perform an online survey (n=102). Our findings indicate that participants are concerned about both the amount and the personal nature of the information shared with the exam proctoring companies. However, many participants also recognize a trade-off between pandemic safety concerns and the arguably invasive means by which proctoring services ensure exam integrity. Our findings also suggest that institutional power dynamics and students' trust in their institutions may dissuade students' opposition to remote proctoring.

Virtual Classrooms and Real Harms: Remote Learning at U.S. Universities

Shaanan Cohney, Princeton University / University of Melbourne; Ross Teixeira, Anne Kohlbrenner, Arvind Narayanan, and Mihir Kshirsagar, Princeton University; Yan Shvartzshnaider, Princeton University / York University; Madelyn Sanfilippo, Princeton University / University of Illinois at Urbana-Champaign

Available Media

2:00 pm–2:15 pm

Lightning Talks 3

How Hackers Are Leveraging Media Trends To Serve Malware
Ken Carnesi, DNSFilter

Proposing the ‘Signature’ Auth Scheme: More Usable AND Secure Than Status Quo Solutions!
Ben Towne, Sigma Resources, LLC

2:15 pm–2:30 pm

Closing Remarks

General Chair: Sonia Chiasson, Carleton University

2:30 pm

Ice Cream Social