All sessions will be held in Grand Ballroom E unless otherwise noted.
Tuesday, August 13
8:00 am–9:00 am
Grand Ballroom Foyer
9:00 am–9:30 am
Opening Remarks
Program Co-Chairs: David Brumley, ForAllSecure, and Emily Stark, Google
9:30 am–10:00 am
Vasudevan Nagendra, Stony Brook University
In this talk, I will cover two fundamental challenges that existing IoT infrastructures face today: (i) the inability of existing automation frameworks to intuitively capture users' automation or policy intents, and (ii) the inability of existing tools (which detect static conflicts among automation rules) to proactively detect potential run-time conflicts and violations at the policy compilation stage. In the first part of the talk, I will discuss the need for novel graph-based automation specification mechanisms. In the second part, I will discuss the limitations of existing security analyzer tools and demonstrate the need for new security analysis mechanisms, such as: (i) the ability to detect potential run-time violations at policy compile time, (ii) behavioral analysis and situational awareness, and (iii) other analyzers that could proactively detect fundamental issues in the automation. I will conclude by briefly covering potential approaches to addressing the current situation.
Razvan Beuran, Japan Advanced Institute of Science and Technology (JAIST)
There are open formats for things like exchanging music (MusicXML) and e-learning content (SCORM), so let's make a standard open format for exchanging the content of cybersecurity training activities! This would help establish an ecosystem around cybersecurity education and training so that educators can create and share activities that include hands-on practice without having to worry about the implementation details of the actual training environment (cyber range). We expect that such an ecosystem would significantly contribute to improving the quality of cybersecurity education and training worldwide.
Matthew Elder, JHU/APL
Malware Authors Are Just Writing Software: What Can the Software Development Life Cycle and Social Network Analysis Teach Us About Malware Attribution?
Malware is just software—with some special characteristics—and malware authors are just specialized software developers. Software development, including malware development, is an inherently "social" activity—all software is composed of contributions from multiple authors, either explicitly by a software development team or implicitly through the inclusion of libraries and other shared code. Software development is characterized by a software development life cycle, and given that it is a social process, social network analysis can be applied to the contributors and contributions for a given software binary. We argue that each malware binary is produced using some form of a software development life cycle, and there will be clues—artifacts—as to the contributions and contributors for that piece of software that can be studied using social network analysis.
10:00 am–10:30 am
Break with Refreshments
Grand Ballroom Foyer
10:30 am–11:00 am
Session 1 Discussion
11:00 am–11:40 am
Qi Alfred Chen, University of California, Irvine
Transportation systems will soon be transformed profoundly by two recent technology advances: Connected Vehicles (CV) and Autonomous Vehicles (AV). This transformation is creating a series of next-generation transportation systems such as smart traffic lights and self-driving cars, but it may also introduce new security problems and challenges. In this talk, I will describe my recent research efforts, which performed the first software security analyses of real-world CV and AV systems [NDSS '18, CCS '19] and discovered new security problems that can cause unsafe driving decisions and massive traffic jams. I will conclude by discussing research challenges and future research directions in this problem space.
Arun Raghuramu, Forescout
Network segmentation is a fundamental measure to limit the attack surface in any network environment. The need for and importance of network segmentation are more pronounced in environments with sensitive and easily exploitable IoT devices and data (e.g., hospital networks). Based on network data collected from over 4 million devices across more than 250 enterprise networks, we compare and contrast the Layer 2 segmentation practices in sensitive healthcare environments versus non-healthcare environments. We argue for the necessity of 'true segmentation' rather than the illusion of segments at Layer 3.
Anil Kurmus, IBM Research – Zurich
Speculative execution attacks: The new memory corruption?
In this talk, we draw a parallel between transient execution attacks and memory corruption, arguing that there will be strong defenses for some categories (e.g., fault-based transient execution attacks), while other attack categories are here to stay (e.g., Spectre v1). We will touch on what we believe are fruitful mitigation strategies for the remaining attacks, namely a combination of careful threat modeling for applications, augmented ISAs, and compiler support, hopefully opening the way to a lively hallway discussion.
Xinyu Xing, The Pennsylvania State University
Exploitation automation has been well studied in the past. However, it has not yet facilitated the development of working exploits. In this talk, I will cover the components that exploitation automation techniques fail to encompass and discuss the challenges of building these components.
Ariana Mirian, University of California, San Diego
Subpopulations have different characteristics and behaviors that are often not captured in scientific studies. The medical field recently realized the negative outcomes in treating all populations the same; many medications have harmful side effects or are not as effective for minority populations, the same populations that are routinely underrepresented in clinical trials. In security and measurement, segmenting user populations based on their demographics could allow us to provide targeted security recommendations and accommodate user behaviors better. In this talk, I'll discuss the need in empirical security for segmenting user populations as well as the subtleties in publishing this research, namely that results could be misconstrued to incorrectly blame the target population.
11:40 am–12:30 pm
Session 2 Discussion
12:30 pm–2:00 pm
2:00 pm–2:40 pm
Marco Slaviero, Thinkst Applied Research
Where's the honeypot love? Honeypots have largely been relegated to toys in academic settings, with very little notable recent work. We bring data from over 600 commercial deployments showing that internal honeypots are remarkably effective for breach detection. We'll expand on this contradiction of an effective approach being ignored by academics, touch on the obsession with network honeypots, and point to research directions for non-network honeypots.
Jack Britton, Virtru
It's not a data deluge, it's a data desert: Security's data scarcity problem
The big data revolution has easily surpassed the buzzword stage, but no one talks about how the lack of data is greatly limiting advances in security. Defenders and researchers struggle with finding representative samples thanks to understandable IP and PII concerns. Even information sharing groups still can't reap the benefits of advanced data analytics due to limited data access. I'll discuss this data scarcity problem, argue that we can still collaborate and share data while maintaining security and privacy, and take a sneak peek at an innovative future without data deserts.
Alana Maurushat, Professor of Cybersecurity and Behaviour, Western Sydney University
Privacy Bounties - a good idea?
Bug bounties are the 'du jour' solution to the historical problems with vulnerability disclosures, properly incentivizing security researchers while limiting their exposure to liability and criminal sanction. Are bug bounty programs like Bugcrowd and HackerOne suited to other contexts, such as provable privacy? This lightning talk will address the need for provable privacy validation that goes beyond internal testing and mere assertions that some form of anonymization or pseudonymization is being used to protect privacy in the world of extreme data processing. Transparency and programs that incentivize white-hat testing for security could be adapted to help build confidence in empirical approaches to differential privacy and other forms of anonymization and pseudonymization of personal information.
Josh Payne, Stanford University and IBM Research
Because of the recent explosion in IoT device adoption in homes and businesses, smart-environment security has become increasingly complex. Adversaries are perpetually finding new ways to exploit IoT networks, and knowing the security state of a smart home or enterprise is necessary to combat these attacks. We propose a new approach to threat modeling in this domain: an "Attack Circuit," a modular attack graph describing the IoT network, constructed using known vulnerabilities, natural language processing, and SIEM logs. We've performed experiments on IoT networks to demonstrate the efficacy of the proposed techniques.
Coralie Busse-Grawitz, ETH Zurich
Much current security research relies heavily on machine learning to find a model for a complex, poorly understood truth. However, offloading the task of finding the truth to a tool ultimately rests on circular reasoning, rendering these security applications useless and dangerous: measuring a model's quality requires a dataset, whose representativeness can only be guaranteed with thorough a priori knowledge of the truth, creating an implicit tautology. This talk pushes for a clear separation between modeling and finding the truth, to give future security systems steady ground to stand on.
2:40 pm–3:30 pm
Session 3 Discussion
3:30 pm–4:00 pm
Break with Refreshments
Grand Ballroom Foyer
4:00 pm–4:30 pm
Jacob Erickson and Jewel Yuzon, Northeastern University, Khoury College of Computer Sciences
What You Don't Expect When You're Expecting: Security and Privacy Analysis of Femtech
In this talk, we focus on femtech, a category of software, diagnostics, and products that serve female health needs. The most widely used services include fertility solutions, period-tracking, pregnancy and nursing care, women's sexual wellness, and reproductive system health care.
In 2018, femtech companies raised $392 million in funding, and some estimates project that femtech will be a $50 billion industry by 2025. While acknowledging the unprecedented growth and potential of the femtech industry, in this talk we also recognize the importance and sensitivity of, and the ease of access to, the data recorded and shared by femtech systems. We examine the range of threats arising from the misuse and abuse of femtech apps. We then audit a selection of femtech apps to identify what data such apps actually record, how the data are stored and transmitted, and how they are shared with data aggregators. The goal of this talk is to point out the severe consequences of possible misuse of femtech services, and to examine appropriate technical and regulatory approaches to improving the current industry.
Smriti Bhatt, Texas A&M University
Wearable IoT Devices and Cyber Warfare: With the emerging IoT landscape, wearable connected devices have gained popularity for health-related purposes as well as a fashion trend. Wearable devices collect sensitive user data, store it, and disseminate it to other points in the network. Use of these IoT devices by specific officials, especially those with top security clearance, can enable cyber attackers to exploit the devices to gain access to security-sensitive data and thus run more targeted attacks on a nation. For example, as the Department of Defense (DoD) found, fitness trackers leaked the movements of troops. Going forward, there is a persistent risk that the weaknesses and vulnerabilities of wearable IoT devices will be exploited to conduct cyber warfare.
Yizheng Chen, Columbia University
Building robust machine learning models has always been a cat-and-mouse game, with new attacks constantly devised to defeat the defenses. Recently, a new paradigm has emerged to train verifiably robust machine learning models for image classification tasks. We can use this technique, verifiably robust training, to build robust classifiers for security-relevant applications, e.g., malware and Twitter spam detection. We can train classifiers verified to be robust against building-block attacks, which makes it harder for attackers to come up with more sophisticated attacks.
Roland Meier, ETH Zurich
The possibilities, limitations, pros, and cons of using programmable data planes for network security
Recent advances in programmable network architectures make it possible to run complex algorithms in the data plane of a network, i.e., with visibility of all packets and at line rate. While this new concept receives great attention in the networking community, it is barely used for security purposes. This is surprising because many network security solutions suffer from limitations induced by off-path traffic analysis (e.g., w.r.t. required sampling, induced delay, and needed storage), none of which apply when processing traffic directly in the data plane.
In this talk, we bring the concept of programmable data planes closer to the security community by (i) explaining what is (and is not) possible in programmable data planes and (ii) discussing when it makes (no) sense to do something in the data plane.
4:30 pm–5:00 pm
Session 4 Discussion
5:00 pm–5:30 pm
Awards and Closing Remarks
Program Co-Chairs: David Brumley, ForAllSecure, and Emily Stark, Google
6:00 pm–7:00 pm
Tuesday Happy Hour
Sponsored by Intel
Mingle with other attendees while enjoying snacks and beverages. Attendees of USENIX Security '19 and all co-located events are welcome.