HotEdge '20 Workshop Program

Papers are available for download below to registered attendees now and to everyone beginning June 25, 2020. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].

All the times listed below are in Pacific Daylight Time (PDT).

Attendee Files 
HotEdge '20 Paper Archive (ZIP)

Thursday, June 25, 2020

8:00 am–8:10 am

Opening Remarks

Program Co-Chairs: Irfan Ahmad, Magnition, and Ming Zhao, Arizona State University

8:10 am–9:40 am

Paper Session #1: International Papers

Session Chairs: Chris Stewart, The Ohio State University, and Dawei Li, Samsung Research

SaFace: Towards Scenario-aware Face Recognition via Edge Computing System

Zhe Zhou, Bingzhe Wu, Zheng Liang, Guangyu Sun, Chenren Xu, and Guojie Luo, Peking University

Available Media

Deep Convolutional Neural Networks (CNNs) have achieved remarkable progress in the field of face recognition (FR). However, developing a robust FR system in the real world is still challenging due to the wide variance in illumination, visual quality, and camera angle across different scenarios. These factors can cause a significant accuracy drop if the pretrained model does not generalize perfectly. To mitigate this issue, we present a solution named SaFace, which improves FR accuracy through unsupervised online learning in an edge computing system. Specifically, we propose a novel scenario-aware FR flow, decouple the flow into phases, and map each phase to a different level of a three-layer edge computing system. For evaluation, we implement a prototype and demonstrate its advantages in both improving recognition accuracy and reducing processing latency.
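
Editor's note: the abstract does not spell out the phase decomposition, but a minimal sketch of mapping pipeline phases onto a three-layer edge hierarchy might look like the following. The phase names and tier assignments are illustrative assumptions, not SaFace's actual design.

    # Hypothetical mapping of FR pipeline phases onto a three-layer edge system.
    PHASE_TO_TIER = {
        "face_detection":      "camera",       # layer 1: end device
        "scenario_estimation": "edge_server",  # layer 2: nearby edge node
        "online_adaptation":   "edge_server",  # unsupervised fine-tuning
        "feature_matching":    "cloud",        # layer 3: central gallery
    }

    def place(phase: str) -> str:
        """Return the tier a pipeline phase should run on."""
        return PHASE_TO_TIER[phase]

    for phase in PHASE_TO_TIER:
        print(f"{phase} -> {place(phase)}")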

Sharing and Caring of Data at the Edge

Animesh Trivedi, VU Amsterdam; Lin Wang, VU Amsterdam and TU Darmstadt; Henri Bal and Alexandru Iosup, VU Amsterdam

Available Media

Edge computing is an emerging computing paradigm in which data is generated and processed in the field using distributed computing devices. Many applications, such as real-time video processing, augmented/virtual reality gaming, and environment sensing, benefit from such decentralized, close-to-user deployments where low-latency, real-time results are expected. As with any distributed application, one of the key challenges in developing collaborative applications is how to efficiently share data and state among multiple edge clients. The dynamic and heterogeneous environment, together with diverse application requirements, makes data sharing at the edge a challenging problem. Although there have been prior efforts, a systematic understanding of the area is missing. In this paper, we conduct a methodical study of different edge applications, their data sharing needs, and the designs of state-of-the-art systems. In the process, we identify design options, under-explored opportunities, and associated challenges. We then present Griffin, our edge data sharing service, and seek feedback on its design.

Transparent Microsegmentation in Smart Home IoT Networks

Amr Osman, TU Dresden; Armin Wasicek, Avast Inc., USA; Stefan Köpsell and Thorsten Strufe, TU Dresden

Available Media

Driven by the Internet of Things (IoT) and 5G, the growing size and complexity of smart home networks lead to an increased attack surface. Smart home IoT devices are typically online 24/7, run out-of-date firmware, are not regularly patched against the latest security vulnerabilities, and often collect sensitive data and send it to the cloud. In this work we propose microsegmentation as a means to reduce the attack surface of smart home networks with the assistance of the edge cloud. We implement two network functions that cooperate to enforce fine-grained network security policies in smart homes. One function builds an inventory of all devices and their vulnerabilities. The second uses that information to dynamically allocate IoT devices to microsegments and isolates them from one another using inter- and intra-segment network-level security policies. We evaluated our approach using three different IoT network security metrics and IoT topologies. In the best case, microsegmentation reduces the attack surface exposed to a Mirai-infected IoT webcam by as much as 65.85%, at the cost of blocking 2.16% of the otherwise-valid network flows between devices.
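
Editor's note: as a rough illustration of the second network function, the sketch below groups inventoried devices into microsegments by a vulnerability heuristic and emits default-deny inter-segment rules. The device data, threshold, and rule format are invented for illustration.

    from collections import defaultdict

    # Hypothetical inventory produced by the first network function.
    inventory = [
        {"dev": "webcam",     "cves": 12},
        {"dev": "thermostat", "cves": 1},
        {"dev": "tv",         "cves": 4},
    ]

    def segment_of(device) -> str:
        # Simple heuristic: isolate highly vulnerable devices together.
        return "quarantine" if device["cves"] >= 10 else "trusted"

    segments = defaultdict(list)
    for d in inventory:
        segments[segment_of(d)].append(d["dev"])

    # Default-deny between distinct microsegments (inter-segment policy).
    for src in segments:
        for dst in segments:
            if src != dst:
                print(f"deny {src} -> {dst}")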

Benchmarking in the Dark: On the Absence of Comprehensive Edge Datasets

Oleg Kolosov and Gala Yadgar, Technion - Israel Institute of Technology; Sumit Maheshwari and Emina Soljanin, Rutgers University

Available Media

Application workloads are used for system analysis, design, optimization, and evaluation. As systems and their applications evolve, new workloads are required to reflect their characteristics and bottlenecks. Edge-computing systems are a new model with unique characteristics, including heterogeneous and geo-distributed components and complex failure patterns. Their workload characteristics are also unique, including unpredictable load and user mobility combined with stringent latency and bandwidth requirements. Supporting these workloads would require resource management and allocation policies that take into account a rich set of attributes.

In this paper, we show that currently available datasets include only partial subsets of these attributes. Moreover, having been collected on existing systems, they do not reflect the unique characteristics of edge systems and applications. At the same time, current edge systems in early deployment stages are not ready for the collection of representative application workloads. We show how to bridge the gap between required and available datasets by workload composition: we combine attributes from several available datasets to create realistic representations of edge systems, their users, and their workloads.
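
Editor's note: a minimal sketch of the workload-composition idea, splicing user mobility from one dataset onto request load from another. Both input traces here are made-up stand-ins for real public datasets.

    load_trace = [  # e.g., per-second request counts from a web trace
        {"t": 0, "requests": 120},
        {"t": 1, "requests": 95},
    ]
    mobility_trace = [  # e.g., user -> cell positions from a mobility dataset
        {"t": 0, "user": "u1", "cell": "A"},
        {"t": 1, "user": "u1", "cell": "B"},
    ]

    def compose(load, mobility):
        """Join attributes from both traces into one edge workload record."""
        by_t = {m["t"]: m for m in mobility}
        for rec in load:
            m = by_t.get(rec["t"], {})
            yield {**rec, "user": m.get("user"), "cell": m.get("cell")}

    for row in compose(load_trace, mobility_trace):
        print(row)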

Synthesizing Plausible Infrastructure Configurations for Evaluating Edge Computing Systems

Thomas Rausch, Clemens Lachner, Pantelis A. Frangoudis, Philipp Raith, and Schahram Dustdar, TU Wien

Available Media

This paper proposes a framework for synthesizing infrastructure configurations for evaluating edge computing systems under different conditions. There are a number of tools to simulate or emulate edge systems, and while they typically provide ways of modeling infrastructure and network topologies, they lack reusable building blocks common to edge scenarios. Consequently, most edge computing system evaluations to date rely on either highly application-specific testbeds or abstract scenarios and abstract infrastructure configurations. We analyze four existing or emerging edge infrastructure scenarios, from which we elicit common concepts. The scenarios serve as input to synthesize plausible infrastructure configurations that are parameterizable in cluster density, device heterogeneity, and network topology. We demonstrate how our tool can generate synthetic infrastructure configurations for the reference scenarios, and how these configurations can be used to evaluate aspects of edge computing systems.
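
Editor's note: the parameter names below (cluster density, heterogeneity) mirror the abstract, but the generation logic is an illustrative assumption, not the paper's actual algorithm.

    import random

    DEVICE_TYPES = ["rpi4", "jetson-tx2", "xeon-sff"]  # hypothetical catalog

    def synthesize(num_clusters: int, density: int, heterogeneity: float, seed=42):
        """Generate a synthetic topology: clusters of `density` nodes each."""
        rng = random.Random(seed)
        topology = []
        for c in range(num_clusters):
            nodes = []
            for n in range(density):
                # With probability `heterogeneity`, pick a non-default device.
                kind = (rng.choice(DEVICE_TYPES) if rng.random() < heterogeneity
                        else DEVICE_TYPES[0])
                nodes.append({"id": f"c{c}-n{n}", "type": kind})
            topology.append({"cluster": c, "nodes": nodes})
        return topology

    print(synthesize(num_clusters=2, density=3, heterogeneity=0.5))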

9:40 am–10:10 am

Break

10:10 am–11:40 am

Paper Session #2: Edge Networks & Applications

Session Chairs: Vaishnavi Ranganathan, Microsoft Research, and Srikanth Kambhatla, Intel

FERRET: Fall-back to LTE Microservices for Low Latency Data Access

Muhammad Taqi Raza, University of Arizona; Fatima Muhammad Anwar, UMass Amherst; Dongho Kim, AT&T Research Lab; Kyu-Han Kim, HP Enterprise Research Lab

Available Media

Motivated by Software Defined Networking (SDN), the LTE standards body (3GPP) has recently proposed splitting monolithic LTE Network Functions (NFs) into control-plane and data-plane modules for better performance, flexibility, and agility. The data-plane logic is pushed to the edge of the network while control-plane functionality is retained at the core. However, both the edge and core modules are involved in executing LTE control-plane procedures (e.g., device registration/deregistration and mobility) as well as LTE data-plane services (e.g., voice over LTE and video streaming). We discover that these decoupled modules, being part of the same LTE network function, interact frequently and cause deadlocks and races. In this paper, we argue that an SDN-style approach may not work for LTE NFs because of their monolithic design. We instead argue for retaining the legacy LTE design by not splitting its NFs: our idea is to keep the handling of all LTE control-plane procedures at the core, while moving the execution of LTE data-plane services to the edge as microservices. We propose FERRET, which is inspired by the success of the Circuit Switched Fallback (CSFB) procedure and falls back to a specific LTE microservice for each user requesting a particular LTE service. FERRET first records the signaling-message exchange of the LTE service establishment phase at the core and then replays these messages at a dedicated microservice to enable handling of that service. The microservice then executes the LTE service and forwards data at the edge.
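
Editor's note: a rough sketch of the record-and-replay idea, capturing a service-establishment signaling exchange at the core and replaying it at an edge microservice to recreate service state. The message types and microservice are invented placeholders.

    recorded = []

    def record(msg: dict):
        recorded.append(msg)

    def replay(microservice):
        for msg in recorded:
            microservice.handle(msg)

    class VoLTEMicroservice:
        def __init__(self):
            self.state = []
        def handle(self, msg):
            self.state.append(msg["type"])  # rebuild session/bearer state

    # Core records the establishment exchange once...
    for m in ({"type": "attach"}, {"type": "create_session"}, {"type": "setup_bearer"}):
        record(m)
    # ...then the edge microservice replays it to take over data-plane handling.
    svc = VoLTEMicroservice()
    replay(svc)
    print(svc.state)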

EdgeBalance: Model-Based Load Balancing for Network Edge Data Planes

Wei Zhang, The George Washington University; Abhigyan Sharma, AT&T Labs Research; Timothy Wood, The George Washington University

Available Media

Edge data centers are an appealing place for telecommunication providers to offer in-network processing such as VPN services, security monitoring, and 5G. Placing these network services closer to users can reduce latency and core network bandwidth, but the deployment of network functions at the edge poses several important challenges. Edge data centers have limited resource capacity, yet network functions are resource-intensive with strict performance requirements. Replicating services at the edge is needed to meet demand, but balancing the load across multiple servers can be challenging due to diverse service costs, server and flow heterogeneity, and dynamic workload conditions. In this paper, we design and implement EdgeBalance, a model-based load balancer for edge network data planes. EdgeBalance predicts the CPU demand of incoming traffic and adaptively distributes flows to servers to keep them evenly balanced. We overcome several challenges specific to network processing at the edge to improve throughput and latency over static load balancing and monitoring-based approaches.
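
Editor's note: a minimal sketch of model-based flow dispatch in this spirit (not EdgeBalance's actual model): estimate each new flow's CPU demand from its service type and send it to the server whose predicted load is lowest. The per-service costs are made-up numbers.

    CPU_COST = {"vpn": 2.0, "ids": 3.5, "firewall": 1.0}  # relative cost/pkt

    servers = {"s1": 0.0, "s2": 0.0}  # predicted CPU load per server

    def dispatch(flow_service: str, pkt_rate: float) -> str:
        demand = CPU_COST[flow_service] * pkt_rate
        target = min(servers, key=servers.get)  # least predicted load
        servers[target] += demand
        return target

    for svc, rate in [("vpn", 10), ("ids", 5), ("firewall", 40)]:
        print(svc, "->", dispatch(svc, rate))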

P4EC: Enabling Terabit Edge Computing in Enterprise 4G LTE

Max Hollingsworth, Jinsung Lee, Zhang Liu, Jihoon Lee, Sangtae Ha, and Dirk Grunwald, University of Colorado Boulder

Available Media

Traditional LTE networks route Internet traffic through a packet gateway. Enterprise LTE networks with a cloud-based core use a similarly faraway gateway. To provide low-latency services, such as accessing nearby mobile devices, fog services, or localized information, a "local exit" to the Internet is needed to avoid traversing the LTE core. To create a local exit, we build P4EC, a terabit-capable mobile edge cloud that uses a programmable switch to distinguish and reroute traffic. P4EC is placed physically near the cellular deployment and reroutes specifically identified traffic to and from the mobile device. P4EC implements packet redirection using P4-programmable switching hardware, which supports terabit throughput on inexpensive equipment, and operates without any modification to the LTE core. This work describes a working proof of concept operating in an actual LTE network.
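
Editor's note: a Python emulation of the local-exit decision that a P4 match-action table would make; traffic matching a local service prefix bypasses the packet gateway, everything else follows the normal core path. The prefixes are illustrative.

    import ipaddress

    LOCAL_EXIT_PREFIXES = [ipaddress.ip_network("10.42.0.0/16")]  # assumed

    def route(dst_ip: str) -> str:
        addr = ipaddress.ip_address(dst_ip)
        if any(addr in net for net in LOCAL_EXIT_PREFIXES):
            return "local-exit"   # reroute at the edge switch
        return "lte-core"         # default: tunnel to the faraway gateway

    print(route("10.42.7.9"))   # -> local-exit
    print(route("8.8.8.8"))     # -> lte-core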

Interactive Planning-based Cognitive Assistance on the Edge

Zhiming Hu, Maayan Shvo, Allan Jepson, and Iqbal Mohomed, Samsung AI Center-Toronto

Available Media

Real-time cognitive assistance is one of the most exciting applications in the age of Augmented Reality (AR). Several research groups have explored the use of cognitive assistants, embodied within smartphones or wearable AR glasses, to guide users through unfamiliar tasks (e.g., assembling a piece of furniture or following a recipe). These systems generally consist of two high-level modules: a perceptual module (e.g., a deep-learning-based vision system) and a cognitive module (implemented via a rule engine or state machine), and must operate in near real time. As such, cognitive assistants are illustrative use cases for edge computing. While prior work has focused on pushing the frontier of what is possible, it suffers from shortcomings that hinder practical deployment. First, much research on cognitive assistants has assumed an accurate visual perception system, which may not hold in practice. Second, while some work has explored user errors in the performance of tasks, the manner in which this is done is not scalable (i.e., possible errors are explicitly specified a priori in a state machine representation). To address these limitations, we propose (i) involving users in resolving the ambiguity/uncertainty of visual inputs and (ii) employing automated planning tools and execution monitoring techniques to keep track of task states and to generate new plans to recover from users' mistakes when necessary. To verify the feasibility of our system, we implemented and tested it on both an Android phone and a HoloLens 2, supported by an edge server for offloading computation.
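
Editor's note: a toy sketch of an execution-monitoring loop, comparing the observed task state against the plan's expectation and replanning on divergence. The plan, goal, and observation function are invented; a real system would query the perception module and an automated planner here.

    def replan(current_state, goal):
        # Placeholder for an automated planner (e.g., a PDDL solver).
        return [f"step_to_{goal}_from_{current_state}"]

    plan = ["attach_leg_A", "attach_leg_B", "flip_table"]
    goal = "table_assembled"

    def execute(plan, observe):
        for expected in list(plan):
            observed = observe(expected)
            if observed != expected:           # user error or perception mismatch
                plan = replan(observed, goal)  # recover with a fresh plan
                print("replanned:", plan)
                break
            print("ok:", expected)

    execute(plan, observe=lambda step: step if step != "flip_table"
            else "table_upside_down")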

A Comparison of Communication Mechanisms in Vehicular Edge Computing

Liangkai Liu, Baofu Wu, and Weisong Shi, Wayne State University

Available Media

Connected and Autonomous Vehicles (CAVs) are becoming increasingly popular in both the automotive research community and industry. Communication mechanisms play an important role in CAV applications and services. However, the lack of a detailed comparison of different communication mechanisms is a major obstacle to the deployment of CAV applications and services. In this paper, we set up an end-to-end prototype that supports WiFi-, LTE-, and DSRC-based communication and evaluate its performance in terms of latency, power dissipation, and system utilization. We summarize three observations relevant to the real-world deployment of vehicular edge computing (VEC) applications.

11:40 am–11:50 am

Break

11:50 am–12:30 pm

Fireside Chat

12:30 pm–1:00 pm

Virtual Poster Session

Session Chairs: Vaishnav Janardhan, Akamai Technologies, and Ragaad Al-Tarawneh, Intel

Friday, June 26, 2020

8:00 am–8:10 am

Opening Remarks

Program Co-Chairs: Irfan Ahmad, Magnition, and Ming Zhao, Arizona State University

8:10 am–9:40 am

Paper Session #3: Data, Storage, and Accelerators

Session Chairs: Vasily Tarasov, IBM Almaden, and Liting Hu, Florida International University

Scale-out Edge Storage Systems with Embedded Storage Nodes to Get Better Availability and Cost-Efficiency At the Same Time

Jianshen Liu, UC Santa Cruz; Matthew Leon Curry, Sandia National Laboratories; Carlos Maltzahn, UC Santa Cruz; Philip Kufeldt, Seagate Technology

Available Media

In the resource-rich environment of data centers, most failures can be handled by quickly failing over to redundant resources. In contrast, a failure in an edge infrastructure with limited resources might require maintenance personnel to drive to the location to fix the problem. The operational cost of these "truck rolls" to locations in the edge infrastructure competes with the operational cost incurred by the extra space and power needed for redundant resources at the edge. Computational storage devices with network interfaces can act as network-attached storage servers and offer a new design point for storage systems at the edge. In this paper we hypothesize that a system consisting of a larger number of such small "embedded" storage nodes provides higher availability due to a larger number of failure domains while also saving operational cost in terms of space and power. As evidence for our hypothesis, we compared the probability of data loss between two different types of storage systems: one constructed with general-purpose servers, and the other constructed with embedded storage nodes. Our results show that the storage system constructed with general-purpose servers has a 7 to 20 times higher risk of losing data than the one constructed with embedded storage devices. We also compare the two alternatives in terms of power and space using the Media-Based Work Unit (MBWU) that we developed in an earlier paper as a reference point.
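
Editor's note: a back-of-envelope sketch (not the paper's model) of how replication and per-node failure probability interact; with independent failures, a replicated object is lost only when every node holding a replica fails. All numbers are illustrative.

    def p_object_loss(p_node_fail: float, replicas: int = 3) -> float:
        """Probability all `replicas` nodes holding an object fail together."""
        return p_node_fail ** replicas

    print(p_object_loss(0.02))   # e.g., general-purpose servers: 8e-06
    print(p_object_loss(0.01))   # e.g., embedded storage nodes:  1e-06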

Adaptively Compressing IoT Data on the Resource-constrained Edge

Tao Lu, Marvell Technology Group; Wen Xia and Xiangyu Zou, Harbin Institute of Technology, Shenzhen, China; Qianbin Xia, Marvell Technology Group

Available Media

Big IoT data needs to be moved frequently between the edge and the cloud for efficient analysis and storage. Data movement is costly in low-bandwidth wide area network environments. Data compression can dramatically reduce data size to mitigate the bandwidth bottleneck. However, compression is compute-intensive, and compression throughput can be limited by available CPU resources. The computation capability available on the resource-constrained edge thus has a direct impact on the edge-to-cloud data transfer rate. Our study reveals that compressors, including gzip, bzip2, lzma, and zstd, perform very differently under various resource constraints. This motivates us to propose models for selecting the best compressor under CPU, network, and storage resource limitations on the edge. We implement ZipMate, a middleware that enables a resource-aware and adaptive compression policy based on these models. Our evaluation shows that adaptive policies consistently outperform unitary or random compressor selection policies.
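
Editor's note: a minimal sketch in the spirit of resource-aware compressor selection (not ZipMate's actual policy): measure each codec's throughput and ratio on a sample, then pick the one maximizing the effective edge-to-cloud delivery rate for a given bandwidth. Only stdlib codecs are shown; zstd, also studied in the paper, needs the third-party zstandard package.

    import bz2, lzma, time, zlib

    CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

    def best_codec(sample: bytes, bandwidth_mbps: float) -> str:
        best, best_rate = None, 0.0
        for name, fn in CODECS.items():
            t0 = time.perf_counter()
            out = fn(sample)
            thr = len(sample) / (time.perf_counter() - t0) / 1e6  # MB/s of input
            ratio = len(sample) / len(out)
            # Pipeline rate for raw data: bounded by CPU and by link * ratio.
            rate = min(thr, bandwidth_mbps / 8 * ratio)
            if rate > best_rate:
                best, best_rate = name, rate
        return best

    print(best_codec(b"sensor reading 42.0, " * 5000, bandwidth_mbps=10))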

Understanding and Tackling the Hidden Memory Latency for Edge-based Heterogeneous Platform

Zhendong Wang, Zhen Wang, Cong Liu, and Yang Hu, The University of Texas at Dallas

Available Media

With the burgeoning of autonomous driving, edge-deployed integrated CPU/GPU (iGPU) platforms have gained significant attention from both academia and industry. NVIDIA offers a series of Jetson iGPU platforms that perform well in terms of computation capability, power consumption, and physical size. However, these iGPU platforms typically contain very limited physical memory, which can become the bottleneck for autonomous driving and other edge computing applications. Although the Unified Memory (UM) model in GPU programming can reduce the memory footprint, the programming legacy of the UM model initializes data on the CPU side by default, as the conventional copy-and-execute model does, which causes significant latency in application execution. In this paper, we propose an enhanced unified memory management model (eUMM), which delivers a prefetch-enhanced data initialization method on the GPU side of the iGPU platform. We evaluate eUMM on the representative Jetson TX2 and Xavier AGX platforms and demonstrate that eUMM not only reduces initialization latency significantly but also benefits the subsequent kernel computation and the entire application execution latency.

Cost-effective Hardware Accelerator Recommendation for Edge Computing

Xingyu Zhou, Robert Canady, Shunxing Bao, and Aniruddha Gokhale, Vanderbilt University

Available Media

Hardware accelerator devices have emerged as an alternative to traditional CPUs since they not only help perform computations faster but also consume much less energy than a traditional CPU, thereby helping to lower both capex (i.e., procurement costs) and opex (i.e., energy usage). However, since different accelerator technologies can exhibit different traits for the different application types that run at the edge, there is a critical need for effective mechanisms that can help developers select the right technology (or a mix thereof) to use in their context, which is currently lacking. To address this critical need, we propose a recommender system to help users rapidly and cost-effectively select the right hardware accelerator technology for a given compute-intensive task. Our framework comprises the following workflow. First, we collect realistic execution traces of computations on real, single hardware accelerator devices. Second, we utilize these traces to deduce the achievable latencies and amortized costs of device deployments across the cloud-edge spectrum, which in turn provides guidance in selecting the right hardware.
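
Editor's note: an illustrative amortized-cost comparison (all numbers invented, not from the paper): spread procurement cost over device lifetime, add energy cost, and divide by measured throughput to rank accelerators for a given task.

    DEVICES = {
        #            capex($)  watts  inferences/s (from measured traces)
        "gpu-board": (500.0,   15.0,  400.0),
        "tpu-stick": ( 90.0,    2.5,  120.0),
        "cpu-only":  (  0.0,   10.0,   30.0),
    }
    LIFETIME_S = 3 * 365 * 24 * 3600   # 3-year amortization window
    USD_PER_J  = 0.12 / 3.6e6          # $0.12 per kWh

    def cost_per_million(capex, watts, ips):
        capex_rate = capex / LIFETIME_S   # $/s of ownership
        energy_rate = watts * USD_PER_J   # $/s of power draw
        return (capex_rate + energy_rate) / ips * 1e6

    for name, spec in sorted(DEVICES.items(),
                             key=lambda kv: cost_per_million(*kv[1])):
        print(f"{name}: ${cost_per_million(*spec):.4f} per 1M inferences")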

Fast and Efficient Container Startup at the Edge via Dependency Scheduling

Silvery Fu, UC Berkeley; Radhika Mittal, UIUC; Lei Zhang, Alibaba Group; Sylvia Ratnasamy, UC Berkeley

Available Media

Containers are becoming the canonical way of deploying compute tasks at the edge. Unfortunately, container startup latency and overhead remain high, limiting the responsiveness of edge deployments. This latency comes mostly from fetching container dependencies, including system libraries, tools, configuration files, and data files.

To address this, we propose that schedulers in container orchestrators take into account a task's dependencies. Hence, in dependency scheduling, the scheduler tries to place a task at a node that has the maximum number of the task's dependencies stored locally. We implement dependency scheduling within Kubernetes and evaluate it through extensive experiments and measurement-driven simulations. We show that dependency scheduling improves task startup latency by 1.4-2.3x relative to current dependency-agnostic scheduling for typical scenarios. Our implementation of dependency scheduling has been adopted into the mainline Kubernetes codebase.
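
Editor's note: a minimal sketch of the dependency-aware placement idea: score each node by how many of the task's dependencies it already holds locally, and place the task on the best-scoring node. The node names, dependency names, and tie-breaking rule are illustrative, not the paper's Kubernetes implementation.

    nodes = {
        "node-a": {"libssl", "ffmpeg", "model-v1"},
        "node-b": {"libssl"},
        "node-c": set(),
    }

    def schedule(task_deps: set[str]) -> str:
        """Pick the node with the most of the task's dependencies cached."""
        return max(nodes, key=lambda n: len(nodes[n] & task_deps))

    task = {"ffmpeg", "model-v1", "libssl"}
    print(schedule(task))  # -> node-a: all three dependencies are local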

9:40 am–10:10 am

Break

10:10 am–10:40 am

International Virtual Poster Session

Session Chairs: Huiping Cao, New Mexico State University, and Vaishnav Janardhan, Akamai Technologies

Practical, Provable, End-to-End Guarantees at the Edge

Amit Vasudevan, Software Engineering Institute, Carnegie Mellon University; Petros Maniatis, Google Research; Ruben Martins, School of Computer Science, Carnegie Mellon University; Sagar Chaki, Mentor Graphics/Siemens

Available Media

10:40 am–11:20 am

Fireside Chat

11:20 am–11:30 am

Break

11:30 am–1:00 pm

Paper Session #4: Edge Infrastructure

Session Chairs: L. Jean Camp, Indiana University Bloomington, and Ziming Zhao, Rochester Institute of Technology

Toward Lighter Containers for the Edge

Misun Park, Ketan Bhardwaj, and Ada Gavrilovska, Georgia Institute of Technology

Available Media

Edge computing environments, being resource-limited, cannot tolerate the bloat in edge application size caused by thick, complex software runtimes and hardware acceleration support. But such capabilities are critical to supporting rich, diverse, high-performance applications. Performance includes deployment time, responsiveness, and scalability, and is hurt by this bloat when cloud-native container-based systems are used at the edge. If not addressed, this will limit the ability of the edge to scale to an increasing number of workloads.

This paper makes a case for a new featherweight system, Pocket, to support edge computing. Pocket addresses the limitations of current container-based systems while retaining their benefits. It achieves this by splitting containerized applications into two parts: an application container and a bloat-causing execution-environment container. Experimental evaluations of an early prototype show that by sharing execution-environment containers across multiple application containers, Pocket achieves significant reductions in resource pressure at the edge, presenting a path toward greater efficiency and scalability for edge computing.

SMARTER: Experiences with Cloud Native on the Edge

Alexandre Ferreira, Eric Van Hensbergen, Chris Adeniyi-Jones, Edmund Grimely-Evans, Josh Minor, Mark Nutter, Luis E. Peña, Kanak Agarwal, and Jon Hermes, Arm Research

Available Media

The decreasing cost and power consumption of intelligent, interconnected, and interactive devices at the edge of the Internet are creating massive opportunities to instrument our cities, factories, farms, and environment to improve efficiency, safety, and productivity. Developing, debugging, deploying, and securing software for the estimated trillion connected devices presents substantial challenges. As part of the SMARTER (Secure Municipal, Agricultural, Rural, and Telco Edge Research) project, Arm has been exploring the use of cloud-native technology and methodologies in edge environments to evaluate their effectiveness at addressing these problems at scale.

Toward Loosely Coupled Orchestration for the LEO Satellite Edge

Vaibhav Bhosale, Ketan Bhardwaj, and Ada Gavrilovska, Georgia Institute of Technology

Available Media

Low Earth Orbit (LEO) satellites are envisioned to be capable of providing Internet services for billions of users who currently lack reliable Internet connectivity. This calls for a new LEO edge capable of providing edge computing benefits from space. This paper proposes an orchestration approach for the LEO edge that incorporates path models, temporal compensation and affinity chains as the primary scheduling constructs, and presents preliminary results that illustrate opportunities for achieving improved service availability and improved performance for a stateful (caching) edge function.

Towards an Architecture for Trusted Edge IoT Security Gateways

Matt McCormack, Carnegie Mellon University - CyLab; Amit Vasudevan, Carnegie Mellon Software Engineering Institute; Guyue Liu, Carnegie Mellon University - CyLab; Sebastián Echeverría, Kyle O'Meara, and Grace Lewis, Carnegie Mellon Software Engineering Institute; Vyas Sekar, Carnegie Mellon University - CyLab

Available Media

Today's edge networks continue to see an increasing number of deployed IoT devices. These IoT devices aim to increase productivity and efficiency; however, they are plagued by a myriad of vulnerabilities. Industry and academia have proposed protecting these devices by deploying a "bolt-on" security gateway in these edge networks. The gateway applies security protections at the network level. While security gateways are an attractive solution, they raise a fundamental concern: Can the bolt-on security gateway be trusted?

This paper identifies key challenges in realizing this goal and sketches a roadmap for providing trust in bolt-on edge IoT security gateways. Specifically, we show the promise of using a micro-hypervisor driven approach for delivering practical (deployable today) trust that is catered to both end-users and gateway vendors alike in terms of cost, generality, capabilities, and performance. We describe the challenges in establishing trust on today's edge security gateways, formalize the adversary and trust properties, describe our system architecture, present preliminary results, and discuss open questions. We foresee our trusted security gateway architecture becoming a practical and extensible foundation towards realizing robust trust properties on edge security gateways.

Trustless IoT: A Logic-Driven Architecture for IoT Hubs

Soumya Basu and Emin Gün Sirer, Department of Computer Science, Cornell University

Available Media

The proliferation of smart devices has led to a de facto IoT architecture in which devices are controlled by cloud operators. This, in turn, creates a central point of failure, where a hacked hub can bring down the entire system. In this paper, we outline OrbanHub, an alternative IoT architecture that rules out Byzantine behavior by centralized IoT controllers. OrbanHub works the same way as most IoT hubs, but instead of issuing commands for devices to perform operations, it issues proof-carrying statements that devices verify. To ensure that the hub cannot reissue valid commands, OrbanHub leverages hashchains to prove freshness. We demonstrate that, through these two techniques, OrbanHub cannot force devices to execute commands that are not authorized by the user's control policies, and we provide a feasibility study of the architecture.
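
Editor's note: a sketch of hashchain-based freshness as we read the abstract (not OrbanHub's exact protocol): the hub builds a hashchain and gives the device its tip; each command reveals the next preimage, which the device verifies and which cannot be replayed afterwards.

    import hashlib

    def sha(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # Hub: build the chain; after reversing, chain[0] is the public tip.
    chain = [b"hub-secret-seed"]
    for _ in range(100):
        chain.append(sha(chain[-1]))
    chain.reverse()

    class Device:
        def __init__(self, tip: bytes):
            self.anchor = tip
        def accept(self, command: str, preimage: bytes) -> bool:
            if sha(preimage) != self.anchor:
                return False        # stale or forged: reject
            self.anchor = preimage  # advance; old preimages are now useless
            return True

    dev = Device(chain[0])
    print(dev.accept("unlock", chain[1]))  # True: fresh command
    print(dev.accept("unlock", chain[1]))  # False: replay rejected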