Low-Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning

Presenter(s): Joseph Yaconelli

Oral Session 2 C

Generating low-level robot controllers often requires manual parameter tuning and significant system knowledge, which can result in long design times for highly specialized controllers. With the growth of automation, the need for such controllers may grow faster than the number of expert designers. To address the problem of rapidly generating low-level controllers without domain knowledge, we propose using model-based reinforcement learning (MBRL) trained on a few minutes of automatically generated data. In this paper, we explore the capabilities of MBRL on a Crazyflie quadrotor with rapid dynamics, where existing classical control schemes offer a performance baseline for the new method. To our knowledge, this is the first use of MBRL for low-level controlled hover of a quadrotor using only on-board sensors, direct motor input signals, and no initial dynamics knowledge. Our forward dynamics model is a neural network trained to predict the state variables at the next time step, with a regularization term on the variance of predictions. A model predictive controller then transmits the best actions from a GPU-enabled base station to the quadrotor firmware via radio. In our experiments, the quadrotor hovered for up to 6 seconds after training on 3 minutes of experimental data.
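
To illustrate the control loop, the following is a minimal Python sketch of a random-shooting model predictive controller rolling out a learned forward-dynamics model; the network stand-in, state and action dimensions, horizon, and hover cost are all illustrative assumptions, not the authors' exact design.

    # Minimal sketch of an MBRL control loop: a learned forward-dynamics
    # model drives a random-shooting model predictive controller (MPC).
    # All names and dimensions are illustrative, not the authors' design.
    import numpy as np

    rng = np.random.default_rng(0)

    STATE_DIM, ACTION_DIM = 9, 4      # e.g. attitude + rates; 4 motor signals
    HORIZON, N_CANDIDATES = 12, 500   # MPC lookahead; sampled action sequences

    # Stand-in for the trained neural network f(s, a) -> s'.
    W = rng.normal(scale=0.01, size=(STATE_DIM + ACTION_DIM, STATE_DIM))

    def predict_next_state(state, action):
        """One-step forward dynamics prediction (placeholder for the MLP)."""
        return state + np.concatenate([state, action]) @ W

    def hover_cost(state):
        """Penalize deviation from level attitude (pitch/roll near zero)."""
        return float(state[0] ** 2 + state[1] ** 2)

    def mpc_action(state):
        """Random-shooting MPC: sample action sequences, roll out the model,
        and return the first action of the lowest-cost sequence."""
        best_cost, best_action = np.inf, None
        for _ in range(N_CANDIDATES):
            seq = rng.uniform(0.0, 1.0, size=(HORIZON, ACTION_DIM))
            s, cost = state, 0.0
            for a in seq:
                s = predict_next_state(s, a)
                cost += hover_cost(s)
            if cost < best_cost:
                best_cost, best_action = cost, seq[0]
        return best_action  # sent to the quadrotor firmware via radio

    action = mpc_action(np.zeros(STATE_DIM))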

Automating DevOps with Docker Application Technology Shell Scripts

Presenter(s): Franklin Smith

Faculty Mentor(s): Ramakrishnan Durairajan 

Oral Session 2 C

With the rise of DevOps technologies such as Docker and other application containers comes a challenge that has long plagued the computer industry: how to learn and use new technology efficiently and in a timely manner. Many users are frustrated by long online tutorials and videos that bury the relevant information in irrelevant detail. I address this problem with a shell script that automates the DevOps process with Docker while allowing the user to interact and choose where, what, and how they would like to learn about the technology. With an execution run time of 2-3 minutes, one can now learn to: set up a Docker environment; build an image and run it as one container; scale an application to run multiple containers; distribute an application across a cluster; stack services by adding a back-end database; and deploy an application to production.
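
As a rough illustration of the interactive, menu-driven approach described above, here is a minimal sketch in Python driving the Docker CLI via subprocess; the actual work is a shell script, and the menu labels and commands here are illustrative stand-ins, not the script itself.

    # Minimal sketch of an interactive Docker tutorial runner. The menu
    # entries and commands are illustrative stand-ins for the shell script.
    import subprocess

    STEPS = {
        "1": ("Build an image from the current directory",
              ["docker", "build", "-t", "demo-app", "."]),
        "2": ("Run the image as a single container",
              ["docker", "run", "-d", "-p", "8080:80", "demo-app"]),
        "3": ("Deploy the app as a multi-container stack",
              ["docker", "stack", "deploy", "-c", "docker-compose.yml", "demo"]),
        "4": ("List services running in the stack",
              ["docker", "stack", "services", "demo"]),
    }

    def main():
        while True:
            print("\nChoose a Docker step to learn (q to quit):")
            for key, (label, _) in STEPS.items():
                print(f"  {key}. {label}")
            choice = input("> ").strip()
            if choice == "q":
                break
            if choice in STEPS:
                label, cmd = STEPS[choice]
                print(f"Running: {' '.join(cmd)}")
                subprocess.run(cmd, check=False)  # show output, keep menu alive

    if __name__ == "__main__":
        main()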

MACE: Improving Measurement Accuracy in Containers Through Trace-based Network Stack Latency Monitoring

Presenter(s): Christopher Misa

Faculty Mentor(s): Ramakrishnan Durairajan

Oral Session 2 C

Container systems (e.g., Docker) provide a well-defined, lightweight, and versatile foundation that streamlines tool deployment, provides a consistent and repeatable experimental interface, and leverages data centers in the global cloud infrastructure as measurement vantage points. However, the virtual network devices commonly used to connect containers to the Internet are known to impose latency overheads that distort the values reported by measurement tools running inside containers. In this study, we develop a tool called MACE to measure the latency overhead of virtual network devices as used by Docker containers. MACE is implemented as a Linux kernel module that uses the trace event subsystem to hook into key points along the network stack code path. Using CloudLab, we evaluate MACE by comparing ping round-trip time (RTT) measurements emitted from a slim-ping container with those emitted by the same tool running on the bare-metal machine under varying traffic loads. Our evaluation shows that the MACE-adjusted RTT measurements are within 20 microseconds of the bare-metal ping RTTs on average, while incurring less than 25 microseconds of RTT perturbation. We also compare the RTT perturbation incurred by MACE with that incurred by the ftrace kernel tracing system, and provide a perturbation breakdown for the various components of MACE to focus future development.
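
To illustrate how stack-latency traces can correct in-container measurements, here is a minimal Python sketch of the adjustment step; the trace record format and field names are hypothetical, not MACE's actual interface.

    # Minimal sketch of adjusting a container-measured RTT using per-packet
    # network-stack latencies such as those MACE collects. The record format
    # and field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class StackTrace:
        egress_us: float   # latency added on the container -> wire path
        ingress_us: float  # latency added on the wire -> container path

    def adjusted_rtt(container_rtt_us: float, trace: StackTrace) -> float:
        """Subtract virtual-device stack latency from the RTT reported
        inside the container, approximating the bare-metal RTT."""
        return container_rtt_us - trace.egress_us - trace.ingress_us

    # Example: a 512 us RTT measured in-container, with 18 us of measured
    # egress overhead and 9 us of ingress overhead.
    print(adjusted_rtt(512.0, StackTrace(egress_us=18.0, ingress_us=9.0)))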

Edge Detection and Deep Learning Algorithm Performance Studies for the ATLAS Trigger System

Presenter(s): Adrian Gutierrez

Faculty Mentor(s): Stephanie Majewski

Oral Session 2 C

The upcoming ATLAS Phase-I upgrade at the Large Hadron Collider (LHC), planned for 2019-2020, will incorporate the Global Feature Extractor (gFEX), a component of the Level-1 Calorimeter trigger intended to detect and select energy from hadronic decays emerging from proton-proton collisions at the LHC. As the luminosity of the LHC increases, data acquisition in the ATLAS trigger system becomes very challenging, and rejecting background events in a high-pileup environment (up to 80 interactions per bunch crossing) is necessary. To achieve this goal, we are investigating edge-detection and deep learning techniques that could be adapted to the gFEX's Field Programmable Gate Array (FPGA) architecture. The focus of this study is to analyze the performance of these algorithms on Monte Carlo simulations of Lorentz-boosted top quark decays, in order to increase the efficiency of signal detection at a fixed background rejection in our trigger system. Comparing the edge-detection and deep learning algorithms shows an improvement in trigger efficiency that exploits the capabilities of the gFEX and could help detect particles not described by our current theories.
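
As a software stand-in for the FPGA-oriented algorithms under study, here is a minimal Python sketch of Sobel edge detection on a grid of calorimeter tower energies; the grid size, filter choice, and threshold are illustrative assumptions, not the study's actual implementation.

    # Minimal sketch of edge detection on a grid of calorimeter tower
    # energies, as a software stand-in for an FPGA-friendly trigger
    # algorithm. Grid size, filter, and threshold are illustrative.
    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def apply_filter(grid, kernel):
        """Valid-mode 2D filter application (cross-correlation)."""
        h, w = grid.shape
        kh, kw = kernel.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(grid[i:i + kh, j:j + kw] * kernel)
        return out

    def edge_map(energies, threshold=5.0):
        """Gradient magnitude of the energy deposits; strong edges can
        flag localized hadronic activity against diffuse pileup."""
        gx = apply_filter(energies, SOBEL_X)
        gy = apply_filter(energies, SOBEL_Y)
        return np.hypot(gx, gy) > threshold

    # Toy event: mostly quiet towers with one energetic cluster.
    towers = np.zeros((8, 8))
    towers[3:5, 3:5] = 10.0
    print(edge_map(towers).astype(int))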