Missing Transverse Momentum Trigger Performance Studies for the ATLAS Calorimeter Trigger Upgrades

Presenter: Brianna Stamas

Faculty Mentor: Stephanie Majewski, Geraldine Richmond

Presentation Type: Oral

Primary Research Area: Science

Major: Physics

Funding Source: Presidential Undergraduate Research Scholars Program, UO Undergraduate Research Opportunity Program, $5,000

A basic question about our universe remains unanswered: what is everything fundamentally made of? Everything we know of makes up only 4% of the universe; a significant fraction of the remaining 96% is believed to consist of an unknown fundamental particle referred to as dark matter. In an attempt to identify the dark matter particle, the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland is recreating the conditions of the Big Bang. The ATLAS Experiment is one of two general-purpose detectors at the LHC. In anticipation of discovering new physics, the ATLAS detector will undergo numerous hardware upgrades in the coming years, one of which is an improvement to the existing trigger system, a three-level hardware- and software-based system. This study focuses on the upgrades to the Level-1 trigger. The LHC collides bunches of protons every 25 ns, producing an enormous amount of data in an extremely short period of time, so the trigger must rapidly decide which events to keep. The missing transverse energy (ETmiss) trigger is crucial for detecting a previously undetectable particle, which would escape the detector unseen and reveal itself only as an imbalance in the transverse energy. Therefore, we propose implementing a topological-clustering-inspired algorithm for the Level-1 ETmiss trigger. The algorithm will be employed on the gFEX (global feature extractor), with 0.2×0.2 eta-phi granularity, to be installed in 2019. This study analyzes the performance of the algorithm for future implementation.
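As an illustration of the kind of calculation involved (not the actual gFEX algorithm), the following Python sketch applies a simple seed-and-grow clustering to a 2D grid of tower transverse energies and then forms ETmiss from the surviving towers; the thresholds and the phi-wrapping convention are illustrative assumptions.

    import numpy as np

    SEED_THRESHOLD = 5.0       # GeV, hypothetical seed cut
    NEIGHBOR_THRESHOLD = 1.0   # GeV, hypothetical growth cut

    def cluster_mask(tower_et, seed=SEED_THRESHOLD, grow=NEIGHBOR_THRESHOLD):
        """Boolean mask of towers kept by a simple seed-and-grow pass."""
        keep = tower_et >= seed
        frontier = list(zip(*np.nonzero(keep)))
        while frontier:
            i, j = frontier.pop()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, (j + dj) % tower_et.shape[1]   # phi wraps around
                if 0 <= ni < tower_et.shape[0] and not keep[ni, nj] \
                        and tower_et[ni, nj] >= grow:
                    keep[ni, nj] = True
                    frontier.append((ni, nj))
        return keep

    def missing_et(tower_et, tower_phi):
        """ETmiss = magnitude of the vector sum of clustered tower ET."""
        keep = cluster_mask(tower_et)
        px = np.sum(tower_et[keep] * np.cos(tower_phi[keep]))
        py = np.sum(tower_et[keep] * np.sin(tower_phi[keep]))
        return np.hypot(px, py)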

Pileup Suppression in the ATLAS Detector

Presenter: Elliot Parrish

Faculty Mentor: Stephanie Majewski

Presentation Type: Poster 31

Primary Research Area: Science

Major: Physics

The ATLAS experiment at the Large Hadron Collider at CERN aims to build on its 2012 discovery of the Higgs boson by searching for new particles. To ensure the continued success of ATLAS, a series of upgrades to the detector is planned. After the Phase II upgrade, scheduled for 2026, the ATLAS detector will receive collisions of proton bunches every 25 ns with an average of 140 interactions per bunch crossing. Most of these interactions are not energetic enough to produce interesting, high-energy physics events; these uninteresting interactions are referred to as pileup. Pileup interactions happen simultaneously with the interesting events, masking the signal beneath them. In order to sift through the large amounts of data, a firm understanding of pileup is needed. The focus of this study is to measure the energy deposited in the detector by pileup and to use it as a discriminating factor in reducing the data flow to a rate that can be written out in the time allotted.
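A very rough sketch of this idea in Python, assuming a 2D grid of tower transverse energies as input (the median-based pileup estimate and the 50 GeV threshold are placeholders for illustration, not the study's actual method):

    import numpy as np

    def pileup_corrected_sum(tower_et):
        """Subtract a crude per-tower pileup estimate (the median tower ET)."""
        rho = np.median(tower_et)
        corrected = np.clip(tower_et - rho, 0.0, None)
        return corrected.sum()

    def keep_event(tower_et, threshold=50.0):
        """Accept the event only if the pileup-corrected activity is large."""
        return pileup_corrected_sum(tower_et) > threshold   # 50 GeV: placeholder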

High Energy Particle Studies for the Improvement of Missing Transverse Energy Calculations using a High Granularity Timing Detector

Presenter: Elizabeth Maynard

Faculty Mentor: Stephanie Majewski

Presentation Type: Poster 28

Primary Research Area: Science

Major: Physics

Funding Source: UROP Mini-grant, University of Oregon, $734.96

The purpose of this study is to determine whether a high granularity timing detector, a proposed upgrade to the ATLAS detector, could improve missing transverse energy calculations. Missing transverse energy is a sign of the presence of particles that we cannot yet detect, and finding these particles could be evidence of new physics. The hypothesis is that the timing detector could be used to identify, and thereby exclude, uninteresting particles (“pileup”) from the calculation of missing transverse energy.

To perform the study, particle collisions were simulated with a variety of parameters using the specialized software packages ROOT and FastJet. The missing transverse energy was then recalculated after ‘cutting out’ particles based on their timing, to simulate the addition of the timing detector. Currently, only the most basic particle collision types (QCD jets) have been studied; in this case, we expect the missing transverse energy to be close to zero for all events. Without the detector, 42% of events had a calculated missing transverse energy greater than 80 GeV in the high-pileup case, but with the timing detector this was reduced to 21%. Similarly, in the low-pileup case, the fraction above 80 GeV was reduced from 26% to 4%.
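The timing cut described above can be sketched in a few lines of Python; the variable names, the hard-scatter reference time, and the 30 ps window are hypothetical choices for illustration, not the simulation's actual parameters:

    import numpy as np

    TIME_WINDOW = 0.03   # ns (30 ps), placeholder timing resolution

    def met_with_timing_cut(pt, phi, time, t_hard_scatter, window=TIME_WINDOW):
        """Recompute missing transverse energy using only in-time particles."""
        keep = np.abs(time - t_hard_scatter) < window
        px = np.sum(pt[keep] * np.cos(phi[keep]))
        py = np.sum(pt[keep] * np.sin(phi[keep]))
        return np.hypot(px, py)

    def fraction_above(met_values, threshold=80.0):
        """Fraction of events with MET above a threshold (80 GeV in this study)."""
        return np.mean(np.asarray(met_values) > threshold)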

In conclusion, my study shows that for QCD-type particle collisions, the addition of a high granularity timing detector reduces the fraction of events with spuriously large missing transverse energy by roughly 20 percentage points, a substantial improvement.

Gradient Estimation Algorithm for the ATLAS Level-1 Calorimeter Trigger Upgrades

Presenter: Luc Lisi

Faculty Mentor: Stephanie Majewski

Presentation Type: Poster 25

Primary Research Area: Science

Major: Physics

The Large Hadron Collider (LHC) is a proton-proton collider that is at present (2016) the most powerful particle accelerator in the world. At peak operation there can be as many as 600 million proton-proton collisions per second, so deciding in real time which events are useful for analysis and which are not is paramount to data collection. Accurate calorimeter object reconstruction and suppression of multiple interactions per bunch crossing (pileup) in the ATLAS detector therefore play a key role in triggering on important proton-proton collision events. In particular, we aim to improve the performance of the jet and missing transverse energy triggers. We present simulation studies of a novel algorithm for the Level-1 Calorimeter trigger, in the Phase-I and Phase-II upgrades of the trigger electronics, that aims to improve trigger efficiency. Inspired by image processing techniques, we use gradient estimation to extract areas of topological interest in the 0.2×0.2 (in eta-phi) towers of the global feature extractor (gFEX), a component of the Level-1 trigger system for the Phase-I upgrade. Our preliminary results show that these techniques are capable of suppressing pileup and reconstructing calorimeter objects in simulated events. However, further studies must be conducted to understand the algorithm’s speed, efficiency, and other factors critical to implementation in the final trigger.
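As a rough illustration of gradient estimation on a tower grid (the kernel is a standard 3×3 Sobel operator; the threshold and edge handling are illustrative assumptions, not the studied algorithm's parameters):

    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def filter2d(tower_et, kernel):
        """Apply a 3x3 kernel at every tower, padding the edges by replication."""
        padded = np.pad(tower_et, 1, mode="edge")
        out = np.zeros_like(tower_et, dtype=float)
        for i in range(tower_et.shape[0]):
            for j in range(tower_et.shape[1]):
                out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
        return out

    def gradient_magnitude(tower_et):
        gx = filter2d(tower_et, SOBEL_X)
        gy = filter2d(tower_et, SOBEL_Y)
        return np.hypot(gx, gy)

    def regions_of_interest(tower_et, threshold=10.0):   # threshold: placeholder
        """Towers sitting on a steep energy gradient (edges of energetic objects)."""
        return gradient_magnitude(tower_et) > threshold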

Implementing Sobel Filtering Algorithm to Search for Particle Signatures In Proton-Proton Collisions at the Large Hadron Collider

Presenter(s): Adrian Gutierrez − Physics and Mathematics

Faculty Mentor(s): Stephanie Majewski

Poster 12

Research Area: Physics

ATLAS is one of four major particle detector experiments at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The experiment is designed to take advantage of high-energy proton-proton collisions to search for rare, interesting events. Each collision produces different types of particles that deposit energy in the detector; the interactions between these particles are described by the Standard Model of particle physics. At the High Luminosity LHC, planned for 2026, the ATLAS trigger system must select these interesting events amid 200 background collisions per proton-proton bunch crossing within 10 microseconds. In order to detect interesting events within the large amount of data collected, a filtering method is needed. Such techniques can exploit the unique signature that each elementary particle leaves. A technique that has shown promising results is edge detection, in particular the Sobel filter. Applying a Sobel filter to the energy depositions in our events defines boundaries around so-called “jets”, the collimated sprays of energy in our detector. The main goal of my study is the application of edge-filtering techniques, which can be implemented in our trigger system to look for areas of topological interest in our detector, in the hope that they will shed light on new particles or forces beyond the Standard Model.
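A minimal version of this edge-detection step, written in Python with SciPy (the gradient threshold is a placeholder, and the input is assumed to be a 2D eta-phi map of energy depositions):

    import numpy as np
    from scipy import ndimage

    def jet_edge_labels(energy_map, edge_threshold=10.0):
        """Sobel-filter an eta-phi energy map and label candidate jet boundaries."""
        gx = ndimage.sobel(energy_map, axis=0, mode="nearest")
        gy = ndimage.sobel(energy_map, axis=1, mode="nearest")
        edges = np.hypot(gx, gy) > edge_threshold
        labels, n_regions = ndimage.label(edges)   # connected edge regions
        return labels, n_regions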

Clustering Algorithm Performance Studies for the ATLAS Trigger System at the HL-LHC

Presenter(s): Taylor Contreras − Physics

Faculty Mentor(s): Stephanie Majewski

Oral Session 2S

Research Area: Physics

Funding: PURS, McNair

The Large Hadron Collider (LHC) at CERN is a particle accelerator providing massive amounts of data that can reveal new physics about fundamental particles and forces. In 2026 the LHC will be upgraded to the High-Luminosity LHC (HL-LHC), which will increase its luminosity. The higher luminosity will increase the rate of proton-proton interactions in detectors like ATLAS, so these detectors must sort through data faster. This sorting is performed by the ATLAS Trigger System, which decides within about ten microseconds whether an interaction is interesting enough to keep. Our group is studying the efficiency of different algorithms that cluster energy, for implementation on a Field Programmable Gate Array (FPGA) in the Global Trigger. These algorithms cluster the most energetic cells in multiple layers of the detector to reconstruct particle showers. We have implemented the algorithms used on the FPGA in Python in order to validate the performance of the FPGA, analyze the background rejection and trigger efficiency of the clustering algorithms, and compare these quantities across algorithms.
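A simplified sketch of this kind of seed-based clustering in Python (the window size, seed cut, and layer handling are illustrative assumptions, not the actual FPGA parameters):

    import numpy as np

    def cluster_energies(cells, window=1, seed_cut=2.0, max_clusters=10):
        """cells: array of shape (n_layers, n_eta, n_phi) of cell energies (GeV)."""
        summed = cells.sum(axis=0)               # collapse layers for seeding
        clusters = []
        for _ in range(max_clusters):
            i, j = np.unravel_index(np.argmax(summed), summed.shape)
            if summed[i, j] < seed_cut:
                break
            lo_i, hi_i = max(0, i - window), min(summed.shape[0], i + window + 1)
            lo_j, hi_j = max(0, j - window), min(summed.shape[1], j + window + 1)
            clusters.append(summed[lo_i:hi_i, lo_j:hi_j].sum())
            summed[lo_i:hi_i, lo_j:hi_j] = 0.0    # remove used cells
        return clusters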

Edge Detection and Deep Learning Algorithm Performance Studies for the ATLAS Trigger System

Presenter(s): Adrian Gutierrez

Faculty Mentor(s): Stephanie Majewski

Oral Session 2C

The upcoming ATLAS Phase-I upgrade at the Large Hadron Collider (LHC), planned for 2019-2020, will incorporate the Global Feature Extractor (gFEX), a component of the Level-1 Calorimeter trigger intended for the detection and selection of energy from hadronic decays emerging from proton-proton collisions at the LHC. As the luminosity of the LHC increases, data acquisition in the ATLAS trigger system becomes very challenging, and rejecting background events in a high-pileup environment (up to 80 interactions per bunch crossing) is necessary. To achieve this goal, edge-detection and deep learning techniques that could be adapted to the gFEX’s Field Programmable Gate Array (FPGA) architecture are being investigated. The focus of this study is to analyze the performance of these algorithms using Monte Carlo simulations of Lorentz-boosted top quark decays, with the aim of increasing the signal efficiency at a fixed background rejection in our trigger system. Comparing the results of the edge-detection and deep learning algorithms has shown an improvement in trigger efficiency that exploits the capabilities of the gFEX and could potentially help us detect particles that are not described by our current theories.
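The comparison metric described above, signal efficiency at a fixed background rejection, can be sketched in Python as follows (the score arrays are hypothetical stand-ins for the outputs of the edge-detection or deep-learning discriminants):

    import numpy as np

    def efficiency_at_fixed_rejection(signal_scores, background_scores,
                                      rejection=0.99):
        """Signal efficiency at the cut rejecting the given background fraction."""
        cut = np.quantile(background_scores, rejection)   # keep top 1% of background
        return np.mean(np.asarray(signal_scores) > cut)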

Visualizing Topocluster Algorithms for the Global Trigger

Presenter(s): Sylvia Mason − Physics

Faculty Mentor(s): Stephanie Majewski

Session 5: To the Moon and Back—Relativity Matters

The Standard Model of particles and forces explains the fundamental components of matter. However, this model is incomplete, seeing as we currently understand only about 5% of our universe. The Large Hadron Collider (LHC) is a particle accelerator that collides protons in the hope of discovering new particles or forces, so that we can learn more about the other 95% of the universe. The LHC will undergo an upgrade in 2026 that will increase its luminosity, meaning there will be an increased number of collisions per second (up to 200 collisions every 25 nanoseconds). After this upgrade, the ATLAS trigger system will need to reduce the data by a factor of 40 within 10 microseconds, so we will need to sort out the interesting events very quickly. Our group is designing an algorithm for implementation in firmware in the “Global Trigger” system for ATLAS to help select these interesting events. My research focuses on creating accurate 3-D visualizations of potential algorithms that cluster energies from particle showers in the ATLAS calorimeters, and on investigating splitting criteria for these clusters. These visualizations will help us understand the details of the performance of these algorithms, which can significantly help us reject background.
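A minimal 3-D visualization along these lines, using matplotlib (the input arrays are hypothetical stand-ins for the output of a clustering algorithm):

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3d projection

    def plot_clusters(eta, phi, layer, energy, cluster_id):
        """Scatter cells in (eta, phi, layer), colored by cluster, sized by energy."""
        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        ax.scatter(eta, phi, layer, c=cluster_id,
                   s=20 * np.asarray(energy), cmap="tab10")
        ax.set_xlabel("eta")
        ax.set_ylabel("phi")
        ax.set_zlabel("layer (depth)")
        plt.show()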