Machine Learning for Jet Physics

US/Pacific
2-100 (Lawrence Berkeley National Laboratory)

Benjamin Nachman, Kyle Cranmer, Matt Dolan, Timothy Cohen (Princeton/IAS)
Description
There has been a recent surge of interest in developing and applying advanced machine learning techniques in HEP, and jet physics is a domain at the forefront of the excitement. The goal of this workshop is to gather experts and newcomers to discuss progress, new ideas, and common challenges. The workshop is open to the community; we invite contributions and will try to accommodate everyone within reason.
Slides
Participants
  • Anders Andreassen
  • Andrew Larkoski
  • Aviv Cukierman
  • Benjamin Nachman
  • Bryan Ostdiek
  • Charilou Labitan
  • Christine McLean
  • Christopher Frye
  • Eric Metodiev
  • Felix Ringer
  • Francesco Rubbo
  • Frederic Dreyer
  • Gabriela Lima Lichtenstein
  • Gregor Kasieczka
  • Huilin Qu
  • Ian Moult
  • Isaac Henrion
  • Jack Collins
  • Jannicke Pearkes
  • Kiel Howe
  • Kyle Cranmer
  • LongGang Pang
  • Luke de Oliveira
  • Maosen Zhou
  • Marat Freytsis
  • Markus Stoye
  • Matthew Dolan
  • Matthew Schwartz
  • Michael Kagan
  • Michela Paganini
  • Patrick Komiske
  • Peter Jacobs
  • Peter Sadowski
  • Raghav Kunnawalkam Elayavalli
  • Rebecca Carney
  • Rohan Bhandari
  • Sai Neha Santpur
  • Savannah Thais
  • Stefan Hoche
  • Steve Farrell
  • Taoli Cheng
  • Taylor Childers
  • Tilman Plehn
  • Tim Cohen
  • Wahid Bhimji
  • Wes Bethel
  • William McCormack
  • Wojciech Fedorko
  • Yang-Ting Chien
    • 1
      Registration 2-100

    • 2
      Welcome and Logistics 2-100

      Speakers: Benjamin Nachman, Natalie Roe
      Slides
    • 3
      Jets and ML in Theory 2-100

      Speaker: Matt Schwartz (Harvard)
      Slides
    • 4
      Jets and ML in CMS (30'+15') 2-100

      Speaker: Markus Stoye (CERN)
      Slides
    • 5
      Jets and ML in ATLAS (30'+15') 2-100

      Speaker: Francesco Rubbo (SLAC)
      Slides
    • 11:30 AM
      Lunch 2-100

    • Jet tagging 2-100

      • 6
        Introduction and Overview (15'+5')
        Speaker: Matt Dolan (The University of Melbourne)
        Slides
      • 7
        Recursive Neural Networks in quark/gluon tagging (15'+5')
        A talk based on the recent paper https://arxiv.org/abs/1711.02633, exploring the performance of Recursive Neural Networks in quark/gluon tagging.
        Speaker: Taoli Cheng (University of Chinese Academy of Sciences)
        Slides
      • 8
        Top tagging with jet constituents and Long Short-Term Memory (LSTM) networks (15'+5')
        Multivariate techniques based on engineered features have found wide adoption in the identification of jets resulting from hadronic top decays at the Large Hadron Collider (LHC). Recent Deep Learning developments in this area include the treatment of the calorimeter activation as an image or supplying a list of jet constituent momenta to a fully connected network. This latter approach lends itself well to the use of Recurrent Neural Networks. We study the applicability of architectures incorporating Long Short-Term Memory (LSTM) networks. We explore several network architectures, methods of ordering of jet constituents, and input pre-processing. The best performing LSTM-based network achieves a background rejection of 100 at 50% signal efficiency in the jet transverse momentum range of 600 to 2500 GeV. This represents more than a factor of two improvement over a fully connected Deep Neural Network (DNN) trained on similar types of inputs.
        Speaker: Wojciech Fedorko (University of British Columbia)
        Slides
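        A minimal sketch, under our own assumptions rather than the network of the talk, of an LSTM classifier over pT-ordered constituent four-momenta: inputs are zero-padded to a fixed length, and the last hidden state feeds a small classification head.
```python
# Illustrative only: tensor shapes, layer sizes, and the toy data are assumptions.
import torch
import torch.nn as nn

class LSTMTagger(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, constituents):
        # constituents: (batch, n_constituents, 4), zero-padded for short jets
        _, (h_n, _) = self.lstm(constituents)
        return self.head(h_n[-1]).squeeze(-1)  # per-jet logits

model = LSTMTagger()
x = torch.randn(32, 80, 4)                              # toy batch: 32 jets, up to 80 constituents
y = torch.randint(0, 2, (32,)).float()                  # toy top-vs-QCD labels
loss = nn.BCEWithLogitsLoss()(model(x), y)
```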
      • 9
        Deep(Boosted)Jet: Boosted jet identification using particle-level convolutional neural networks (15'+5')
        Identification of boosted top quarks from their hadronic decays can play an important role in searches for new physics at the LHC. We present DeepBoostedJet, a new approach for boosted jet identification using particle-flow jets at CMS. One-dimensional convolutional neural networks are utilized to classify a jet directly from its reconstructed constituent particles. The new method shows significant improvement in performance compared to alternative multivariate methods using jet-level observables.
        Speaker: Huilin Qu (UCSB)
        Slides
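        A minimal sketch, not the CMS implementation, of a one-dimensional convolutional network acting directly on a per-jet list of reconstructed constituent features; the choice of 4 features and 100 particles per jet is illustrative.
```python
# Illustrative only: feature count, particle count, and layer sizes are assumptions.
import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    def __init__(self, n_features=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        # x: (batch, n_features, n_particles); pool over the particle axis
        h = self.conv(x).mean(dim=2)
        return self.head(h).squeeze(-1)

scores = ParticleCNN()(torch.randn(8, 4, 100))   # toy batch of 8 jets
```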
      • 10
        Deep-Learned Top Taggers from Images & Lorentz Invariance (15'+5')
        Distinguishing hadronic top quark decays from light quark and gluon jets (top tagging) is an important tool for new physics searches at the LHC and allows the comparison of different machine learning approaches. We present results on using convolutional neural networks as well as recent studies employing a physics motivated network architecture based on Lorentz Invariance (and not much else) for top tagging. We also discuss further generalisations of this approach.
        Speaker: Gregor Kasieczka (Uni Hamburg)
        Slides
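        A minimal, purely illustrative sketch of a Lorentz-invariance-motivated layer, under our own assumptions rather than the architecture of the talk: learnable linear combinations of the constituent four-momenta are formed, and only their Minkowski invariants are passed to a small classifier.
```python
# Illustrative only: the number of combinations and the classifier head are assumptions.
import torch
import torch.nn as nn

def minkowski_norm2(p):
    # p: (..., 4) ordered as (E, px, py, pz); returns the invariant mass squared
    return p[..., 0] ** 2 - (p[..., 1:] ** 2).sum(dim=-1)

class LorentzLayerTagger(nn.Module):
    def __init__(self, n_constituents=40, n_combinations=15):
        super().__init__()
        self.mix = nn.Linear(n_constituents, n_combinations, bias=False)
        self.head = nn.Sequential(nn.Linear(n_combinations, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, p):
        # p: (batch, n_constituents, 4); mix particles, keep only Lorentz invariants
        combos = self.mix(p.transpose(1, 2)).transpose(1, 2)   # (batch, n_combinations, 4)
        return self.head(minkowski_norm2(combos)).squeeze(-1)

logits = LorentzLayerTagger()(torch.randn(16, 40, 4))   # toy batch of 16 jets
```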
      • 11
        Jets as graphs: W tagging with neural message passing (15'+5')
        Speaker: Isaac Henrion (NYU)
        Paper
        Slides
    • 2:30 PM
      Coffee break 2-100

    • Representing jets 2-100

      • 12
        Visualization Intro
        Speaker: Wes Bethel (LBNL)
        Slides
    • 13
      Workshop Dinner: Great China Restaurant, 2190 Bancroft Way, Berkeley, CA 94704
    • Learning more about QCD 2-100

      • 14
        Introduction and Overview (15'+5')
        Speaker: Yang-Ting Chien (MIT)
        Slides
      • 15
        Machine learning in the Lund plane (15'+5')
        We introduce a novel representation for emission patterns inside a jet, by declustering a Cambridge-Aachen jet and using the primary-emission Lund plane coordinates. We present several possible variations of this method, and show how it can be used to construct either an n by n pixel image or a graph, which can be used as inputs for neural networks. Using W tagging as an example, we show how these jet representations can be used as inputs for convolutional neural networks or recurrent neural networks, performing on par with or better than other state-of-the-art methods. We illustrate in particular how networks trained on Lund coordinates result in excellent discrimination at high $p_T$.
        Speaker: Frederic Dreyer (MIT)
        Slides
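        A minimal sketch of the Lund-plane image construction, assuming the primary Cambridge-Aachen declustering sequence is already available: each primary emission with softer-branch transverse momentum pt_soft at angle delta_R from the harder branch contributes a point at (ln(1/delta_R), ln kt), with kt = pt_soft * delta_R.
```python
# Illustrative only: binning, axis ranges, and the toy emissions are assumptions.
import numpy as np

def lund_image(emissions, n_bins=25, x_range=(0.0, 7.0), y_range=(-3.0, 7.0)):
    """emissions: iterable of (pt_soft, delta_R) along the primary declustering branch."""
    xs, ys = [], []
    for pt_soft, delta_R in emissions:
        kt = pt_soft * delta_R
        xs.append(np.log(1.0 / delta_R))
        ys.append(np.log(kt))
    image, _, _ = np.histogram2d(xs, ys, bins=n_bins, range=[x_range, y_range])
    return image

# Toy usage: three primary emissions of a single jet
img = lund_image([(50.0, 0.4), (20.0, 0.1), (5.0, 0.02)])
```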
      • 16
        A complete, linear basis for (machine) learning jet substructure (15'+5')
        In this talk, I will present Energy Flow Polynomials (EFPs), a novel class of jet substructure observables that form a discrete, linear basis of all infrared- and collinear-safe information in a jet. The EFPs are multiparticle energy correlators with a powerful graph-theoretic interpretation which encompass and generalize the analytic structures present in many existing classes of jet substructure observables. I will show that many common jet substructure observables are exact linear combinations of EFPs. Further, I will demonstrate the linear, IRC-safe spanning nature of EFPs by performing linear regression with EFPs on a collection of IRC-safe and unsafe observables in a variety of jet contexts.
        Speaker: Mr Eric Metodiev (MIT)
        Slides
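        For a graph $G$ on $N$ vertices with edge set $E$, the corresponding observable is $\mathrm{EFP}_G = \sum_{i_1 \cdots i_N} z_{i_1} \cdots z_{i_N} \prod_{(k,l) \in E} \theta_{i_k i_l}$, with $z_i$ the $p_T$ fraction of particle $i$ and $\theta_{ij}$ an angular distance. A minimal sketch (not the authors' package) for the single-edge and two-edge-path graphs follows.
```python
# Illustrative only: the toy three-particle jet is an assumption.
import numpy as np

def pairwise_theta(eta, phi):
    dphi = np.abs(phi[:, None] - phi[None, :])
    dphi = np.minimum(dphi, 2 * np.pi - dphi)
    deta = eta[:, None] - eta[None, :]
    return np.sqrt(deta ** 2 + dphi ** 2)

def efp_single_edge(z, theta):
    # graph o--o : sum_ij z_i z_j theta_ij
    return np.einsum("i,j,ij->", z, z, theta)

def efp_path3(z, theta):
    # graph o--o--o : sum_ijk z_i z_j z_k theta_ij theta_jk
    return np.einsum("i,j,k,ij,jk->", z, z, z, theta, theta)

pt = np.array([100.0, 40.0, 10.0])
eta = np.array([0.0, 0.1, -0.05])
phi = np.array([0.0, 0.05, 0.2])
z = pt / pt.sum()
theta = pairwise_theta(eta, phi)
print(efp_single_edge(z, theta), efp_path3(z, theta))
```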
      • 17
        Linear jet tagging with the energy flow basis (15'+5')
        In this talk, I will demonstrate the linear power of Energy Flow Polynomials (EFPs) by applying linear classification methods to quark/gluon discrimination, boosted W tagging, and boosted top tagging, achieving performance that compares favorably to other jet representations and modern machine learning approaches. I will briefly describe novel algorithms that make use of the graph-theoretic interpretation of EFPs to improve their computational complexity over that of an arbitrary N-particle correlator, making the computation of a large number of EFPs highly feasible. I will discuss how this linear energy flow basis provides an alternative to “black-box” machine learning techniques for fully combining the (IRC-safe) information in jet observables, replacing complex models by convex linear methods with few or no hyperparameters.
        Speaker: Mr Patrick Komiske (MIT)
        Slides
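        A minimal sketch of the linear-methods step, assuming an EFP feature matrix has already been computed: random placeholders stand in for quark and gluon jets, and ordinary linear discriminant analysis plays the role of the convex linear classifier.
```python
# Illustrative only: the feature matrices are random placeholders, not real EFP values.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_quark = rng.normal(0.0, 1.0, size=(5000, 100))   # placeholder EFP values per quark jet
X_gluon = rng.normal(0.3, 1.0, size=(5000, 100))   # placeholder EFP values per gluon jet
X = np.vstack([X_quark, X_gluon])
y = np.concatenate([np.zeros(5000), np.ones(5000)])

lda = LinearDiscriminantAnalysis().fit(X, y)
print("toy AUC:", roc_auc_score(y, lda.decision_function(X)))
```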
      • 18
        Learning the Physics of Jet Evolution with a Recurrent Neural Network Part I (15')
        Many early applications of Machine Learning in jet physics are classifiers that use Convolutional Neural Networks trained on jet images. We will present a work-in-progress custom probabilistic model, tailored to learning the physics of jet production in an unsupervised way. Our model is built on a Recurrent Neural Network suited to modeling the approximate sequential splitting of a tree, which can be explicitly defined through a clustering algorithm. The model also contains fully-connected sub-networks modeling physical quantities like the QCD splitting functions. We train our network on Pythia jets as a proof-of-principle, but our framework importantly admits training on LHC data, including the potential to be jet-algorithm independent. Given the general structure, our model can be used as a generative model for jets, though we do not anticipate that to be its primary use. Instead, we will investigate the extraction of splitting functions in various environments and their sensitivity to global jet structure using unsupervised machine learning. Further possible physics applications will be explored.
        Speaker: Anders Andreassen (Harvard)
        Slides
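        A minimal, purely illustrative sketch under our own assumptions (not the model of the talk): a recurrent network steps along a clustering-defined splitting sequence and outputs a discretized density over the momentum fraction of the next splitting, trained by likelihood maximization.
```python
# Illustrative only: the per-step features, binning of z, and toy data are assumptions.
import torch
import torch.nn as nn

class SplittingRNN(nn.Module):
    def __init__(self, n_inputs=3, hidden=32, n_z_bins=50):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, hidden, batch_first=True)
        self.z_logits = nn.Linear(hidden, n_z_bins)

    def forward(self, splittings):
        # splittings: (batch, n_steps, 3), e.g. (log kt, log 1/dR, z) at each step
        h, _ = self.rnn(splittings)
        return self.z_logits(h)              # per-step logits over discretized z

model = SplittingRNN()
seq = torch.rand(4, 10, 3)                   # toy splitting sequences
z_bins = torch.randint(0, 50, (4, 10))       # toy observed z bins
loss = nn.CrossEntropyLoss()(model(seq).flatten(0, 1), z_bins.flatten())
```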
      • 19
        Learning the Physics of Jet Evolution with a Recurrent Neural Network Part II (15'+5')
        Speaker: Christopher Frye (Harvard)
        Slides
    • 11:00 AM
      Coffee break 2-100

    • Heavy Ions 2-100

      • 20
        Introduction and Overview (15'+5')
        Speaker: Peter Jacobs
        Slides
      • 21
        Identifying the QCD transition using a convolutional neural network (15'+5')
        The initial state fluctuations in relativistic heavy ion collisions are converted to final state correlations of soft particles in momentum space, through strong collective expansion of the quark gluon plasma (QGP) and the QCD transition from QGP to hadrons. The patterns (equations of state) encoded in the relativistic hydrodynamic evolution are extracted from the final particle spectra $\rho(p_T, \phi)$ using supervised learning with a deep convolutional neural network (DCNN). Comparisons with traditional machine learning methods (such as support vector machines, decision trees and random forests) show that the DCNN is very good at decoding physical information from complex, dynamically evolving systems.
        Speaker: LongGang Pang
        Slides
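        A minimal sketch, not the network of the talk: a small two-dimensional convolutional classifier acting on the final-state spectrum $\rho(p_T, \phi)$ histogrammed on a grid, with two output classes standing in for the two equations of state.
```python
# Illustrative only: grid size, channel counts, and the toy batch are assumptions.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        # x: (batch, 1, n_pt_bins, n_phi_bins)
        return self.head(self.features(x).flatten(1))

logits = SpectrumCNN()(torch.randn(8, 1, 15, 48))   # toy batch of spectra
```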
      • 22
        Probing heavy ion collisions using quark and gluon jet substructure with machine learning (15'+5')
        We study the classification of quark-initiated jets and gluon-initiated jets in proton-proton and heavy ion collisions using modern machine learning techniques. We train a deep convolutional neural network on discretized jet images. The classification performance is compared with a multivariate analysis of several physically constructed jet observables including the jet mass, the $p_T^D$, the multiplicity and the radial moments. We also compare with the systematic $N$-subjet expansion in telescoping deconstruction to exploit the information carried by the subjets. The quark and gluon jet samples generated from JEWEL are used as an example to demonstrate this general method. We find that the classification performance gradually worsens in central or high multiplicity PbPb events at 2.76 TeV in JEWEL with recoils. The information carried by the subleading subjets can be washed out by possible subjet thermalization or randomization due to soft event activity. Our method provides a systematically improvable framework for analyzing and comparing all jet simulations and measurements in heavy ion collisions.
        Speaker: Dr Raghav Kunnawalkam Elayavalli (Wayne State University)
        Slides
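        A minimal sketch of some of the physically constructed observables listed above, computed from constituent (pT, eta, phi) arrays with the common definitions of $p_T^D$, the radial moment (girth), and the constituent multiplicity.
```python
# Illustrative only: the toy constituents and the pT-weighted centroid used as the
# jet axis are assumptions.
import numpy as np

def ptd(pt):
    return np.sqrt(np.sum(pt ** 2)) / np.sum(pt)

def radial_moment(pt, eta, phi, jet_eta, jet_phi, power=1):
    dphi = np.abs(phi - jet_phi)
    dphi = np.minimum(dphi, 2 * np.pi - dphi)
    dr = np.sqrt((eta - jet_eta) ** 2 + dphi ** 2)
    return np.sum(pt * dr ** power) / np.sum(pt)

pt = np.array([60.0, 25.0, 10.0, 5.0])
eta = np.array([0.02, -0.05, 0.15, 0.30])
phi = np.array([0.0, 0.1, -0.2, 0.25])
jet_eta = np.average(eta, weights=pt)   # pT-weighted centroid as a stand-in for the jet axis
jet_phi = np.average(phi, weights=pt)
print("multiplicity:", len(pt), "pT^D:", ptd(pt),
      "girth:", radial_moment(pt, eta, phi, jet_eta, jet_phi))
```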
    • 12:30 PM
      Lunch 2-100

    • Recent results in ML 2-100

      • 23
        Meta learning
        Speaker: Luke de Oliveira (LBNL/VAI Tech.)
        Slides
    • 2:30 PM
      Coffee break 2-100

    • Experimental/Practical aspects of learning with jets 2-100

      • 24
        Introduction and Overview (15'+5')
        Speaker: Taylor Childers (ANL)
        Slides
      • 25
        Deep Neural Networks for whole, multi-jet event classification and generation (15'+5')
        Several studies have had success applying deep convolutional neural nets (CNNs) to a subset of the calorimeter for individual jet classification / tagging. We explore approaches that use the entire calorimeter, combined with track information, for directly conducting multi-jet physics analyses, without the need for any jet reconstruction. We use an existing RPV SUSY analysis as a case study and compare the statistical performance of our approaches with selections on high-level physics variables from the current physics analyses, and with shallow classifiers trained on those variables. We also discuss work in progress, and possible directions, using GraphCNNs on these data and GAN approaches for generating new events of this type. Networks are applied on GPU and multi-node CPU architectures (including Knights Landing (KNL) Xeon Phi nodes) on the Cori supercomputer at NERSC, so we also provide time-to-solution performance of CPU (scaling to multiple KNL nodes) and GPU implementations.
        Speaker: Wahid Bhimji
        Slides
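        A minimal sketch of the whole-event input described above, with an assumed binning and channel layout: calorimeter deposits and tracks are histogrammed in (eta, phi) as separate channels of a single event image, with no jet reconstruction.
```python
# Illustrative only: the binning, eta range, and toy event are assumptions.
import numpy as np

def event_image(calo_eta, calo_phi, calo_et, trk_eta, trk_phi, trk_pt,
                n_eta=64, n_phi=64, eta_max=5.0):
    edges_eta = np.linspace(-eta_max, eta_max, n_eta + 1)
    edges_phi = np.linspace(-np.pi, np.pi, n_phi + 1)
    calo, _, _ = np.histogram2d(calo_eta, calo_phi, bins=[edges_eta, edges_phi], weights=calo_et)
    trk, _, _ = np.histogram2d(trk_eta, trk_phi, bins=[edges_eta, edges_phi], weights=trk_pt)
    return np.stack([calo, trk])     # shape (2, n_eta, n_phi): calo channel + track channel

# Toy event with 200 calorimeter deposits and 50 tracks
rng = np.random.default_rng(1)
img = event_image(rng.uniform(-5, 5, 200), rng.uniform(-np.pi, np.pi, 200), rng.exponential(5, 200),
                  rng.uniform(-2.5, 2.5, 50), rng.uniform(-np.pi, np.pi, 50), rng.exponential(2, 50))
```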
      • 26
        Jet Response Prediction Using Jet Images (15'+5')
        Understanding and appropriately correcting for the detector response for any observable of interest is an important chore for experimentalists. Such a procedure is ultimately necessary to remove the impact of the finite detector and to facilitate direct comparisons with theoretical predictions. All current experiments take on this major task by generating Monte Carlo samples and running them through a GEANT detector simulation. In the case of reconstructed jets, one often ends up with a parametrized extraction of the jet energy scale and resolution as a function of the jet's transverse momentum and rapidity. With the recent push towards new jet observables that involve the jet structure and fragmentation, a better representation of detector-driven corrections is paramount for parameterizing the energy resolution and hence the inherent smearing. Since the jet image contains the full jet fragmentation and energy distribution incident on the detector, we train a deep convolutional neural network to extract the jet energy response from a given jet image. This method is shown to effectively reproduce the parametrized input and, as an additional feature, to capture the dependence of the energy scale on the jet's internal structure observables. We show comparisons of our model with standard multivariate machine learning techniques and highlight the importance of such an unbiased extraction on jets in data, with the near-future goal of reduced jet energy resolution uncertainties on a jet-by-jet basis.
        Speakers: Mr Alexx Perloff (TAMU), Dr Raghav Kunnawalkam Elayavalli (Rutgers University)
        Slides
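        A minimal sketch, not the analysis code: a convolutional regression network mapping a single-channel jet image to the jet energy response (reconstructed over true pT), trained with a mean-squared-error loss.
```python
# Illustrative only: image size, network depth, and the toy targets are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1),
)
images = torch.randn(16, 1, 33, 33)          # toy jet images
response = torch.rand(16, 1) * 0.4 + 0.8     # toy response targets near 1
loss = nn.MSELoss()(net(images), response)
```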
      • 27
        The Latest in GANs for Jet/Calo Simulation (15'+5')
        Speaker: Michela Paganini
        Slides
      • 28
        Machine Learning and Tracking inside Jets in ATLAS (15'+5')
        Speaker: William McCormack
        Slides
    • Learning from data 2-100

      • 29
        Introduction and Overview (15'+5')
        Speaker: Marat Freytsis (University of Oregon)
        Slides
      • 30
        "Planing" to expose what the machine is learning (15'+5')
        Applications of machine learning tools to problems of physical interest are often criticized for producing sensitivity at the expense of transparency. In this talk, I explore a procedure for identifying combinations of variables -- aided by physical intuition -- that can discriminate signal from background. Weights are introduced to smooth away the features in a given variable or set of variables. New networks are then trained on this modified data. Observed decreases in sensitivity diagnose the variable's discriminating power. Planing also allows the investigation of the linear versus non-linear nature of the boundaries between signal and background. I will demonstrate these features in both an easy-to-understand toy model and an idealized LHC resonance scenario.
        Speaker: Bryan Ostdiek (University of Oregon)
        Slides
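        A minimal sketch of the reweighting ("planing") step described above: each event receives a weight inversely proportional to the per-class histogram of the chosen variable, so that the variable's distribution is flattened before retraining.
```python
# Illustrative only: the binning and the toy mass-like feature are assumptions.
import numpy as np

def planing_weights(x, labels, n_bins=50):
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    weights = np.empty_like(x)
    for cls in np.unique(labels):
        mask = labels == cls
        counts, _ = np.histogram(x[mask], bins=edges)
        idx = np.clip(np.digitize(x[mask], edges) - 1, 0, n_bins - 1)
        w = 1.0 / np.maximum(counts[idx], 1)    # inverse of the per-class histogram
        weights[mask] = w / w.mean()            # normalize within each class
    return weights

# Toy usage: plane away an invariant-mass-like feature
rng = np.random.default_rng(2)
mass = np.concatenate([rng.normal(90, 5, 10000), rng.exponential(40, 10000)])
labels = np.concatenate([np.ones(10000), np.zeros(10000)])
w = planing_weights(mass, labels)
```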
      • 31
        Weak Supervision in High Dimensions (15'+5')
        Speaker: Eric Metodiev (MIT)
        Slides
      • 32
        Building an anti-QCD tagger (15'+5')
        Speaker: Jack Collins (University of Maryland)
        Slides
      • 33
        Adversarial Approaches (15'+5')
        We use an adversarial neural network to train a jet classifier that remains largely uncorrelated with the jet mass --- a nuisance parameter that is highly correlated with the observed features. This adversarial training strategy balances the dual objectives of classification accuracy and decorrelation, reducing the deleterious effect of systematic uncertainties in the background modeling. The result is a robust classifier with improved discovery significance relative to existing jet classification strategies.
        Speaker: Kyle Cranmer (NYU)
        Slides
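        A minimal sketch of the adversarial setup described above (the architectures and the trade-off parameter lambda are our assumptions): the classifier is trained against an adversary that tries to predict the jet mass from the classifier output, via the combined objective L_clf - lambda * L_adv.
```python
# Illustrative only: layer sizes, lambda, and the toy data are assumptions.
import torch
import torch.nn as nn

clf = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
adv = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce, mse, lam = nn.BCEWithLogitsLoss(), nn.MSELoss(), 10.0

x = torch.randn(256, 10)                       # toy jet features
y = torch.randint(0, 2, (256, 1)).float()      # signal/background labels
mass = torch.rand(256, 1)                      # normalized jet mass (nuisance)

for step in range(100):
    # adversary step: learn to predict the mass from the (detached) classifier output
    opt_adv.zero_grad()
    adv_loss = mse(adv(torch.sigmoid(clf(x)).detach()), mass)
    adv_loss.backward()
    opt_adv.step()
    # classifier step: classify well while making the adversary fail
    opt_clf.zero_grad()
    out = torch.sigmoid(clf(x))
    (bce(clf(x), y) - lam * mse(adv(out), mass)).backward()
    opt_clf.step()
```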
    • 34
      Discussion and Closeout 2-100

    • 35
      Advanced Light Source Tour 2-100
