...

  • The first two columns contain information that is provided by the experiments at the LHC and that will not be used in the training of our Machine Learning algorithms, so we skip ahead in our explanation to the next columns.

  • The next variable is f_weights. It corresponds to the probability of having that particular kind of physical process in the whole experiment. Indeed, it is a product of the Branching Ratio (BR), the geometrical acceptance of the detector, and the kinematic phase-space. It is very important for the training phase and you will use it later.

  • The variables f_massjj, f_deltajj, f_mass4l, f_Z1mass, and f_Z2mass are called high-level features (event features) since they contain overall information about the final-state particles (the mass of the two jets, their separation in space, the invariant mass of the four leptons, the masses of the two Z bosons). Note that the f_Z2mass is lighter w.r.t. the f_Z1mass one. Why is that? In Higgs boson production (under the hypothesis of a mass of 125 GeV) only one of the Z bosons is an actual particle that has the nominal mass of 91.18 GeV. The other one is a virtual (off-mass-shell) particle.

  • The remaining columns represent the low-level features (object kinematics observables): the basic measurements made by the detectors for the individual final-state objects (in our case four charged leptons and jets), such as f_lept1(2,3,4)_pt(phi,eta), corresponding to their transverse momentum and the spatial distribution of their tracks (azimuthal angle φ and pseudorapidity η). A minimal sketch of how these feature groups can be selected follows this list.
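As a minimal sketch (the file name here is a hypothetical placeholder, not the actual dataset path; the column names are the ones listed above), the feature groups could be selected with pandas like this:

    import pandas as pd

    # Hypothetical file name; replace with the dataset provided in the exercise.
    df = pd.read_csv("higgs_4l_dataset.csv")

    # Per-event weight: BR x detector acceptance x kinematic phase-space.
    weights = df["f_weights"]

    # High-level (event) features.
    high_level = df[["f_massjj", "f_deltajj", "f_mass4l", "f_Z1mass", "f_Z2mass"]]

    # Low-level (object kinematics) features for the four charged leptons.
    low_level = df[[f"f_lept{i}_{v}" for i in range(1, 5)
                    for v in ("pt", "eta", "phi")]]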

...

  • Introduction to the Neural Network algorithm
    If you are already familiar with ANNs, you may skip it.
  • Usage of Keras API: basic concepts
    Here you find concepts that are useful for the ANN implementation using the Keras API (call functions, metrics, etc.).

There are three ways to create Keras models:

  • The Sequential model, which is very straightforward (a simple list of layers), but is limited to single-input, single-output stacks of layers (as the name gives away).
  • The Functional API, which is an easy-to-use, fully-featured API that supports arbitrary model architectures. For most people and most use cases, this is what you should be using. This is the Keras "industry-strength" model. We will use it (see the sketch after this list).
  • Model subclassing, where you implement everything from scratch on your own. Use this if you have complex, out-of-the-box research use cases.
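As a minimal illustration (the layer sizes and input dimension below are arbitrary assumptions, not values from this exercise), the same small network can be written with the first two approaches:

    from tensorflow import keras
    from tensorflow.keras import layers

    # Sequential model: a plain stack of layers, single input and single output.
    seq_model = keras.Sequential([
        keras.Input(shape=(5,)),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

    # Functional API: the same architecture, built by wiring layers explicitly,
    # which also allows arbitrary (multi-input, multi-output) topologies.
    inputs = keras.Input(shape=(5,))
    x = layers.Dense(16, activation="relu")(inputs)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    func_model = keras.Model(inputs=inputs, outputs=outputs)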

...

A Neural Network (NN) is a biology-inspired analytical model, but not a bio-mimetic one. It is formed by a network of basic elements called neurons or perceptrons (see the picture below), which receive an input, change their state according to the input and produce an output.

...

A Neural Network (NN) can be classified according to the type of neuron interconnections and the flow of information.

...


A feedforward NN is a neural network where connections between the nodes do not form a cycle. In a feedforward network information always moves in one direction, from input to output, and it never goes backward. A feedforward NN can be viewed as a mathematical model of a function mapping an input vector to an output vector.
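As a minimal sketch (with arbitrary, hypothetical layer dimensions and randomly initialized weights), a feedforward pass is just a chain of matrix products and activations applied in one direction:

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(size=5)                         # input vector
    W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)  # hidden-layer weights and biases
    W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)  # output-layer weights and bias

    h = np.maximum(0.0, W1 @ x + b1)               # hidden layer: ReLU activation
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))       # output layer: sigmoid activation

    # Information flows strictly input -> hidden -> output; there are no cycles.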

Recurrent Neural Network

...

A Neural Network layer is called a dense layer to indicate that it is fully connected. For information about Neural Network architectures see: https://www.asimovinstitute.org/neural-network-zoo/

...

The last output layer is usually constituted by a single node that provides the discriminant output. Intermediate layers between the input and the output layers are called hidden layers. Usually, if a Neural Network has more than one hidden layer, it is called a Deep Neural Network and is able to do the feature extraction by itself (it becomes a Deep Learning algorithm).
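For instance (a sketch with arbitrarily chosen layer widths and a hypothetical number of input features, not the exercise's final architecture), a Deep Neural Network with two hidden dense layers and a single-node discriminant output could look like this in Keras:

    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(17,))                    # one node per input feature
    x = layers.Dense(64, activation="relu")(inputs)      # first hidden layer
    x = layers.Dense(64, activation="relu")(x)           # second hidden layer
    outputs = layers.Dense(1, activation="sigmoid")(x)   # single-node discriminant
    dnn = keras.Model(inputs=inputs, outputs=outputs)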

...

The output of each node of a layer is computed as a weighted sum of the input variables (plus a bias), passed through an activation function, with weights that are subject to optimization via training.
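Schematically, for a node with inputs xᵢ, weights wᵢ, bias b, and activation function φ (generic notation, not names from the dataset), the output is:

    y = φ( Σᵢ wᵢ · xᵢ + b )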

...

  • A popular algorithm to optimize the weights consists of iteratively modifying the weights after each training observation, or after a batch of training observations, by minimizing the loss function.
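A hedged sketch of this step in Keras, reusing the dnn model sketched above (the optimizer, loss, batch size, and epoch count are illustrative assumptions, and the data below are random placeholders for the real features, labels, and f_weights column). Each update steps the weights against the gradient of the loss, w ← w − η·∂L/∂w, with learning rate η:

    import numpy as np

    # Random placeholder data standing in for the real dataset.
    X = np.random.normal(size=(1000, 17)).astype("float32")
    y = np.random.randint(0, 2, size=1000).astype("float32")
    weights = np.ones(1000, dtype="float32")   # stands in for f_weights

    # Stochastic gradient descent minimizes the loss batch by batch.
    dnn.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
    dnn.fit(X, y,
            sample_weight=weights,   # per-event weights, as described for f_weights
            batch_size=128,          # weights updated after each batch of observations
            epochs=10)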

...