...

  • one that contains events distributed according to the null hypothesis H0 (in our case, the signal; other conventions exist in actual physics analyses);
  • another one with events distributed according to the alternative hypothesis H1 (in our case, the background).

Then the algorithm must learn how to classify new datasets (the test dataset in our case).
This means that we have the same set of features (random variables), each with its own distribution under the H0 and H1 hypotheses.


To obtain a good ML classifier with high discriminating power, we will follow these steps:

...

  • Training (learning): a discriminator is built from all the input variables. Its parameters are then iteratively adjusted by comparing the discriminant output to the true label of the dataset (supervised machine learning; we will use two such algorithms). This phase is crucial: one should tune both the input variables and the parameters of the algorithm!

    • By contrast, algorithms that group the data and find patterns according to the observed distribution of the inputs are called unsupervised learning algorithms.
    • A good habit is to train multiple models with various hyperparameters on a “reduced” training set (i.e. the full training set minus the so-called validation set), and then select the model that performs best on the validation set (see the sketch after this list).
    • Once the validation process is over, you can re-train the best model on the full training set (including the validation set), which gives you the final model.
  • Test: once the training has been performed, the discriminator score is computed on a separate, independent dataset for both H0 and H1.

  • The classifier outputs on the test and training sets are compared, and their performances are evaluated in terms of ROC curves.
    • If the performances on the test and training sets differ, this can be a symptom of overtraining, and the model should be considered not good!
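As an illustration, here is a minimal sketch of this train/validate/test workflow with scikit-learn. The toy data, the variable names, and the choice of GradientBoostingClassifier are our own assumptions for the sketch, not the algorithms used later in the exercise.

    # Minimal sketch: train several models, pick the best on a validation set,
    # re-train on the full training set, then check train vs. test ROC AUC.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)  # toy stand-in

    # Independent test set, plus a validation set carved out of the training set.
    X_train_full, X_test, y_train_full, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_train_full, y_train_full, test_size=0.25, random_state=0)

    # Train candidates with different hyperparameters; keep the best on the validation set.
    candidates = [GradientBoostingClassifier(max_depth=d, random_state=0).fit(X_train, y_train)
                  for d in (2, 3, 4)]
    best = max(candidates, key=lambda m: roc_auc_score(y_val, m.predict_proba(X_val)[:, 1]))

    # Re-train the best model on the full training set (validation set included).
    best.fit(X_train_full, y_train_full)

    # A large gap between train and test AUC is a symptom of overtraining.
    print("AUC train:", roc_auc_score(y_train_full, best.predict_proba(X_train_full)[:, 1]))
    print("AUC test :", roc_auc_score(y_test, best.predict_proba(X_test)[:, 1]))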

...

Our physics problem consists of detecting the so-called “golden decay channel”, one of the possible decays of the Higgs boson: its name is due to the fact that it has the clearest and cleanest signature of all the possible Higgs boson decay modes. The decay chain is sketched here: the Higgs boson decays into a pair of Z bosons, which in turn decay into lepton pairs (in the picture, muon-antimuon or electron-positron pairs). In this exercise, we will use only datasets concerning the 4μ decay channel; the datasets for the 4e channel are given to you to be analyzed as an optional exercise. At the LHC experiments, the 2e2μ decay channel is also widely analyzed.

...

  • electrically charged leptons (electrons or muons, denoted with l)
  • particle jets (collimated streams of particles originating from quarks or gluons, denoted with j).

For each object, several kinematic variables are measured:

  • the momentum transverse to the beam direction (pT)
  • two angles, θ (polar) and φ (azimuthal) - see the picture below for the CMS reference frame used.
  • for convenience, at hadron colliders the pseudorapidity η, defined as η = -ln(tan(θ/2)), is used instead of the polar angle θ (see the short numerical check below).
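As a quick numerical check of this definition (a minimal sketch; the angle values are arbitrary):

    # Pseudorapidity from the polar angle: eta = -ln(tan(theta/2)).
    import numpy as np

    theta = np.array([np.pi / 2, 0.5, 0.1])  # arbitrary polar angles in radians
    eta = -np.log(np.tan(theta / 2.0))
    print(eta)  # theta = pi/2 (perpendicular to the beam) gives eta = 0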

We will use some of them for training our Machine Learning algorithms.

...

The dataset files are stored on Recas-Bari's ownCloud and are automatically loaded by the notebook. If needed, they are also available here (four-muon decay channel) for the main exercise and here (four-electron decay channel) for the optional exercise.

...

  • The first 2 columns contain information provided by the experiments at the LHC that will not be used in the training of our Machine Learning algorithms, so we skip ahead to the next columns.

  • The next variable is f_weights. It corresponds to the probability of observing that particular kind of physical process over the whole experiment. Indeed, it is a product of the branching ratio (BR), the geometrical acceptance, and the kinematic phase space (at generator level). It is very important for the training phase and you will use it later.

  • The variables f_massjj, f_deltajj, f_mass4l, f_Z1mass, and f_Z2mass are called high-level features (event features), since they contain overall information about the final-state particles (the mass of the two jets, their separation in space, the invariant mass of the four leptons, and the masses of the two Z bosons). Note that the Z2 mass is lighter than the Z1 one. Why is that? In Higgs boson production (under the hypothesis of a mass of 125 GeV), only one of the Z bosons is an actual particle with the nominal mass of 91.18 GeV. The other one is a virtual (off-mass-shell) particle.

  • The other columns represent the low-level features (object kinematics observables), the basic measurements made by the detectors for the individual final-state objects (in our case, four charged leptons and jets), such as f_lept1(2,3,4)_pt(phi,eta), corresponding to their transverse momentum pT and the spatial direction of their tracks (η, φ).
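As a minimal sketch of how these columns could be inspected with pandas (the file name higgs_4mu.csv is a placeholder for the file actually loaded by the notebook):

    # Inspect the event weights and the high-/low-level feature columns.
    import pandas as pd

    df = pd.read_csv("higgs_4mu.csv")  # placeholder name for the signal dataset

    high_level = ["f_massjj", "f_deltajj", "f_mass4l", "f_Z1mass", "f_Z2mass"]
    low_level = [f"f_lept{i}_{v}" for i in range(1, 5) for v in ("pt", "eta", "phi")]

    print(df[high_level].describe())  # overall event features
    print(df[low_level].head())       # per-lepton kinematics
    print(df["f_weights"].head())     # per-event weights used in training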

The same comments hold for the background datasets:

...

Such a structure is also called a Feedforward Multilayer Perceptron (MLP, see the picture).

The output of the k-th node of the n-th layer is computed as a weighted sum of the input variables, with weights that are subject to optimization via training.

...

Then a bias (or threshold) parameter w0 is applied. This bias accounts for random noise, in the sense that it measures how well the model fits the training set (i.e. how well the model is able to correctly predict the known outputs of the training examples). The output of a given node is: y = φ(Σ_i w_i x_i + w0), where φ is the activation function.
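A minimal MLP sketch in Keras (the layer sizes, activation functions, and the assumption of 15 input features are our own illustrative choices, not the configuration used later in the exercise): each Dense layer computes the weighted sum plus bias described above, node by node.

    # Feedforward MLP sketch: every node computes phi(sum_i w_i x_i + w0).
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    model = Sequential([
        Dense(32, activation="relu", input_shape=(15,)),  # hidden layer, 15 input features assumed
        Dense(16, activation="relu"),                     # second hidden layer
        Dense(1, activation="sigmoid"),                   # output score in [0, 1]
    ])
    model.summary()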

...

During training we optimize the loss function, i.e. we reduce the error between the actual and predicted values. Since we deal with a binary classification problem, ytrue can take on just two values: ytrue = 0 (for hypothesis H1) and ytrue = 1 (for hypothesis H0).

A popular algorithm for optimizing the weights consists of iteratively modifying them, after each training observation or after a batch of training observations, by minimizing the loss function.
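Continuing the Keras sketch above (the toy arrays, batch size, and number of epochs are illustrative assumptions): the binary cross-entropy loss fits the two-valued ytrue, and the weights are updated after every mini-batch.

    # Binary cross-entropy loss for ytrue in {0, 1}; SGD updates the weights
    # after every mini-batch of 128 training observations.
    import numpy as np

    X_train = np.random.rand(1000, 15)       # toy stand-ins for the real features
    y_train = np.random.randint(0, 2, 1000)  # ytrue labels in {0, 1}

    model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["AUC"])
    model.fit(X_train, y_train, batch_size=128, epochs=5, validation_split=0.2)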

...

Question to students: What happens if you switch to the 4e decay channel? You can submit your model (see the ML challenge below) for this physical process as well!

...

  • You can participate individually or as a team
  • The winner is the one scoring the best AUC in the challenge samples!
  • In the next box, you will find some lines of code for preparing an output csv file, containing your y_predic for this new dataset (a similar sketch is also given after this list)!
  • Choose a meaningful name for your result csv file (e.g. your name or your team name, the model used for the training phase, and the decay channel - 4μ or 4e - but avoid submitting results.csv)
  • Download the csv file and upload it here: https://recascloud.ba.infn.it/index.php/s/CnoZuNrlr3x7uPI
  • You can submit multiple results, taking care to name them accordingly (add a version number, such as v1, v34, etc.)
  • You can use this exercise as a starting point (train over constituents)
  • We will consider your best result for the final score.
  • The winner will be asked to present the ML architecture!
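A minimal sketch of the submission-file preparation (the prediction array and the file name are illustrative assumptions; adapt them to your model and naming scheme):

    # Write one predicted score per challenge event to a meaningfully named csv.
    import numpy as np
    import pandas as pd

    y_predic = np.random.rand(1000)  # placeholder: replace with your model's predictions

    pd.DataFrame({"y_predic": y_predic}).to_csv(
        "teamname_MLP_4mu_v1.csv",  # name, model, channel, version - not results.csv
        index=False,
    )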

...