...

Training this model on the 4 projections simultaneously is too expensive in terms of GPU RAM. Therefore, we train four separate CNNs, one per projection. The classification scores of the last layer of each CNN are averaged to produce a single label that takes into account all the images of a subject (a minimal sketch of this averaging step is shown further below). We suggest training the network on at least about 1000 images to obtain good performance. We obtained the following performance:

accuracy    0.81
recall      0.81
precision   0.80


These figures were obtained on a private dataset annotated by a radiologist with one of the four BI-RADS density classes (A, B, C, D). The images were acquired with GE Senograph DS imaging systems. The data were randomly split into a training set (80%), a validation set (10%) and a test set (10%). Before being used for training, the images underwent some preprocessing steps:

...

2) The scripts in Jupyter notebooks can be run on Google Colab, or you can clone the GitHub repo and execute the notebooks locally.
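
As mentioned above, the per-subject label is obtained by averaging the classification scores of the four projection-specific models. The snippet below is only an illustrative sketch: it assumes each model's output is already available as a length-4 softmax score vector (one probability per BI-RADS class), and the function and variable names are placeholders, not taken from the repository.

    import numpy as np

    CLASSES = ["A", "B", "C", "D"]  # BI-RADS density classes

    def combine_projections(scores_cc_r, scores_cc_l, scores_mlo_r, scores_mlo_l):
        """Average the softmax scores of the four projection models for one subject.

        Each argument is assumed to be an array-like of 4 class probabilities
        produced by the corresponding projection-specific ResNet.
        """
        avg = np.mean([scores_cc_r, scores_cc_l, scores_mlo_r, scores_mlo_l], axis=0)
        return CLASSES[int(np.argmax(avg))], avg

    # Example with made-up scores for a single subject:
    label, avg_scores = combine_projections(
        [0.6, 0.3, 0.1, 0.0],
        [0.5, 0.4, 0.1, 0.0],
        [0.7, 0.2, 0.1, 0.0],
        [0.4, 0.4, 0.1, 0.1],
    )
    # label -> "A"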

Annotated Description

...

  1. Train_ResNet.py: this is the first script, used to train the ResNet CNN model. You may train the network four times, one per projection. Therefore, the training set of images should be divided into 4 different folders (CC_R, CC_L, MLO_L, MLO_R), and each folder divided into 4 sub-folders, one per class (A, B, C, D).
  2. prediction.py: this is the script to test the saved trained model on new images. The test set of images should be organized in folders in the same way as the training set.
  3. figure_merit.ipynb: this is the script to obtain the metrics, i.e. the final figures of merit to evaluate the classification performance on the test set. It takes as input the .txt files with the prediction outcomes produced by the script prediction.py. We uploaded the files predictions_mlor.txt, predictions_ccr.txt, predictions_mlol.txt and predictions_ccl.txt, obtained from a pre-trained and tested ResNet model, to use as examples. A sketch of the metric computation is given right after this list.
  4. CAM_CC_R_A.ipynb: this is the script to obtain the Class Activation Maps. You can use the test images we uploaded as examples in the folder “TestSet” in the GitHub repo.
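
As referenced in item 3, the figures of merit reported above (accuracy, recall, precision) can be computed, for instance, with scikit-learn once the true and predicted classes of the test subjects are available. The sketch below is only an illustration: the label lists and the macro averaging are assumptions, not the actual code of figure_merit.ipynb, and the format of the prediction .txt files is not reproduced here.

    from sklearn.metrics import accuracy_score, precision_score, recall_score

    # Assumed inputs: one true and one predicted BI-RADS class per test subject.
    y_true = ["A", "B", "C", "D", "B", "C"]   # placeholder ground-truth labels
    y_pred = ["A", "B", "C", "C", "B", "C"]   # placeholder model predictions

    accuracy = accuracy_score(y_true, y_pred)
    # Macro averaging treats the four classes equally; the averaging actually
    # used for the reported 0.81 / 0.81 / 0.80 values is not specified here.
    recall = recall_score(y_true, y_pred, average="macro")
    precision = precision_score(y_true, y_pred, average="macro")

    print(f"accuracy  {accuracy:.2f}")
    print(f"recall    {recall:.2f}")
    print(f"precision {precision:.2f}")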

References

Here you can find a more detailed description of the ResNet model architecture: https://link.springer.com/chapter/10.1007/978-3-030-29930-9_3

Here you can find a more detailed description of the Grad-CAM technique: https://openaccess.thecvf.com/content_iccv_2017/html/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.html
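
As a companion to the Grad-CAM reference above, here is a generic sketch of the computation: the gradients of the target class score with respect to the feature maps of the last convolutional block are global-average-pooled into channel weights, and the weighted, ReLU-ed sum of the feature maps gives the heat map. This is a minimal PyTorch illustration with a torchvision ResNet as a placeholder; it is not the code of CAM_CC_R_A.ipynb, which may use a different framework and the actual trained model.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    def grad_cam(model, image, target_layer, class_idx=None):
        """Grad-CAM heat map for a single preprocessed image of shape [1, 3, H, W]."""
        activations, gradients = [], []

        # Hooks capture the target layer's feature maps and their gradients.
        fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
        bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

        model.eval()
        scores = model(image)                      # [1, num_classes]
        if class_idx is None:
            class_idx = int(scores.argmax(dim=1))
        model.zero_grad()
        scores[0, class_idx].backward()            # gradient of the chosen class score
        fwd.remove()
        bwd.remove()

        acts, grads = activations[0], gradients[0]           # [1, C, h, w]
        weights = grads.mean(dim=(2, 3), keepdim=True)        # channel weights (GAP of gradients)
        cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
        return cam[0, 0].detach(), class_idx

    # Illustrative use with an untrained torchvision ResNet and a random tensor
    # standing in for a preprocessed mammogram:
    model = models.resnet50(weights=None)
    x = torch.randn(1, 3, 224, 224)
    heatmap, predicted_class = grad_cam(model, x, target_layer=model.layer4[-1])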

Attachments

CCR_INFN.pdf