
Depending on the pledged resources for your experiment, both disk and tape resources can be available at CNAF. Disk space is managed by the General Parallel File System (GPFS) [8], a high-performance clustered file system developed by IBM.

Normally, a filesystem to store experiments' data is available under "/storage/gpfs_data/EXPERIMENT_NAME/"; it is optimized for large files (GB-sized). It is recommended not to use it for home directories or for software compilation. We recommend a minimum file size of several MB (use archival tools, e.g. tar, to pack many small files if needed). If you have different needs please contact the user support, since different storage resources can be provided. For example, the file system /storage/gpfs_small_files/.../ was created for storing a large number of small files.
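If a workflow produces many small files, they can be packed into a single archive before being copied to the GPFS data area. Below is a minimal sketch using Python's standard tarfile module; the input directory and the destination path under /storage/gpfs_data/ are hypothetical placeholders to be adapted to your experiment's layout.

    #!/usr/bin/env python3
    """Pack many small files into one tar.gz archive before moving it to GPFS."""
    import tarfile
    from pathlib import Path

    # Hypothetical paths: adapt them to your experiment's layout and quota.
    input_dir = Path("results/small_files")  # directory containing many small files
    archive = Path("/storage/gpfs_data/EXPERIMENT_NAME/user/results.tar.gz")

    # Write one compressed archive instead of thousands of individual small files.
    with tarfile.open(archive, "w:gz") as tar:
        for f in sorted(input_dir.iterdir()):
            tar.add(f, arcname=f.name)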

Several GPFS filesystems are present, each with a different task:

  • /storage/gpfs_* filesystems are dedicated to data storage. They are mounted on the UIs and worker nodes and are readable or writable depending on the settings agreed with the experiment. If the storage area is under the control of an SRM (Storage Resource Manager) implementation such as StoRM, it can be written only via SRM (a sketch of an SRM copy is given after this list).
  • /opt/exp_software is dedicated to experiment software. It is mounted on the UIs and worker nodes. It can be written directly by users from the UI, or by software manager jobs through a dedicated worker node, depending on the settings agreed with the experiment. This area must not be used to store data, as it is for software only.
    N.B. This filesystem is not writable by normal jobs on standard worker nodes.
  • /home/* contains the user home directories, grouped by experiment. Quotas are set per experiment, therefore we recommend deleting unnecessary files whenever possible.
    N.B. The user home directories are also present on the worker nodes, but they are local and physically distinct from the user home on the UI.
  • /storage/gpfs_archive is a buffer in front of the tape system for several experiments. Some experiments have a dedicated buffer. The tape system is managed by Tivoli Storage Manager.
  • /cvmfs/EXPERIMENT_NAME is another storage location based on the CernVM File System (CernVM-FS) (https://cernvm.cern.ch/portal/filesystem). It is managed centrally by the experiment and the mount point is available on all the CNAF UIs and WNs; more information can be provided by the experiment.
  • /home/EXPERIMENT_NAME/username/ hosts the home directory for each user. It is available only on the bastion machine and on the UIs, NOT on the WNs. It is optimized for small files and should not be used to store experiments’ data. Daily backup is guaranteed.
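
For the SRM-controlled areas mentioned in the first item of the list above, data are written through SRM clients rather than by direct filesystem access. Below is a minimal sketch that drives the gfal-copy command-line tool from Python; the StoRM endpoint and destination path are hypothetical placeholders, and the actual endpoint, path and access rules must be those agreed with your experiment (a valid VOMS proxy is normally required).

    #!/usr/bin/env python3
    """Copy a local file into an SRM-controlled storage area via gfal-copy."""
    import subprocess

    # Hypothetical endpoint and paths: replace with the values agreed with your experiment.
    local_file = "file:///home/EXPERIMENT_NAME/username/results.tar.gz"
    srm_destination = ("srm://storm.example.cnaf.infn.it:8444/srm/managerv2"
                       "?SFN=/EXPERIMENT_NAME/user/results.tar.gz")

    # Requires the gfal2 client tools to be installed and a valid proxy in the environment.
    subprocess.run(["gfal-copy", local_file, srm_destination], check=True)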

