Depending on the pledged resources for your experiment, both disk and tape resources can be available at CNAF. Disk space is managed by the General Parallel File System (GPFS) [8], a high-performance clustered file system developed by IBM.

Normally, on user interfaces and farm worker nodes, a filesystem to store experiments’ data is mounted under "/storage/gpfs_data/EXPERIMENT_NAME/"; it is optimized for big files (GB sized). It is recommended not to use it as a larger home directory or for software compilation.

We recommend a minimum file size of several MB (use archival tools, e.g. tar, to pack many small files together if needed).
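For instance, a minimal sketch of packing a directory of small files into a single archive before moving it to the GPFS data area (the directory and file names are placeholders):

    # Pack many small files into one compressed tar archive.
    tar -czf my_output.tar.gz my_small_files/
    # Copy the single large archive to the experiment data area.
    cp my_output.tar.gz /storage/gpfs_data/EXPERIMENT_NAME/
    # List the archive contents later without unpacking it.
    tar -tzf /storage/gpfs_data/EXPERIMENT_NAME/my_output.tar.gz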

If you have a use case that doesn't fit these prescriptions, please contact the user support team, as storage resources optimised for small files can be provided. For example, the file system /storage/gpfs_small_files/.../ was created for storing a large number of small files.

Several GPFS filesystems are present; they serve different purposes:

  • /storage/gpfs_* are dedicated to data storage. They are mounted on UIs and worker nodes and are readable or writable depending on the settings agreed with the experiment. If the storage area is under the control of an SRM (Storage Resource Manager) service, like StoRM, it is possible to write there only via SRM (an example is sketched after this list).
  • /opt/exp_software/* are dedicated to experiment software. They are mounted on user interfaces (default mode: read+write) and worker nodes (default mode: read-only). They can be written directly by users from the UI or by software manager jobs through a dedicated worker node, depending on the settings agreed with the experiment. This area must not be used to store data, as it is for software only.
    N.B. This filesystem is not writable through normal jobs on standard worker nodes.
  • /home/* hosts the user home directories, grouped by experiment. Quotas are set both per experiment (100 or 200 GB) and per user (20 GB); we therefore recommend deleting unnecessary files as much as possible (a quick way to check what is using space is shown after this list).
    N.B. The user home directories are also present on the worker nodes, but they are local and physically different from the user homes on the UI.
  • /storage/gpfs_archive is a disk buffer filesystem in front of the tape system, shared by several experiments. Some experiments have a dedicated, separate buffer. The tape system is managed by the Tivoli Storage Manager.
  • /cvmfs/EXPERIMENT_NAME is another storage location based on the CernVM File System (CernVM-FS) (https://cernvm.cern.ch/portal/filesystem). It is managed centrally by the experiment and the mount point is available on all the CNAF UIs and WNs; more information has to be provided by the experiment, as CVMFS content is generally not maintained by CNAF.
  • /home/EXPERIMENT_NAME/username/ hosts the home directory for each user. It is available on the bastion machine and on the UIs, although distinct, and is not available on the farm worker nodes. It is optimized for small files and should not be used to store experiments’ data. Daily backup is guaranteed for the home directories on the user interfaces. No backup is provided on the bastion, and a random delete policy is implemented to recover space in case of problems.
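As an illustration of writing into an SRM-controlled storage area, the sketch below uses the generic gfal-copy client; the StoRM endpoint host, port and destination path are placeholders, not actual CNAF values, and must be replaced with the values agreed with your experiment:

    # Hypothetical example: copy a local file into an SRM-managed area via StoRM.
    # Endpoint host, port and path are placeholders.
    gfal-copy file:///home/EXPERIMENT_NAME/username/data.root \
      "srm://storm-endpoint.example:8444/srm/managerv2?SFN=/EXPERIMENT_NAME/data.root"

Writing directly into such an area with cp or from a job will not work, since the namespace is controlled by the SRM service.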
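To stay within the per-user home quota, standard Linux tools are enough to find what is occupying space; for instance:

    # Summarize the disk usage of each item in the home directory, largest first.
    du -sh ~/* 2>/dev/null | sort -rh | head -20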