One of the main tasks of ML-INFN is to offer democratic access to R&D-level resources to INFN researchers, independently of their location.
This is realized via:
- the direct acquisition of hardware, funded by various INFN bodies
- the utilization of pre-existing resources in the various INFN structures
While the initial ML-INFN project plan assumed the realization of an ML-INFN-specific Cloud infrastructure, later developments led to the adoption of the general INFNCloud infrastructure.
| # | Document | Description |
|---|----------|-------------|
| 1 | General INFNCloud documentation | Documentation on the generic INFNCloud infrastructure |
| 2 | The INFNCloud portal | Access portal to INFNCloud (WIP: details here) |
| 3 | How to access and use ML-INFN resources | Access to ML-INFN resources on the INFNCloud |
| 4 | (obsolete) Using INFNCloud to obtain access to ML-INFN resources (details here) | Entry-point how-to on getting access to a machine for ML R&D |
| 5 | A summary of available resources | List of resources, kept as up to date as possible |
The ongoing development towards a future Cloud-native provisioning model for hardware accelerators is described in the document "A Scalable and Replicable Kubernetes Platform for ML_INFN".
T. Boccali, March 2nd, 2020