Submission utility
To ease the transition to the new cluster we implemented a utility based on environment modules. It sets all the environment variables needed to submit correctly to both the old and the new cluster.
The utility is available on every UI once you are logged in. You can list the available modules with:
```
apascolinit1@ui-tier1 ~ $ module avail
------------------- /opt/exp_software/opssw/modules/modulefiles -------------------
htc/auth  htc/ce  htc/local  use.own

Key:
modulepath  default-version
```
These htc/* modules have different roles:
- htc/local - to be used when you want to submit to or query the local schedds sn-02 (HTCondor 9) or sn01-htc (HTCondor 23). It supports the following variable:

  | variable | values | description |
  |----------|--------|-------------|
  | ver      | 9      | connects to the old HTCondor cluster and local schedd (sn-02) |
  |          | 23     | connects to the new HTCondor cluster and local schedd (sn01-htc) |

  Usage of the htc/local module:

  ```
  apascolinit1@ui-tier1 ~ $ module switch htc/local ver=9
  apascolinit1@ui-tier1 ~ $ condor_q

  -- Schedd: sn-02.cr.cnaf.infn.it : <131.154.192.42:9618?... @ 04/17/24 14:58:44
  OWNER BATCH_NAME SUBMITTED DONE RUN IDLE HOLD TOTAL JOB_IDS

  Total for query: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
  Total for apascolinit1: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
  Total for all users: 50164 jobs; 30960 completed, 1 removed, 12716 idle, 4514 running, 1973 held, 0 suspended

  apascolinit1@ui-tier1 ~ $ module switch htc/local ver=23
  apascolinit1@ui-tier1 ~ $ condor_q

  -- Schedd: sn01-htc.cr.cnaf.infn.it : <131.154.192.242:9618?... @ 04/17/24 14:58:52
  OWNER BATCH_NAME SUBMITTED DONE RUN IDLE HOLD TOTAL JOB_IDS

  Total for query: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
  Total for apascolinit1: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
  Total for all users: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
  ```
- htc/auth - helps to set up the authentication method for Grid submission. It supports the following variable:

  | variable | values    | description |
  |----------|-----------|-------------|
  | auth     | GSI       | sets up GSI authentication (old cluster only) |
  |          | SSL       | sets up SSL authentication (new cluster only) |
  |          | SCITOKENS | sets up SCITOKENS authentication |

  Usage of the htc/auth module:

  ```
  apascolinit1@ui-tier1 ~ $ module switch htc/auth auth=SSL
  Don't forget to voms-proxy-init!
  apascolinit1@ui-tier1 ~ $ module switch htc/auth auth=SCITOKENS
  Don't forget to "export BEARER_TOKEN=$(oidc-token <client-name>)"!
  ```
- htc/ce - eases the use of the condor_q and condor_submit commands by setting all the variables needed to contact our CEs. It supports the following variables:

  | variable | values            | description |
  |----------|-------------------|-------------|
  | num      | 1,2,3,4           | connects to ce{num}-htc (new cluster) |
  |          | 5,6,7             | connects to ce{num}-htc (old cluster) |
  | auth     | GSI,SSL,SCITOKENS | calls htc/auth with the selected auth method |

  Usage of the htc/ce module:

  ```
  apascolinit1@ui-tier1 ~ $ condor_q
  Error: ......

  apascolinit1@ui-tier1 ~ $ module switch htc/ce auth=SCITOKENS num=2
  Don't forget to "export BEARER_TOKEN=$(oidc-token <client-name>)"!

  Switching from htc/ce{auth=SCITOKENS:num=2} to htc/ce{auth=SCITOKENS:num=2}
  Loading requirement: htc/auth{auth=SCITOKENS}
  apascolinit1@ui-tier1 ~ $ export BEARER_TOKEN=$(oidc-token htc23)
  apascolinit1@ui-tier1 ~ $ condor_q

  -- Schedd: ce02-htc.cr.cnaf.infn.it : <131.154.192.41:9619?... @ 04/17/24 15:48:24
  OWNER BATCH_NAME SUBMITTED DONE RUN IDLE HOLD TOTAL JOB_IDS
  ..........
  ..........
  ..........
  ```
Local Submission
Local jobs are submitted from the Jobs UI in the same way as on the HTCondor 9 cluster.
- Submitting a job to the cluster. Executable and submit file:

  ```
  apascolinit1@ui-tier1 ~ $ cat sleep.sh
  #!/bin/env bash
  sleep $1

  apascolinit1@ui-tier1 ~ $ cat submit.sub
  # Unix submit description file
  # submit.sub -- simple sleep job
  batch_name              = Local-Sleep
  executable              = sleep.sh
  arguments               = 3600
  log                     = $(batch_name).log.$(Process)
  output                  = $(batch_name).out.$(Process)
  error                   = $(batch_name).err.$(Process)
  should_transfer_files   = Yes
  when_to_transfer_output = ON_EXIT

  queue
  ```
  Submission and control of job status:

  ```
  apascolinit1@ui-tier1 ~ $ module switch htc/local ver=23
  apascolinit1@ui-tier1 ~ $ condor_submit submit.sub
  Submitting job(s).
  1 job(s) submitted to cluster 15.
  apascolinit1@ui-tier1 ~ $ condor_q

  -- Schedd: sn01-htc.cr.cnaf.infn.it : <131.154.192.242:9618?... @ 03/18/24 17:15:44
  OWNER        BATCH_NAME  SUBMITTED   DONE RUN IDLE TOTAL JOB_IDS
  apascolinit1 Local-Sleep 3/18 17:15    _   1    _     1 15.0

  Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
  Total for apascolinit1: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
  Total for all users: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
  ```
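The queue statement also accepts a count, so one submit file can enqueue several identical jobs. A minimal sketch based on the submit file above (the Local-Sleep-Multi batch name is hypothetical):

```
# sketch: enqueue 5 copies of the sleep job from a single submit file
batch_name              = Local-Sleep-Multi
executable              = sleep.sh
arguments               = 3600
log                     = $(batch_name).log.$(Process)
output                  = $(batch_name).out.$(Process)
error                   = $(batch_name).err.$(Process)
should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT

queue 5
```

Each copy gets its own $(Process) index (0 through 4), which keeps the log, output, and error files of the copies separate.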
Grid Submission
Grid submission to ce01-htc is nearly the same as submission to the old cluster. You can use two authentication methods: tokens (SciTokens) or SSL (with a VOMS proxy).
Token submission
The steps are identical to those for the HTCondor 9 cluster:
- Register a client (or load one you have already registered). Register a new client:

  ```
  apascolinit1@ui-tier1 ~ $ eval `oidc-agent-service use`
  23025
  apascolinit1@ui-tier1 ~ $ oidc-gen -w device
  Enter short name for the account to configure: htc23
  [1] https://iam-t1-computing.cloud.cnaf.infn.it/
  ...
  ...
  Issuer [https://iam-t1-computing.cloud.cnaf.infn.it/]: <enter>
  The following scopes are supported: openid profile email address phone offline_access eduperson_scoped_affiliation eduperson_entitlement eduperson_assurance entitlements
  Scopes or 'max' (space separated) [openid profile offline_access]: profile wlcg.groups wlcg compute.create compute.modify compute.read compute.cancel
  Registering Client ...
  Generating account configuration ...
  accepted
  Using a browser on any device, visit:
  https://iam-t1-computing.cloud.cnaf.infn.it/device
  And enter the code: HQ2WYL
  ...
  ...
  ...
  Enter encryption password for account configuration 'htc23': <passwd>
  Confirm encryption Password: <passwd>
  Everything setup correctly!
  ```
- Get a token for submission:

  ```
  apascolinit1@ui-tier1 ~ $ oidc-add htc23
  Enter decryption password for account config 'htc23': <passwd>
  success
  apascolinit1@ui-tier1 ~ $ umask 0077 ; oidc-token htc23 > ${HOME}/token
  ```
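The `umask 0077` in the step above is worth keeping: it ensures the token file created by the redirection is readable by the owner only, so other users on the UI cannot read your bearer token. A self-contained sketch of the effect (token.demo is a throwaway placeholder, not a real token):

```shell
# With umask 0077, newly created files get mode 600 (rw-------):
# readable and writable by the owner, inaccessible to group and others.
umask 0077
echo "dummy-token" > token.demo
stat -c '%a' token.demo   # prints 600
rm token.demo
```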
- Submit a test job. Submit file:

  ```
  apascolinit1@ui-tier1 ~ $ cat submit_token.sub
  # Unix submit description file
  # submit_token.sub -- simple sleep job
  scitokens_file          = $ENV(HOME)/token
  +owner                  = undefined
  batch_name              = Grid-Token-Sleep
  executable              = sleep.sh
  arguments               = 3600
  log                     = $(batch_name).log.$(Process)
  output                  = $(batch_name).out.$(Process)
  error                   = $(batch_name).err.$(Process)
  should_transfer_files   = Yes
  when_to_transfer_output = ON_EXIT

  queue
  ```
  Job submission with token:

  ```
  apascolinit1@ui-tier1 ~ $ module switch htc/ce auth=SCITOKENS num=1
  Don't forget to "export BEARER_TOKEN=$(oidc-token <client-name>)"!
  apascolinit1@ui-tier1 ~ $ export BEARER_TOKEN=$(oidc-token htc23)
  apascolinit1@ui-tier1 ~ $ condor_submit submit_token.sub
  Submitting job(s).
  1 job(s) submitted to cluster 35.
  apascolinit1@ui-tier1 ~ $ condor_q

  -- Schedd: ce01-htc.cr.cnaf.infn.it : <131.154.193.64:9619?... @ 03/19/24 10:35:43
  OWNER        BATCH_NAME       SUBMITTED   DONE RUN IDLE TOTAL JOB_IDS
  apascolinius Grid-Token-Sleep 3/19 10:35    _   _    1     1 35.0

  Total for query: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
  Total for apascolinius: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
  Total for all users: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
  ```
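Submissions fail when $BEARER_TOKEN has expired, so it can help to check the token's expiry first. A JWT's middle dot-separated segment is base64url-encoded JSON carrying an "exp" claim; the sketch below decodes it with standard tools. The token built here is a hand-made dummy so the snippet is self-contained; on a UI you would use $BEARER_TOKEN instead:

```shell
# Build a dummy JWT (header.payload.signature) whose payload carries
# an "exp" claim; on a real UI, set token="$BEARER_TOKEN" instead.
token="eyJhbGciOiJub25lIn0.$(printf '{"exp": 4102444800}' | base64 | tr -d '=' | tr '/+' '_-').sig"

# Extract the payload segment and undo the base64url character mapping.
payload=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
# Restore the base64 padding stripped by base64url encoding.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="$payload="; done

# Decode and pull out the numeric "exp" (Unix epoch seconds).
exp=$(printf '%s' "$payload" | base64 -d | sed 's/.*"exp": *\([0-9]*\).*/\1/')
echo "token expires at epoch $exp"
```

On GNU systems the epoch can be made human-readable with `date -d @"$exp"`.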
SSL submission
SSL submission still relies on a VOMS proxy, as GSI did, so the process is almost identical to the old one.
CAVEAT
To be able to submit jobs using SSL authentication, your x509UserProxyFQAN must be mapped in the CE configuration.
You will need to send your x509UserProxyFQAN to the support team via user-support@lists.cnaf.infn.it.
The attribute can be recovered in different ways:
- after you have a valid proxy, you can retrieve it with:

  ```
  apascolinit1@ui-tier1 ~ $ voms-proxy-info --all
  subject   : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini/CN=1239012205
  issuer    : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
  identity  : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
  type      : RFC3820 compliant impersonation proxy
  strength  : 2048
  path      : /tmp/x509up_u23077
  timeleft  : 11:59:53
  key usage : Digital Signature, Key Encipherment
  === VO cms extension information ===
  VO        : cms
  subject   : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
  issuer    : /DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch
  attribute : /cms/Role=production/Capability=NULL
  attribute : /cms/Role=NULL/Capability=NULL
  timeleft  : 11:59:52
  uri       : lcg-voms2.cern.ch:15002
  ```

  The x509UserProxyFQAN is composed as "<subject>,<attribute1>,<attribute2>..."; in this case:

  ```
  x509UserProxyFQAN = "/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini,/cms/Role=production/Capability=NULL,/cms/Role=NULL/Capability=NULL"
  ```
- if you already have running jobs submitted with GSI auth, you can get the x509UserProxyFQAN attribute with:

  ```
  apascolinit1@ui-tier1 ~ $ condor_q -pool ce02-htc.cr.cnaf.infn.it:9619 -n ce02-htc.cr.cnaf.infn.it <job_id> -af x509UserProxyFQAN
  /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini,/cms/Role=NULL/Capability=NULL
  ```
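Rather than joining the pieces by hand, the identity and attribute lines of `voms-proxy-info --all` can be assembled into the x509UserProxyFQAN string with standard text tools. A sketch using a saved sample of the output (proxy-info.sample is stand-in text copied from the example above; on a UI you would pipe the real command instead):

```shell
# Sample voms-proxy-info output; replace with the real command's output.
cat > proxy-info.sample <<'EOF'
identity : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
attribute : /cms/Role=production/Capability=NULL
attribute : /cms/Role=NULL/Capability=NULL
EOF

# x509UserProxyFQAN = "<identity>,<attribute1>,<attribute2>,..."
identity=$(sed -n 's/^identity *: //p' proxy-info.sample)
attrs=$(sed -n 's/^attribute *: //p' proxy-info.sample | paste -sd, -)
printf '%s,%s\n' "$identity" "$attrs"
rm proxy-info.sample
```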
In case your x509UserProxyFQAN has not been mapped in the CE configuration, you will see the following error:

```
apascolinit1@ui-tier1 ~ $ condor_submit -pool ce01-htc.cr.cnaf.infn.it:9619 -remote ce01-htc.cr.cnaf.infn.it submit_ssl.sub
ERROR: Can't find address of schedd ce01-htc.cr.cnaf.infn.it
```
- Get a proxy with voms-proxy-init:

  ```
  apascolinit1@ui-tier1 ~ $ voms-proxy-init --voms cms
  Enter GRID pass phrase for this identity:
  Contacting voms2.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=voms2.cern.ch] "cms"...
  Remote VOMS server contacted succesfully.

  Created proxy in /tmp/x509up_u23077.

  Your proxy is valid until Tue Mar 19 22:39:41 CET 2024
  ```
- Submit a job to the CE. Submit file:

  ```
  apascolinit1@ui-tier1 ~ $ cat submit_ssl.sub
  # Unix submit description file
  # submit_ssl.sub -- simple sleep job
  use_x509userproxy       = true
  +owner                  = undefined
  batch_name              = Grid-SSL-Sleep
  executable              = sleep.sh
  arguments               = 3600
  log                     = $(batch_name).log.$(Process)
  output                  = $(batch_name).out.$(Process)
  error                   = $(batch_name).err.$(Process)
  should_transfer_files   = Yes
  when_to_transfer_output = ON_EXIT

  queue
  ```
  Submit a job with SSL:

  ```
  apascolinit1@ui-tier1 ~ $ module switch htc/ce auth=SSL num=1
  Don't forget to voms-proxy-init!
  apascolinit1@ui-tier1 ~ $ condor_submit submit_ssl.sub
  Submitting job(s).
  1 job(s) submitted to cluster 36.
  apascolinit1@ui-tier1 ~ $ condor_q

  -- Schedd: ce01-htc.cr.cnaf.infn.it : <131.154.193.64:9619?... @ 03/19/24 10:45:18
  OWNER      BATCH_NAME     SUBMITTED   DONE RUN IDLE TOTAL JOB_IDS
  apascolini Grid-SSL-Sleep 3/19 10:44    _   1    _     1 36.0

  Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
  Total for apascolini: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
  Total for all users: 2 jobs; 1 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
  ```