The new HTCondor 23 cluster currently consists of:
Node(s) | Description | OS
---|---|---
cm01-htc and cm02-htc | Central Managers of the new cluster | AlmaLinux 9
sn01-htc | Local submit node (acting as the Access Point (AP)) | AlmaLinux 9
ce01-htc | CE for Grid submission (Token + SSL) | AlmaLinux 9
wn-204-11-* | Worker Nodes | CentOS
Submission to the new cluster
The main difference in the submission workflow is that commands must refer to the Central Manager of the new cluster, i.e. you add `-pool cm01-htc` to the submission and query commands.
Local Submission
To submit local jobs, the behavior is the same as for HTCondor 9, using the Jobs UI.
- Submitting a job to the cluster

Executable and submit file:

```shell
apascolinit1@ui-tier1 ~ $ cat sleep.sh
#!/bin/env bash

sleep $1

apascolinit1@ui-tier1 ~ $ cat submit.sub
# Unix submit description file
# submit.sub -- simple sleep job

batch_name              = Local-Sleep
executable              = sleep.sh
arguments               = 3600

log                     = $(batch_name).log.$(Process)
output                  = $(batch_name).out.$(Process)
error                   = $(batch_name).err.$(Process)

should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT

queue
```
Submission and job status check:

```shell
apascolinit1@ui-tier1 ~ $ condor_submit -pool cm01-htc -remote sn01-htc submit.sub
Submitting job(s).
1 job(s) submitted to cluster 15.

apascolinit1@ui-tier1 ~ $ condor_q -pool cm01-htc -n sn01-htc

-- Schedd: sn01-htc.cr.cnaf.infn.it : <131.154.192.242:9618?... @ 03/18/24 17:15:44
OWNER        BATCH_NAME   SUBMITTED   DONE   RUN   IDLE  TOTAL JOB_IDS
apascolinit1 Local-Sleep  3/18 17:15     _     1      _      1 15.0

Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for apascolinit1: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for all users: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
```
Grid Submission
Grid submission through ce01-htc is nearly the same as on the old cluster. Two authentication methods are available:
Token submission
The steps are identical to those for the HTCondor 9 cluster:
- Register a new client (or load an already registered one)

Register a new client:

```shell
apascolinit1@ui-tier1 ~ $ eval `oidc-agent-service use`
23025
apascolinit1@ui-tier1 ~ $ oidc-gen -w device
Enter short name for the account to configure: htc23
[1] https://iam-t1-computing.cloud.cnaf.infn.it/
...
Issuer [https://iam-t1-computing.cloud.cnaf.infn.it/]: <enter>
The following scopes are supported: openid profile email address phone offline_access eduperson_scoped_affiliation eduperson_entitlement eduperson_assurance entitlements
Scopes or 'max' (space separated) [openid profile offline_access]: profile wlcg.groups wlcg compute.create compute.modify compute.read compute.cancel
Registering Client ...
Generating account configuration ...
accepted
Using a browser on any device, visit:
https://iam-t1-computing.cloud.cnaf.infn.it/device
And enter the code: HQ2WYL
...
Enter encryption password for account configuration 'htc23': <passwd>
Confirm encryption Password: <passwd>
Everything setup correctly!
```
- Get a token for submission
```shell
apascolinit1@ui-tier1 ~ $ oidc-add htc23
Enter decryption password for account config 'htc23': <passwd>
success
apascolinit1@ui-tier1 ~ $ umask 0077 ; oidc-token htc23 > ${HOME}/token
```
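Tokens are short-lived, so it can be useful to check the expiry before submitting. A minimal sketch, assuming the token is a standard JWT (a fabricated sample token stands in for `${HOME}/token` here):

```shell
#!/bin/env bash
# Sketch: read the "exp" claim (a Unix timestamp) out of a bearer token.
# A JWT is header.payload.signature; the payload is base64url-encoded JSON.
# The token below is a fabricated sample; in practice: token=$(cat ${HOME}/token)
token='eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE3MTA4NDAwMDB9.c2ln'

payload=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
# restore the '=' padding that base64url strips
case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}="  ;;
esac
exp=$(printf '%s' "$payload" | base64 -d | sed -n 's/.*"exp":\([0-9]*\).*/\1/p')
echo "token expires at: $(date -u -d "@$exp" 2>/dev/null || date -u -r "$exp")"
```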
- Submit a test job

Submit file:

```shell
apascolinit1@ui-tier1 ~ $ cat submit_token.sub
# Unix submit description file
# submit_token.sub -- simple sleep job

scitokens_file          = $ENV(HOME)/token
+owner                  = undefined

batch_name              = Grid-Token-Sleep
executable              = sleep.sh
arguments               = 3600

log                     = $(batch_name).log.$(Process)
output                  = $(batch_name).out.$(Process)
error                   = $(batch_name).err.$(Process)

should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT

queue
```
Job submission with Token:

```shell
apascolinit1@ui-tier1 ~ $ export _condor_SEC_CLIENT_AUTHENTICATION_METHODS=SCITOKEN
apascolinit1@ui-tier1 ~ $ export BEARER_TOKEN=$(cat ${HOME}/token)
apascolinit1@ui-tier1 ~ $ condor_submit -pool ce01-htc.cr.cnaf.infn.it:9619 -remote ce01-htc.cr.cnaf.infn.it submit_token.sub
Submitting job(s).
1 job(s) submitted to cluster 35.

apascolinit1@ui-tier1 ~ $ condor_q -pool ce01-htc.cr.cnaf.infn.it:9619 -n ce01-htc.cr.cnaf.infn.it

-- Schedd: ce01-htc.cr.cnaf.infn.it : <131.154.193.64:9619?... @ 03/19/24 10:35:43
OWNER        BATCH_NAME        SUBMITTED   DONE   RUN   IDLE  TOTAL JOB_IDS
apascolinius Grid-Token-Sleep  3/19 10:35     _     _      1      1 35.0

Total for query: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
Total for apascolinius: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
Total for all users: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
```
SSL submission
SSL submission uses a VOMS proxy in place of a token; apart from that, the process is almost identical.
CAVEAT
To be able to submit jobs using SSL authentication, your x509UserProxyFQAN must be mapped in the CE configuration.
You will need to send your x509UserProxyFQAN to the support team via user-support@lists.cnaf.infn.it.
The attribute can be recovered in different ways:
- once you have a valid proxy, you can retrieve it with:

```shell
apascolinit1@ui-tier1 ~ $ voms-proxy-info --all
subject   : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini/CN=1239012205
issuer    : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
identity  : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
type      : RFC3820 compliant impersonation proxy
strength  : 2048
path      : /tmp/x509up_u23077
timeleft  : 11:59:53
key usage : Digital Signature, Key Encipherment
=== VO cms extension information ===
VO        : cms
subject   : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
issuer    : /DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch
attribute : /cms/Role=production/Capability=NULL
attribute : /cms/Role=NULL/Capability=NULL
timeleft  : 11:59:52
uri       : lcg-voms2.cern.ch:15002
```

The x509UserProxyFQAN is composed as "<subject>,<attribute1>,<attribute2>...", in this case:

```
x509UserProxyFQAN = "/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini,/cms/Role=production/Capability=NULL,/cms/Role=NULL/Capability=NULL"
```
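The same composition can be scripted. A minimal sketch that joins the identity and attribute lines of `voms-proxy-info` output with commas; a captured sample stands in for the live command here:

```shell
#!/bin/env bash
# Sketch: build the x509UserProxyFQAN value, "<subject>,<attr1>,<attr2>...",
# from voms-proxy-info output. The sample below is a captured excerpt;
# in practice: sample=$(voms-proxy-info --all)
sample='identity  : /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini
attribute : /cms/Role=production/Capability=NULL
attribute : /cms/Role=NULL/Capability=NULL'

# keep the value after " : " on the identity and attribute lines, join with ","
fqan=$(printf '%s\n' "$sample" \
    | awk -F' : ' '/^identity|^attribute/ {print $2}' \
    | paste -sd, -)
echo "x509UserProxyFQAN = \"$fqan\""
```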
- if you already have running jobs that were submitted with GSI authentication, you can get the x509UserProxyFQAN attribute with:
```shell
apascolinit1@ui-tier1 ~ $ condor_q -pool ce02-htc.cr.cnaf.infn.it:9619 -n ce02-htc.cr.cnaf.infn.it <job_id> -af x509UserProxyFQAN
/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=apascoli/CN=842035/CN=Alessandro Pascolini,/cms/Role=NULL/Capability=NULL
```
If your x509UserProxyFQAN has not been mapped in the CE configuration, the submission fails with the following error:
```shell
apascolinit1@ui-tier1 ~ $ condor_submit -pool ce01-htc.cr.cnaf.infn.it:9619 -remote ce01-htc.cr.cnaf.infn.it submit_ssl.sub
ERROR: Can't find address of schedd ce01-htc.cr.cnaf.infn.it
```
- Get a proxy with voms-proxy-init
```shell
apascolinit1@ui-tier1 ~ $ voms-proxy-init --voms cms
Enter GRID pass phrase for this identity:
Contacting voms2.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=voms2.cern.ch] "cms"...
Remote VOMS server contacted succesfully.

Created proxy in /tmp/x509up_u23077.

Your proxy is valid until Tue Mar 19 22:39:41 CET 2024
```
- Submit a job to the CE

Submit file:

```shell
apascolinit1@ui-tier1 ~ $ cat submit_ssl.sub
# Unix submit description file
# submit_ssl.sub -- simple sleep job

use_x509userproxy       = true
+owner                  = undefined

batch_name              = Grid-SSL-Sleep
executable              = sleep.sh
arguments               = 3600

log                     = $(batch_name).log.$(Process)
output                  = $(batch_name).out.$(Process)
error                   = $(batch_name).err.$(Process)

should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT

queue
```
Submit a job with SSL:

```shell
apascolinit1@ui-tier1 ~ $ export _condor_SEC_CLIENT_AUTHENTICATION_METHODS=SSL
apascolinit1@ui-tier1 ~ $ condor_submit -pool ce01-htc.cr.cnaf.infn.it:9619 -remote ce01-htc.cr.cnaf.infn.it submit_ssl.sub
Submitting job(s).
1 job(s) submitted to cluster 36.

apascolinit1@ui-tier1 ~ $ condor_q -pool ce01-htc.cr.cnaf.infn.it:9619 -n ce01-htc.cr.cnaf.infn.it

-- Schedd: ce01-htc.cr.cnaf.infn.it : <131.154.193.64:9619?... @ 03/19/24 10:45:18
OWNER      BATCH_NAME      SUBMITTED   DONE   RUN   IDLE  TOTAL JOB_IDS
apascolini Grid-SSL-Sleep  3/19 10:44     _     1      _      1 36.0

Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for apascolini: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for all users: 2 jobs; 1 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
```