To ease the transition to the new cluster and the use of HTCondor in general, we implemented a solution based on environment modules. The traditional interaction method, i.e. specifying all options on the command line, remains valid, but is more verbose and less convenient.
The `htc` modules set all the environment variables needed to correctly submit to both the old and the new HTCondor clusters.
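As a quick sanity check after loading a module, you can ask HTCondor which schedd it is configured to contact. This is a sketch: HTCondor reads configuration overrides from `_CONDOR_*` environment variables by convention, but the exact variables set by the `htc` modules may differ.

```shell
# Load the default htc module (names as shown by "module avail" below).
module load htc

# HTCondor honors configuration overrides passed via _CONDOR_* environment
# variables; if the module sets the schedd this way, it can be inspected with:
echo "${_CONDOR_SCHEDD_HOST:-not set}"

# Or ask HTCondor directly which schedd the current configuration points at:
condor_config_val SCHEDD_HOST
```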
Once logged into any Tier 1 user interface, this utility is available. You can list all the available modules with:
```
apascolinit1@ui-tier1 ~ $ module avail
----------- /opt/exp_software/opssw/modules/modulefiles -----------
htc/auth  htc/ce  htc/local  use.own

Key:
modulepath  default-version
```
These htc/* modules have different roles:
| variable | values | description |
|---|---|---|
| ver | 9 | selects the old HTCondor cluster and local schedd (sn-02) |
|  | 23 | selects the new HTCondor cluster and local schedd (sn01-htc) |
```
apascolinit1@ui-tier1 ~ $ module switch htc ver=9
apascolinit1@ui-tier1 ~ $ condor_q

-- Schedd: sn-02.cr.cnaf.infn.it : <131.154.192.42:9618?... @ 04/17/24 14:58:44
OWNER BATCH_NAME      SUBMITTED   DONE   RUN    IDLE   HOLD  TOTAL JOB_IDS

Total for query: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
Total for apascolinit1: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
Total for all users: 50164 jobs; 30960 completed, 1 removed, 12716 idle, 4514 running, 1973 held, 0 suspended

apascolinit1@ui-tier1 ~ $ module switch htc ver=23
apascolinit1@ui-tier1 ~ $ condor_q

-- Schedd: sn01-htc.cr.cnaf.infn.it : <131.154.192.242:9618?... @ 04/17/24 14:58:52
OWNER BATCH_NAME      SUBMITTED   DONE   RUN    IDLE   HOLD  TOTAL JOB_IDS

Total for query: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
Total for apascolinit1: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
Total for all users: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
```
| variable | values | description |
|---|---|---|
| num | 1,2,3,4 | connects to ce{num}-htc (new cluster) |
|  | 5,6,7 | connects to ce{num}-htc (old cluster) |
| auth | VOMS,SCITOKENS | calls htc/auth with the selected auth method |
```
apascolinit1@ui-tier1 ~ $ condor_q
Error: ......

apascolinit1@ui-tier1 ~ $ module switch htc/ce auth=SCITOKENS num=2
Don't forget to "export BEARER_TOKEN=$(oidc-token <client-name>)"!

Switching from htc/ce{auth=SCITOKENS:num=2} to htc/ce{auth=SCITOKENS:num=2}
Loading requirement: htc/auth{auth=SCITOKENS}

apascolinit1@ui-tier1 ~ $ export BEARER_TOKEN=$(oidc-token htc23)
apascolinit1@ui-tier1 ~ $ condor_q

-- Schedd: ce02-htc.cr.cnaf.infn.it : <131.154.192.41:9619?... @ 04/17/24 15:48:24
OWNER BATCH_NAME      SUBMITTED   DONE   RUN    IDLE   HOLD  TOTAL JOB_IDS
..........
..........
..........
```
All modules in the `htc` family provide on-line help via the `module help <module name>` command, e.g.:
```
budda@ui-tier1:~ $ module help htc
-------------------------------------------------------------------
Module Specific Help for /opt/exp_software/opssw/modules/modulefiles/htc/local:

Defines environment variables and aliases to ease the interaction
with the INFN-T1 HTCondor local job submission system
-------------------------------------------------------------------
```
To submit local jobs, the behavior is the same as for HTCondor 9 on the Jobs UI.
```
apascolinit1@ui-tier1 ~ $ cat sleep.sh
#!/bin/env bash
sleep $1

apascolinit1@ui-tier1 ~ $ cat submit.sub
# Unix submit description file
# submit.sub -- simple sleep job
batch_name              = Local-Sleep
executable              = sleep.sh
arguments               = 3600
log                     = $(batch_name).log.$(Process)
output                  = $(batch_name).out.$(Process)
error                   = $(batch_name).err.$(Process)
should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT
queue
```
```
apascolinit1@ui-tier1 ~ $ module switch htc ver=23
apascolinit1@ui-tier1 ~ $ condor_submit submit.sub
Submitting job(s).
1 job(s) submitted to cluster 15.
apascolinit1@ui-tier1 ~ $ condor_q

-- Schedd: sn01-htc.cr.cnaf.infn.it : <131.154.192.242:9618?... @ 03/18/24 17:15:44
OWNER        BATCH_NAME   SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
apascolinit1 Local-Sleep  3/18 17:15     _     1      _      1 15.0

Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for apascolinit1: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for all users: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
```
Grid submission on ce01-htc is nearly the same as on the old cluster. Two authentication methods are available: SciTokens and SSL with a VOMS proxy.

The steps to create and use a token are identical to those for the HTCondor 9 cluster:
```
apascolinit1@ui-tier1 ~ $ eval `oidc-agent-service use`
23025
apascolinit1@ui-tier1 ~ $ oidc-gen -w device
Enter short name for the account to configure: htc23
[1] https://iam-t1-computing.cloud.cnaf.infn.it/
...
...
Issuer [https://iam-t1-computing.cloud.cnaf.infn.it/]: <enter>
The following scopes are supported: openid profile email address phone offline_access eduperson_scoped_affiliation eduperson_entitlement eduperson_assurance entitlements
Scopes or 'max' (space separated) [openid profile offline_access]: profile wlcg.groups wlcg compute.create compute.modify compute.read compute.cancel
Registering Client ...
Generating account configuration ...
accepted
Using a browser on any device, visit:
https://iam-t1-computing.cloud.cnaf.infn.it/device
And enter the code: REDACTED
...
...
...
Enter encryption password for account configuration 'htc23': <passwd>
Confirm encryption Password: <passwd>
Everything setup correctly!
```
```
apascolinit1@ui-tier1 ~ $ oidc-add htc23
Enter decryption password for account config 'htc23': <passwd>
success
apascolinit1@ui-tier1 ~ $ umask 0077 ; oidc-token htc23 > ${HOME}/token
```
```
apascolinit1@ui-tier1 ~ $ cat submit_token.sub
# Unix submit description file
# submit_token.sub -- simple sleep job
scitokens_file          = $ENV(HOME)/token
+owner                  = undefined
batch_name              = Grid-Token-Sleep
executable              = sleep.sh
arguments               = 3600
log                     = $(batch_name).log.$(Process)
output                  = $(batch_name).out.$(Process)
error                   = $(batch_name).err.$(Process)
should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT
queue
```
```
apascolinit1@ui-tier1 ~ $ module switch htc/ce auth=SCITOKENS num=1
Don't forget to "export BEARER_TOKEN=$(oidc-token <client-name>)"!
apascolinit1@ui-tier1 ~ $ export BEARER_TOKEN=$(oidc-token htc23)
apascolinit1@ui-tier1 ~ $ condor_submit submit_token.sub
Submitting job(s).
1 job(s) submitted to cluster 35.
apascolinit1@ui-tier1 ~ $ condor_q

-- Schedd: ce01-htc.cr.cnaf.infn.it : <131.154.193.64:9619?... @ 03/19/24 10:35:43
OWNER        BATCH_NAME        SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
apascolinius Grid-Token-Sleep  3/19 10:35     _     _      1      1 35.0

Total for query: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
Total for apascolinius: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
Total for all users: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
```
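Output files of grid jobs are not transferred back to the user interface automatically. Assuming the retrieval workflow matches the one used on the old cluster, they can be fetched from the CE schedd with `condor_transfer_data` once the job has finished (the cluster id 35 is the one from the example above; the schedd name is an assumption):

```shell
# Re-export the token if the previous one has expired.
export BEARER_TOKEN=$(oidc-token htc23)

# Fetch the declared output/error/log files of cluster 35 from the CE schedd;
# they are written into the current directory.
condor_transfer_data -name ce01-htc.cr.cnaf.infn.it 35
```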
For SSL submission, which substitutes the token with a VOMS proxy, the process is almost identical.

To submit jobs using SSL authentication, the FQAN of your x509 user proxy must be mapped in the CE configuration.
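After creating the proxy (see below), you can check on the client side which FQANs it carries; these are the attributes that must match an entry in the CE mapping (the mapping itself can only be verified on the CE side):

```shell
# Print the FQANs of the current VOMS proxy; submission is authorized only
# if one of these is mapped in the CE configuration.
voms-proxy-info --fqan

# The full printout also shows the VO, subject and remaining validity.
voms-proxy-info --all
```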
```
apascolinit1@ui-tier1 ~ $ voms-proxy-init --voms cms
Enter GRID pass phrase for this identity:
Contacting voms2.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=voms2.cern.ch] "cms"...
Remote VOMS server contacted succesfully.

Created proxy in /tmp/x509up_u23077.

Your proxy is valid until Tue Mar 19 22:39:41 CET 2024
```
```
apascolinit1@ui-tier1 ~ $ cat submit_ssl.sub
# Unix submit description file
# submit_ssl.sub -- simple sleep job
use_x509userproxy       = true
+owner                  = undefined
batch_name              = Grid-SSL-Sleep
executable              = sleep.sh
arguments               = 3600
log                     = $(batch_name).log.$(Process)
output                  = $(batch_name).out.$(Process)
error                   = $(batch_name).err.$(Process)
should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT
queue
```
```
apascolinit1@ui-tier1 ~ $ module switch htc/ce auth=VOMS num=1
Don't forget to voms-proxy-init!
apascolinit1@ui-tier1 ~ $ condor_submit submit_ssl.sub
Submitting job(s).
1 job(s) submitted to cluster 36.
apascolinit1@ui-tier1 ~ $ condor_q

-- Schedd: ce01-htc.cr.cnaf.infn.it : <131.154.193.64:9619?... @ 03/19/24 10:45:18
OWNER      BATCH_NAME      SUBMITTED   DONE   RUN    IDLE  TOTAL JOB_IDS
apascolini Grid-SSL-Sleep  3/19 10:44     _     1      _      1 36.0

Total for query: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for apascolini: 1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
Total for all users: 2 jobs; 1 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
```