

This document describes the workflows to develop and deploy an IOC on an ECS k8s infrastructure.

Common and preparatory steps


Create a GIT repository

Every IOC/application MUST have an associated GIT repository. A project must be created under https://baltig.infn.it/lnf-da-control or under a different group, for instance:

https://baltig.infn.it/infn-epics

https://baltig.infn.it/epics-containers

Pay attention not to set the project to private, otherwise it will not be possible to load it into the ECS k8s.
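As a sketch, a new project can be initialized locally and linked to its baltig remote. The project name my-new-ioc and the user identity below are placeholders; substitute your own:

```shell
# Create a local repository for the new IOC (name is a placeholder)
mkdir -p my-new-ioc && cd my-new-ioc
git init -b main
git config user.name "Your Name"
git config user.email "you@lnf.infn.it"
echo "# my-new-ioc" > README.md
git add README.md
git commit -m "Initial commit"
# Link to the project created on baltig (must NOT be private)
git remote add origin https://baltig.infn.it/lnf-da-control/my-new-ioc.git
# git push -u origin main   # uncomment once the remote project exists
```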

Setup IDE

It is highly recommended to use https://code.visualstudio.com/ to handle the project. It is also highly recommended to use a container for development, to decouple the development of the application from the platform it is developed on. Using a container is quite simple; please read this guide: https://code.visualstudio.com/docs/devcontainers/containers


Setup environment variables

Depending on the target beamline (e.g. the Sparc ECS environment):

  1. Set EPICS_CA_ADDR_LIST if needed,
  2. Set the Phoebus environment if needed.
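A minimal sketch of the Channel Access setup; the address below is a placeholder, use the values documented for your target beamline:

```shell
# Placeholder address: replace with your beamline's CA gateway/broadcast address
export EPICS_CA_ADDR_LIST="192.168.1.255"
# When a gateway is used, disable automatic address discovery
export EPICS_CA_AUTO_ADDR_LIST=NO
```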


Python SoftIOC workflow

Setup a target container

For this kind of IOC it is recommended to use the Docker image baltig.infn.it:4567/epics-containers/epics-py-base, since it contains the required packages and is also the image that will be used to run this kind of soft IOC.

Below is an example devcontainer configuration, devcontainer.json:

// For format details, see https://containers.dev/implementors/json_reference/
{
    "name": "python container",
    //  "build": {
    //     "dockerfile": "../Dockerfile",
    //     "args": {
    //         "TARGET_ARCHITECTURE": "linux"
    //     }
    // },
    "image": "baltig.infn.it:4567/epics-containers/epics-py-base",
    "remoteEnv": {
        // allows X11 apps to run inside the container
        "DISPLAY": "${localEnv:DISPLAY}",
        // provides a name for epics-containers to use in bash prompt etc.
        "EC_PROJECT": "${localWorkspaceFolderBasename}"
    },
    "features": {
        
    },
    // IMPORTANT for this devcontainer to work with docker EC_REMOTE_USER must be
    // set to vscode. For podman it should be left blank.
    "remoteUser": "${localEnv:EC_REMOTE_USER}",
    "customizations": {
        "vscode": {
            // Add the IDs of extensions you want installed when the container is created.
             "extensions": [
                "ms-python.python",
                "ms-python.vscode-pylance",
                "tamasfe.even-better-toml",
                "redhat.vscode-yaml",
                "ryanluker.vscode-coverage-gutters",
                "epicsdeb.vscode-epics",
                "ms-python.black-formatter"
            ]
        }
    },
    // Make sure the files we are mapping into the container exist on the host
    // You can place any other outside of the container before-launch commands here
    "initializeCommand": "bash .devcontainer/initializeCommand ${devcontainerId}",
    // Hooks the global .bashprofile_dev_container but also can add any other commands
    // to run in the container at creation in here
    "postCreateCommand": "bash .devcontainer/postCreateCommand ${devcontainerId}",
    //"forwardPorts": [5064, 5065],

    "runArgs": [
        // Allow the container to access the host X11 display and EPICS CA
        //"--net=host",
        // Make sure SELinux does not interfere with access to host filesystems like /tmp
        "--security-opt=label=disable"
    ],
    "workspaceMount": "source=${localWorkspaceFolder},target=/app/${localWorkspaceFolderBasename},type=bind",
    "workspaceFolder": "/app/${localWorkspaceFolderBasename}",
    "mounts": [
        // Mount some useful local files from the user's home directory
        // By mounting the parent of the workspace we can work on multiple peer projects
        "source=${localWorkspaceFolder}/../,target=/repos,type=bind",
        // this provides eternal bash history in and out of the container
        "source=${localEnv:HOME}/.bash_eternal_history,target=/root/.bash_eternal_history,type=bind",
        // this bashrc hooks up the .bashrc_dev_container in the following mount
        "source=${localWorkspaceFolder}/.devcontainer/.bashrc,target=/root/.bashrc,type=bind",
        // provides a place for you to put your shell customizations for all your dev containers
        "source=${localEnv:HOME}/.bashrc_dev_container,target=/root/.bashrc_dev_container,type=bind",
        // provides a place to install any packages you want to have across all your dev containers
        "source=${localEnv:HOME}/.bashprofile_dev_container,target=/root/.bashprofile_dev_container,type=bind",
        // provides the same command line editing experience as your host
        "source=${localEnv:HOME}/.inputrc,target=/root/.inputrc,type=bind"
    ]
}


A fully functional example:

https://baltig.infn.it/infn-epics/py-ioc-collector


Follow development guidelines

See the guide "SoftIOC Development in a Linux-like environment".

Deploy on the target ECS

Once your IOC/application is tested, it is ready to be published on the target ECS.

In your application repository, create a bash script named start.sh that will be used to start your application, and add it to your repo.

A simple example taken from https://baltig.infn.it/lnf-da-control/dafne-k8s-ecs/-/tree/main/config/ioc/ioc-dafne-accumulator-orbit?ref_type=heads:

start.sh
#!/bin/bash
script_dir=$(dirname "$0")
cd "$script_dir"
echo "Starting Accumulator Orbit addr: $EPICS_CA_ADDR_LIST"
python ./scripts/py-ioc-collector.py -c accumulator_orbit.json


If the application/IOC is generic, a more generic approach should be followed, decoupling the source from the startup configuration. In this approach the configuration of the IOC resides in a dedicated directory <TARGET ECS>/config/ioc of the target k8s ECS; see for instance https://baltig.infn.it/lnf-da-control/dafne-k8s-ecs/-/tree/main/config/ioc
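As an illustration of the decoupled layout (the instance name ioc-my-instance and the configuration file my_instance.json are hypothetical), the per-instance configuration, including its start.sh, lives in the ECS repository rather than in the application source:

```shell
# Per-instance configuration directory inside the target ECS repository
mkdir -p dafne-k8s-ecs/config/ioc/ioc-my-instance
# The instance-specific start.sh invokes the generic application with its own config
cat > dafne-k8s-ecs/config/ioc/ioc-my-instance/start.sh <<'EOF'
#!/bin/bash
script_dir=$(dirname "$0")
cd "$script_dir"
python ./scripts/py-ioc-collector.py -c my_instance.json
EOF
chmod +x dafne-k8s-ecs/config/ioc/ioc-my-instance/start.sh
```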



Follow these steps:

  1. Identify your target ECS information page (e.g. Sparc ECS), retrieve the "GIT Control Source" URL, and clone it:
    Full CS
    git clone https://baltig.infn.it/lnf-da-control/<BEAMLINE>-k8s-ecs.git --recurse-submodules
  2. go into deploy/templates,
  3. create a manifest YAML for your application like this:
    Application yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: <myapplicationname>-ioc
      namespace: argocd
      labels:
        rootapp: {{ .Chart.Name }}
        rootappver: {{ .Chart.Version | quote }}
        beamline: {{ .Values.beamline | quote }}
    spec:
      project: default
      source:
        repoURL: 'https://baltig.infn.it/epics-containers/ioc-chart.git'
        path: .
        targetRevision: HEAD
        helm:
          values: |
              image: baltig.infn.it:4567/epics-containers/epics-py-base
              beamline: {{ .Values.beamline | quote }}
              replicaCount: {{ .Values.consoleReplica }}
              configCA:
                existingConfigMap: {{ .Values.configCA.configName | quote}}
                address_list: {{ .Values.configCA.gatewayName }} ## override with gateway
        
              gitRepoConfig:
                url: 'http://<your repo url>'
                path: '<path to reach your app and conf... usually just .>'
                branch: 'main'
                init: 'true'
              start: '/epics/ioc/config/start.sh'
              
            
      destination:
        server: 'https://kubernetes.default.svc'
        namespace: {{ .Values.namespace | quote }}
      syncPolicy:
        automated:
          prune: true  # Optional: Automatically remove resources not specified in Helm chart
          selfHeal: true
  4. name it as <myapplicationname>-ioc.yaml,
  5. update the deploy/values.yaml by adding <myapplicationname>-ioc to the address_list, so that it can be found by interfaces and services,
  6. git add <myapplicationname>-ioc.yaml
  7. git commit -m "a meaningful comment" .
  8. git push origin main
  9. Once the application manifest is in place, the application should be started by ArgoCD: https://argocd-server-argocd.apps.okd-datest.lnf.infn.it/applications
  10. Log in to ArgoCD as 'admin', using a password that you must request. Check the status of your application, then delete the 'ca-gateway' application so that the Gateway restarts and updates its configuration for all clients.