This document describes the workflows to develop and deploy IOCs on an EPIK8S infrastructure.

Common and preparatory steps

Install the Docker engine; for macOS/Windows use Docker Desktop: https://www.docker.com/products/docker-desktop/

For debian:

 sudo apt-get install docker.io
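Optionally verify that Docker works; a minimal sketch for systemd-based systems (adding your user to the docker group is a convenience, not a requirement):

Verify Docker
sudo systemctl enable --now docker   ## start the Docker daemon now and at boot
sudo usermod -aG docker $USER        ## optional: run docker without sudo (log out/in to apply)
docker run --rm hello-world          ## quick smoke test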

Create a GIT repository

Every IOC/application MUST have an associated Git repository. A project must be created under https://baltig.infn.it/lnf-da-control or under a different group, for instance:

https://baltig.infn.it/infn-epics

https://baltig.infn.it/epics-containers

Pay attention not to set the project as private, otherwise it will not be possible to load it into EPIK8S.
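For example, to initialize a local repository and push it to a newly created (hypothetical) project under one of the groups above:

Create and push the repository
mkdir mynewioc && cd mynewioc
git init
echo "# mynewioc" > README.md
git add README.md
git commit -m "initial commit"
git remote add origin https://baltig.infn.it/infn-epics/mynewioc.git  ## placeholder URL, use the project you created
git push -u origin main  ## or master, depending on your default branch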

Setup VSCODE IDE

It's highly suggested to use https://code.visualstudio.com/ (VS Code) to manage the project.

It's also recommended to develop inside a container, to decouple the application from the platform on which it is developed.

Using a dev container is straightforward; please read this guide: https://code.visualstudio.com/docs/devcontainers/containers

Setup environment variables

Depending on the target beamline (e.g. the SPARC environment, see EPIK8s Sparc):

  1. Set EPICS_CA_ADDR_LIST if needed,
  2. Set up the Phoebus environment if needed (settings.ini), as shown in the sketch below.
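A minimal sketch for the shell environment, assuming the SPARC CA gateway host used in the examples further below; adapt the values to your beamline:

Environment setup
export EPICS_CA_ADDR_LIST=rdsparcpitaia001.lnf.infn.it  ## list of IOCs or gateway of your beamline
export EPICS_CA_AUTO_ADDR_LIST=NO
## Phoebus can be pointed to a beamline-specific settings file at startup:
## phoebus.sh -settings settings.ini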


Python SoftIOC workflow

Setup a target container

For this kind of IOC it's recommended to use the Docker image baltig.infn.it:4567/epics-containers/epics-py-base, since it contains the required packages and is also the image that will be used to run this kind of soft IOC.

Create a .devcontainer directory in your VS Code workspace and create a devcontainer.json like the following:

devcontainer.json
// For format details, see https://containers.dev/implementors/json_reference/
{
    "name": "python container",
    "image": "baltig.infn.it:4567/epics-containers/epics-py-base",
    "remoteEnv": {
        // allows X11 apps to run inside the container
        "DISPLAY": "${localEnv:DISPLAY}",
        // provides a name for epics-containers to use in bash prompt etc.
        "EC_PROJECT": "${localWorkspaceFolderBasename}"
    },
    "features": {
        
    },
    // IMPORTANT for this devcontainer to work with docker EC_REMOTE_USER must be
    // set to vscode. For podman it should be left blank.
    "remoteUser": "${localEnv:EC_REMOTE_USER}",
    "customizations": {
        "vscode": {
            // Add the IDs of extensions you want installed when the container is created.
             "extensions": [
                "ms-python.python",
                "ms-python.vscode-pylance",
                "tamasfe.even-better-toml",
                "redhat.vscode-yaml",
                "ryanluker.vscode-coverage-gutters",
                "epicsdeb.vscode-epics",
                "ms-python.black-formatter"
            ]
        }
    },
    // Make sure the files we are mapping into the container exist on the host
    // You can place any other outside of the container before-launch commands here
    //"initializeCommand": "bash .devcontainer/initializeCommand ${devcontainerId}",
    // Hooks the global .bashprofile_dev_container but also can add any other commands
    // to run in the container at creation in here
    //"postCreateCommand": "bash .devcontainer/postCreateCommand ${devcontainerId}",
	// forward ports to clients
     "appPort": [5064,"5064:5064/udp","5065:5065/udp"],

    "runArgs": [
        // Allow the container to access the host X11 display and EPICS CA
        //"--net=host",
        // Make sure SELinux does not disable write access to host filesystems like tmp
        "--security-opt=label=disable"
    ],
    "workspaceMount": "source=${localWorkspaceFolder},target=/app/${localWorkspaceFolderBasename},type=bind",
    "workspaceFolder": "/app/${localWorkspaceFolderBasename}",
    "mounts": [
        // Mount some useful local files from the user's home directory
        // By mounting the parent of the workspace we can work on multiple peer projects
        "source=${localWorkspaceFolder}/../,target=/repos,type=bind",
        // this provides eternal bash history in and out of the container
        "source=${localEnv:HOME}/.bash_eternal_history,target=/root/.bash_eternal_history,type=bind",
        // this bashrc hooks up the .bashrc_dev_container in the following mount
        "source=${localWorkspaceFolder}/.devcontainer/.bashrc,target=/root/.bashrc,type=bind",
        // provides a place for you to put your shell customizations for all your dev containers
        "source=${localEnv:HOME}/.bashrc_dev_container,target=/root/.bashrc_dev_container,type=bind",
        // provides a place to install any packages you want to have across all your dev containers
        "source=${localEnv:HOME}/.bashprofile_dev_container,target=/root/.bashprofile_dev_container,type=bind",
        // provides the same command line editing experience as your host
        "source=${localEnv:HOME}/.inputrc,target=/root/.inputrc,type=bind"
    ]
}


A fully functional example:

https://baltig.infn.it/infn-epics/py-ioc-collector

Access container from command line

To test the container and Channel Access connectivity:


Epics Python Container CLI
docker run -it baltig.infn.it:4567/epics-containers/epics-py-base bash

epics@d916ac2d3987:~$ export EPICS_CA_ADDR_LIST=rdsparcpitaia001.lnf.infn.it ## list of IOCs or gateway
epics@d916ac2d3987:~$ python
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import epics
>>> epics.caget("SR00RPA01:DVDR")
'Redpitaya'


Follow development guidelines

SOFT IOC Development: SoftIOC in a Linux-like environment

EPICS IOC/support


A Docker image that can be used for EPICS 7 development is:

baltig.infn.it:4567/epics-containers/infn-epics-ioc:devel


It contains several support modules used in deployment.
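To see which support modules are available, a quick check (assuming the standard epics-containers layout, where support modules live under /epics/support):

List support modules
docker run --rm baltig.infn.it:4567/epics-containers/infn-epics-ioc:devel ls /epics/support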


For instance, you can start a shell that mounts the current directory '.' to /mnt.

The example below initializes a directory with a standard IOC/support tree:

Epics support/example
docker run -p 5064:5064/udp -p 5064:5064/tcp -p 5065:5065/udp -p 5065:5065/tcp -v .:/mnt -it baltig.infn.it:4567/epics-containers/infn-epics-ioc:devel bash
## docker shell
cd /mnt ## go in the . directory that is mounted 
mkdir mynewiocsample
cd mynewiocsample
makeBaseApp.pl -t example mynewiocsample
make
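To also generate a boot directory and run the example IOC, a possible continuation inside the same container (a sketch, assuming a linux-x86_64 build):

Run the example IOC
cd /mnt/mynewiocsample
makeBaseApp.pl -i -t example -p mynewiocsample mynewiocsample  ## create iocBoot/iocmynewiocsample
make
cd iocBoot/iocmynewiocsample
chmod +x st.cmd
./st.cmd  ## starts the IOC shell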


Setup a target container

Create a .devcontainer directory in your VS Code workspace and create a devcontainer.json like the following:

// For format details, see https://containers.dev/implementors/json_reference/
{
    "name": "Native IOC development container",
    "image": "baltig.infn.it:4567/epics-containers/infn-epics-ioc:devel",
    "remoteEnv": {
        // allows X11 apps to run inside the container
        "DISPLAY": "${localEnv:DISPLAY}",
        // provides a name for epics-containers to use in bash prompt etc.
        "EC_PROJECT": "${localWorkspaceFolderBasename}"
    },
    "features": {
        
    },
    // IMPORTANT for this devcontainer to work with docker EC_REMOTE_USER must be
    // set to vscode. For podman it should be left blank.
    "remoteUser": "${localEnv:EC_REMOTE_USER}",
    "customizations": {
        "vscode": {
            // Add the IDs of extensions you want installed when the container is created.
             "extensions": [
                "ms-python.python",
                "ms-python.vscode-pylance",
                "tamasfe.even-better-toml",
                "redhat.vscode-yaml",
                "ryanluker.vscode-coverage-gutters",
                "epicsdeb.vscode-epics",
                "ms-python.black-formatter"
            ]
        }
    },
    // Make sure the files we are mapping into the container exist on the host
    // You can place any other outside of the container before-launch commands here
    //"initializeCommand": "bash .devcontainer/initializeCommand ${devcontainerId}",
    // Hooks the global .bashprofile_dev_container but also can add any other commands
    // to run in the container at creation in here
    //"postCreateCommand": "bash .devcontainer/postCreateCommand ${devcontainerId}",
	// forward ports to clients
     "appPort": [5064,"5064:5064/udp","5065:5065/udp"],

    "runArgs": [
        // Allow the container to access the host X11 display and EPICS CA
        //"--net=host",
        // Make sure SELinux does not disable write access to host filesystems like tmp
        "--security-opt=label=disable"
    ],
    "workspaceMount": "source=${localWorkspaceFolder},target=/app/${localWorkspaceFolderBasename},type=bind",
    "workspaceFolder": "/app/${localWorkspaceFolderBasename}",
    "mounts": [
        // Mount some useful local files from the user's home directory
        // By mounting the parent of the workspace we can work on multiple peer projects
        "source=${localWorkspaceFolder}/../,target=/repos,type=bind",
        
        
    ]
}

GIGE Camera

GigE Vision cameras can be acquired through the ADAravis support, which is already included in the production/development infn-epics-ioc containers.

NOTE: this container must be launched with --network=host to access GigE cameras.


Devel example

During development, the arv-tool command inside the container can be used to discover the cameras that can be reached:


Camera development start
docker run --network=host -v .:/epics/ioc/config -it baltig.infn.it:4567/epics-containers/infn-epics-ioc:devel bash
..

root@chaost-camera01:/epics/generic-source/ioc/config# arv-tool-0.8 
Basler-a2A1920-51gmBAS-40426579 (192.168.115.49)
Basler-a2A2600-20gmBAS-40437925 (192.168.115.48)
Basler-scA640-70gm-24159532 (192.168.115.50)


Production example

The configuration and pipelining of plugins can be difficult and error prone, so the IBEK + templating support is highly recommended (see IBEK).

Create a directory <test> and create a camera_template.j2 file like the following:

camera template

Camera template.j2
# yaml-language-server: $schema=../schemas/ibek.support.schema.json
ioc_name: {{name}}
description: Camera SIM model with plugins

entities:
  {%- if devtype == "camerasim" %}
  - type: ADSimDetector.simDetector
  {% else %}
  - type: ADAravis.aravisCamera
    ID: {{CAMERA_ID}}
    CLASS: {{CAMERA_CLASS}}
  {% endif %}
    PORT: {{iocroot}}
    P: "{{iocprefix}}:"
    R: "{{iocroot}}:"

    

  - type: ADCore.NDROI
    PORT: {{iocroot}}.ROI1
    NDARRAY_PORT: {{iocroot}}
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Roi1:"
    ENABLED: 1

  - type: ADCore.NDProcess
    PORT: {{iocroot}}.PROC
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Proc1:"
    NDARRAY_PORT: {{iocroot}}.ROI1
    ENABLED: 1

  - type: ADCore.NDOverlay
    PORT: {{iocroot}}.OVERLAY1
    NDARRAY_PORT: {{iocroot}}
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Overlay1:"
    NAME: "Reference"
    NOverlays: 8
    SHAPE: "3"
    XPOS: ""
    YPOS: ""
    XCENT: ""
    YCENT: ""
    XSIZE: ""
    YSIZE: ""
    XWIDTH: ""
    YWIDTH: ""
    O: "1:"

  # We also want the high-throughput PVA protocol
  - type: ADCore.NDPvaPlugin
    PORT: {{iocroot}}.PVA
    PVNAME: "{{iocprefix}}:{{iocroot}}:PVA:OUTPUT"
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Pva1:"
    NDARRAY_PORT: {{iocroot}}
    ENABLED: 1
  
  - type: ADCore.NDPvaPlugin
    PORT: {{iocroot}}.PVA2
    PVNAME: "{{iocprefix}}:{{iocroot}}:PROC:OUTPUT"
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Proc1:Pva1:"
    NDARRAY_PORT: {{iocroot}}.PROC
    ENABLED: 1

  
  - type: ADCore.NDPvaPlugin
    PORT: {{iocroot}}.PVA3
    PVNAME: "{{iocprefix}}:{{iocroot}}:ROI1:OUTPUT"
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Roi1:Pva1:"
    NDARRAY_PORT: {{iocroot}}.ROI1
    ENABLED: 1

  

  - type: ADCore.NDStdArrays
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":image1:"
    PORT: {{iocroot}}.NTD
    NDARRAY_PORT: {{iocroot}}
    TYPE: {{CAMERA_TYPE}}
    FTVL: {{CAMERA_FTVL}}
    NELEMENTS: {{CAMERA_ELEMS}}
    ENABLED: 1

  
  - type: ADCore.NDPvaPlugin
    PORT: {{iocroot}}.PVA4
    PVNAME: "{{iocprefix}}:{{iocroot}}:OVERLAY1:OUTPUT"
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Overlay1:Pva1:"
    NDARRAY_PORT: {{iocroot}}.OVERLAY1
    ENABLED: 1

  - type: ADCore.NDStdArrays
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":image2:"
    PORT: {{iocroot}}.NTD2
    NDARRAY_PORT: {{iocroot}}.PROC
    TYPE: {{CAMERA_TYPE}}
    FTVL: {{CAMERA_FTVL}}
    NELEMENTS: {{CAMERA_ELEMS}}
    ENABLED: 1

  - type: ADCore.NDStats
    PORT: {{iocroot}}.STATS
    NDARRAY_PORT: {{iocroot}}
    HIST_SIZE: 50
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Stats1:"
    XSIZE: {{CAMERA_STATS_XSIZE}}
    YSIZE: {{CAMERA_STATS_YSIZE}}
    ENABLED: 1

  - type: ADCore.NDStats
    PORT: {{iocroot}}.STATS2
    NDARRAY_PORT: {{iocroot}}.PROC
    HIST_SIZE: 50
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Proc1:Stats1:"
    XSIZE: {{CAMERA_STATS_XSIZE}}
    YSIZE: {{CAMERA_STATS_YSIZE}}
    ENABLED: 1
  
  - type: ADCore.NDStats
    PORT: {{iocroot}}.STATS3
    NDARRAY_PORT: {{iocroot}}.ROI1
    HIST_SIZE: 50
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Roi1:Stats1:"
    XSIZE: {{CAMERA_STATS_XSIZE}}
    YSIZE: {{CAMERA_STATS_YSIZE}}
    ENABLED: 1



  - type: ADCore.NDFileTIFF
    PORT: {{iocroot}}.TIFF
    NDARRAY_PORT: {{iocroot}}
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":TIFF1:"
    ENABLED: 1

  - type: ADCore.NDFileTIFF
    PORT: {{iocroot}}.TIFF2
    NDARRAY_PORT: {{iocroot}}.PROC
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Proc1:TIFF1:"
    ENABLED: 1

  - type: ADCore.NDFileTIFF
    PORT: {{iocroot}}.TIFF3
    NDARRAY_PORT: {{iocroot}}.ROI1
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Roi1:TIFF1:"
    ENABLED: 1

  - type: ADCore.NDFileTIFF
    PORT: {{iocroot}}.TIFF4
    NDARRAY_PORT: {{iocroot}}.OVERLAY1
    P: "{{iocprefix}}:{{iocroot}}"
    R: ":Overlay1:TIFF1:"
    ENABLED: 1

  - type: epics.PostStartupCommand 
    command: dbl              ## dumps PV NAMES

  - type: epics.PostStartupCommand 
    command: |
      dbl("*") > {{data_config}}/pvlist.txt
      dbpf("{{iocprefix}}:{{iocroot}}:TIFF1:FilePath", "{{data_dir}}")
      dbpf("{{iocprefix}}:{{iocroot}}:TIFF1:FileWriteMode",2)
      dbpf("{{iocprefix}}:{{iocroot}}:TIFF1:FileName","camera")
      dbpf("{{iocprefix}}:{{iocroot}}:TIFF1:AutoIncrement",1)

      dbpf("{{iocprefix}}:{{iocroot}}:TIFF1:FileTemplate","%s%s_%3.3d.tiff")
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF1:FilePath", "{{data_dir}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF1:FileWriteMode",2)
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF1:FileName","{{iocroot}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF1:AutoIncrement",1)
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF1:FileTemplate","%s%s_proc_%3.3d.tiff")

      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF:FilePath", "{{data_dir}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF:FileWriteMode",2)
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF:FileName","{{iocroot}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF:AutoIncrement",1)
      dbpf("{{iocprefix}}:{{iocroot}}:Proc1:TIFF:FileTemplate","%s%s_proc_%3.3d.tiff")

      dbpf("{{iocprefix}}:{{iocroot}}:Roi1:TIFF1:FilePath", "{{data_dir}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Roi1:TIFF1:FileWriteMode",2)
      dbpf("{{iocprefix}}:{{iocroot}}:Roi1:TIFF1:FileName","{{iocroot}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Roi1:TIFF1:AutoIncrement",1)
      dbpf("{{iocprefix}}:{{iocroot}}:Roi1:TIFF1:FileTemplate","%s%s_roi_%3.3d.tiff")
      dbpf("{{iocprefix}}:{{iocroot}}:Overlay1:TIFF1:FilePath", "{{data_dir}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Overlay1:TIFF1:FileWriteMode",2)
      dbpf("{{iocprefix}}:{{iocroot}}:Overlay1:TIFF1:FileTemplate","%s%s_overlay_%3.3d.tiff")
      dbpf("{{iocprefix}}:{{iocroot}}:Overlay1:TIFF1:FileName","{{iocroot}}")
      dbpf("{{iocprefix}}:{{iocroot}}:Overlay1:EnableCallbacks","1")
      {%- for param in iocinit %}
        dbpf("{{iocprefix}}:{{iocroot}}:{{param.name}}","{{param.value}}")
      {%- endfor %}

  - type: epics.EpicsCaMaxArrayBytes 
    max_bytes: 10000000


camera ibek specific rendering

And a camerainit.yml that produces the ibek YAML for the given camera:

camera ini
name: "SCOUT640"
asset: "https://confluence.infn.it/x/nYD8DQ"
charturl: 'https://baltig.infn.it/epics-containers/ioc-launcher-chart.git'
host: "192.168.197.24"
user: "root"
iocdir: "camera"
ca_server_port: 5264
pva_server_port: 5275
docker:
  enable: true
  image: baltig.infn.it:4567/epics-containers/infn-epics-ioc:latest
devtype: camera
devgroup: diag
iocprefix: "EUAPS:CAM"
iocroot: "SCOUT64"
autosync: false ## restart automatically on changes
opi:
  url: https://baltig.infn.it/infn-epics/camera-opi.git
  main: Camera_Main.bob
  macro:
    - name: "DEVICE"
      value: EUAPS:CAM
    - name: "CAM"
      value: "SCOUT64"

CAMERA_ID: "Basler-scA640-70gm-24159532"
CAMERA_CLASS: "Basler-scA640-70gm"
CAMERA_TYPE: "Int8"
CAMERA_FTVL: "USHORT"
CAMERA_ELEMS: 5616000
CAMERA_STATS_XSIZE: 1024
CAMERA_STATS_YSIZE: 768

The application jnjrender (pip install jnjrender) will render a valid ibek YAML file, camera_template.yaml:


camera rendering
jnjrender camera_template.j2  camerainit.yml  --output camera_template.yaml


camera run

Now launch the Docker container, mounting the directory that contains camera_template.yaml onto /epics/ioc/config:

Camera run
docker run --network=host -v .:/epics/ioc/config -it baltig.infn.it:4567/epics-containers/infn-epics-ioc

Deploy on the target EPIK8S 

Once your IOC/application, named 'mynewioc' in this example, is tested, it is ready to be published on the target EPIK8S.

Identify your target EPIK8S information page (e.g. EPIK8s Sparc), retrieve the "GIT Control Source" URL and clone it:

Full CS from scratch
git clone https://baltig.infn.it/lnf-da-control/epik8-<BEAMLINE>.git --recurse-submodules

or update a pre-existing one:

Full CS Update
git pull --recurse-submodules ## to update remote changes

You should have a directory tree like this:

BeamLine Tree
├── README.md
├── config
│   ├── applications
│   │   ├── flame-state-import 		
│   │   └── icpdastemp01
│   ├── iocs
│   │   ├── mrf01
│   │   ├── pitaya
│   │   ├── temp01
│   │   └── mynewioc                    <-- add here your folder or git subproject for configuration
│   └── services
│       └── cagateway
├── deploy
│   ├── Chart.yaml
│   ├── templates
│   │   └── epik8.yaml
│   └── values.yaml						<-- add here your IOC to deploy
├─
...


Adding IOC

Suppose your IOC name is mynewioc: you should create a folder with the same name. This folder should contain:

  1. an ioc.yaml, if your IOC is generic and has support in ibek (see IBEK support), or
  2. a start.sh plus other IOC startup files or submodules. start.sh is the entry point; it can perform some useful substitutions in case of multiple instances of the same IOC, and then starts your IOC.

If mynewioc is a soft IOC, this directory should also contain a git submodule that points to your application's git repository.

Example softioc

If mynewioc is a soft IOC you should have a repository for it, so in the EPIK8S folder config/iocs/mynewioc you must add your repository as a submodule:

Add your project as submodule
cd <EPIK8Sfolder>/config/iocs
git pull --recurse-submodules ## to update remote changes
git submodule update --init ## to initialize any new submodules
mkdir mynewioc
git add mynewioc
cd mynewioc
## here add your new submodule
git submodule add <your repository_URL> scripts # your softioc repository will be added as scripts
git commit -m "your comment Add submodule <submodule_name>"
git push origin <your remote branch i.e main/master>


In the mynewioc configuration directory (next to the scripts submodule) create a bash script named start.sh that will be used to start your application.

start.sh
#!/bin/bash
script_dir=$(dirname "$0")
cd "$script_dir"
echo "Starting $__IOC_NAME__ : $EPICS_CA_ADDR_LIST"
python ./scripts/mynewioc <parameters> -c myconfig.json

This script launches ./scripts/mynewioc (remember that scripts is the folder name you gave to your submodule in the previous step), passing some parameters and a configuration file, myconfig.json, that must also be added to the mynewioc directory.


Add IOC configuration to EPIK8s

Once your configuration directory mynewioc is complete, we need to add it to the main git repository.

Add the IOC configuration
cd <EPIK8Sfolder>/config/iocs/mynewioc
chmod a+x start.sh # make start.sh executable
git add start.sh myconfig.json scripts
git commit -m "my comment" .
git push origin



This way we have just added the information needed to start the IOC, but we still need to instruct ArgoCD to launch the IOC on the EPIK8S target infrastructure.


Deploy IOC to EPIK8s infrastructure

We should now add an entry to <EPIK8Sfolder>/deploy/values.yaml.

We must distinguish between IOC types: the configuration is slightly different depending on whether the IOC runs inside or outside the cluster.

In cluster

You must find the iocs section in deploy/values.yaml and add your IOC:

Deploy an internal IOC
 - name: "mynewioc"
      asset: "https://confluence.infn.it/x/nYD8DQ"
      charturl: 'https://baltig.infn.it/epics-containers/ioc-chart.git'
      image: baltig.infn.it:4567/epics-containers/epics-py-base
      iocprefix: "SPARC:ORBIT"
      start: "/epics/ioc/config/start.sh" ## if your mynewioc has a start.sh this line must be kept
      gitinit: true


Out Cluster

This is the case of devices that have an embedded IOC. In this case EPIK8S will transfer the configuration to the remote target and will start the IOC via ssh.

NOTE: the remote IOC host must run sshd and have an authorised key so that ssh commands can be performed (see the sketch after the example below).

Deploy an external IOC
 - name: "mrf01"
      asset: "https://confluence.infn.it/x/nYD8DQ"
      charturl: 'https://baltig.infn.it/epics-containers/ioc-launcher-chart.git'
      iocname: "mrf01"
      iocprefix: "MRF01"
      host: "plsparcmrf001.lnf.infn.it"									## HOST
      user: "root"														## USER
      workdir: "/home/nat/progetti/mrfioc2/iocBoot/epik8s"				## WORKDIR (where transfer config/iocs/mrf01 configuration)
      exec: "start.sh"
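To authorise the key on the remote host, a possible sketch (the public key used by the EPIK8S launcher is infrastructure specific; obtain it from the beamline administrators, the file name below is hypothetical):

Authorise ssh access on the remote host
ssh-copy-id -f -i epik8s_launcher_key.pub root@plsparcmrf001.lnf.infn.it
ssh root@plsparcmrf001.lnf.infn.it 'echo connection ok'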


Test syntax deploy

It's recommended to test the deploy before committing changes, to avoid pushing a broken deploy to ArgoCD that would block any other deploy.

To test a deploy you need the helm tool installed. Go into the deploy directory (where values.yaml is) and run the following command:

Test the deploy
 helm template --debug .

Commit the deploy

Once your deploy configuration is ready, commit it:


Commit the deploy
cd <EPIK8Sfolder>
git commit -m "my comment" .
git push origin



After a few minutes ArgoCD will update the cluster.

The status should be visible on https://argocd-server-argocd.apps.okd-datest.lnf.infn.it/applications 
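If you prefer the command line and have the argocd CLI installed, a possible check (the application name is beamline specific and therefore only indicative here):

Check the deploy with the argocd CLI
argocd login argocd-server-argocd.apps.okd-datest.lnf.infn.it
argocd app list                   ## look for your beamline application
argocd app sync epik8-<BEAMLINE>  ## force a sync instead of waiting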


Updating an existing IOC

If you only want to update an existing IOC after it has been modified in the linked repository:

Update an existing IOC
cd /<your git directory>/epik8-<beamline name> ## for example epik8-sparc
git pull --recurse-submodules ## to update remote changes
git submodule update --init ## to initialize any new submodules
cd config/iocs/<ioc_name>/scripts
git pull ## pull the remote changes
cd ..
git add ./scripts/
git commit -m "updated IOC <ioc_name>"
git push origin

To make the changes effective in the running IOC, you have to restart the related process in ArgoCD.
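If you have kubectl access to the cluster, a possible alternative to the ArgoCD web UI (namespace and pod names depend on how the beamline chart names the IOC workloads, so they are hypothetical here):

Restart the IOC from the command line
kubectl -n <beamline-namespace> get pods | grep mynewioc        ## find the IOC pod
kubectl -n <beamline-namespace> delete pod <mynewioc-pod-name>  ## the controller recreates it with the updated config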


Update EPICS CA clients

If you want your new IOC to be visible to all EPICS CA clients, you should restart the ca-gateway service via ArgoCD. This could cause a few seconds of service interruption.

Setup Phoebus and develop the OPI for your softioc

Install Phoebus: Phoebus.

If you want to develop the OPI for your IOC, note that each K8S ECS installation has, in its GIT Control Source project, an opi directory (a git subproject containing other subprojects) with a Launcher.bob and a start.sh; this setup is intended to start the main control interface of a given beamline.
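For example, once Phoebus is installed and the opi directory of the beamline GIT Control Source has been cloned, the main interface can typically be opened with (a sketch; the exact options may already be wrapped by the provided start.sh):

Open the beamline launcher
cd <EPIK8Sfolder>/opi
phoebus.sh -settings settings.ini -resource Launcher.bob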

If you are developing a GUI that must be integrated in the control system, the OPI Development Workflow must be followed.