The procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.


Install microk8s


If you haven't installed microk8s yet, you can install it by running the following command:

sudo snap install microk8s --classic --channel=latest/stable


Then start enabling the required add-ons:

microk8s enable dns
microk8s status --wait-ready
microk8s enable hostpath-storage
microk8s enable metallb # Make sure the IP range specified is within your local network range and does not conflict with existing devices.
microk8s enable ingress
microk8s enable dashboard
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token # copy the token for the K8s Dashboard
microk8s enable community
microk8s enable argocd
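
The metallb add-on prompts interactively for an IP address range when enabled; you can also pass the range inline. The range below is only an example; pick free addresses from your own local network:

microk8s enable metallb:192.168.1.240-192.168.1.250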

alias kubectl='microk8s kubectl'

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d # decode the ArgoCD initial admin password


NOTE: the alias command must be added to ~/.bashrc to persist across sessions.

If you don't define the alias, remember to expand kubectl into microk8s kubectl (or sudo microk8s kubectl) in the following steps.
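
For example, to persist the alias:

echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
source ~/.bashrc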


REFERENCES: 

  1. https://microk8s.io/docs/getting-started


Explore the microk8s environment

You can verify the microk8s installation by checking the node status:

kubectl get nodes

You can verify the microk8s installation by checking the pod status:

kubectl get pod -A

Retrieve the Service CIDR

For the subsequent EPIK8s installation it is important to take note of the Service CIDR, i.e. the address range from which internal cluster service IPs are allocated.


cat /var/snap/microk8s/current/args/kube-apiserver | grep service-cluster-ip-range

or, for vanilla Kubernetes:

cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep service-cluster-ip-range


This range (10.152.183.0/24 by default on MicroK8s) should be indicated to EPIK8s to set the range of IPs assigned to pod services, because the CA/PVA protocols do not handle dynamic DNS correctly:

https://github.com/epics-base/epics-base/issues/488.


MetalLB configuration

For the subsequent EPIK8s installation it is also important to take note of the load-balancer IP range configured during the install process.



Some of these IPs will be used to access internal EPIK8s services such as cagateway and pvagateway.

To check the addresses already in use, look at the EXTERNAL-IP column:

kubectl get svc -o wide -A
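
When you later expose EPIK8s services such as cagateway through MetalLB, you can pin one of these addresses in the Service manifest. A minimal sketch, assuming a hypothetical cagateway service in a hypothetical epik8s namespace (5064 is the standard Channel Access server port; the address must fall inside the MetalLB range):

apiVersion: v1
kind: Service
metadata:
  name: cagateway            # hypothetical service name
  namespace: epik8s          # hypothetical namespace
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.241   # must lie inside the MetalLB range
  selector:
    app: cagateway
  ports:
  - name: ca-server
    port: 5064
    protocol: TCP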





Expose the K8s Dashboard

You can expose the Dashboard using a NodePort, Ingress, or LoadBalancer service, depending on your setup. By default, it uses a ClusterIP, which is not accessible externally.

To use NodePort:

kubectl patch svc kubernetes-dashboard -n kube-system -p '{"spec": {"type": "NodePort"}}'

kubectl get svc kubernetes-dashboard -n kube-system


Look for the NodePort value under the PORT(S) column. You can now access the K8s Dashboard web UI at:

https://<Node_IP>:<NodePort>

(the Dashboard serves HTTPS with a self-signed certificate, so the browser will show a warning).
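
Alternatively, without changing the service type, you can reach the Dashboard through a temporary port-forward (10443 is an arbitrary local port):

kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

The Dashboard is then available at https://localhost:10443 while the command runs.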


Recover K8s dashboard token

kubectl describe secret -n kube-system microk8s-dashboard-token
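
To print only the token, e.g. for use in scripts, you can decode it directly from the secret:

kubectl get secret -n kube-system microk8s-dashboard-token -o jsonpath="{.data.token}" | base64 -d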

Expose the ArgoCD Server

By default, the ArgoCD API server is only accessible inside the cluster. To access it externally, you can expose it using either a NodePort or a LoadBalancer service. For a minimal installation like this one, NodePort is typically used.

a. Expose ArgoCD with a NodePort:

Run this command to patch the argocd-server service to be of type NodePort:


kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'

or, if ArgoCD was installed via the Helm chart (as the MicroK8s community add-on does):

kubectl patch svc argo-cd-argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
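
Since MetalLB is already enabled, exposing ArgoCD with a LoadBalancer service is an equally valid option:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'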


To retrieve the NodePort, run the following command:


kubectl get svc -n argocd argocd-server

or 

kubectl get svc -n argocd argo-cd-argocd-server 


Look for the NodePort value under the PORT(S) column. You can now access the ArgoCD web UI at:

https://<Node_IP>:<NodePort>

(plain HTTP requests are redirected to HTTPS).


Recover ArgoCD admin password

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d # decode the ArgoCD initial admin password
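
If you have the argocd CLI installed, you can log in with the recovered password (--insecure accepts the self-signed certificate):

argocd login <Node_IP>:<NodePort> --username admin --password <password> --insecure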




Proxy settings for ArgoCD (optional)

If you are behind a proxy, some chart repositories may not be reachable.


You can edit the ArgoCD deployments to add the necessary environment variables to their containers.

  1. Edit the argocd-repo-server and argocd-application-controller deployments:


kubectl edit deployment argocd-repo-server -n argocd
kubectl edit deployment argocd-application-controller -n argocd

  2. Add the following environment variables under spec.template.spec.containers in both deployments:


spec:
  containers:
  - name: argocd-repo-server
    env:
    - name: HTTP_PROXY
      value: "http://your-http-proxy:port"
    - name: HTTPS_PROXY
      value: "http://your-https-proxy:port"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server"


For example, a concrete configuration using the LNF site proxy:

    env:
    - name: HTTP_PROXY
      value: "http://squid.lnf.infn.it:3128"
    - name: HTTPS_PROXY
      value: "http://squid.lnf.infn.it:3128"
    - name: NO_PROXY
      value: "baltig.infn.it,argocd-repo-server,argocd-server,localhost,127.0.0.0/24,::1,*.lnf.infn.it,.svc,.cluster.local,10.0.0.0/8,192.168.0.0/16"


Restart the ArgoCD Components

After updating the deployments, restart the affected components to apply the changes:


kubectl rollout restart deployment argocd-repo-server -n argocd

kubectl rollout restart deployment argocd-application-controller -n argocd
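
You can check that the restarts have completed with:

kubectl rollout status deployment argocd-repo-server -n argocd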


Install Multus CNI

Next, you need to install Multus to give pods access to hardware devices such as GigE Vision Ethernet cameras.


microk8s enable multus



Add a NetworkAttachmentDefinition

To use Multus, you will need to define additional networks for your pods. This is done by creating a NetworkAttachmentDefinition.

Here's an example YAML file for our testbeamline; it adds access to the GigE Vision network of our cameras.

enp4s0f0 is the network interface connected to the cameras.
rangeStart and rangeEnd delimit the addresses that a pod can acquire (note: these addresses must not be assigned to hardware).


apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: gigevision-network
  namespace: testbeamline
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "mode": "bridge",
    "master": "enp4s0f0", ## the ethernet adapter that you want to access
     
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.115.0/24",
      "rangeStart": "192.168.115.220",
      "rangeEnd": "192.168.115.254",
      "routes": [
        {
          "dst": "192.168.115.0/24",
          "gw": "192.168.115.2"
        }
      ],
      "gateway": "192.168.115.2"
    }
  }'
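
Apply the definition and verify that it was created (the file name is arbitrary):

kubectl apply -f gigevision-network.yaml
kubectl get network-attachment-definitions -n testbeamline

A pod then requests the extra interface through the standard Multus annotation. A minimal sketch with a hypothetical test pod:

apiVersion: v1
kind: Pod
metadata:
  name: camera-test                # hypothetical test pod
  namespace: testbeamline
  annotations:
    k8s.v1.cni.cncf.io/networks: gigevision-network
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]     # keep the pod alive for inspection

Inside the pod, ip addr should show a second interface (net1 by default) with an address taken from the rangeStart/rangeEnd interval.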