
The procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.


Install microk8s


If you haven't installed microk8s yet, you can install it by running the following command:

Code Block
languagebash
titleinstall procedure
snap install microk8s --classic --channel=latest/stable
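
You can confirm the snap is installed with (an optional check, not part of the original procedure):

Code Block
languagebash
titleverify the snap
snap list microk8s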


Then start installing the required packages:

Code Block
languagebash
titleinstall procedure
microk8s enable dns
microk8s status --wait-ready
microk8s enable hostpath-storage
microk8s enable metallb # Make sure the IP range specified is within your local network range and does not conflict with existing devices.
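# Tip: the address range can also be passed inline when enabling the addon,
# e.g. (placeholder range, adjust it to your LAN):
#   microk8s enable metallb:192.168.1.240-192.168.1.250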
microk8s enable ingress
microk8s enable dashboard
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token ## copy the token for k8s dashboard
microk8s enable community
microk8s enable argocd

alias kubectl='microk8s kubectl'

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ## get the token for argocd


NOTE: the alias command must be added to ~/.bashrc so that it persists across sessions.

If you don't define the alias, remember to spell out the command as microk8s kubectl (or sudo microk8s kubectl) in the following steps.
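
For example, a minimal way to persist it:

Code Block
languagebash
titlepersist the alias
echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
source ~/.bashrc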


REFERENCES: 

  1. https://microk8s.io/docs/getting-started


Explore the microk8s environment

You can verify the MicroK8s installation by checking the node status:

kubectl get nodes

You can verify the MicroK8s installation by checking the pod status:

kubectl get pod -A

Retrieve Service CIDR 

For the subsequent EPIK8s installation it is important to take note of the Service CIDR, that is, the range of internal cluster service IPs.


cat /var/snap/microk8s/current/args/kube-apiserver | grep service-cluster-ip-range

or for vanilla Kubernetes:

cat /etc/kubernetes/manifests/kube-apiserver.yaml|grep service-cluster-ip-range
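
If the apiserver runs as a pod (e.g. on kubeadm clusters), the same flag can often be grepped out of a cluster dump; this alternative is a sketch and is not needed on microk8s:

kubectl cluster-info dump | grep -m 1 service-cluster-ip-range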


This range must be indicated to EPIK8s to instruct it which IPs to assign to pod services, because the CA/PVA protocols do not correctly support dynamic DNS:

https://github.com/epics-base/epics-base/issues/488.
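
Another way to confirm the range is to create a throwaway ClusterIP service and look at the address it receives (a sketch using the hypothetical name cidr-probe; delete it afterwards):

Code Block
languagebash
titleprobe the service CIDR
kubectl create service clusterip cidr-probe --tcp=80:80
kubectl get svc cidr-probe -o jsonpath='{.spec.clusterIP}'
kubectl delete service cidr-probe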


Metallb configuration

For the subsequent EPIK8s installation it is also important to take note of the load balancer IPs configured during the install process.


  • Check the MetalLB Configuration: To see the current configuration of MetalLB, you can list the IPAddressPool and L2Advertisement CRDs:

    microk8s kubectl get ipaddresspool -n metallb-system
    microk8s kubectl get l2advertisement -n metallb-system

  • View the Details of an IPAddressPool: If you have an existing IPAddressPool, you can view its configuration with:

    microk8s kubectl get ipaddresspool <pool-name> -n metallb-system -o yaml 

    Replace <pool-name> with the actual name of the IP address pool you want to inspect. The output will show the range of IP addresses that MetalLB can use.
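
For reference, an IPAddressPool manifest looks like the sketch below (placeholder name and range shown, not necessarily what the addon generated for you):

Code Block
languageyaml
titleexample IPAddressPool
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-addresspool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250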


Some of these IPs will be used to access internal EPIK8s services such as cagateway and pvagateway.

To check the addresses already in use, see the EXTERNAL-IP column:

kubectl get svc -o wide -A





Expose K8s Dashboard 

You can expose the Dashboard using a NodePort, Ingress, or LoadBalancer service, depending on your setup. By default, it uses a ClusterIP, which is not accessible externally.

To use NodePort:

kubectl patch svc kubernetes-dashboard -n kube-system -p '{"spec": {"type": "NodePort"}}'

kubectl get svc kubernetes-dashboard -n kube-system


Look for the NodePort value under the PORT(S) column. You can now access the K8s Dashboard web UI at

...

Run this command to patch the argocd-server service to be of type NodePort:


kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'

or 

kubectl patch svc argo-cd-argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'


To retrieve the NodePort, run the following command:


kubectl get svc -n argocd argocd-server

or 

kubectl get svc -n argocd argo-cd-argocd-server 


Look for the NodePort value under the PORT(S) column. You can now access the ArgoCD web UI at

...
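
If you prefer not to change the service type, you can instead port-forward to the ArgoCD server (a temporary-access sketch; use whichever service name exists in your install):

Code Block
languagebash
titleargocd via port-forward
kubectl port-forward svc/argocd-server -n argocd 8080:443

The web UI is then reachable at https://localhost:8080.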

Install Multus

Next, you need to install Multus to get access to HW devices such as GigE Vision ethernet cameras. You can do this by applying the official Multus CNI manifest from its GitHub repository. Here's the command to download and apply the Multus DaemonSet to your MicroK8s cluster:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml

This will install Multus CNI as a DaemonSet across all nodes in the cluster.

Verify Multus Installation

Once installed, verify that the Multus pod is running in the kube-system namespace:

...
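A quick filter achieving the same (a sketch, assuming the thick DaemonSet landed in kube-system):

kubectl get pods -n kube-system | grep -i multus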

Look for the multus pod. The pod should be in a Running state. If it is not, inspect the logs to troubleshoot.

Alternatively, on MicroK8s you can enable Multus through the community addon:


microk8s enable multus



Add a NetworkAttachmentDefinition

To use Multus, you will need to define additional networks for your pods. This is done by creating a NetworkAttachmentDefinition.

Here's an example YAML file for our testbeamline; it adds access to the GigE Vision network of our cameras.

enp4s0f0 is the network interface that is connected to the cams.
rangeStart and rangeEnd are the addresses that the pods can acquire (note: these addresses should not be assigned to HW).


Code Block
languageyaml
titleNetwork attachment configuration
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: gigevision-network
  namespace: testbeamline
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "mode": "bridge",
    "master": "enp4s0f0",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.115.0/24",
      "rangeStart": "192.168.115.220",
      "rangeEnd": "192.168.115.254",
      "routes": [
        {
          "dst": "192.168.115.0/24",
          "gw": "192.168.115.2"
        }
      ],
      "gateway": "192.168.115.2"
    }
  }'




  • Apply the NetworkAttachmentDefinition with:


    microk8s kubectl apply -f <filename>.yaml
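
    You can check that the definition was created (net-attach-def is the CRD's short name):

    microk8s kubectl get network-attachment-definitions -n testbeamline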

  • Deploy Pods Using Multiple Networks

    After Multus is installed and your custom networks are defined, you can deploy pods with multiple network interfaces. Here's an example pod spec using two networks (one default and one from Multus):

    Code Block
    languageyaml
    titleTest gige camera access
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-multus
      namespace: testbeamline
      annotations:
        k8s.v1.cni.cncf.io/networks: gigevision-network
    spec:
      containers:
      - name: app-container
    image: baltig.infn.it:4567/epics-containers/infn-epics-ioc:latest
        command: ["/bin/sh", "-c", "sleep 3600"]




    • Verify Pod's Network Interfaces & camera access

      Once the pod is running, you can verify that it has multiple network interfaces by logging into the pod and using the ip command:

      microk8s kubectl exec -it pod-with-multus -n testbeamline -- ip addr

    • Code Block
      languagebash
      titleexpected output
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      3: eth0@if1758: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
          link/ether ee:b2:9c:77:b3:6e brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.1.24.151/32 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::ecb2:9cff:fe77:b36e/64 scope link 
             valid_lft forever preferred_lft forever
      4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
          link/ether 22:32:b3:ae:55:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 192.168.115.220/24 brd 192.168.115.255 scope global net1
             valid_lft forever preferred_lft forever
          inet6 fe80::2032:b3ff:feae:555e/64 scope link 
             valid_lft forever preferred_lft forever

    • microk8s kubectl exec -it pod-with-multus -n testbeamline -- arv-tool-0.8


    • Code Block
      languagebash
      titlearv-tool expected output
      Basler-a2A2600-20gmBAS-40437926 (192.168.115.47) <-- your camera
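
    • As a final check you can ping the discovered camera from inside the pod (a sketch: it assumes the ping utility is available in the image, and uses the camera IP reported above):

      Code Block
      languagebash
      titleping the camera
      microk8s kubectl exec -it pod-with-multus -n testbeamline -- ping -c 3 192.168.115.47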