...

This procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.


Install microk8s


If you haven't installed microk8s yet, you can install it by running the following command:

Code Block
languagebash
titleinstall procedure
snap install microk8s --classic --channel=latest/stable


Then start installing the required packages:

Code Block
languagebash
titleinstall procedure
microk8s enable dns
microk8s status --wait-ready
microk8s enable hostpath-storage
microk8s enable metallb # Make sure the IP range specified is within your local network range and does not conflict with existing devices.
microk8s enable ingress
microk8s enable dashboard
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token ## copy the token for k8s dashboard
microk8s enable community
microk8s enable argocd

alias kubectl='microk8s kubectl'

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ## get the token for argocd

...


NOTE: the alias command must be added to ~/.bashrc so that it persists across sessions.

If you don't define the alias, remember to spell the command out as microk8s kubectl (or sudo microk8s kubectl) in the following steps.
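A minimal sketch of persisting the alias, assuming bash is your login shell:

```shell
# Append the alias to ~/.bashrc only if it is not already there (idempotent).
grep -qxF "alias kubectl='microk8s kubectl'" ~/.bashrc 2>/dev/null \
  || echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
# Reload it in the current session with:  source ~/.bashrc
```

The grep guard keeps repeated runs from filling ~/.bashrc with duplicate lines.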


REFERENCES: 

  1. https://microk8s.io/docs/

...

  1. getting-started


Explore the microk8s environment

You can verify the microk8s installation by checking the node status:

kubectl get nodes

You can verify the microk8s installation by checking the pod status:

kubectl get pod -A

...

Retrieve Service CIDR 

For the subsequent EPIK8s installation it is important to take note of the Service CIDR, i.e. the range of internal cluster service IPs.


cat /var/snap/microk8s/current/args/kube-apiserver | grep service-cluster-ip-range

or, for vanilla Kubernetes:

cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep service-cluster-ip-range


This range must be indicated to EPIK8s to instruct which IPs to assign to pod services, because the CA/PVA protocols do not correctly support dynamic DNS; see:

https://github.com/epics-base/epics-base/issues/488.
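The grep commands above return the whole flag line; as a convenience, a small sketch to extract just the CIDR value. ARGS_FILE and its microk8s default come from the step above; point it at the manifest path on vanilla Kubernetes.

```shell
# Extract the value of --service-cluster-ip-range from a kube-apiserver args file.
# Default is the microk8s location; override ARGS_FILE for other setups.
ARGS_FILE="${ARGS_FILE:-/var/snap/microk8s/current/args/kube-apiserver}"
if [ -r "$ARGS_FILE" ]; then
  grep -o 'service-cluster-ip-range=[^ ]*' "$ARGS_FILE" | cut -d= -f2
fi
```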


Metallb configuration

For the subsequent EPIK8s installation it is also important to take note of the load balancer IPs configured during the install process.


  • Check the MetalLB Configuration: To see the current configuration of MetalLB, you can list the IPAddressPool and L2Advertisement CRDs:

    microk8s kubectl get ipaddresspool -n metallb-system
    microk8s kubectl get l2advertisement -n metallb-system

  • View the Details of an IPAddressPool: If you have an existing IPAddressPool, you can view its configuration with:

    microk8s kubectl get ipaddresspool <pool-name> -n metallb-system -o yaml 

    Replace <pool-name> with the actual name of the IP address pool you want to inspect. The output will show the range of IP addresses that MetalLB can use.


Some of these IPs will be used to access internal EPIK8s services such as cagateway and pvagateway.

To check the addresses already in use see the EXTERNAL-IP column:

kubectl get svc -o wide -A
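A hedged convenience sketch to filter that output down to services that already hold an external IP. It assumes the default column layout of `kubectl get svc -o wide -A`, where EXTERNAL-IP is the fifth column:

```shell
# filter_external_ips: given `kubectl get svc -o wide -A` output on stdin,
# print NAMESPACE, NAME and EXTERNAL-IP only for services that hold one.
filter_external_ips() {
  awk 'NR > 1 && $5 != "<none>" && $5 != "" { print $1, $2, $5 }'
}
# usage: kubectl get svc -o wide -A | filter_external_ips
```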





Expose K8s Dashboard 

You can expose the Dashboard using a NodePort, Ingress, or LoadBalancer service, depending on your setup. By default, it uses a ClusterIP, which is not accessible externally.

To use NodePort, install the Dashboard with Helm and then patch its service:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml # only needed on k3s; microk8s uses its own kubeconfig

helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --namespace kubernetes-dashboard --create-namespace

kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'

kubectl get svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard


Look for the NodePort value under the PORT(S) column. You can now access the K8s Dashboard web UI at

...

https://<Node_IP>:<NodePort>
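The PORT(S) column shows entries of the form port:nodePort/protocol; a small sketch to pull out just the NodePort from such an entry (the sample value is hypothetical):

```shell
# Extract the NodePort from a PORT(S) entry like "443:31707/TCP".
ports_entry="443:31707/TCP"   # example value; substitute the one kubectl printed
echo "$ports_entry" | sed -E 's/^[0-9]+:([0-9]+)\/.*/\1/'   # prints 31707
```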

Get Admin Token for Access

To access the Dashboard, you will need a token. Create a ServiceAccount and ClusterRoleBinding for full admin access.

Create dashboard_admin.yaml

Code Block
languageyaml
titledashboard_admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl apply -f dashboard_admin.yaml

To obtain the token:

kubectl create token -n kubernetes-dashboard dashboard-admin

Install MetalLB

a. Create a Namespace for MetalLB

It’s a good practice to create a separate namespace for MetalLB:

kubectl create namespace metallb-system

b. Apply the MetalLB Manifest

Run the following command to deploy MetalLB using its official manifest:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

Make sure to check for the latest version of MetalLB on the MetalLB GitHub Releases page.

c. Check MetalLB Pods

Verify that the MetalLB components are running:

kubectl get pods -n metallb-system

You should see controller and speaker pods running.

Configure MetalLB

MetalLB needs a configuration to specify which IP address range to use for load balancing. You can create a ConfigMap with the configuration.

a. Define the IP Address Range

Create a YAML file named metallb-config.yaml with the following content, adjusting the address ranges to match your network setup. For example:

Code Block
languageyaml
titleCreate MetalLB configuration
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    layer2:
      addresses:
      - 192.168.114.101-192.168.114.101 # machine where K3S is installed
      - 192.168.114.200-192.168.114.210

Make sure the IP range specified is within your local network range and does not conflict with existing devices.
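Note: MetalLB v0.13 and later (including the v0.14.8 manifest applied above) no longer read this ConfigMap; the equivalent configuration is an IPAddressPool CRD. A sketch with the same ranges, named first-pool to match the L2Advertisement in step c:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.114.101-192.168.114.101 # machine where K3S is installed
  - 192.168.114.200-192.168.114.210
```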

b. Apply the ConfigMap

Apply the configuration:

...

c. Advertise the chosen addresses

Create a YAML file named metallb-advertise.yaml:

Code Block
languageyaml
titleCreate MetalLB Advertisement
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

kubectl apply -f metallb-advertise.yaml

d. Check that the ingress (traefik) gets the load balancer address of the machine:

kubectl get services -A

Install ArgoCD

Before installing ArgoCD, create a namespace where ArgoCD resources will live:

...

Install ArgoCD Using the Official Manifests

ArgoCD is installed by applying a YAML manifest. The official manifest deploys all necessary ArgoCD components, such as the API server, controller, and UI.

Run the following command to install ArgoCD:

...



This command will install ArgoCD into the argocd namespace.

Check the ArgoCD Pods

After applying the manifest, you can check if the ArgoCD pods are running:

kubectl get pods -n argocd

You should see several pods, including argocd-server, argocd-repo-server, argocd-application-controller, and others.

Wait until everything is ready.

Expose the ArgoCD Server

By default, the ArgoCD API server is only accessible inside the cluster. To access it externally, you can expose it using either a NodePort or LoadBalancer service. For a minimal installation like K3s, NodePort is typically used.

...

Run this command to patch the argocd-server service to be of type NodePort:


kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'

or 

kubectl patch svc argo-cd-argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'


To retrieve the NodePort, run the following command:


kubectl get svc -n argocd argocd-server

or 

kubectl get svc -n argocd argo-cd-argocd-server 


Look for the NodePort value under the PORT(S) column. You can now access the ArgoCD web UI at

...

https://<Node_IP>:<NodePort>

Node_IP = address/DNS name of the machine where k3s/microk8s is installed

Get the ArgoCD Initial Admin Password

ArgoCD generates a default admin password during installation. You can retrieve it by running this command:


Recover ArgoCD admin password

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode

The username is admin, and the password is what you just retrieved.
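The password is stored base64-encoded in the secret; the trailing `base64 --decode` (or `base64 -d`) is plain decoding. A self-contained illustration with a made-up value, not a real password:

```shell
# Decoding a base64 string exactly as the pipeline above does;
# 'cGFzc3dvcmQxMjM=' is a hypothetical example value.
echo 'cGFzc3dvcmQxMjM=' | base64 --decode   # prints password123
```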

Access the ArgoCD Web UI

  • URL: http://<Node_IP>:<NodePort> or http://<LoadBalancer_IP>
  • Username: admin
  • Password: Use the password retrieved in the previous step.





Proxy setting for ArgoCD (optional)

If you are behind a proxy, some chart repositories may not be reachable.


You can edit the ArgoCD deployment to add the necessary environment variables to the containers.

...

Code Block
languageyaml
titleProxy env LNF example
env:
    - name: HTTP_PROXY
      value: "http://squid.lnf.infn.it:3128"
    - name: HTTPS_PROXY
      value: "http://squid.lnf.infn.it:3128"
    - name: NO_PROXY
      value: "baltig.infn.it,argocd-repo-server,argocd-server,localhost,127.0.0.0/24,::1,*.lnf.infn.it,.svc,.cluster.local,10.0.0.0/8,192.168.0.0/16"  


Restart the ArgoCD Components

After updating the deployments, restart the affected components to apply the changes:

...

kubectl rollout restart statefulset argocd-application-controller -n argocd # the application controller is a StatefulSet, not a Deployment, in recent ArgoCD versions


Install Multus CNI

Next, you need to install Multus to get access to hardware devices such as GigE Vision Ethernet cameras.


microk8s enable multus



Add a NetworkAttachmentDefinition

To use Multus, you will need to define additional networks for your pods. This is done by creating a NetworkAttachmentDefinition.

Here's an example YAML file for our testbeamline; it adds access to the GigE Vision network of our cameras.

enp4s0f0 is the network interface that is connected to the cameras.
rangeStart and rangeEnd delimit the addresses that a pod can acquire (note: these addresses should not be assigned to hardware).


Code Block
languageyaml
titleNetwork attachement configuration
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: gigevision-network
  namespace: testbeamline
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "mode": "bridge",
    "master": "enp4s0f0",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.115.0/24",
      "rangeStart": "192.168.115.220",
      "rangeEnd": "192.168.115.254",
      "routes": [
        {
          "dst": "192.168.115.0/24",
          "gw": "192.168.115.2"
        }
      ],
      "gateway": "192.168.115.2"
    }
  }'




  • Apply the NetworkAttachmentDefinition with:


    microk8s kubectl apply -f <filename>.yaml

  • Deploy Pods Using Multiple Networks

    After Multus is installed and your custom networks are defined, you can deploy pods with multiple network interfaces. Here's an example pod spec using two networks (one default and one from Multus):

    Code Block
    languageyaml
    titleTest gige camera access
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-multus
      namespace: testbeamline
      annotations:
        k8s.v1.cni.cncf.io/networks: gigevision-network
    spec:
      containers:
      - name: app-container
        image:  baltig.infn.it:4567/epics-containers/infn-epics-ioc:latest
        command: ["/bin/sh", "-c", "sleep 3600"]




    • Verify Pod's Network Interfaces & camera access

      Once the pod is running, you can verify that it has multiple network interfaces by logging into the pod and using the ip command:

      microk8s kubectl exec -it pod-with-multus -n testbeamline -- ip addr

    • Code Block
      languagebash
      titleexpected output
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      3: eth0@if1758: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
          link/ether ee:b2:9c:77:b3:6e brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.1.24.151/32 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::ecb2:9cff:fe77:b36e/64 scope link 
             valid_lft forever preferred_lft forever
      4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
          link/ether 22:32:b3:ae:55:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 192.168.115.220/24 brd 192.168.115.255 scope global net1
             valid_lft forever preferred_lft forever
          inet6 fe80::2032:b3ff:feae:555e/64 scope link 
             valid_lft forever preferred_lft forever

    • microk8s kubectl exec -it pod-with-multus -n testbeamline -- arv-tool-0.8


    • Code Block
      languagebash
      titlearv-tool expected output
      microk8s kubectl exec -it pod-with-multus -n testbeamline -- arv-tool-0.8
      Basler-a2A2600-20gmBAS-40437926 (192.168.115.47) <-- your camera