The procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.
Install microk8s
If you haven't installed microk8s yet, you can install it by running the following command:
sudo snap install microk8s --classic --channel=latest/stable
Then start enabling the required add-ons:
microk8s enable dns
microk8s status --wait-ready
microk8s enable hostpath-storage
microk8s enable metallb  # make sure the IP range specified is within your local network range and does not conflict with existing devices
microk8s enable ingress
microk8s enable dashboard
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token  ## copy the token for the K8s dashboard
microk8s enable community
microk8s enable argocd
alias kubectl='microk8s kubectl'
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d  ## get the initial admin password for ArgoCD
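MetalLB prompts for an address range when it is enabled; you can also pass the range directly as an argument. The range below is only an example, adapt it to your local network:
microk8s enable metallb:192.168.1.240-192.168.1.250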
NOTE: the alias command should (must) be added to ~/.bashrc so that it persists across sessions. If you don't define the alias, remember to expand kubectl to microk8s kubectl (or sudo microk8s kubectl) in the following steps.
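For example, assuming bash is your login shell, you can persist the alias with:
echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc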
Explore the microk8s environment
You can verify the microk8s installation by checking the node status:
kubectl get nodes
You can also check the status of all pods:
kubectl get pod -A
Retrieve Service CIDR
For the EPIK8s installation that follows, it is important to take note of the Service CIDR, i.e., the address range used for internal cluster service IPs.
cat /var/snap/microk8s/current/args/kube-apiserver | grep service-cluster-ip-range
or, for vanilla Kubernetes:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep service-cluster-ip-range
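On a default MicroK8s installation the output should look like the following (your range may differ):
--service-cluster-ip-range=10.152.183.0/24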
This range must be communicated to EPIK8s so that it knows which IPs are assigned to pod services (the CA/PVA protocols do not correctly support dynamic DNS; see https://github.com/epics-base/epics-base/issues/488).
Metallb configuration
For the EPIK8s installation that follows, it is also important to take note of the load-balancer IP range configured during the install process.
Check the MetalLB configuration: to see the current configuration of MetalLB, you can list the IPAddressPool and L2Advertisement CRDs:
microk8s kubectl get ipaddresspool -n metallb-system
microk8s kubectl get l2advertisement -n metallb-system
View the details of an IPAddressPool: if you have an existing IPAddressPool, you can view its configuration with:
microk8s kubectl get ipaddresspool <pool-name> -n metallb-system -o yaml
Replace <pool-name> with the actual name of the IP address pool you want to inspect. The output will show the range of IP addresses that MetalLB can use.
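The output is shaped roughly like the sketch below; the pool name and address range shown here are illustrative:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-addresspool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250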
Some of these IPs will be used to access internal EPIK8s services such as cagateway and pvagateway.
To check the addresses already in use, see the EXTERNAL-IP column:
kubectl get svc -o wide -A
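The output should look something like this (the namespace, names, IPs, and ports here are illustrative):
NAMESPACE      NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
testbeamline   cagateway   LoadBalancer   10.152.183.42   192.168.1.241   5064:31291/TCP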
Expose K8s Dashboard
You can expose the Dashboard using a NodePort, Ingress, or LoadBalancer service, depending on your setup. By default, it uses a ClusterIP, which is not accessible externally.
To use NodePort:
kubectl patch svc kubernetes-dashboard -n kube-system -p '{"spec": {"type": "NodePort"}}'
kubectl get svc kubernetes-dashboard -n kube-system
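The output should look something like this (the assigned NodePort will differ):
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.152.183.99   <none>        443:31048/TCP   5m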
Look for the NodePort value under the PORT(S) column. You can now access the K8s Dashboard web UI at:
https://<Node_IP>:<NodePort>
Note that the Dashboard serves HTTPS, so use https:// in the URL.
Recover K8s dashboard token
kubectl describe secret -n kube-system microk8s-dashboard-token
Expose the ArgoCD Server
By default, the ArgoCD API server is only accessible inside the cluster. To access it externally, you can expose it using either a NodePort or LoadBalancer service. For a minimal installation like this one, NodePort is typically used.
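Since MetalLB is already enabled, you could alternatively expose the server with a LoadBalancer service, which takes an EXTERNAL-IP from the MetalLB pool (a sketch; use the service name that matches your install, see the NodePort commands below for both variants):
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'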
a. Expose ArgoCD with a NodePort:
Run this command to patch the argocd-server service to be of type NodePort:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
or
kubectl patch svc argo-cd-argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
To retrieve the NodePort, run the following command:
kubectl get svc -n argocd argocd-server
or
kubectl get svc -n argocd argo-cd-argocd-server
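The output should look something like this (the assigned NodePorts will differ):
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
argocd-server   NodePort   10.152.183.120   <none>        80:30080/TCP,443:31443/TCP   10m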
Look for the NodePort value under the PORT(S) column. You can now access the ArgoCD web UI at:
https://<Node_IP>:<NodePort>
Recover ArgoCD admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d  ## get the initial admin password for ArgoCD
Proxy setting for ArgoCD (optional)
If you are behind a proxy, some chart repositories may not be reachable.
You can edit the ArgoCD deployment to add the necessary environment variables to the containers.
- Edit the argocd-repo-server and argocd-application-controller deployments:
kubectl edit deployment argocd-repo-server -n argocd
kubectl edit deployment argocd-application-controller -n argocd
- Add the following environment variables under the spec.containers section in both deployments:
spec:
  containers:
  - name: argocd-repo-server
    env:
    - name: HTTP_PROXY
      value: "http://your-http-proxy:port"
    - name: HTTPS_PROXY
      value: "http://your-https-proxy:port"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server"
A concrete example, taken from the LNF site configuration:
env:
- name: HTTP_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: HTTPS_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: NO_PROXY
  value: "baltig.infn.it,argocd-repo-server,argocd-server,localhost,127.0.0.0/24,::1,*.lnf.infn.it,.svc,.cluster.local,10.0.0.0/8,192.168.0.0/16"
Restart the ArgoCD Components
After updating the deployments, restart the affected components to apply the changes:
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart deployment argocd-application-controller -n argocd
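To verify that the variables were applied, you can print the environment of the repo server and filter for the proxy settings:
kubectl -n argocd exec deploy/argocd-repo-server -- env | grep -i proxy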
Install Multus CNI
Next, you need to install Multus to get access to hardware devices such as GigE Vision Ethernet cameras.
microk8s enable multus
Add a NetworkAttachmentDefinition
To use Multus, you will need to define additional networks for your pods. This is done by creating a NetworkAttachmentDefinition.
Here's an example YAML file for our testbeamline; it adds access to the GigE Vision network of our cameras.
- enp4s0f0 is the network interface that is connected to the cameras
- rangeStart and rangeEnd are the addresses that the pods can acquire (note: these addresses must not be assigned to hardware)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: gigevision-network
  namespace: testbeamline
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "mode": "bridge",
    "master": "enp4s0f0",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.115.0/24",
      "rangeStart": "192.168.115.220",
      "rangeEnd": "192.168.115.254",
      "routes": [
        { "dst": "192.168.115.0/24", "gw": "192.168.115.2" }
      ],
      "gateway": "192.168.115.2"
    }
  }'
Apply the NetworkAttachmentDefinition with:
microk8s kubectl apply -f <filename>.yaml
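You can confirm that the definition was created (the testbeamline namespace matches the example above):
microk8s kubectl get network-attachment-definitions -n testbeamline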
Deploy Pods Using Multiple Networks
After Multus is installed and your custom networks are defined, you can deploy pods with multiple network interfaces. Here's an example pod spec using two networks (one default and one from Multus):
Test GigE camera access:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-multus
  namespace: testbeamline
  annotations:
    k8s.v1.cni.cncf.io/networks: gigevision-network
spec:
  containers:
  - name: app-container
    image: baltig.infn.it:4567/epics-containers/infn-epics-ioc:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
Verify Pod's Network Interfaces & camera access
Once the pod is running, you can verify that it has multiple network interfaces by logging into the pod and using the ip command:
microk8s kubectl exec -it pod-with-multus -n testbeamline -- ip addr
- expected output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if1758: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether ee:b2:9c:77:b3:6e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.24.151/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecb2:9cff:fe77:b36e/64 scope link
       valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 22:32:b3:ae:55:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.115.220/24 brd 192.168.115.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::2032:b3ff:feae:555e/64 scope link
       valid_lft forever preferred_lft forever
Note that the net1 interface carries an address from the gigevision-network range (192.168.115.220), confirming that the Multus attachment works.
microk8s kubectl exec -it pod-with-multus -n testbeamline -- arv-tool-0.8
- arv-tool expected output
Basler-a2A2600-20gmBAS-40437926 (192.168.115.47)  <-- your camera