This procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.
Install MicroK8s
If you haven't installed MicroK8s yet, you can install and configure it by running the following commands:
snap install microk8s --classic --channel=latest/stable
microk8s enable dns
microk8s status --wait-ready
microk8s enable hostpath-storage
microk8s enable metallb    # Make sure the IP range specified is within your local network range and does not conflict with existing devices.
microk8s enable ingress
microk8s enable dashboard
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token    ## copy the token for the K8s dashboard
microk8s enable community
microk8s enable argocd
alias kubectl='microk8s kubectl'
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d    ## get the initial admin password for ArgoCD
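If you prefer, you can pass the MetalLB address pool directly when enabling the add-on instead of answering the interactive prompt (the range below is only an illustration; choose one inside your LAN that does not conflict with existing devices):
microk8s enable metallb:192.168.1.240-192.168.1.250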
https://microk8s.io/docs/getting-started
You can verify the MicroK8s installation by checking the node status:
kubectl get nodes
You can verify the MicroK8s installation by checking the pod status:
kubectl get pod -A
Expose K8s Dashboard
You can expose the Dashboard using a NodePort, Ingress, or LoadBalancer service, depending on your setup. By default, it uses a ClusterIP service, which is not accessible externally.
To use NodePort:
kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl get svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard
Look for the NodePort value under the PORT(S) column. You can now access the K8s Dashboard web UI at http://<Node_IP>:<NodePort> or https://<Node_IP>:<NodePort>.
Recover K8s dashboard token
kubectl describe secret -n kube-system microk8s-dashboard-token
Expose the ArgoCD Server
By default, the ArgoCD API server is only accessible inside the cluster. To access it externally, you can expose it using either a NodePort or LoadBalancer service. For a minimal installation like MicroK8s, NodePort is typically used.
a. Expose ArgoCD with a NodePort:
Run this command to patch the argocd-server service to be of type NodePort:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
To retrieve the NodePort, run the following command:
kubectl get svc -n argocd argocd-server
Look for the NodePort value under the PORT(S) column. You can now access the ArgoCD web UI at http://<Node_IP>:<NodePort> or https://<Node_IP>:<NodePort>.
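Alternatively, since MetalLB was enabled earlier, you could expose ArgoCD through a LoadBalancer service instead of a NodePort (a sketch; it assumes MetalLB still has a free address in its pool):
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc argocd-server -n argocd    ## the EXTERNAL-IP column shows the address assigned by MetalLB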
Recover ArgoCD admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d    ## decode the initial admin password for ArgoCD
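If you also use the argocd CLI (not installed by the steps above), you can log in with the recovered password; a sketch, assuming the NodePort exposed earlier:
argocd login <Node_IP>:<NodePort> --username admin --password <recovered-password> --insecure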
Proxy settings for ArgoCD (optional)
If you are behind a proxy, some chart repositories may not be reachable.
You can edit the ArgoCD deployment to add the necessary environment variables to the containers.
- Edit the argocd-repo-server and argocd-application-controller deployments:
kubectl edit deployment argocd-repo-server -n argocd
kubectl edit deployment argocd-application-controller -n argocd
- Add the following environment variables under the spec.containers section in both deployments:
spec:
  containers:
  - name: argocd-repo-server
    env:
    - name: HTTP_PROXY
      value: "http://your-http-proxy:port"
    - name: HTTPS_PROXY
      value: "http://your-https-proxy:port"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server"
For example, with a real proxy:
env:
- name: HTTP_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: HTTPS_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: NO_PROXY
  value: "baltig.infn.it,argocd-repo-server,argocd-server,localhost,127.0.0.0/24,::1,*.lnf.infn.it,.svc,.cluster.local,10.0.0.0/8,192.168.0.0/16"
Restart the ArgoCD Components
After updating the deployments, restart the affected components to apply the changes:
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart deployment argocd-application-controller -n argocd
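You can wait for the restarts to finish before retrying a repository sync, for example:
kubectl rollout status deployment argocd-repo-server -n argocd
kubectl rollout status deployment argocd-application-controller -n argocd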
Install Multus CNI
Next, you need to install Multus. You can do this by applying the official Multus CNI manifest from its GitHub repository. Here's the command to download and apply the Multus DaemonSet to your MicroK8s cluster:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
This will install Multus CNI as a DaemonSet across all nodes in the cluster.
Verify Multus Installation
Once installed, verify that the Multus pod is running in the kube-system namespace:
microk8s kubectl get pods -n kube-system
Look for the Multus pod. The pod should be in a Running state. If it is not, inspect the logs to troubleshoot:
microk8s kubectl logs <multus-pod-name> -n kube-system
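You can also check the DaemonSet directly (the name kube-multus-ds comes from the upstream manifest and may differ between releases):
microk8s kubectl get daemonset kube-multus-ds -n kube-system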
Create a NetworkAttachmentDefinition
To use Multus, you will need to define additional networks for your pods. This is done by creating a NetworkAttachmentDefinition.
Here's an example YAML file for our testbeamline; it adds access to the GigE Vision network of our cameras.
- enp4s0f0 is the network interface that is connected to the cameras.
- rangeStart and rangeEnd are the addresses that a pod can acquire (note: these addresses should not be assigned to hardware).
Network attachment configuration:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: gigevision-network
  namespace: testbeamline
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "master": "enp4s0f0",
    "bridge": "br0",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.115.0/24",
      "rangeStart": "192.168.115.220",
      "rangeEnd": "192.168.115.254",
      "routes": [
        { "dst": "192.168.115.0/24", "gw": "192.168.115.2" }
      ],
      "gateway": "192.168.115.2"
    }
  }'
Apply the NetworkAttachmentDefinition with:
microk8s kubectl apply -f <filename>.yaml
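To confirm the definition was created, list the NetworkAttachmentDefinitions in the namespace:
microk8s kubectl get network-attachment-definitions -n testbeamline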
Deploy Pods Using Multiple Networks
After Multus is installed and your custom networks are defined, you can deploy pods with multiple network interfaces. Here's an example pod spec using two networks (one default and one from Multus):
Test GigE camera access:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-multus
  namespace: testbeamline
  annotations:
    k8s.v1.cni.cncf.io/networks: gigevision-network
spec:
  containers:
  - name: app-container
    image: baltig.infn.it:4567/epics-containers/infn-epics-ioc:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
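Save the manifest to a file and apply it (the filename here is just an example):
microk8s kubectl apply -f pod-with-multus.yaml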
Verify Pod's Network Interfaces & camera access
Once the pod is running, you can verify that it has multiple network interfaces by logging into the pod and using the ip command:
microk8s kubectl exec -it pod-with-multus -n testbeamline -- ip a
To verify camera access, list the GigE Vision cameras visible from the pod with the Aravis tool:
microk8s kubectl exec -it pod-with-multus -n testbeamline -- arv-tool-0.8
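To check which networks were actually attached, you can also inspect the network-status annotation that Multus sets on the pod (the jsonpath escaping is needed because the annotation key contains dots):
microk8s kubectl get pod pod-with-multus -n testbeamline -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'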