The procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.
Install microk8s
If you haven't installed microk8s yet, you can install and configure it by running the following commands:
```bash
snap install microk8s --classic --channel=latest/stable
microk8s enable dns
microk8s status --wait-ready
microk8s enable hostpath-storage
microk8s enable metallb
microk8s enable ingress
microk8s enable dashboard
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token  # copy the token for the K8s dashboard
microk8s enable community
microk8s enable argocd
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d  # get the initial admin password for ArgoCD
```
For installation details, follow https://microk8s.io/#install-microk8s.
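If you want the plain kubectl commands used later in this guide to work against microk8s, you can alias the bundled client or export its kubeconfig (an optional sketch; skip it if your kubectl already points at the right cluster):

```bash
# Option 1: expose microk8s' bundled kubectl as plain `kubectl`
sudo snap alias microk8s.kubectl kubectl

# Option 2: write the microk8s kubeconfig for an existing kubectl binary
mkdir -p ~/.kube
microk8s config > ~/.kube/config
```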
You can verify the installation by checking the node status:
kubectl get nodes
You can also verify the installation by checking the pod status:
kubectl get pod -A
Install K8s Dashboard
You can expose the Dashboard using a NodePort, Ingress, or LoadBalancer service, depending on your setup. By default, it uses a ClusterIP service, which is not accessible externally.
To use NodePort, install the Dashboard with Helm and then patch the service type:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/  # add the chart repo if not already present
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard --create-namespace
kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl get svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard
Look for the NodePort value under the PORT(S) column. You can now access the K8s Dashboard web UI at https://<Node_IP>:<NodePort> (the chart's Kong proxy serves HTTPS, so use https, not http).
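If you prefer a scriptable value over reading the table, a jsonpath query like this (a sketch, assuming the default service name and HTTPS port 443 from the chart above) prints just the NodePort:

```bash
# Print the NodePort that maps to the Kong proxy's HTTPS port
kubectl get svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard \
  -o jsonpath='{.spec.ports[?(@.port==443)].nodePort}'
```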
Get Admin Token for Access
To access the Dashboard, you will need a token. Create a ServiceAccount and ClusterRoleBinding for full admin access.
Create dashboard_admin.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
```
kubectl apply -f dashboard_admin.yaml
To obtain the token:
kubectl create token -n kubernetes-dashboard dashboard-admin
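Tokens created this way are short-lived by default. If you need a longer session (a sketch; pick a duration that matches your security policy), kubectl create token accepts a --duration flag:

```bash
# Issue a token valid for 24 hours instead of the default lifetime
kubectl create token -n kubernetes-dashboard dashboard-admin --duration=24h
```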
Install MetalLB
a. Create a Namespace for MetalLB
It’s a good practice to create a separate namespace for MetalLB:
kubectl create namespace metallb-system
b. Apply the MetalLB Manifest
Run the following command to deploy MetalLB using its official manifest:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
Make sure to check for the latest version of MetalLB on the MetalLB GitHub Releases page.
c. Check MetalLB Pods
Verify that the MetalLB components are running:
kubectl get pods -n metallb-system
You should see controller and speaker pods running.
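To block until MetalLB is up before applying any configuration, you can use kubectl wait (a sketch; the app=metallb selector matches the labels set by the official manifest):

```bash
# Wait until all MetalLB pods report Ready (up to 90 seconds)
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s
```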
Configure MetalLB
MetalLB needs a configuration to specify which IP address range to use for load balancing. Since v0.13, MetalLB is configured through custom resources (an IPAddressPool and an L2Advertisement) rather than the legacy ConfigMap.
a. Define the IP Address Range
Create a YAML file named metallb-config.yaml with the following content, adjusting the addresses to match your network setup. For example:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.114.101-192.168.114.101   # machine where K3s is installed
  - 192.168.114.200-192.168.114.210
```
Make sure the IP range specified is within your local network range and does not conflict with existing devices.
b. Apply the Configuration
Apply the configuration:
kubectl apply -f metallb-config.yaml
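You can check that the pool was accepted (a sketch; first-pool is the name defined above):

```bash
# Confirm the IPAddressPool exists and shows the expected ranges
kubectl get ipaddresspool -n metallb-system
kubectl describe ipaddresspool first-pool -n metallb-system
```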
c. Advertise the Chosen Addresses
Create a YAML file named metallb-advertise.yaml:
```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
```
kubectl apply -f metallb-advertise.yaml
d. Check That the Ingress Controller (Traefik) Gets the LoadBalancer Address of the Machine
kubectl get services -A
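To further confirm that MetalLB hands out addresses from the pool, you can run a quick smoke test (a sketch; lb-test is a hypothetical name, and the expected EXTERNAL-IP depends on the ranges configured above):

```bash
# Deploy a throwaway nginx pod and expose it through a LoadBalancer service
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer

# EXTERNAL-IP should be assigned from the 192.168.114.x pool
kubectl get svc lb-test

# Clean up when done
kubectl delete svc lb-test
kubectl delete deployment lb-test
```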
Install ArgoCD
Before installing ArgoCD, create a namespace where ArgoCD resources will live:
kubectl create namespace argocd
Install ArgoCD Using the Official Manifests
ArgoCD is installed by applying a YAML manifest. The official manifest deploys all necessary ArgoCD components, such as the API server, controller, and UI.
Run the following command to install ArgoCD:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
This command will install ArgoCD into the argocd namespace.
Check the ArgoCD Pods
After applying the manifest, you can check if the ArgoCD pods are running:
kubectl get pods -n argocd
You should see several pods, including argocd-server, argocd-repo-server, argocd-application-controller, and others.
Wait until everything is ready.
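To wait non-interactively, a kubectl wait one-liner works here too (a sketch; the five-minute timeout is an arbitrary choice):

```bash
# Block until all ArgoCD pods report Ready (up to 5 minutes)
kubectl wait --for=condition=ready pod --all -n argocd --timeout=300s
```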
Expose the ArgoCD Server
By default, the ArgoCD API server is only accessible inside the cluster. To access it externally, you can expose it using either a NodePort or LoadBalancer service. For a minimal installation like K3s, NodePort is typically used.
a. Expose ArgoCD with a NodePort:
Run this command to patch the argocd-server service to be of type NodePort:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
To retrieve the NodePort, run the following command:
kubectl get svc -n argocd argocd-server
Look for the NodePort value under the PORT(S) column. You can now access the ArgoCD web UI at https://<Node_IP>:<NodePort>, where Node_IP is the address or DNS name of the machine where K3s is installed. ArgoCD serves HTTPS by default with a self-signed certificate, so your browser may show a warning.
Get the ArgoCD Initial Admin Password
ArgoCD generates a default admin password during installation. You can retrieve it by running this command:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode
The username is admin, and the password is what you just retrieved.
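If you also use the argocd CLI, you can log in with the same credentials (an optional sketch; it assumes the CLI is installed and that --insecure is acceptable for the self-signed certificate):

```bash
# Log in to the ArgoCD API server with the initial admin password
argocd login <Node_IP>:<NodePort> \
  --username admin \
  --password "$(kubectl get secret argocd-initial-admin-secret -n argocd \
      -o jsonpath='{.data.password}' | base64 --decode)" \
  --insecure
```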
Access the ArgoCD Web UI
- URL: https://<Node_IP>:<NodePort> or https://<LoadBalancer_IP>
- Username: admin
- Password: Use the password retrieved in the previous step.
Apply the Proxy Settings (optional)
You can edit the ArgoCD deployment to add the necessary environment variables to the containers.
- Edit the argocd-repo-server deployment and the argocd-application-controller statefulset (in the official manifests the application controller is a StatefulSet, not a Deployment):
kubectl edit deployment argocd-repo-server -n argocd
kubectl edit statefulset argocd-application-controller -n argocd
- Add the following environment variables under the spec.containers section in both workloads:
```yaml
spec:
  containers:
  - name: argocd-repo-server
    env:
    - name: HTTP_PROXY
      value: "http://your-http-proxy:port"
    - name: HTTPS_PROXY
      value: "http://your-https-proxy:port"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server"
```
For example, with a real proxy:

```yaml
env:
- name: HTTP_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: HTTPS_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: NO_PROXY
  value: "baltig.infn.it,argocd-repo-server,argocd-server,localhost,127.0.0.0/24,::1,*.lnf.infn.it,.svc,.cluster.local,10.0.0.0/8,192.168.0.0/16"
```
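Alternatively, you can inject the variables non-interactively with kubectl set env instead of editing the manifests by hand (a sketch reusing the example values above):

```bash
# Inject the proxy variables into the repo server without opening an editor
kubectl set env deployment/argocd-repo-server -n argocd \
  HTTP_PROXY=http://squid.lnf.infn.it:3128 \
  HTTPS_PROXY=http://squid.lnf.infn.it:3128 \
  NO_PROXY=localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server

# Same for the application controller (a StatefulSet in the official manifests)
kubectl set env statefulset/argocd-application-controller -n argocd \
  HTTP_PROXY=http://squid.lnf.infn.it:3128 \
  HTTPS_PROXY=http://squid.lnf.infn.it:3128 \
  NO_PROXY=localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server
```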
Restart the ArgoCD Components
After updating the deployments, restart the affected components to apply the changes:
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart statefulset argocd-application-controller -n argocd
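You can watch the restart complete with kubectl rollout status (a sketch; the resource kinds match the edits above):

```bash
# Wait for the restarted workloads to become available again
kubectl rollout status deployment argocd-repo-server -n argocd
kubectl rollout status statefulset argocd-application-controller -n argocd
```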