The procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.
Install MicroK8s
If you haven't installed MicroK8s yet, you can install it by running the following commands:
snap install microk8s --classic --channel=latest/stable
microk8s enable dns
microk8s status --wait-ready
microk8s enable hostpath-storage
microk8s enable metallb  ## make sure the IP range specified is within your local network range and does not conflict with existing devices
microk8s enable ingress
microk8s enable dashboard
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token  ## copy the token for the K8s dashboard
microk8s enable community
microk8s enable argocd
alias kubectl='microk8s kubectl'
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d  ## get the initial admin password for ArgoCD
https://microk8s.io/docs/getting-started
You can verify the MicroK8s installation by checking the node status:
kubectl get nodes
You can verify the MicroK8s installation by checking the pod status:
kubectl get pod -A
Expose K8s Dashboard
You can expose the Dashboard using a NodePort, Ingress, or LoadBalancer service, depending on your setup. By default, it uses a ClusterIP service, which is not accessible externally.
To use NodePort:
kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl get svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard
Look for the NodePort value under the PORT(S) column. You can now access the K8s Dashboard web UI (served over HTTPS with a self-signed certificate) at:
https://<Node_IP>:<NodePort>
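As a quick check, you can read the assigned NodePort with a jsonpath query and probe the endpoint with curl. This is a sketch: it assumes the service exposes a single port and that the dashboard's self-signed certificate is skipped with -k.

```shell
# Sketch: grab the NodePort assigned to the dashboard service (assumes one port)
NODE_PORT=$(kubectl get svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}')
# Probe the dashboard; -k skips verification of the self-signed certificate
curl -k "https://localhost:${NODE_PORT}"
```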
Recover K8s dashboard token
kubectl describe secret -n kube-system microk8s-dashboard-token
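If you only need the token value itself (for example, to paste into the dashboard login form), a jsonpath query avoids the extra output of describe. A sketch, assuming the default secret layout with the token stored base64-encoded under .data.token:

```shell
# Extract just the token field and decode it
kubectl get secret -n kube-system microk8s-dashboard-token \
  -o jsonpath='{.data.token}' | base64 -d
```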
Expose the ArgoCD Server
By default, the ArgoCD API server is only accessible inside the cluster. To access it externally, you can expose it using either a NodePort or LoadBalancer service. For a minimal installation like MicroK8s, NodePort is typically used.
a. Expose ArgoCD with a NodePort:
Run this command to patch the argocd-server service to be of type NodePort:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
To retrieve the NodePort, run the following command:
kubectl get svc -n argocd argocd-server
Look for the NodePort value under the PORT(S) column. You can now access the ArgoCD web UI (served over HTTPS with a self-signed certificate) at:
https://<Node_IP>:<NodePort>
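The NodePort can also be read directly with a jsonpath query instead of parsing the PORT(S) column. A sketch, assuming the argocd-server service names its ports "http" and "https" as in the standard ArgoCD manifests:

```shell
# Read the HTTPS NodePort of the argocd-server service directly
kubectl get svc argocd-server -n argocd \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
```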
Recover ArgoCD admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d  ## get the initial admin password for ArgoCD
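The secret stores the password base64-encoded; the jsonpath expression selects the password field and base64 -d decodes it. The decoding step is plain base64, for example (the encoded string here is illustrative):

```shell
# 'cGFzc3dvcmQ=' is the base64 encoding of 'password'
echo -n 'cGFzc3dvcmQ=' | base64 -d
```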
Proxy settings for ArgoCD (optional)
If you are behind a proxy, some chart repositories may not be accessible.
You can edit the ArgoCD deployments to add the necessary environment variables to the containers.
- Edit the argocd-repo-server and argocd-application-controller deployments:
kubectl edit deployment argocd-repo-server -n argocd
kubectl edit deployment argocd-application-controller -n argocd
Note: in some ArgoCD installations the application controller runs as a StatefulSet rather than a Deployment; in that case use kubectl edit statefulset argocd-application-controller -n argocd.
- Add the following environment variables under the spec.containers section in both deployments:
spec:
  containers:
  - name: argocd-repo-server
    env:
    - name: HTTP_PROXY
      value: "http://your-http-proxy:port"
    - name: HTTPS_PROXY
      value: "http://your-https-proxy:port"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server"
For example:
env:
- name: HTTP_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: HTTPS_PROXY
  value: "http://squid.lnf.infn.it:3128"
- name: NO_PROXY
  value: "baltig.infn.it,argocd-repo-server,argocd-server,localhost,127.0.0.0/24,::1,*.lnf.infn.it,.svc,.cluster.local,10.0.0.0/8,192.168.0.0/16"
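As an alternative to interactive editing, kubectl set env can apply the same variables non-interactively. A sketch using the placeholder proxy values from above:

```shell
# Non-interactive alternative to 'kubectl edit' (placeholder proxy values)
kubectl set env deployment/argocd-repo-server -n argocd \
  HTTP_PROXY="http://your-http-proxy:port" \
  HTTPS_PROXY="http://your-https-proxy:port" \
  NO_PROXY="localhost,127.0.0.1,.svc,.cluster.local,argocd-repo-server,argocd-server"
```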
Restart the ArgoCD Components
After updating the deployments, restart the affected components to apply the changes:
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart deployment argocd-application-controller -n argocd
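You can then confirm that the rollout finished and that the variables landed in the pod spec; a sketch:

```shell
# Wait for the restart to complete
kubectl rollout status deployment argocd-repo-server -n argocd
# Confirm the proxy variables are present in the container spec
kubectl get deployment argocd-repo-server -n argocd \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```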