This procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.
Install Multus and Calico (or your preferred CNI):

```yaml
cni:
  - multus
  - calico
```

With this setup it will be possible to expose addresses outside the cluster.
1- Prepare metallb_config.yaml
Copy the following content, using free IP ranges from the networks your cluster uses:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-10-6
spec:
  addresses:
    - 10.10.6.240-10.10.6.250 # Adjust to your available range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: l2
spec:
  ipAddressPools:
    - default-pool-10-6
  nodeSelectors:
    - matchLabels:
        vlan: vlan-10-6
---
## if you have another network to expose
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-109
spec:
  addresses:
    - 192.168.109.240-192.168.109.250 # Adjust to your available range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: l2-109 # must differ from the first L2Advertisement name, or it overwrites it
spec:
  ipAddressPools:
    - default-pool-109
  nodeSelectors:
    - matchLabels:
        vlan: vlan-109
```
2- Install MetalLB and configure it
```shell
## metallb
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
kubectl apply -f metallb_config.yaml
```
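If the last command fails because the MetalLB admission webhook is not ready yet, a common workaround is to wait for the MetalLB pods (the `app=metallb` selector comes from the upstream manifest) and then re-apply the configuration:

```shell
# Wait until the controller and speaker pods are ready,
# then re-apply the address pool configuration.
kubectl wait --namespace metallb-system \
  --for=condition=ready pod --selector=app=metallb --timeout=120s
kubectl apply -f metallb_config.yaml
```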
3- Install the local-path storage provisioner

Use this command to install the default local-path-provisioner:
```shell
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
```
This deploys:
- a StorageClass named local-path
- a local-path-provisioner Deployment in the local-path-storage namespace
- the necessary RBAC objects and helper scripts
To make local-path the default StorageClass (so you don’t need to specify it in every PVC):
```shell
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
You can verify it with:
```shell
kubectl get storageclass
```
Look for (default) in the local-path row.
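To verify provisioning end to end, you can create a throwaway PVC and a pod that mounts it (the names here are arbitrary; note that local-path uses WaitForFirstConsumer binding, so the volume is only provisioned once the pod is scheduled):

```shell
# Create a test PVC plus a pod that mounts it; the PV is provisioned
# on the node's local disk when the pod is scheduled.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 128Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF
kubectl get pvc test-pvc   # should become Bound once the pod starts
# Clean up afterwards:
kubectl delete pod test-pvc-pod && kubectl delete pvc test-pvc
```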
4- Install cert-manager

Install cert-manager using the official manifests:
```shell
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.yaml
```
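cert-manager takes a short while to come up; before creating issuers it can help to wait for its Deployments (the names below are the ones shipped in the official manifest) to become Available:

```shell
# Wait for the three cert-manager Deployments, then confirm the pods.
kubectl -n cert-manager wait --for=condition=Available \
  deployment/cert-manager deployment/cert-manager-cainjector deployment/cert-manager-webhook \
  --timeout=180s
kubectl get pods -n cert-manager
```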
Create a file named cluster-issuer.yaml:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: andrea.michelotti@infn.it # 📧 Required
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: "nginx"
```
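The manifest above still needs to be applied; the issuer should report READY=True once it has registered an ACME account with Let's Encrypt:

```shell
kubectl apply -f cluster-issuer.yaml
kubectl get clusterissuer letsencrypt-prod   # READY column should show True
```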
5- Install the Kubernetes Dashboard

Apply the official dashboard manifest:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
This will install the dashboard into the kubernetes-dashboard namespace.
Create a file dashboard-ingress.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    #nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    # Traefik examples:
    # traefik.ingress.kubernetes.io/router.entrypoints: websecure
    # traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
    - host: dashboard.da # 🔁 Change to your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
  tls:
    - hosts:
        - dashboard.da
      secretName: dashboard-cert
```
Apply it:
```shell
kubectl apply -f dashboard-ingress.yaml
```
Check the address exposed by the ingress and add it to /etc/hosts as dashboard.da:

```shell
kubectl get ingress -n kubernetes-dashboard
```

🧠 You must configure a DNS entry or an /etc/hosts entry pointing dashboard.da to your ingress controller IP.
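For example, if the ingress controller was assigned 10.10.6.240 (a hypothetical address taken from the MetalLB pool defined earlier; substitute the ADDRESS column shown by the command above):

```shell
# 10.10.6.240 is a placeholder; use the real ingress address.
echo "10.10.6.240 dashboard.da" | sudo tee -a /etc/hosts
```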
Create an admin user:
```yaml
# dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```
Apply it:
```shell
kubectl apply -f dashboard-admin.yaml
kubectl -n kubernetes-dashboard create token admin-user
```
Copy the token and use it to log in at https://dashboard.da
If you're not using a wildcard or auto TLS (e.g. via cert-manager), you can create your own TLS secret:
```shell
kubectl -n kubernetes-dashboard create secret tls dashboard-tls \
  --cert=/path/to/cert.crt \
  --key=/path/to/cert.key
```
6- Install ArgoCD

Prepare argocd_ingress.yaml:
```yaml
# argocd_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: argocd.da
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
```
Install ArgoCD and expose it through the ingress:

```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
### wait for all pods to become Running
kubectl get pod -n argocd -w
kubectl apply -f argocd_ingress.yaml
```
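Once the pods are running, the initial admin password (username admin) can be read from the secret ArgoCD generates at install time:

```shell
# The password is stored base64-encoded in argocd-initial-admin-secret.
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```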