

The procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.

RKE2 config

Install Multus and Calico as the CNI plugins:

/etc/rancher/rke2/config.yaml
cni:
 - multus
 - calico
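
If RKE2 is already installed, the CNI change in config.yaml only takes effect after restarting the service. A typical restart (assuming a standard systemd-based RKE2 install) looks like:

```shell
# Restart RKE2 so the cni setting in /etc/rancher/rke2/config.yaml is picked up.
# On server nodes:
sudo systemctl restart rke2-server
# On agent nodes the service is rke2-agent instead:
# sudo systemctl restart rke2-agent
```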


1- Install metallb (LoadBalancer)

This step makes it possible to expose addresses outside of the cluster.

1- Prepare metallb_config.yaml

Copy the following content, using free IP ranges from the networks your cluster is attached to:

metallb_config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-10-6
spec:
  addresses:
  - 10.10.6.240-10.10.6.250  # Adjust to your available range

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: l2-10-6
spec:
  ipAddressPools:
    - default-pool-10-6
  nodeSelectors:
    - matchLabels:
        vlan: vlan-10-6

---
## If you have another network to expose
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-109
spec:
  addresses:
  - 192.168.109.240-192.168.109.250  # Adjust to your available range

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: l2-109  # Must differ from the first L2Advertisement, or the second apply overwrites it
spec:
  ipAddressPools:
    - default-pool-109
  nodeSelectors:
    - matchLabels:
        vlan: vlan-109
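
The nodeSelectors above match on a vlan label that nodes do not carry by default. Assuming hypothetical node names like node1 and node2, the labels used in the manifest can be applied with:

```shell
# node1/node2 are placeholders — use your actual node names (kubectl get nodes)
kubectl label node node1 vlan=vlan-10-6
kubectl label node node2 vlan=vlan-109
```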




2- Install metallb and configure

RKE
## metallb
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
kubectl apply -f metallb_config.yaml
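
To check that MetalLB is handing out addresses, a throwaway Service of type LoadBalancer can be created; the name and selector below are hypothetical:

```yaml
# test-lb.yaml — hypothetical Service to verify MetalLB address assignment
apiVersion: v1
kind: Service
metadata:
  name: lb-test
spec:
  type: LoadBalancer
  selector:
    app: lb-test   # hypothetical selector; an IP is assigned even with no matching pods
  ports:
  - port: 80
    targetPort: 80
```

After applying it, `kubectl get svc lb-test` should show an EXTERNAL-IP from the configured pool; delete the Service once verified.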

2- Install the local-path storage class


1. 🛠️ Apply the official manifests

Use this command to install the default local-path-provisioner:


RKE
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml


This deploys:

  • A StorageClass named local-path

  • A local-path-provisioner DaemonSet

  • The necessary RBAC and helper scripts


2. ☑️ Set it as the default (optional)

To make local-path the default StorageClass (so you don’t need to specify it in every PVC):

RKE
 kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'


You can verify it with:

RKE
 kubectl get storageclass


Look for (default) in the local-path row.
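
A quick way to exercise the new StorageClass is a small PVC; the claim name below is hypothetical:

```yaml
# test-pvc.yaml — hypothetical PVC using the local-path StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```

Note that local-path uses WaitForFirstConsumer volume binding by default, so the PVC stays Pending until a pod actually mounts it.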


3- Install the Kubernetes Dashboard

Apply the official dashboard manifest:

Shell Command
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml


This will install the dashboard into the kubernetes-dashboard namespace.


2. 🌍 Expose the Dashboard with an Ingress

Option for NGINX:

Create a file dashboard-ingress.yaml:


YAML MANIFEST
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    # Traefik examples:
    # traefik.ingress.kubernetes.io/router.entrypoints: websecure
    # traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - host: dashboard.da  # 🔁 Change to your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - dashboard.da  # Must match the host above
    secretName: dashboard-tls  # Must match a created TLS secret




Apply it:

Shell Command
 kubectl apply -f dashboard-ingress.yaml




🧠 You must configure a DNS entry or /etc/hosts pointing dashboard.da to your ingress controller IP.
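
For a lab setup without DNS, an /etc/hosts entry on the client machine is enough; the IP below is hypothetical and should be replaced with the ingress controller's LoadBalancer IP from the MetalLB pool:

```shell
# /etc/hosts — hypothetical entry; replace 10.10.6.240 with your ingress IP
10.10.6.240   dashboard.da
```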


3. 🔐 Create a ServiceAccount + ClusterRoleBinding

Create an admin user:

yaml
# dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply it:

bash

kubectl apply -f dashboard-admin.yaml


4. 🔑 Get the Login Token

bash

kubectl -n kubernetes-dashboard create token admin-user

Copy the token and use it to log in at https://dashboard.da (the host configured in the Ingress above).

4- Install ArgoCD

Prepare argocd_ingress.yaml:


argocd_ingress.yaml
# argocd-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: argocd.da
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 443
RKE
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
### Wait until all pods are Running
kubectl get pod -n argocd -w

kubectl apply -f argocd_ingress.yaml
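
Once the pods are up, ArgoCD's initial admin password can be read from the secret the upstream installer generates (username is admin):

```shell
# Print the initial admin password generated by the ArgoCD installer
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```

After logging in at https://argocd.da, change the password and delete the secret, as recommended by the ArgoCD documentation.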


