The procedure has been tested on Ubuntu 22.04 LTS with 64 GB of RAM.

RKE2 install

Refer to the RKE2 QUICK START guide.
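
As a minimal sketch (single server node; see the quick start for agents and multi-node setups):

Shell Command
# Install RKE2 (server role)
curl -sfL https://get.rke2.io | sudo sh -
# Enable and start the service
sudo systemctl enable --now rke2-server.service
# Use the generated kubeconfig and the bundled kubectl
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH=$PATH:/var/lib/rancher/rke2/bin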

Install Multus and Calico (or your preferred CNI)

/etc/rancher/rke2/config.yaml
cni:
 - multus
 - calico
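
Note that /etc/rancher/rke2/config.yaml must be in place before rke2-server first starts (restart the service if you edit it later). Once the node is up, check that both CNI components are running:

Shell Command
kubectl get pods -A | grep -E 'multus|calico'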


1 Install MetalLB (LoadBalancer)

This step makes it possible to expose services on addresses reachable from outside the cluster.


Prepare metallb_config.yaml

Copy the following content, using free IP ranges on the networks your cluster is attached to.

metallb_config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-10-6
spec:
  addresses:
  - 10.10.6.240-10.10.6.250  # Adjust to your available range

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: advice-106
spec:
  ipAddressPools:
    - default-pool-10-6
  nodeSelectors:
    - matchLabels:
        lb-network-access: vlan-10-6

---
## If you have another network to expose
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-109
spec:
  addresses:
  - 192.168.109.240-192.168.109.250  # Adjust to your available range

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: advice-109
spec:
  ipAddressPools:
    - default-pool-109
  nodeSelectors:
    - matchLabels:
        lb-network-access: vlan-109
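
The nodeSelectors above match nodes by the lb-network-access label, so label the nodes attached to each VLAN accordingly (node names are placeholders):

Shell Command
kubectl label node <node-on-vlan-10-6> lb-network-access=vlan-10-6
kubectl label node <node-on-vlan-109> lb-network-access=vlan-109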




Install and configure MetalLB

Shell Command
## metallb
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
kubectl apply -f metallb_config.yaml
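
If the second apply fails with a webhook error, the MetalLB pods are not ready yet; wait and retry. A throwaway LoadBalancer service (names here are arbitrary) is a quick way to verify that an address gets assigned:

Shell Command
kubectl wait -n metallb-system --for=condition=ready pod -l app=metallb --timeout=120s
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80
kubectl get svc nginx-test    # EXTERNAL-IP should come from the pool
kubectl delete svc,deployment nginx-test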

2 Install the local-path storage class


🛠️ Apply the official manifests

Use this command to install the default local-path-provisioner:


Shell Command
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml


This deploys:

  • A StorageClass named local-path

  • A local-path-provisioner Deployment

  • The necessary RBAC and helper scripts


 ☑️ Set it as the default (optional)

To make local-path the default StorageClass (so you don’t need to specify it in every PVC):

Shell Command
 kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'


You can verify it with:

Shell Command
 kubectl get storageclass


Look for (default) in the local-path row.
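
A quick smoke test (the PVC name is arbitrary); note that local-path uses WaitForFirstConsumer, so the PVC stays Pending until a pod mounts it:

Shell Command
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 128Mi
  storageClassName: local-path
EOF
kubectl get pvc local-path-test-pvc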


 3 Install cert-manager


Install cert-manager using the official manifests:

Shell Command
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.yaml


 📄 Create a ClusterIssuer for Let's Encrypt

Create a file named cluster-issuer.yaml:

YAML MANIFEST
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: andrea.michelotti@infn.it  # 📧 Required: replace with your email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: "nginx"


4 Install the Kubernetes Dashboard

Apply the official dashboard manifest:

Shell Command
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml


This will install the dashboard into the kubernetes-dashboard namespace.


🌍 Expose the Dashboard with an Ingress

Option for NGINX

Create a file dashboard-ingress.yaml:


YAML MANIFEST
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    #nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    # Traefik examples:
    # traefik.ingress.kubernetes.io/router.entrypoints: websecure
    # traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - host: dashboard.da  # 🔁 Change to your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - dashboard.da
    secretName: dashboard-cert




Apply it:

Shell Command
 kubectl apply -f dashboard-ingress.yaml


Check the exposed address and add it to /etc/hosts as dashboard.da:

Shell Command
kubectl get ingress -n kubernetes-dashboard


🧠 You must configure a DNS entry or /etc/hosts pointing dashboard.da to your ingress controller IP.
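
For example, assuming the ingress controller was assigned 10.10.6.240 from the MetalLB pool above:

Shell Command
echo "10.10.6.240  dashboard.da" | sudo tee -a /etc/hosts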


🔐 Create a ServiceAccount + ClusterRoleBinding

Create an admin user:


YAML MANIFEST
# dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard




Apply it:

Shell Command
 kubectl apply -f dashboard-admin.yaml

 🔑 Get the Login Token


The more secure option is to create a short-lived token, which expires automatically.

Shell Command
 kubectl -n kubernetes-dashboard create token admin-user


Copy the token and use it to log in at https://dashboard.da
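
kubectl create token issues a short-lived token by default; use the --duration flag if you need a longer validity, e.g.:

Shell Command
kubectl -n kubernetes-dashboard create token admin-user --duration=24h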

Create a Secret Token (manually)


Create a ServiceAccount


Shell Command
kubectl create serviceaccount dashboard-sa -n kubernetes-dashboard


Bind it to a cluster role (e.g. cluster-admin)


Shell Command
kubectl create clusterrolebinding dashboard-sa-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-sa





Create a file dashboard-token.yaml:

YAML MANIFEST
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-sa-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-sa
type: kubernetes.io/service-account-token


Shell Command
 kubectl apply -f dashboard-token.yaml





Wait & Retrieve the Token

It may take a few seconds for Kubernetes to populate the token. Then:

Shell Command
kubectl -n kubernetes-dashboard describe secret dashboard-sa-token
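
To extract only the token (handy for scripting), decode it from the Secret directly:

Shell Command
kubectl -n kubernetes-dashboard get secret dashboard-sa-token -o jsonpath='{.data.token}' | base64 -d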




5 Install Argo CD

Install Argo CD in the argocd namespace:

Shell Command
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
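
Wait for the Argo CD server deployment to become available before exposing it:

Shell Command
kubectl -n argocd wait --for=condition=available deployment/argocd-server --timeout=300s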




Expose Argo CD with an Ingress

🔹 Ingress with NGINX

Create a file argocd-ingress.yaml:


argocd-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"   # Optional: if using TLS
spec:
  rules:
  - host: argocd.da       # 🔁 Replace with your DNS name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 443
  tls:
  - hosts:
    - argocd.da
    secretName: argocd-tls          # Auto-created by cert-manager if using TLS



Apply it:

Shell Command
kubectl apply -f argocd-ingress.yaml


Retrieve the initial password:

Shell Command
 kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Connect to the UI (user admin and the password above).

To see the exposed address:

Shell Command
kubectl get ingress -n argocd

🧠 You must configure a DNS entry or /etc/hosts pointing argocd.da (or your chosen host) to your ingress controller IP.

Change the password! The initial password won't work for long.
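
One way to change it is with the argocd CLI (a sketch, assuming the CLI is installed and argocd.da resolves to your ingress):

Shell Command
argocd login argocd.da --username admin --grpc-web
argocd account update-password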


If you are behind a proxy

Replace squid.lnf.infn.it:3128 with your proxy host:port.

Shell Command
kubectl -n argocd set env deployment/argocd-application-controller \
  --env HTTP_PROXY=http://squid.lnf.infn.it:3128 \
  --env HTTPS_PROXY=http://squid.lnf.infn.it:3128 \
  --env NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16

kubectl -n argocd set env deployment/argocd-repo-server \
  --env HTTP_PROXY=http://squid.lnf.infn.it:3128 \
  --env HTTPS_PROXY=http://squid.lnf.infn.it:3128 \
  --env NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16





6 Install EPIK8S backend services (optional)

The backend installation provides:

  • kafka
  • mongodb
  • elasticsearch

At the moment these services serve all the beamlines of the cluster. They can also be installed in other ways or may already exist.

A manifest like the following must be prepared, paying attention to set the correct domain and loadBalancerIPs.

Prepare a manifest 


epik8s-backend.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: epik8s-backend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://baltig.infn.it/epics-containers/epik8-backend.git'
    path: .
    targetRevision: HEAD
    helm:
      values: |
          namespace: backend
          domain: "da"
          ingressClassName: "ngnix"

          kafka:
            externalAccess:
              enabled: true
      
              service:
                type: LoadBalancer
                loadBalancerIPs:
                    - 10.10.6.247
                ports:
                  external: 9092
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: backend
  syncPolicy:
    automated:
      prune: true  # Optional: Automatically remove resources not specified in Helm chart
      selfHeal: true
    syncOptions:
      - CreateNamespace=true 
      - Prune=true

Apply:

Shell Command
kubectl apply -f epik8s-backend.yaml
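
Check that Argo CD syncs the application and that the backend pods come up:

Shell Command
kubectl -n argocd get application epik8s-backend
kubectl -n backend get pods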

7 Install NFS storage class

Create an application file called nfs_app.yaml:


nfs_app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nfs-subdir-external-provisioner
  namespace: argocd        # namespace where Argo CD runs
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    chart: nfs-subdir-external-provisioner
    targetRevision: 4.0.18
    helm:
      values: |
        nfs:
          server: atlasdisk19.lnf.infn.it
          path: /St-Dell3800-A-A13Vd0
        storageClass:
          name: nfs-atlas
          accessModes: ReadWriteMany
  destination:
    server: https://kubernetes.default.svc
    namespace: nfs-provisioner
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Apply:

Shell Command
kubectl apply -f nfs_app.yaml

Test the StorageClass with a PVC and a pod. Save the following as pod_nfs.yaml:


pod_nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-atlas
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
  namespace: default
spec:
  containers:
    - name: test
      image: busybox:1.36
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: nfs-storage
          mountPath: /mnt/nfs
  volumes:
    - name: nfs-storage
      persistentVolumeClaim:
        claimName: nfs-test-pvc

Apply:

Shell Command
kubectl apply -f pod_nfs.yaml

Test NFS from inside the pod:

Shell Command
kubectl exec -it nfs-test-pod -- sh
cd /mnt/nfs
echo "hello" > test.txt
cat test.txt
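
Once the write succeeds, clean up the test resources:

Shell Command
kubectl delete -f pod_nfs.yaml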

 8 Install EPIK8S beamline

A beamline EPIK8s Git repo must exist.

Prepare an epik8s-beamline.yaml manifest, replacing repoURL with the URL of the EPIK8s beamline repo.

This repo is the beamline EPIK8s Helm chart for the ELI test:

https://github.com/infn-epics/epik8s-rke2-test.git

See: EPIK8s Beamline



epik8s-beamline.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rke2-test-deploy
  namespace: argocd
  labels:
      deploy: eli
spec:
  project: default
  source:
    repoURL: 'https://github.com/infn-epics/epik8s-rke2-test.git'
    path: deploy
    targetRevision: main
    helm:
      values: |
          namespace: da-test
          domain: "da"
          ingressClassName: "nginx"
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: eli
  syncPolicy:
    automated:
      prune: true  # Optional: Automatically remove resources not specified in Helm chart
      selfHeal: true
    syncOptions:
      - CreateNamespace=true 
      - Prune=true


Applying the manifest installs the full beamline control on your cluster.

Apply:

Shell Command
kubectl apply -f epik8s-beamline.yaml
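
As with the backend, check the application sync status and the pods in the namespace set in the Helm values (da-test above):

Shell Command
kubectl get applications -n argocd
kubectl get pods -n da-test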

EPIK8s secrets

Your system may need to log in to machines to start processes, or may need tokens to access repositories. Create a secret with the required keys:



Shell Command
kubectl create secret generic epik8s-secret --from-file=git_token --from-file=id_rsa=id_rsa --from-file=id_rsa.pub=id_rsa.pub -n da-test