...
Install MetalLB and configure:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-10-6
spec:
  addresses:
    - 10.10.6.240-10.10.6.250   # Adjust to your available range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: l2advice-106
spec:
  ipAddressPools:
    - default-pool-10-6
  nodeSelectors:
    - matchLabels:
        lb-network-access: vlan-10-6
---
## if you have another network to expose
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default-pool-109
spec:
  addresses:
    - 192.168.109.240-192.168.109.250   # Adjust to your available range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: l2advice-107
spec:
  ipAddressPools:
    - default-pool-109
  nodeSelectors:
    - matchLabels:
        lb-network-access: vlan-109
```
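Note that the L2Advertisements above only announce each pool from nodes carrying the matching `lb-network-access` label, so the relevant nodes must be labeled first (e.g. `kubectl label node <node-name> lb-network-access=vlan-10-6`). A Service can then request an address from a specific pool via annotation. A minimal sketch, assuming the `metallb.universe.tf/address-pool` annotation form and a hypothetical `demo-web` Deployment (not part of this setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-web                  # hypothetical service, for illustration only
  namespace: default
  annotations:
    # Ask MetalLB for an IP from the 10.10.6.x pool defined above
    metallb.universe.tf/address-pool: default-pool-10-6
spec:
  type: LoadBalancer              # MetalLB assigns the external IP
  selector:
    app: demo-web
  ports:
    - port: 80
      targetPort: 8080
```

Without the annotation, MetalLB picks an address from any available pool.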
...
```shell
kubectl -n argocd set env deployment/argocd-repo-server \
  HTTP_PROXY=http://your.proxy.address:port \
  HTTPS_PROXY=http://your.proxy.address:port \
  NO_PROXY=localhost,127.0.0.1,.cluster.local,.svc,yourcluster.local
```
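The same proxy settings can also be kept declaratively, so they survive a redeploy of Argo CD. A sketch of the equivalent `env` entries on the `argocd-repo-server` Deployment (same placeholder values as above):

```yaml
# Fragment of the argocd-repo-server Deployment spec
# (spec.template.spec.containers[0])
env:
  - name: HTTP_PROXY
    value: http://your.proxy.address:port
  - name: HTTPS_PROXY
    value: http://your.proxy.address:port
  - name: NO_PROXY
    value: localhost,127.0.0.1,.cluster.local,.svc,yourcluster.local
```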
6 Install
...
NFS storage class
Create an application file called `nfs_app.yaml`:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nfs-subdir-external-provisioner
  namespace: argocd   # namespace where Argo CD runs
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    chart: nfs-subdir-external-provisioner
    targetRevision: 4.0.18
    helm:
      values: |
        nfs:
          server: atlasdisk19.lnf.infn.it
          path: /St-Dell3800-A-A13Vd0
        storageClass:
          name: nfs-atlas
          accessModes: ReadWriteMany
  destination:
    server: https://kubernetes.default.svc
    namespace: nfs-provisioner
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
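The chart exposes further `storageClass` options that may be worth setting explicitly; a hedged sketch of an extended `helm.values` section (parameter names are taken from the nfs-subdir-external-provisioner chart, so verify them against the chart version in use):

```yaml
helm:
  values: |
    nfs:
      server: atlasdisk19.lnf.infn.it
      path: /St-Dell3800-A-A13Vd0
    storageClass:
      name: nfs-atlas
      accessModes: ReadWriteMany
      defaultClass: false     # set true to make nfs-atlas the cluster default
      reclaimPolicy: Delete   # or Retain, to keep volumes after PVC deletion
      archiveOnDelete: true   # keep a renamed copy of the data when a PVC is deleted
```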
Apply:
```shell
kubectl apply -f nfs_app.yaml
```
Test the SC (Storage Class) by saving the following manifest as `pod_nfs.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-atlas
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
  namespace: default
spec:
  containers:
    - name: test
      image: busybox:1.36
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: nfs-storage
          mountPath: /mnt/nfs
  volumes:
    - name: nfs-storage
      persistentVolumeClaim:
        claimName: nfs-test-pvc
```
Apply:
```shell
kubectl apply -f pod_nfs.yaml
```
Test NFS inside the pod:
```shell
kubectl exec -it nfs-test-pod -- sh
cd /mnt/nfs
touch prova.txt
echo "ciao" > prova.txt
cat prova.txt
```
7 Install EPIK8S backend services (optional)
The backend services install:
- kafka
- mongodb
- elasticsearch
...
At the moment they serve all the beamlines of the cluster. These services can also be installed in other ways, or pre-existing installations can be used instead.
...
```shell
kubectl apply -f epik8s-backend.yaml
```
...
8 Install EPIK8S beamline
A beamline EPIK8s Git repository must exist.
...