...
Code Block |
---|
|
docker run --rm -v $(pwd):/mnt \
  -e OS=redhat \
  -e VERSION=1.30.3 \
  -e CONTAINER_RUNTIME=cri_containerd \
  -e CNI_PROVIDER=calico-tigera \
  -e ETCD_INITIAL_CLUSTER=plelinpdom001:10.16.4.99 \
  -e ETCD_IP="%{networking.ip}" \
  -e KUBE_API_ADVERTISE_ADDRESS="%{networking.ip}" \
  -e INSTALL_DASHBOARD=true \
  -e CNI_NETWORK_PREINSTALL=https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml \
  -e CNI_NETWORK_PROVIDER=https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml \
  puppet/kubetool:6.2.0 |
As the command shows, the initialization phase requires a container engine installed on the Puppet server; running it generates two files:
- common.yaml (which contains all the keys needed to register the workers)
- Redhat.yaml (the name depends on the OS variable passed in the previous command, OS=redhat, and it is used to instantiate the K8s master/control node)
The generated files must then be placed in the Puppet data directory associated with the machine to be installed, for example:
...
Code Block |
---|
|
/etc/puppetlabs/code/environments/<your environment>/data/common.yaml
/etc/puppetlabs/code/environments/<your environment>/data/nodes/<master hostname>.yaml (the file is named after the master/control node specified in the command above) |
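As a rough sketch (kubetool writes its output to the directory mounted as /mnt, i.e. the current working directory), the generated files can be copied into place like this; ENVIRONMENT and MASTER_HOSTNAME are hypothetical values to adapt to your setup:
Code Block |
---|
|
# Minimal sketch, not the official procedure: adjust ENVIRONMENT and
# MASTER_HOSTNAME (both hypothetical values) before running.
ENVIRONMENT=production
MASTER_HOSTNAME=plelinpdom001
cp common.yaml /etc/puppetlabs/code/environments/${ENVIRONMENT}/data/common.yaml
cp Redhat.yaml /etc/puppetlabs/code/environments/${ENVIRONMENT}/data/nodes/${MASTER_HOSTNAME}.yaml
|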
Due to a bug in the Docker image, add or replace the following lines inside the common.yaml
file, or change the Puppet class parameters as in the examples:
Code Block |
---|
|
kubernetes::cni_network_preinstall: https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubernetes::cni_network_provider: https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml |
Each variable in the two generated files can be overridden from Foreman, following Puppet's ENC (External Node Classifier) paradigm.
Note |
---|
|
If the nodes are on a private network, they must be NATed, and the master/control node's endpoints must also be added to no_proxy |
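A minimal sketch of the proxy environment, assuming a corporate proxy; the proxy URL, control-node name/IP and cluster CIDRs below are placeholders, not values from this installation:
Code Block |
---|
|
# Hypothetical values: replace the proxy URL, the control-node name/IP and the
# cluster CIDRs with the ones used in your environment.
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1,plelinpdom001,10.16.4.99,10.96.0.0/12,10.244.0.0/16,.svc,.cluster.local
|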
Note |
---|
|
Install tar on the machine |
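For example, on a RedHat-family system (a sketch, assuming the default repositories are reachable):
Code Block |
---|
|
sudo dnf install -y tar
|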
Enable bridging in Linux:
Code Block |
---|
|
sudo echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/99-sysctl.conf
sudo echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/99-sysctl.conf
# sysctl -p /etc/sysctl.d/99-sysctl.conf
# sysctl -e net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
|
Install the HELM and Helm_deploy_chart modules.
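A hypothetical sketch, assuming the helm module is taken from the Puppet Forge; verify the exact source of Helm_deploy_chart (e.g. your control repository or Puppetfile) before use:
Code Block |
---|
|
# Assumption: puppetlabs-helm from the Puppet Forge; Helm_deploy_chart is
# assumed to come from your own control repository / Puppetfile.
puppet module install puppetlabs-helm
|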
Longhorn requires the iSCSI initiator utilities:
dnf install iscsi-initiator-utils
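Longhorn also expects the iSCSI daemon to be running; a sketch, assuming a systemd-based host:
Code Block |
---|
|
# Assumption: systemd host; enable and start the iSCSI daemon used by Longhorn
sudo systemctl enable --now iscsid
|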
Utility commands
Code Block |
---|
|
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null (kubectl bash completion)
# kubectl get node (test cluster)
# for i in $(kubectl api-resources -o name); do echo "### Resource $i ###" && kubectl get $i -A; done (show all resources)
# kubectl patch <resource> <name> -n <namespace> -p '{"metadata": {"finalizers": null}}' --type merge (remove the finalizers from a stuck resource)
# helm completion bash > /etc/bash_completion.d/helm (helm bash completion)
This command (with kubectl 1.11+) will show you what resources remain in the namespace:
# kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
# kubectl port-forward service/argo-cd-argocd-server -n argocd --address 192.168.109.100 8080:443
|
Useful link for debugging:
https://stackoverflow.com/questions/52369247/namespace-stucked-as-terminating-how-i-removed-it
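The approach described in the link boils down to clearing the finalizers of the stuck namespace; a sketch, assuming jq is installed and <namespace> is replaced with the real name:
Code Block |
---|
|
# Clear the finalizers of a namespace stuck in Terminating (assumes jq is installed)
kubectl get namespace <namespace> -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f -
|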
TODO: token for new repo