
The basic installation of the operating systems and of a foundational Kubernetes infrastructure with the Calico CNI is carried out using Puppet and Foreman.

The modules used are listed below; some were created for this purpose and others are taken from the Puppet Forge repository (an installation sketch follows the list):

Modules from the Puppet Forge:

puppetlabs-kubernetes  version 8.0.0

puppetlabs-helm  version 4.0.0 (Patched in baltig)

Modules created for this purpose:

rgargana-helm_deploy_chart (in baltig)

rgargana-installer (in baltig)
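
The unpatched Forge module can be pulled straight from the Puppet Forge, while the patched puppetlabs-helm and the rgargana-* modules live in baltig and are deployed from their Git repositories (for example via r10k). A minimal sketch for the Forge part only, to be run on the Puppet server:

Install Forge module
# puppet module install puppetlabs-kubernetes --version 8.0.0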



The puppetlabs-kubernetes module requires an initial setup, as described in its documentation, to generate the parameters common to both the control and worker nodes:


Hiera for K8s
docker run --rm -v $(pwd):/mnt \
  -e OS=redhat \
  -e VERSION=1.30.3 \
  -e CONTAINER_RUNTIME=cri_containerd \
  -e CNI_PROVIDER=calico-tigera \
  -e ETCD_INITIAL_CLUSTER=plelinpdom001:10.16.4.99 \
  -e ETCD_IP="%{networking.ip}" \
  -e KUBE_API_ADVERTISE_ADDRESS="%{networking.ip}" \
  -e INSTALL_DASHBOARD=true \
  -e CNI_NETWORK_PREINSTALL=https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml \
  -e CNI_NETWORK_PROVIDER=https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml \
  puppet/kubetool:6.2.0


As the command shows, this initialization phase requires a container runtime (Docker in this example) installed on the Puppet server and generates two files:

  1. common.yaml (contains all the keys needed to register the workers)
  2. Redhat.yaml (the name depends on the OS=redhat variable in the previous command; this file is used to instantiate the K8s master/control node)

The generated files must be placed in the Hiera data directory of the Puppet environment associated with the machines to be installed, for example:


hiera directory
/etc/puppetlabs/code/environments/<your environment>/data/common.yaml

/etc/puppetlabs/code/environments/<your environment>/data/node/plsparcdom001.yaml (the file name matches the hostname of your master/control node, as specified in the command)
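
A minimal sketch of copying the files produced by kubetool into those locations, assuming the kubetool command above was run from the current directory and that the environment and hostname placeholders are replaced with your own values:

Copy generated Hiera files
# cp common.yaml /etc/puppetlabs/code/environments/<your environment>/data/common.yaml
# cp Redhat.yaml /etc/puppetlabs/code/environments/<your environment>/data/node/<your master>.yaml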


Due to a bug in the Docker image, add or replace the following lines inside the common.yaml file:

tigera-calico-env
kubernetes::cni_network_preinstall: https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubernetes::cni_network_provider: https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
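
Once Puppet has applied the catalog and the Tigera manifests above have been installed, Calico's health can be checked from the control node. A quick sketch; the namespaces and the tigerastatus resource are the Tigera operator defaults:

Check Calico status
# kubectl get tigerastatus
# kubectl get pods -n tigera-operator
# kubectl get pods -n calico-system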


Each variable specified in the two generated files can be overridden from Foreman, which acts as Puppet's External Node Classifier (ENC).
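
Outside Foreman, the same keys can also be overridden directly in the node-level Hiera file shown above. A minimal sketch, assuming kubernetes::kubernetes_version is among the keys generated by kubetool:

Node-level override
# echo 'kubernetes::kubernetes_version: 1.30.3' >> /etc/puppetlabs/code/environments/<your environment>/data/node/<your master>.yaml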




NAT and proxy

If the nodes are on a private network, they must be NATed, and the master/control node's endpoints must be added to no_proxy, as shown in the sketch below.
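
A minimal sketch of the proxy environment for a node, assuming a proxy at proxy.example.com:3128 (placeholder), the master at plelinpdom001/10.16.4.99 as in the kubetool command above, and the default service (10.96.0.0/12) and Calico pod (192.168.0.0/16) CIDRs; adapt the variables and ranges to your cluster:

Proxy environment
# cat >> /etc/environment <<'EOF'
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1,plelinpdom001,10.16.4.99,.svc,.cluster.local,10.96.0.0/12,192.168.0.0/16
EOF

Note that containerd running as a systemd service does not read /etc/environment; if it needs the proxy to pull images, the same variables are typically set in a systemd drop-in for the containerd unit.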


Enable bridge netfilter and IP forwarding in Linux:

Bridging
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee -a /etc/sysctl.d/99-sysctl.conf
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.d/99-sysctl.conf
# sysctl -p /etc/sysctl.d/99-sysctl.conf
# sysctl -e net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
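
The net.bridge.bridge-nf-call-iptables key only exists once the br_netfilter kernel module is loaded; a short sketch to load it now and persist it across reboots (the file name k8s.conf is an arbitrary choice):

br_netfilter module
# modprobe br_netfilter
# echo "br_netfilter" > /etc/modules-load.d/k8s.conf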


Useful commands

Useful commands
# kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null (kubectl bash completion)

# kubectl get node (test cluster)

# for i in $(kubectl api-resources -o name); do echo "### Resource $i ###" && kubectl get $i -A; done (show all resources)

# kubectl patch <resource> <name> -n <namespace> -p '{"metadata": {"finalizers": null}}' --type merge (remove finalizers from a stuck resource)

# helm completion bash > /etc/bash_completion.d/helm (helm bash completion)

This command (with kubectl 1.11+) will show you what resources remain in the namespace:

# kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>

# kubectl port-forward  service/argo-cd-argocd-server -n argocd --address 192.168.109.100 8080:443




Useful links for debugging

https://stackoverflow.com/questions/52369247/namespace-stucked-as-terminating-how-i-removed-it
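
The linked answer boils down to clearing the namespace's finalizers through the finalize sub-resource of the API; a minimal sketch, assuming jq is installed and <namespace> is the stuck namespace:

Remove namespace finalizers
# kubectl get namespace <namespace> -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f -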


TODO: token for new repo
