Kubernetes requires the deployment of a Container Network Interface (CNI) based pod network add-on so that pods can communicate with each other. There are several compatible network add-ons; this page focuses on Calico.

Calico as a network add-on

Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. It offers high performance, scalability, and full Kubernetes network support.

Installing Calico

This guide uses the Tigera operator to install Calico. The operator provides lifecycle management for Calico, exposed through the Kubernetes API as a custom resource definition.

NOTE: If you're migrating from another network add-on, you have to reset the cluster, remove the configuration files from /etc/cni/net.d, and reset iptables.
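As a rough sketch, assuming a kubeadm-based cluster, the reset could look like the following (destructive, run on each node):

reset sketch
# Tear down the node's cluster state
kubeadm reset
# Remove leftover CNI configuration files
rm -f /etc/cni/net.d/*
# Flush iptables rules left behind by the previous add-on
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X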

Initialize the cluster with a pod network CIDR

Defining the pod network CIDR allows Calico to route pod traffic and manage network policies for pods.

pod network CIDR
kubeadm init --pod-network-cidr=20.100.0.0/16

Install Tigera Operator

Install Tigera operator
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
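
Before continuing, it is worth checking that the operator pod is running; a simple way to do so (the pod name suffix will differ per cluster):

verify operator
kubectl get pods -n tigera-operator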

Configuring Calico

Unlike some other network add-ons, Calico can allow pods to communicate without the need for an overlay network. In this setup, however, creating a VXLAN overlay was necessary because the cluster's security group rules do not allow traffic for pod IP addresses; VXLAN encapsulates pod-to-pod traffic in packets exchanged between the node addresses, so it passes the existing rules.

The Tigera operator YAML file can be seen at https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
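
Once Calico is up and running (see below), one way to confirm that the VXLAN overlay is actually in effect is to look for the vxlan.calico interface that Calico creates on each node; this assumes VXLAN encapsulation as configured here:

verify VXLAN interface
ip -d link show vxlan.calico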


Calico yaml file

calico yaml file
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 20.100.0.0/16
      encapsulation: VXLAN
      natOutgoing: Enabled
      nodeSelector: all()

---


# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Once the Calico configuration file is created, install Calico by executing kubectl create -f <calico-configuration>.yaml.
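The operator reports component health through TigeraStatus resources, so a quick sanity check after the install is:

check Calico status
kubectl get tigerastatus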

Observing Calico

There are three Calico components deployed in the cluster:

Calico pods
root@k8s-master:~# kubectl get pods -o wide -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
calico-kube-controllers-67f85d7449-c44gx   1/1     Running   0          41h   20.100.235.193   k8s-master      <none>           <none>
calico-node-9hzxt                          1/1     Running   0          15h   192.168.1.4      k8s-worker-7    <none>           <none>
calico-node-ff5dz                          1/1     Running   0          40h   192.168.1.9      k8s-worker-2    <none>           <none>
calico-node-g8x6g                          1/1     Running   0          41h   192.168.1.6      k8s-worker-1    <none>           <none>
calico-node-gpdbw                          1/1     Running   0          15h   192.168.1.13     k8s-worker-4    <none>           <none>
calico-node-hp782                          1/1     Running   0          15h   192.168.1.19     k8s-worker-6    <none>           <none>
calico-node-hsvgq                          1/1     Running   0          15h   192.168.1.20     k8s-worker-10   <none>           <none>
calico-node-jkzt9                          1/1     Running   0          15h   192.168.1.30     k8s-worker-9    <none>           <none>
calico-node-ktzlf                          1/1     Running   0          15h   192.168.1.8      k8s-worker-8    <none>           <none>
calico-node-mw7mm                          1/1     Running   0          15h   192.168.1.10     k8s-worker-3    <none>           <none>
calico-node-tdd5b                          1/1     Running   0          15h   192.168.1.5      k8s-worker-5    <none>           <none>
calico-node-xvlnx                          1/1     Running   0          41h   192.168.1.25     k8s-master      <none>           <none>
calico-typha-84fb965d6-894n2               1/1     Running   0          40h   192.168.1.9      k8s-worker-2    <none>           <none>
calico-typha-84fb965d6-d8wk5               1/1     Running   0          15h   192.168.1.13     k8s-worker-4    <none>           <none>
calico-typha-84fb965d6-pqthb               1/1     Running   0          41h   192.168.1.25     k8s-master      <none>           <none>

calico-kube-controllers

The Calico Kubernetes controllers run together in the calico-kube-controllers pod. They monitor the Kubernetes API and perform actions based on cluster state. The pod includes the following controllers (a quick way to inspect them is shown after the list):

  • policy controller: watches network policies and programs Calico policies.
  • namespace controller: watches namespaces and programs Calico profiles.
  • serviceaccount controller: watches service accounts and programs Calico profiles.
  • workloadendpoint controller: watches for changes to pod labels and updates Calico workload endpoints.
  • node controller: watches for the removal of Kubernetes nodes and removes corresponding data from Calico, and optionally watches for node updates to create and sync host endpoints for each node.
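
Since all of these controllers run in the single calico-kube-controllers pod, their activity can be observed in that pod's logs, for example:

kube-controllers logs
kubectl logs -n calico-system deployment/calico-kube-controllers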

calico-node

The calico-node pod runs on each Kubernetes node and contains the Calico configuration for that node.
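
calico-node is deployed as a DaemonSet, so a quick health check is to confirm that the number of ready pods matches the number of nodes:

calico-node daemonset
kubectl get daemonset calico-node -n calico-system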

calico-typha

Typha watches for changes in various resources and fans them out to all calico-node pods, reducing the load on the Kubernetes API server.
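
The operator sizes the Typha deployment based on the number of nodes in the cluster; the current replica count can be inspected with:

calico-typha deployment
kubectl get deployment calico-typha -n calico-system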
