...

The tests will be performed on a cluster consisting of 4 nodes (1 master and 3 workers) with the same flavor. The flavor will also be modified in turn, remaining identical across the VMs in the cluster, doubling CPU and RAM at each step: passing from training medium (2 CPUs and 4GB RAM) to large (4 CPUs and 8GB RAM) and finally xlarge (8 CPUs and 16GB RAM). Finally, the clusters used for the tests are set to "factory settings", i.e. they contain only the starting software of a typical freshly created k8s cluster. Before we continue, let's familiarize ourselves with a couple of tools suitable for our purposes: Metrics Server and Horizontal Pod Autoscaler.

...

The Metrics Server deployment will likely not be Ready. If, analyzing the Pod logs, you see the error "unable to fully scrape metrics", edit the deployment by inserting the flag
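A flag commonly added for this error is `--kubelet-insecure-tls`, which disables certificate verification towards the kubelets; the following is a sketch, assuming Metrics Server runs as the `metrics-server` deployment in the `kube-system` namespace:

```shell
# Sketch: append the --kubelet-insecure-tls flag to the metrics-server
# container args (assumes deployment "metrics-server" in kube-system).
kubectl -n kube-system patch deployment metrics-server --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
```

Alternatively, `kubectl -n kube-system edit deployment metrics-server` lets you add the flag interactively.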

...

Run and expose php-apache server

To demonstrate HPA, we will use a custom docker image based on the php-apache image. The image is built from the following Dockerfile and index.php, which performs some CPU-intensive computations.

Code Block
languagephp
titleDockerfile and index.php
collapsetrue
FROM php:5-apache
COPY index.php /var/www/html/index.php
RUN chmod a+rx index.php

// These lines define the index.php file, which performs some CPU-intensive computations
<?php
  $x = 0.0001;
  for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
  }
  echo "OK!";
?>

First, we will start a deployment running the image and expose it as a service, applying the following configuration to install the simple PHP web application in the Kubernetes cluster. Then, verify the pods were created.

Code Block
languageyml
titlephp-apache.yaml
collapsetrue
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        ports:
        - containerPort: 80
        resources:	# <--- Pay attention
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache
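Applying the manifest and checking the resulting Pod can be done as follows (file name as in the block title above; the exact Pod name suffix will differ):

```shell
# Install the deployment and service defined above.
kubectl apply -f php-apache.yaml
# Verify that the php-apache Pod was created and is Running.
kubectl get pods -l run=php-apache
```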

...

Now, we will see how the autoscaler reacts to increased load. Once the PHP web application is running in the cluster and we have set up an autoscaling deployment, introduce load on the web application. Here we use a BusyBox image in a container, running infinite web requests from BusyBox to the PHP web application. Copy and deploy the infinite-calls.yaml file.

Code Block
languageyml
titleinfinite-calls.yaml
collapsetrue
apiVersion: apps/v1
kind: Deployment
metadata:
  name: infinite-calls
  labels:
    app: infinite-calls
spec:
  replicas: 1
  selector:
    matchLabels:
      app: infinite-calls
  template:
    metadata:
      name: infinite-calls
      labels:
        app: infinite-calls
    spec:
      containers:
      - name: infinite-calls
        image: busybox
        command:
        - /bin/sh
        - -c
        - "while true; do wget -q -O- http://php-apache; done"
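Deploying the load generator is again a single apply (file name as in the block title above); you can then follow the autoscaler's reaction live:

```shell
# Start the BusyBox load generator.
kubectl apply -f infinite-calls.yaml
# Watch the HPA raise the replica count as CPU utilization grows.
kubectl get hpa php-apache --watch
```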

Within a minute or so, we should see the higher CPU load by executing

...

Code Block
languagebash
titleGet deployment and HPA
collapsetrue
$ kubectl get deployment,hpa
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/infinite-calls   1/1     1            1           5m19s
deployment.apps/php-apache       7/7     7            7           71m
NAME                                             REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/php-apache   Deployment/php-apache   42%/50%   1         10        7          71m

We also take a look at the resource consumption of the Pods, to check how the system reacts. In the php-apache.yaml file, seen above, we set requests.cpu: 200m in the container specification. Subsequently, we entrusted the management of the deployment to the HPA, requiring that the CPU consumption of the Pods does not exceed, on average, 100 milli-cores (50% of the request). The system actually respects this constraint: taking the arithmetic average of the CPU consumption of the php-apache Pods below gives a value of about 84 milli-cores. Compare this result with the TARGETS column of the get hpa command above: 84 milli-cores corresponds to 42% of the 200 milli-cores requested per Pod.

Code Block
languagebash
titleMetrics analysis
collapsetrue
$ kubectl top pod
NAME                              CPU(cores)   MEMORY(bytes)
infinite-calls-69f758db46-hxssq   7m           2Mi
php-apache-d4cf67d68-26nnm        82m          8Mi
php-apache-d4cf67d68-5cxkh        103m         8Mi
php-apache-d4cf67d68-gn5l7        90m          8Mi
php-apache-d4cf67d68-j229m        74m          8Mi
php-apache-d4cf67d68-k9vqz        77m          8Mi
php-apache-d4cf67d68-tlssl        90m          8Mi
php-apache-d4cf67d68-x76h2        75m          10Mi
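The averaging above is simple enough to check mechanically; here is a small shell snippet, with the per-Pod CPU values copied from the kubectl top output:

```shell
# Average CPU consumption (milli-cores) of the seven php-apache Pods above.
avg=$(printf '%s\n' 82 103 90 74 77 90 75 | awk '{ s += $1; n++ } END { printf "%d", s / n }')
# Utilization relative to the 200m CPU request of each Pod.
pct=$(( avg * 100 / 200 ))
echo "average=${avg}m utilization=${pct}%"
```

This prints an average of 84m, i.e. 42% utilization, matching the TARGETS column reported by the HPA.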

Stop load

We will finish our example by stopping the user load: simply delete the deployment/infinite-calls component or, if you want to reuse it for further testing, scale it to zero replicas. Then, we verify the resulting state: after a minute or so, re-run the two get commands used earlier. You should see that CPU utilization has dropped to 0, and that the HPA has autoscaled the number of replicas back down to 1.
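Either route is a one-liner (names as in the manifests above); pick one, they are alternatives:

```shell
# Remove the load generator entirely...
kubectl delete deployment/infinite-calls
# ...or keep it around for later tests, scaled down to zero replicas.
kubectl scale deployment/infinite-calls --replicas=0
```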