Knowing the computational limits of a VM or, in this case, a cluster of VMs tells us how far we can push it without breaking it. Furthermore, knowing the maximum workload a setup can support lets us choose one with characteristics suited to our purposes: if a cluster with a certain configuration handles our applications well, even under sustained effort, there is no point in spending extra resources.
For this purpose, load, endurance and stress tests reveal how the system responds in various situations. More specifically, these three types of analysis are defined as:
- Load test. How the system responds to a sudden increase in requests.
- Endurance test. How the system survives a constant, moderate load for a long period of time. It can also be referred to as a soak test, referring to the long time the software spends under test.
- Stress test. How the system responds under a heavy load, with the intent of finding the point at which the system is overwhelmed and ceases functioning.
To put the VMs under pressure, this tutorial places heavy demand on a PHP application running in a Kubernetes cluster. The aim is for the cluster to scale horizontally when incoming requests exceed normal usage patterns.
The tests will be performed on a cluster of 4 nodes (1 master and 3 workers), all with the same flavor. The flavor is then changed in steps, always staying identical across the VMs in the cluster, doubling CPU and RAM each time: from medium (2 vCPUs and 4 GB RAM) to large (4 vCPUs and 8 GB RAM) and finally xlarge (8 vCPUs and 16 GB RAM). Finally, the clusters used for the tests are at "factory settings", i.e. they contain only the software of a freshly created k8s cluster. Before we continue, let's familiarize ourselves with a couple of tools suited to our purposes: Metrics Server and Horizontal Pod Autoscaler.
Metrics Server
Metrics Server collects resource metrics (CPU and memory) from Kubelets and exposes them in the Kubernetes apiserver through the Metrics API, for use by the HPA. The Metrics API can also be accessed via kubectl top, as we will see later, making it easier to debug autoscaling pipelines.
Metrics Installation
The latest Metrics Server release can be installed by running
# Components are installed in the kube-system namespace
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
The Metrics Server deployment will likely not be Ready at first. If, analyzing the Pod logs, you see the error "unable to fully scrape metrics", edit the deployment and insert the flag shown in the sketch below.
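A frequent cause of that error is the kubelet serving a self-signed certificate. Assuming that is the case here, a common workaround (acceptable in a test cluster, not recommended in production) is to add the --kubelet-insecure-tls flag to the metrics-server container arguments:
$ kubectl -n kube-system edit deployment metrics-server
# In the container spec, add the flag to the existing args list:
#   args:
#   - --kubelet-insecure-tls   # skip verification of kubelet serving certificates
Once the deployment rolls out again, the Pod should reach the Ready state.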
If everything went well, you can already try the kubectl top command, which lets you see the resource consumption of nodes or pods.
Limits and requests for CPU and memory resources are measured, respectively, in CPU units and bytes. One CPU, in Kubernetes, is equivalent to 1 vCPU/core for cloud providers (1 hyperthread on bare-metal Intel processors). You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, let's analyze the output of the kubectl top node command on the k8s-master1.novalocal node. This VM has 4 vCPUs and 8 GB of RAM. Doing some simple arithmetic (159/4000 for CPU and 2800/8000 for RAM), we obtain approximately the percentages shown in the example below.
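The output on that node might look like the following sketch (the figures are illustrative and simply match the numbers used in the calculation above):
$ kubectl top node k8s-master1.novalocal
NAME                    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master1.novalocal   159m         4%     2800Mi          35%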
Horizontal Pod Autoscaler
Horizontal Pod Autoscaler (HPA) automatically scales the number of Pods in a ReplicationController, Deployment, ReplicaSet or StatefulSet based on observed CPU or memory (RAM) utilization. Let's see how it works in the next example.
Run and expose php-apache server
To demonstrate HPA we will use a custom docker image based on the php-apache image. Apply the following file to install a simple PHP web application in the Kubernetes cluster, then verify that the pods were created.
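A sketch of what such a manifest could contain is shown here; the image name and the limit value are assumptions based on the standard php-apache autoscaling walkthrough, while requests.cpu: 200m is the value referenced later in this tutorial:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.k8s.io/hpa-example   # tiny PHP page doing CPU work on every request
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m      # the HPA computes utilization against this request
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache
$ kubectl apply -f php-apache.yaml
$ kubectl get pods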
The one just shown is a .yaml file containing a classic configuration of a deployment and a service. The only novelty is the resources parameter in the container specification. When you specify a Pod, you can optionally specify how much of each resource a container needs. When you specify the resource request for containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces that limit, so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the request amount of that system resource specifically for that container to use. If the node where a Pod is running has enough of a resource available, it is possible (and allowed) for a container to use more of that resource than its request specifies. However, a container is never allowed to use more than its resource limit.
Create HPA
Now that the server is running, we will create the autoscaler using kubectl autoscale. The following command creates an HPA that maintains between 1 and 10 replicas of the Pods controlled by the php-apache deployment we created before. Roughly speaking, the HPA will increase and decrease the number of replicas (via the deployment) to maintain an average CPU utilization across all Pods of 50%; since each pod requests 200 milli-cores, this means an average CPU usage of 100 milli-cores.
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
We may check the current status of the autoscaler by running
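For example (a sketch; the exact TARGETS and AGE values will differ on your cluster):
$ kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          30s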
Please note that the current CPU consumption is 0% as we are not sending any requests to the server.
Support for HPA in kubectl
HPA, like every API resource, is supported in a standard way by kubectl. We can list autoscalers with kubectl get hpa and get a detailed description with kubectl describe hpa. We can also create a new autoscaler with the kubectl create command: in effect, instead of using the kubectl autoscale command to create an HPA imperatively, we can use a file to create it declaratively, as sketched below. Finally, we can delete an autoscaler with kubectl delete hpa.
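As a sketch of the declarative route, the autoscaler created above could equivalently be described in a file (here using the autoscaling/v2 API; the file name is arbitrary and field names may differ slightly on older Kubernetes versions):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50    # same 50% average CPU target as the imperative command
$ kubectl create -f php-apache-hpa.yaml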
Increase load
Now we will see how the autoscaler reacts to increased load. Once the PHP web application is running in the cluster and we have set up an autoscaling deployment, we introduce load on the web application. Here we use a BusyBox image in a container, running infinite web requests from BusyBox to the PHP web application. Copy and deploy the infinite-calls.yaml file.
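A sketch of what infinite-calls.yaml could contain: a small BusyBox deployment that loops forever, hitting the php-apache service (the exact loop command is an assumption; any endless stream of requests will do):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: infinite-calls
  labels:
    app: infinite-calls
spec:
  replicas: 1
  selector:
    matchLabels:
      app: infinite-calls
  template:
    metadata:
      labels:
        app: infinite-calls
    spec:
      containers:
      - name: infinite-calls
        image: busybox
        command:
        - /bin/sh
        - -c
        - "while true; do wget -q -O- http://php-apache; done"   # endless requests to the service
$ kubectl apply -f infinite-calls.yaml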
Within a minute or so, we should see the higher CPU load by executing
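(the output below is illustrative; the numbers fluctuate while the autoscaler reacts)
$ kubectl get hpa php-apache --watch
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   305%/50%   1         10        7          10m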
Note
It may take a few minutes to stabilize the number of replicas. Since the amount of load is not controlled in any way, it may happen that the final number of replicas differs from this example.
Here, CPU consumption has increased to 305% of the request. As a result, the deployment was resized to 7 replicas.
We also take a look at the resource consumption of the Pods, to check how the system reacts. In the php-apache.yaml file, seen above, we set requests.cpu: 200m in the container specification. Subsequently, we entrusted the management of the deployment to the HPA, requiring that the CPU consumption of the Pods does not exceed, on average, the value of 100 milli-cores. The system actually respects these constraints. In fact, taking the arithmetic mean of the CPU consumption of the php-apache Pods below, we obtain a value of about 84 milli-cores. Compare this result with the TARGETS column of the get hpa command above: 84 milli-cores correspond to 42% of the 200 milli-cores requested by the Pods.
Stop load
We will finish our example by stopping the load: simply delete the deployment/infinite-calls component or, if you want to reuse it for further testing, scale it to zero replicas (both options are sketched below). Then we verify the resulting state: after a minute or so, re-run the two get commands used earlier. You should see that CPU utilization has dropped to 0, and so the HPA autoscaled the number of replicas back down to 1.
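A minimal sketch of the two options and the follow-up check (the deployment name matches the infinite-calls.yaml above):
# Option 1: remove the load generator entirely
$ kubectl delete deployment infinite-calls
# Option 2: keep it for later tests, but stop the traffic
$ kubectl scale deployment infinite-calls --replicas=0
# After a minute or so, check the state again
$ kubectl get hpa
$ kubectl get deployment php-apache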