...
The origin of this problem is similar for both components. We present the solution for the scheduler; it is just as easily applicable to the controller-manager. Going to the Status/Targets section of the Prometheus UI, you should see a situation similar to the following: the kube-scheduler and kube-controller-manager targets are down with a "connection refused" error.
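If you prefer the command line, the same information can be read from the Prometheus HTTP API. Below is a minimal sketch, assuming Prometheus is reachable on localhost:9090 (for instance via a port-forward; the service name and namespace are assumptions to adapt to your installation) and that jq is available:

# Port-forward the Prometheus service locally (adjust service name and namespace to your setup)
$ kubectl port-forward svc/prometheus-operated 9090:9090 -n monitoring &
# List the targets currently down, together with the scrape error
$ curl -s http://localhost:9090/api/v1/targets \
  | jq -r '.data.activeTargets[] | select(.health=="down") | "\(.labels.job): \(.lastError)"'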
So let's connect to the control-plane and move to the /etc/kubernetes/manifests folder. Inside we will find the kube-controller-manager.yaml and kube-scheduler.yaml files. Using a text editor, we edit the two files as follows (administrator permissions are likely required):
Modify kube-scheduler.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --port=0 # <--- Enter the value 10251 for the scheduler (10252 for the controller-manager)
    ...
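Since these are static pods, the kubelet watches the /etc/kubernetes/manifests folder and recreates them automatically once the file is saved, so no manual restart is needed. As a quick sanity check, a sketch that assumes you are still logged into the control-plane node:

# Wait for the scheduler pod to be recreated
$ kubectl get pods -n kube-system | grep kube-scheduler
# The metrics endpoint should now answer on the port we just configured
$ curl -s http://localhost:10251/metrics | head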
...
Connection refused for KubeProxy
The metrics bind address of kube-proxy defaults to 127.0.0.1:10249, which Prometheus instances cannot reach. If you want to collect these metrics, expose them by changing the metricsBindAddress field value to 0.0.0.0:10249. To perform the modification, we open the kube-proxy ConfigMap in edit mode:
Edit cm kube-proxy:
# We access the ConfigMap via the command "$ kubectl edit cm kube-proxy -n kube-system" and then
apiVersion: v1
data:
  config.conf: |-
    ...
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249 # <--- Edit this field
    ...
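If you prefer a non-interactive edit (e.g. in a script), the same change can be sketched with sed; this assumes the field is already present in config.conf (on kubeadm clusters it usually defaults to metricsBindAddress: ""):

# Rewrite the metricsBindAddress line and re-apply the ConfigMap
$ kubectl get cm kube-proxy -n kube-system -o yaml \
    | sed 's/metricsBindAddress: .*/metricsBindAddress: 0.0.0.0:10249/' \
    | kubectl apply -f -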
The pods have to be restarted, otherwise they do not pick up the new configuration. Therefore we perform a rollout restart of the kube-proxy DaemonSet:
Restart ds kube-proxy:
$ kubectl rollout restart ds kube-proxy -n kube-system
# You can check the status with
$ kubectl rollout status ds kube-proxy -n kube-system
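Once the rollout has completed, you can verify that the endpoint is now exposed; here <node-ip> is a placeholder for the address of any cluster node:

# The kube-proxy metrics port should now be reachable from outside the node
$ curl -s http://<node-ip>:10249/metrics | head

After the next scrape interval, the kube-proxy targets in Status/Targets should switch back to UP.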