...

The origin of this problem is the same for both components. We present the solution for the scheduler, which is just as easily applicable to the controller manager. Going to the Status/Targets section of the Prometheus UI, you should see a situation similar to the following.

So let's connect to the control-plane node and move to the /etc/kubernetes/manifests folder. Inside we will find the kube-controller-manager.yaml and kube-scheduler.yaml files. Using a text editor, we edit the two files as follows (administrator permissions are likely required)

Code Block
languageyml
titleModify kube-scheduler.yaml
collapsetrue
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --port=0    # <--- Change 0 to 10251 for the scheduler (10252 for the controller manager)
.
.
.
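
Since these are static Pods, the kubelet recreates them automatically as soon as the manifest files are saved; no kubectl command is needed to restart them. As a quick sanity check (a sketch, assuming the port value 10251 entered above), we can verify that the metrics endpoint answers again:

Code Block
languagebash
titleVerify the scheduler metrics endpoint
collapsetrue
# The kubelet restarts the static Pod on its own once the manifest is saved
$ kubectl get pods -n kube-system -l component=kube-scheduler
# From the control-plane node, the metrics endpoint should now respond
# on the port configured above (10251 in this example)
$ curl -s http://localhost:10251/metrics | head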

...

Connection refused for KubeProxy

The metrics bind address of kube-proxy defaults to 127.0.0.1:10249, which Prometheus instances cannot access. If you want to collect these metrics, you have to expose them by changing the metricsBindAddress field value to 0.0.0.0:10249. To perform the modification, we open the kube-proxy ConfigMap in editing mode

Code Block
languageyml
titleEdit cm kube-proxy
collapsetrue
# Open the ConfigMap for editing with: kubectl edit cm kube-proxy -n kube-system
apiVersion: v1
data:
  config.conf: |-
.
.
.
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249 # <--- Edit this field
.
.
.
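
If you prefer a non-interactive edit, the same change can be scripted (a sketch, assuming the field still holds the kubeadm default metricsBindAddress: ""; adjust the sed pattern if your current value differs):

Code Block
languagebash
titlePatch cm kube-proxy non-interactively
collapsetrue
# Rewrite the metricsBindAddress field and re-apply the ConfigMap
$ kubectl get cm kube-proxy -n kube-system -o yaml | \
    sed 's|metricsBindAddress: ""|metricsBindAddress: 0.0.0.0:10249|' | \
    kubectl apply -f -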

You have to restart the pods, otherwise they will not pick up the new configuration. Therefore, we perform a rollout restart of the kube-proxy DaemonSet

Code Block
languagebash
titleRestart ds kube-proxy
collapsetrue
$ kubectl rollout restart ds kube-proxy -n kube-system
# You can check the rollout status with
$ kubectl rollout status ds kube-proxy -n kube-system
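
Once the rollout has completed, we can check from outside the Pod that the endpoint is now reachable on the node address (a sketch; the jsonpath query simply picks the first node's InternalIP, and it assumes nothing between you and the node filters port 10249):

Code Block
languagebash
titleVerify kube-proxy metrics
collapsetrue
# Take the InternalIP of the first node as an example target
$ NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# kube-proxy now listens on 0.0.0.0:10249, so this answers from any host that can reach the node
$ curl -s http://$NODE_IP:10249/metrics | head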