Connection refused for KubeScheduler and KubeControllerManager

The origin of this problem is similar for both components. We present the solution for the scheduler, which can be applied in the same way to the controllerManager. Going to the Status/Targets section of the Prometheus UI, you should see a situation similar to the following

[Screenshot: scheduler target showing "connection refused" in the Prometheus UI]

So let's connect to the control-plane and move to the /etc/kubernetes/manifests folder. Inside we will find the kube-controller-manager.yaml and kube-scheduler.yaml files. Using a text editor, we edit the two files as follows (administrator permissions are likely required)

Modify kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --port=0    # <--- Change the value to 10251 for the scheduler
.
.
.

As for the controllerManager, you must enter the value 10252. Note that these port numbers correspond to the ports shown in the Endpoint column of the Prometheus UI screenshot. After this change the State should switch from DOWN to UP.
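
As a quick sanity check, you can query the metrics endpoints directly; the commands below are a sketch that assumes you are still on the control-plane node and that the default ports are in use.

Check the metrics endpoints
# Run on the control-plane node once the static pods have restarted
$ curl -s http://localhost:10251/metrics | head    # kube-scheduler
$ curl -s http://localhost:10252/metrics | head    # kube-controller-manager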

--port

Looking at the official documentation, we find the following information about the --port parameter

--port int     Default: 10251

The port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all.

Since the default is port 10251, deleting the parameter or commenting it out would have the same result.
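
For completeness, the same fragment of kube-scheduler.yaml with the flag commented out might look like the following sketch (the surrounding fields are unchanged):

Alternative: comment out the --port flag
spec:
  containers:
  - command:
    - kube-scheduler
    # - --port=0   # <--- With the flag removed or commented out, the default port 10251 applies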

Connection refused for KubeProxy

If you get the following error, it is because the metrics bind address of kube-proxy defaults to 127.0.0.1:10249, which Prometheus instances cannot reach. If you want to collect these metrics, you should expose them by changing the metricsBindAddress field value to 0.0.0.0:10249.

[Screenshot: kube-proxy targets showing "connection refused" in the Prometheus UI]

To perform the modification, we open the kube-proxy ConfigMap in editing mode

Edit cm kube-proxy
# Open the ConfigMap with "$ kubectl edit cm kube-proxy -n kube-system" and edit the field below
apiVersion: v1
data:
  config.conf: |-
.
.
.
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249 # <--- Edit this field
.
.
.

You have to restart the pods, otherwise they will not pick up the new configuration. Therefore, we perform a rollout restart of the kube-proxy DaemonSet

Restart ds kube-proxy
$ kubectl rollout restart ds kube-proxy -n kube-system
# You can check the status by
$ kubectl rollout status ds kube-proxy -n kube-system
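
Once the rollout has completed, you can verify that the metrics endpoint is now reachable. The following commands are a sketch; <node-ip> is a placeholder for the address of any cluster node, and ss must be available on that node.

Verify the kube-proxy metrics endpoint
# On a node, check that port 10249 is now bound to 0.0.0.0
$ ss -lntp | grep 10249
# Query the endpoint directly (replace <node-ip> with a real node address)
$ curl -s http://<node-ip>:10249/metrics | head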