Kubernetes provides a Dashboard that allows cluster management through a user interface, certainly more intuitive than the classic command line. You can use the Dashboard to deploy containerized applications on a Kubernetes cluster, troubleshoot them, and manage cluster resources. You can also use it to get an overview of the applications running on the cluster and to create or modify individual Kubernetes resources. For more information, consult the official Kubernetes Dashboard documentation.

Installation

The user interface is not deployed by default. Installation is very simple: just run the following command (check the version here)

Install Dashboard
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
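
To verify that the installation succeeded, you can check that the Dashboard pods are up and running

Check the Dashboard pods
# Both the kubernetes-dashboard and dashboard-metrics-scraper pods should reach the Running state
$ kubectl -n kubernetes-dashboard get pods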

Access

We need to make a small modification to the Dashboard service, via the command

Entry in edit mode
$ kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard

Once the above command has been launched, the service manifest will open as a .yaml file in an editor. If not already present, make the following change (here is an extract of the file, already modified)

Insert NodePort
spec:
   clusterIP: 10.107.65.54
   externalTrafficPolicy: Cluster
   ports:
      - nodePort: 30000	# <--- pay attention to this field
        port: 443
        protocol: TCP
        targetPort: 8443
   selector:
        k8s-app: kubernetes-dashboard
   sessionAffinity: None
   type: NodePort		# <--- Enter NodePort (pay attention to upper/lowercase letters) in place of ClusterIP
status:
   loadBalancer:
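
If you prefer not to use the interactive editor, the same change can be applied non-interactively with kubectl patch. This is a minimal sketch, assuming the default service name and namespace used above; the default strategic merge patch merges the ports entry by its port key, so targetPort and protocol are preserved.

Patch the service (alternative)
# Switch the service to NodePort and pin the node port to 30000 in a single command
$ kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
    -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 30000}]}}'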

After saving the file modified with type: NodePort (the default value should be ClusterIP), a port value is generated randomly in the range 30000-32767. If you want, you can opt for another value, as long as it belongs to the aforementioned range, by relaunching the edit command (here we have chosen port 30000, which is easier to remember). There is no need to open the port on OpenStack if you access the service through the Worker node FIP, because this range of ports is already open for the Worker nodes (see "Preliminary steps" in Chapter 2). If you are using the Control-Plane FIP, you need to open the chosen port, as shown below.
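
In the Control-Plane case, the chosen port can be opened with the OpenStack CLI. A minimal sketch, where <control_plane_secgroup> is a placeholder for the security group actually attached to your Control-Plane instance

Open the port (Control-Plane FIP only)
# Allow inbound TCP traffic on the chosen NodePort
$ openstack security group rule create --proto tcp --dst-port 30000 <control_plane_secgroup>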

We can verify that the changes have taken effect by running the following command (note the value of the TYPE field, now equal to NodePort, and the port 30000)

Get the service
$ kubectl -n kubernetes-dashboard get services
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.106.152.1   <none>        8000/TCP        3d
kubernetes-dashboard        NodePort    10.101.35.23   <none>        443:30000/TCP   3d
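
Optionally, you can test reachability from outside the cluster with curl. The -k flag skips certificate verification, since the Dashboard uses a self-signed certificate; the IP and port are those of this example

Quick connectivity test
# A non-empty HTML response indicates the Dashboard is reachable on the chosen NodePort
$ curl -k https://131.154.97.163:30000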

Account and token

By connecting with the browser to https://<node_FIP>:<port>, in our case https://131.154.97.163:30000 (note the adoption of the https protocol for secure communication), we can access the Dashboard. There is no need to activate the VPN. The credentials entry screen will appear. As you can see, there are two ways of accessing: via token or via a configuration file. Here we deal with the first mode. However, it is advisable to try the connection to the Dashboard now, to make sure that the procedure carried out so far is correct.

Let's now see how to create a new user using the Kubernetes service account mechanism, grant that user administrator permissions, and access the Dashboard with the associated bearer token. We create the dashboard_adminuser.yaml file

dashboard_adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata: 
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

which we then apply with the command kubectl apply -f dashboard_adminuser.yaml. Finally, we obtain the token (present in the last line of the following output), to be pasted into the Dashboard login screen, by launching the command

Token
# The command shows the description of the "admin-user" secret, after retrieving its name from the list of secrets in the namespace
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-g7c2g
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d485be64-eb17-40fc-b11e-6c35112d107a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      <token>
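
Note that on more recent Kubernetes versions (v1.24 and later) a token Secret is no longer created automatically for each service account, so the command above may return nothing. In that case, a short-lived token can be requested directly

Token (Kubernetes v1.24+)
# Requests a time-limited bearer token for the admin-user service account
$ kubectl -n kubernetes-dashboard create token admin-user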

Once the token has been pasted, you enter the Dashboard. The token is static, so it is recommended to save it somewhere, to avoid having to retrieve it again in the future. Below is a screenshot of the Dashboard login screen.

dashboard login

As you can see in the image, there are two ways to access the Dashboard: one through the token, which we have already discussed, and the other through the Kubeconfig, that is, the Kubernetes configuration file saved in $HOME/.kube/config. Before being used, the file needs to be modified: the Dashboard requires the user in the Kubeconfig file to have either a username and password or a token, but config, which is itself a copy of the admin.conf file, only has a client certificate. You can manually edit the config file to add the token, or use the method below

Add token in config
# Extract the token and insert it into the $TOKEN variable (pay attention to the namespace)
$ TOKEN=$(kubectl -n <namespace> describe secret $(kubectl -n <namespace> get secret | grep admin-user | awk '{print $1}')| awk '$1=="token:"{print $2}')
# Add the token to the <user> entry
$ kubectl config set-credentials <user> --token="${TOKEN}"

If you have done this correctly, the config file will look like the extract below (note the token field at the end of the file). To inspect the new configuration, launch the command kubectl config view

Config with token
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://<IP>:<port>
  name: <cluster_name>
contexts:
- context:
    cluster: <cluster_name>
    user: <user_name>
  name: <name>
current-context: <current-context>
kind: Config
preferences: {}
users:
- name: <user_name>
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: <token>
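
Since the Kubeconfig file must be selected with the browser's file picker on your local machine, you may first need to copy it off the node. A minimal sketch, where <user> and <control_plane_FIP> are placeholders for your SSH user and the Control-Plane floating IP

Copy the Kubeconfig locally
# Run from your workstation, not from the cluster node
$ scp <user>@<control_plane_FIP>:~/.kube/config ./dashboard-kubeconfig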