Kubernetes dashboard on k8s cluster

7 minute read

Kubernetes Dashboard is “a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.” It is super useful for quickly checking the status of the cluster, its services, and basically everything else.

I especially recommend installing it on a k8s cluster when you deploy a new service that gives you trouble, as you can quickly look up all the installation details and monitor changes in real time.

To speed up the process and to make sure the entire installation can easily be replayed, we use a set of scripts available in the GitHub repository k8s-scripts. While there is some documentation for the scripts, and you can look at the scripts' source code for more details, this guide expands on them, explaining various options and suggesting optimal settings.

Personal notes: my personal notes on how to set things up, to make it easier to repeat next time.

Step 1: Prerequisites

This works on virtually any k8s cluster, and there really are no special requirements or dependencies beyond what is normally needed for k8s cluster deployment and maintenance.

  • K8s on Oracle Cloud “Part 4” is completed. This is because we are using FluxCD with the cluster state stored in a Git repository, so FluxCD must already be installed on the cluster and the environment prepared. It does not have to be on Oracle Cloud; any k8s cluster should work equally well.

Step 2: Configuration

There really is not much to configure, and the dashboard works right away after running the installation script. However, there are some minor adjustments worth mentioning.

Version adjustment

k8s-scripts define versions for services which were up to date and tested at the time the project was last updated by its developers. These versions may become outdated over time, or perhaps you need or want to use a very specific version of the package.

To adjust the dashboard package version, look at the ~/.tigase-flux/envs/versions.env file and change the value of the DA_VER property:

# K8s Dashboard
DA_VER="5.7.0"

To check the latest available version of the package, run:

~$ helm search hub --max-col-width 65 kubernetes-dashboard | grep "URL\|k8s-dash"
URL                                                              	CHART VERSION	APP VERSION	DESCRIPTION                                   
https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-...	5.7.0        	2.6.0      	General-purpose web UI for Kubernetes clusters

Adjust the versions.env file with the latest CHART VERSION.
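If you prefer to script the bump, something like the sketch below should work. By default it operates on a local demo copy so it can be run safely anywhere; point VERSIONS_FILE at the real ~/.tigase-flux/envs/versions.env to edit the file k8s-scripts actually uses.

```shell
#!/bin/sh
# Sketch: bump DA_VER in versions.env. Operates on a demo copy by default;
# set VERSIONS_FILE=~/.tigase-flux/envs/versions.env to edit the real file.
VERSIONS_FILE="${VERSIONS_FILE:-./versions.env.demo}"
NEW_VER="5.7.0"

# Create a demo file if the target does not exist yet
[ -f "$VERSIONS_FILE" ] || printf '# K8s Dashboard\nDA_VER="5.4.0"\n' > "$VERSIONS_FILE"

# Replace the DA_VER line in place (keeps a .bak backup)
sed -i.bak "s/^DA_VER=.*/DA_VER=\"${NEW_VER}\"/" "$VERSIONS_FILE"
grep '^DA_VER=' "$VERSIONS_FILE"
# -> DA_VER="5.7.0"
```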

K8s dashboard custom settings

If you look at the default values file for the k8s-dashboard, envs/k8s-dashboard-values.yaml, you will see it is empty. So there really is not much to configure.

However, there are some settings worth considering. You may want to disable the metrics scraper if it is not installed on your cluster, and I also highly recommend setting the dashboard session to a longer timeout than the default, or disabling the timeout completely. To do this, add the following to the file:

metricsScraper:
  ## Whether to enable dashboard-metrics-scraper
  enabled: false

extraArgs:
  - --token-ttl=0

Step 3: Installation

The installation step is pretty simple. We just have to run the correct script from the k8s-scripts package: scripts/cluster-kubernetes-dashboard.sh. Before running the script, I suggest executing the flux get all -A and/or flux get hr -A commands, and then running them again after installation to see the difference.

The installation script makes the following changes:

  1. Installs the chart source repository
  2. Creates Helm release manifest for kubernetes-dashboard in the FluxCD’s git repository
  3. Stores your dashboard login token in ~/.tigase-flux/k8s-secrets file on your drive.

Now is the time to run the installation script:

~/temp/k8s-scripts$ ./scripts/cluster-kubernetes-dashboard.sh
Switched to context "xxxxxxx".
      Adding kubernetes-dashboard source at https://kubernetes.github.io/dashboard/
Switched to context "xxxxxx".
~/.tigase-flux/projects/cluster-name
[master 5654e2f] dash deployment
 2 files changed, 12 insertions(+)
 create mode 100644 infra/common/sources/kubernetes-dashboard.yaml
Enumerating objects: 12, done.
Counting objects: 100% (12/12), done.
Delta compression using up to 16 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 839 bytes | 839.00 KiB/s, done.
Total 7 (delta 3), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To https://github.com/a/cluster-name
   d3e0493..5654e2f  master -> master
► annotating GitRepository flux-system in flux-system namespace
✔ GitRepository annotated
◎ waiting for GitRepository reconciliation
✔ fetched revision master/5654e2f596b5413a3b1be87f14775147a873e690
Waiting for the system to be ready
   Deploying dash
Switched to context "xxxxxxxx".
Update service kustomization
~/.tigase-flux/projects/cluster-name
Update namespace kustomization
~/.tigase-flux/projects/cluster-name
Update common kustomization
~/.tigase-flux/projects/cluster-name
[master e89d8b3] dash deployment
 6 files changed, 60 insertions(+)
 create mode 100644 infra/common/k8s/dash/dash.yaml
 create mode 100644 infra/common/k8s/dash/dashboard-service-account.yaml
 create mode 100644 infra/common/k8s/dash/kustomization.yaml
 create mode 100644 infra/common/k8s/kustomization.yaml
 create mode 100644 infra/common/k8s/namespace.yaml
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 16 threads
Compressing objects: 100% (12/12), done.
Writing objects: 100% (12/12), 1.60 KiB | 1.60 MiB/s, done.
Total 12 (delta 1), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To https://github.com/a/cluster-name
   5654e2f..e89d8b3  master -> master
► annotating GitRepository flux-system in flux-system namespace
✔ GitRepository annotated
◎ waiting for GitRepository reconciliation
✔ fetched revision master/e89d8b3ad1d183e8914278cc49d0971b2f951ac9
Waiting for the system to be ready

It seems to be successful; let's check it out:

/.tigase-flux$ flux get hr -A
NAMESPACE      	NAME          	REVISION	SUSPENDED	READY	MESSAGE 
cert-manager   	cert-manager  	v1.8.0  	False    	True 	Release rec...
flux-system    	sealed-secrets	2.1.8   	False    	True 	Release rec...
ingress-nginx  	ingress-nginx 	4.1.2   	False    	True 	Release rec...
k8s            	dash          	5.7.0   	False    	True 	Release rec...

and kubectl:

~/temp/k8s-scripts$ kubectl get pods --namespace k8s
NAME                                         READY   STATUS    RESTARTS   AGE
dash-kubernetes-dashboard-5c565c69d4-nckbg   1/1     Running   0          23m

Step 4: Connecting to dashboard

The k8s dashboard provides access to sensitive information and data on your cluster. Therefore, we do not want to expose the dashboard on ingress to the entire world or make it easily accessible in any other way. Instead, we will just redirect traffic from the dashboard pod on the cluster directly to our machine and use localhost to browse our cluster.

The following command retrieves the k8s dashboard pod name:

export POD_NAME=$(kubectl get pods -n k8s \
    -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=dash" \
    -o jsonpath="{.items[0].metadata.name}")

Example:

~$ export POD_NAME=$(kubectl get pods -n k8s \
    -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=dash" \
    -o jsonpath="{.items[0].metadata.name}")
~$ echo $POD_NAME
dash-kubernetes-dashboard-5c565c69d4-nckbg

Now that we have the pod name, we can redirect traffic from the pod to our localhost using kubectl. In a separate terminal, run the following command:

kubectl -n k8s port-forward $POD_NAME 8443:8443

Example:

~$ kubectl -n k8s port-forward $POD_NAME 8443:8443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443

The first output line shows how to access the k8s dashboard. Open a browser and navigate to the address from the first line: https://127.0.0.1:8443.

If you cannot access the page in Chrome or Brave because the browser blocks it for security reasons, go to chrome://flags/#allow-insecure-localhost, enable the option, and relaunch the browser.
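Before fighting the browser, you can confirm the port-forward is actually serving something with a quick curl check. This is an optional sanity check; -k is needed because the dashboard uses a self-signed certificate, and it assumes the port-forward from the previous step is still running in the other terminal.

```shell
# Print only the HTTP status code; a 2xx/3xx code means the dashboard answered.
# If the port-forward is not running, curl fails and prints 000 instead.
curl -k -s -o /dev/null -w '%{http_code}\n' https://127.0.0.1:8443/ || true
```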

After you open the address, you will see a login page looking like this:

[Screenshot: the Kubernetes Dashboard login page]

You need a token, which is stored on the cluster. The token should have been automatically saved for you in the ~/.tigase-flux/k8s-secrets file. If it is not there, or if it is outdated and does not work, you can retrieve the token again with the following command:

~$ kubectl -n k8s get secret \
    $(kubectl -n k8s get sa/dashboard-admin \
    -o jsonpath="{.secrets[0].name}") \
    -o go-template="{{.data.token | base64decode}}"

eyJhbGciOiJSUzI1NiIsImtpZCI6Imx6TU95Y2VXVjVEU181SlF2UTVXX2kxTWYwV1FpNGp...

As you can see, you get a long string of letters and numbers. Copy and paste the entire thing into the login form, click “Sign in”, and… you are in.
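For the curious: that long string is a JWT, i.e. three base64url-encoded segments joined by dots. The snippet below builds a toy token purely to illustrate the structure; it is not a real credential, and real dashboard tokens drop the '=' padding, which you would need to add back before decoding a segment.

```shell
# Build a toy JWT-shaped token for illustration only -- not a real credential.
header=$(printf '{"alg":"RS256"}' | base64)
payload=$(printf '{"sub":"dashboard-admin"}' | base64)
token="${header}.${payload}.signature"

# Decode the second segment to inspect the claims
printf '%s' "$token" | cut -d. -f2 | base64 -d
# -> {"sub":"dashboard-admin"}
```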

Uninstallation

To uninstall and remove the Kubernetes dashboard from your cluster, simply run the installation script again with the --remove parameter. Please note that, as the scripts use FluxCD, all they really do is modify the Git repository and force a FluxCD cluster reconciliation.

~$ ./scripts/cluster-kubernetes-dashboard.sh --remove
Switched to context "xxxxxx".
    Preparing to remove: dash
    Removing: infra/common/k8s
Update service kustomization for infra/common/ in ~/.tigase-flux/projects/cluster-name
~/.tigase-flux/projects/cluster-name
    Removing: infra/common/sources/dash.yaml
Update service for infra/common/sources in ~/.tigase-flux/projects/cluster-name
~/.tigase-flux/projects/cluster-name
[master 70227d6] Removing dash deployment
 7 files changed, 61 deletions(-)
 delete mode 100644 infra/common/k8s/dash/dash.yaml
 delete mode 100644 infra/common/k8s/dash/dashboard-service-account.yaml
 delete mode 100644 infra/common/k8s/dash/kustomization.yaml
 delete mode 100644 infra/common/k8s/kustomization.yaml
 delete mode 100644 infra/common/k8s/namespace.yaml
Enumerating objects: 12, done.
Counting objects: 100% (12/12), done.
Delta compression using up to 16 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (7/7), 754 bytes | 754.00 KiB/s, done.
Total 7 (delta 3), reused 1 (delta 1), pack-reused 0
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To https://github.com/arturhefczyc/cluster-name
   e89d8b3..70227d6  master -> master
► annotating GitRepository flux-system in flux-system namespace
✔ GitRepository annotated
◎ waiting for GitRepository reconciliation
✔ fetched revision master/70227d63d0d93f3483552a9a694944f06f65d3fa

It may take a few seconds or even minutes after running this script for the service to disappear from the cluster. Eventually, when you check the cluster, the dashboard is no longer there:

~$ flux get hr -A
NAMESPACE      	NAME          	REVISION	SUSPENDED	READY	MESSAGE
cert-manager   	cert-manager  	v1.8.0  	False    	True 	Release rec...
flux-system    	sealed-secrets	2.1.8   	False    	True 	Release rec...
ingress-nginx  	ingress-nginx 	4.1.2   	False    	True 	Release rec...