K8s on Oracle Cloud [Part 4]: Basic config and FluxCD setup

18 minute read

Once we have our k8s cluster set up, we need some basic services for our applications, such as cert-manager, ingress, logging, monitoring and so on…

We will be using FluxCD to maintain our cluster; therefore, before deploying any services, we first need to prepare the configuration and set up FluxCD on the cluster. This guide walks through setting it up. It should work for any k8s cluster, as the settings rely on the Kubernetes API only.

To speed the process up, add automation and make sure the entire installation can easily be replayed, we use a set of scripts available in the GitHub repository: k8s-scripts. While there is some documentation for the scripts and you can look into the scripts' source code for more details, this guide expands on them, explaining various options and suggesting optimal settings.

Personal notes: My personal notes on how to set things up, to make it easier to repeat next time.

Step 1: Prerequisites

First, the guide is prepared on/for Linux with the Ubuntu distribution. While it should all work on any Linux distribution and on macOS, there is no guarantee, and some tools might require a different installation procedure.

There are a few basic requirements and tools before you can get all services installed using this guide. They are mostly described in the README of the k8s-scripts repository:

  1. A k8s cluster ready to use and ‘empty’, that is, without any additional services installed yet.
  2. kubectl - installed and configured to connect to your cluster.
  3. pwgen - a command line tool for password generation. It is used to automatically generate and configure passwords for various k8s services. The goal is to have a fully automated and secure system. The scripts offer an option to provide passwords manually as well.
  4. kubeseal - a command line tool to manage sealed secrets for your k8s cluster.
  5. fluxcd - the k8s-scripts use fluxcd to manage and maintain the k8s cluster.
  6. kustomize - fluxcd uses the kustomize tool for, guess what… customizing manifest files.
  7. git - a command line tool to manage code repositories. This is necessary because fluxcd keeps the k8s cluster state in a git repository. Making changes to the k8s cluster using fluxcd is done by making changes to the git repository, which fluxcd periodically syncs with the running k8s cluster and the services on it.
  8. yq - a command line tool for YAML and JSON file manipulation. It allows scripts to change file content, offering a fully automated way to prepare the necessary configuration and manifest files.
  9. bash - a command line shell. Technically you can use any shell you like; however, the k8s-scripts are written for the Bash shell and all the examples are also meant to run in a Bash shell.
  10. A git repository to store the k8s cluster state managed by flux.
  11. S3 object storage configured and ready to use to store cluster backups. You need to know the S3 URL and access/secret keys. The cluster is being deployed on Oracle Cloud and you can keep backups on object storage within Oracle Cloud, but it is recommended to keep backups elsewhere, with a different provider.
  12. A copy of the k8s-scripts repository.
  13. A ~/.tigase-flux folder as the configuration and operation directory.
  14. A copy of the entire k8s-scripts/envs folder in the ~/.tigase-flux/ folder.
  15. Adjusted envs configuration - this will be explained in depth further down in the guide.
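Most of the CLI prerequisites can be verified up front. Here is a minimal pre-flight sketch (not part of k8s-scripts; tool names taken from the list above):

```shell
# Quick pre-flight check: verify the required CLI tools are on the PATH.
# This loop is illustrative only; scripts/scripts-env-init.sh performs
# the authoritative checks.
for tool in kubectl pwgen kubeseal flux kustomize git yq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "MISSING: $tool"
  fi
done
```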

Please note:
While FluxCD can use and store k8s cluster data in any git repository, at the moment of writing this post the k8s-scripts are prepared to use GitHub only. This might be improved in the future to allow configuration for plain git repositories, so check the configuration files for plain git options.
If you prefer to use plain git right now, this can be done by modifying the scripts/flux-bootstrap.sh file to replace flux bootstrap github with flux bootstrap git and the relevant options.
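For reference, a plain-git bootstrap invocation could look roughly like this (a sketch only; the URL, branch and path are placeholders, see flux bootstrap git --help for the full option list):

```shell
# Hypothetical replacement for the "flux bootstrap github" call in
# scripts/flux-bootstrap.sh; URL, branch and path are placeholders.
flux bootstrap git \
  --url=ssh://git@example.com/my-org/my-cluster-repo.git \
  --branch=master \
  --path=clusters/my-cluster
```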

Let’s start with getting a copy of the k8s-scripts repository:

/temp$ git clone https://github.com/tigase/k8s-scripts
Cloning into 'k8s-scripts'...
remote: Enumerating objects: 217, done.
remote: Counting objects: 100% (217/217), done.
remote: Compressing objects: 100% (155/155), done.
remote: Total 217 (delta 121), reused 142 (delta 58), pack-reused 0
Receiving objects: 100% (217/217), 63.79 KiB | 593.00 KiB/s, done.
Resolving deltas: 100% (121/121), done.

~/temp$ ll
total 12
drwxrwxr-x  3 t t 4096 May  5 20:41 ./
drwxr-x--- 61 t t 4096 May  5 20:40 ../
drwxrwxr-x  5 t t 4096 May  5 20:41 k8s-scripts/

~/temp$ cd k8s-scripts/

~/temp/k8s-scripts$ ll
total 60
drwxrwxr-x 5 t t  4096 May  5 20:41 ./
drwxrwxr-x 3 t t  4096 May  5 20:41 ../
drwxrwxr-x 2 t t  4096 May  5 20:41 envs/
drwxrwxr-x 8 t t  4096 May  5 20:41 .git/
-rw-rw-r-- 1 t t 34523 May  5 20:41 LICENSE
-rw-rw-r-- 1 t t  2365 May  5 20:41 README.md
drwxrwxr-x 2 t t  4096 May  5 20:41 scripts/

There is a script which helps to check if all requirements are met: scripts/scripts-env-init.sh. If you run the script, it will tell you if anything is missing:

$ ./scripts/scripts-env-init.sh --check
Config directory /home/t/.tigase-flux does not exist!

Ok, we need to create the .tigase-flux folder:

~/temp/k8s-scripts$ mkdir ~/.tigase-flux
~/temp/k8s-scripts$ ./scripts/scripts-env-init.sh --check
Environment directory /home/t/.tigase-flux/envs does not exist!

We are now one step further; now the .tigase-flux/envs folder is missing. Looking at the README file, we have to copy envs from the repository to the ~/.tigase-flux folder:

~/temp/k8s-scripts$ cp -vr envs/ ~/.tigase-flux/
'envs/cert-man_values.yaml' -> '/home/t/.tigase-flux/envs/cert-man_values.yaml'
'envs/cluster.env' -> '/home/t/.tigase-flux/envs/cluster.env'
'envs/k8s-dashboard-values.yaml' -> '/home/t/.tigase-flux/envs/k8s-dashboard-values.yaml'
'envs/loki-values.yaml' -> '/home/t/.tigase-flux/envs/loki-values.yaml'
'envs/longhorn-values.yaml' -> '/home/t/.tigase-flux/envs/longhorn-values.yaml'
'envs/mailu-values.yaml' -> '/home/t/.tigase-flux/envs/mailu-values.yaml'
'envs/nginx_values.yaml' -> '/home/t/.tigase-flux/envs/nginx_values.yaml'
'envs/onedev-mysql.yaml' -> '/home/t/.tigase-flux/envs/onedev-mysql.yaml'
'envs/onedev.yaml' -> '/home/t/.tigase-flux/envs/onedev.yaml'
'envs/prometheus-values.yaml' -> '/home/t/.tigase-flux/envs/prometheus-values.yaml'
'envs/ss_values.yaml' -> '/home/t/.tigase-flux/envs/ss_values.yaml'
'envs/tigase-server-mysql.yaml' -> '/home/t/.tigase-flux/envs/tigase-server-mysql.yaml'
'envs/tigase-server.yaml' -> '/home/t/.tigase-flux/envs/tigase-server.yaml'
'envs/versions.env' -> '/home/t/.tigase-flux/envs/versions.env'
'envs/weblate.yaml' -> '/home/t/.tigase-flux/envs/weblate.yaml'

~/temp/k8s-scripts$ ./scripts/scripts-env-init.sh --check
kubectl could not be found!

Ok, now the required envs folder is found, but the next check failed: the kubectl command line tool is missing. Once we have all the tools installed and available in the PATH, running the script should output this message:

~/temp/k8s-scripts$ ./scripts/scripts-env-init.sh --check
All seems to be OK and ready to go

And we are ready to go for the next step.

Step 2: Basic configuration

This section walks through all the mandatory settings and some of the more significant optional ones, to give a good understanding of how to prepare the environment and customize all systems for specific needs.

envs/versions.env

This file contains basic metadata for the services being installed, among it each service's chart version. Versions can be adjusted to more recent numbers; however, the whole installation is tested and proven to work with the versions in the repository file. Therefore, I suggest first going through the installation with the existing versions and, once everything is set up and working, adjusting/upgrading versions as needed.
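Assuming versions.env follows the same shell KEY="value" format as cluster.env, a later version bump can itself be scripted; a small sketch (the variable name below is hypothetical, check the file for the names k8s-scripts actually uses):

```shell
# Illustrative only: bump a chart version in a KEY="value" env file with sed.
# NGINX_CHART_VERSION is a hypothetical variable name; we work on a copy
# in /tmp to keep the example self-contained.
printf 'NGINX_CHART_VERSION="4.7.1"\n' > /tmp/versions.env
sed -i 's/^NGINX_CHART_VERSION=.*/NGINX_CHART_VERSION="4.10.0"/' /tmp/versions.env
cat /tmp/versions.env
```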

envs/cluster.env

This is the main file for the entire cluster setup and this is the place where you need to make your first changes:

~/temp/k8s-scripts$ cat envs/cluster.env 
### Generic

PROJECTS_DIR="${CONFIG}/projects"

COLORED_OUTPUT=true

# Run: kubectl config get-contexts
# and copy your context name here
K8S_CONTEXT=""

### Cluster

CLUSTER_NAME="YOUR_CLUSTER_NAME"

### Repository

export GITHUB_USER="YOUR_GITHUB_USER"
export GITHUB_TOKEN="YOUR_GITHUB_TOKEN"

CLUSTER_REPO=${CLUSTER_NAME}
REPO_BRANCH="master"

### Repository and folders structure for the cluster data
CLUSTER_REPO_DIR="${PROJECTS_DIR}/${CLUSTER_REPO}"
INFRA="infra"
BASE="common"
APPS="apps"
BASE_DIR="${INFRA}/${BASE}"
APPS_DIR="${INFRA}/${APPS}"

### Other settings
DEF_INTERVAL=1h
FLUX_NS="flux-system"
SSL_EMAIL="EMAIL_FOR_LETSENCRYPT"
SSL_STAG_ISSUER="letsencrypt-staging"
SSL_PROD_ISSUER="letsencrypt"
ROUTE53_ACCESS_KEY=""
ROUTE53_SECRET_KEY=""

### Longhorn backup settings
export LH_S3_BACKUP_ACCESS_KEY=""
export LH_S3_BACKUP_SECRET_KEY=""

### OneDev backup settings
export ONEDEV_MYSQL_S3_BACKUP="false"
export ONEDEV_MYSQL_S3_BACKUP_ACCESS_KEY=""
export ONEDEV_MYSQL_S3_BACKUP_SECRET_KEY=""
export ONEDEV_MYSQL_S3_BACKUP_SCHEDULE="1 5 * * *"
export ONEDEV_MYSQL_S3_BACKUP_ENDPOINT=""
export ONEDEV_MYSQL_S3_BACKUP_BUCKET=""
export ONEDEV_MYSQL_S3_BACKUP_PREFIX=""
export ONEDEV_MYSQL_S3_BACKUP_EXPIRE_IN="7"
export ONEDEV_DOMAIN=""

As we can see, there are quite a few settings to adjust. Most of them can stay at default values, some of them are not mandatory, but some are essential to a correctly working system. Here is a walk-through of all the settings.

Some variables must be configured by the user. Others are not mandatory to change, but changing them is highly recommended.

  • CONFIG - env variable, by default set to ${HOME}/.tigase-flux. However, it can be set to any folder name where you want to keep all the configuration files and k8s cluster repositories.
  • PROJECTS_DIR - again, by default it points inside the CONFIG folder but can be set to anything else. This is the folder where copies of all the k8s cluster git repositories are stored.
  • COLORED_OUTPUT - during execution, all the scripts print out information about progress, warnings, errors and execution status. For readability, the messages are colored. However, colors may not work on all kinds of terminals. If this causes problems, it can be disabled using this variable.
  • CLUSTER_NAME - this can be set to anything; it does not even need to match the k8s cluster name on Oracle Cloud or any other cloud, but it should clearly identify the k8s cluster. Some folders and files in the git repository will be named using this variable.

  • GITHUB_USER - this is, obviously, your GitHub user. The k8s cluster repository is stored under this account.

  • GITHUB_TOKEN - your GitHub authentication token with access to the repository.

  • CLUSTER_REPO - the k8s cluster repository name in git. By default it is set to the same name as CLUSTER_NAME. It can be changed to something else if, for example, a single repository stores configuration for multiple clusters.
  • REPO_BRANCH - the git repository branch used to reconcile the k8s cluster state with the repository content. It should be set to the branch name under which the production content is stored.
  • CLUSTER_REPO_DIR, INFRA, BASE, APPS, BASE_DIR, APPS_DIR - all related to the repository folder naming and structure. They do not really need to be changed unless there is a strong preference for a different naming/structure convention.

    Maybe it is now a good time to briefly describe how the cluster data is organized within the repository.

    This is a pretty much typical FluxCD repository with a typical folder structure:

    • repo_root is your git repository root folder
    • clusters is the folder within which definitions for the different clusters are stored
    • $CLUSTER_NAME_N are the different k8s clusters whose state is stored in this repository
    • $INFRA is the top folder for all the cluster services configuration and manifests
    • $APPS is the folder for the cluster applications configuration and metadata
    • app_N is a list of applications configured and running on the cluster
    • $BASE is the folder for the cluster's basic and common services configuration and metadata. Services are further grouped within namespaces to logically structure them, like monitoring, logging, backups, etc…
    • sources is a list of chart repositories used by this cluster
    • ns_1, ns_2, ns_3, … are the namespace names of running services; the services themselves are inside the namespace folders

    Now, there is a separation between “apps” and “common services”, but the distinction is really fluid. Formally speaking, everything running on a k8s cluster is just a service. However, there is a set of typical services which most likely run on every single cluster, like ingress, cert-manager, monitoring, etc…, whereas ‘apps’ are more cluster-specific services deployed to interact with users, like an email server, a website, etc… Separating them makes it easier to reproduce the environment for different clusters. It also allows sharing common configuration between different k8s clusters.
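Putting the folder variables together, the default repository layout looks roughly like this (a simplified sketch; folder names come from the variables in envs/cluster.env):

```text
repo_root
├── clusters
│   └── CLUSTER_NAME           # per-cluster entry points (apps.yaml, common.yaml)
└── infra                      # $INFRA
    ├── apps                   # $APPS: cluster-specific applications (app_1, app_2, …)
    └── common                 # $BASE: basic/common services
        ├── sources            # chart repositories used by the cluster
        └── ns_1, ns_2, …      # services grouped by namespace folders
```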

  • DEF_INTERVAL - the time interval at which fluxcd checks the git repository for changes for cluster reconciliation. It can be set to anything from minutes to days. It’s best to select a reasonable interval suitable for the cluster systems; I would suggest selecting the least frequent one possible to avoid unnecessary traffic and load on the system. Please note that every single service is configured separately for the change check. If you have 20 services running on a cluster and all of them are configured with a 1 minute interval, your cluster will be checking the git repository 20 times each minute. 1h is probably the most frequent you can reasonably set. You can always force a cluster reconciliation if necessary.
  • FLUX_NS is a dedicated namespace to keep all flux essential services with elevated privileges. It is best to keep the default value.
  • SSL_EMAIL - this is a mandatory property which needs to be set to a correct email address of the person responsible for the cluster. It is used during connection with Letsencrypt to obtain SSL certificates for services on your cluster. Without setting this to a correct email address, cert-manager will not be able to obtain SSL certificates.

  • SSL_STAG_ISSUER, SSL_PROD_ISSUER - these variables keep the SSL issuer identifiers for the cluster. ‘STAG’ is for staging/testing purposes; it generates certificates suitable for testing, which are not trusted by browsers. It should be used during the test and preliminary deployment phase. ‘PROD’ is the SSL certificate issuer for production use. No need to change these.
  • ROUTE53_ACCESS_KEY, ROUTE53_SECRET_KEY are AWS Route53 access credentials, used by cert-manager for the DNS01 challenge to obtain domain certificates. They are also used by the email installation scripts to update the domain and hostname IP address on Route53. At the moment this is all used by the Mailu.io deployment scripts. Without the DNS01 challenge method, the system cannot obtain SSL certificates for email domains.

  • LH_S3_BACKUP_ACCESS_KEY, LH_S3_BACKUP_SECRET_KEY are Longhorn backup credentials for S3 object storage.

  • ONEDEV_... are all settings related to the OneDev App deployment. No need to worry about these unless you plan on deploying the OneDev App. If you do, it is worth setting them up to have all the data backed up. This backup is separate from Longhorn, as the OneDev App uses MySQL, which in turn does not run efficiently on Longhorn volumes.
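With a long DEF_INTERVAL, changes can still be applied immediately by forcing a reconciliation with the flux CLI once flux is installed; for example (the kustomization name follows the defaults used in this guide):

```shell
# Fetch the latest commit and apply it without waiting for the interval.
flux reconcile kustomization common --with-source -n flux-system
```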

Step 3: Deploying FluxCD

As the description in the k8s-scripts README says, if everything is prepared correctly, we could just run the scripts/cluster-bootstrap.sh script and it would automatically install all the basic services one by one. However, in this guide we will walk through the whole process by installing each service individually, explaining each step and the possible configuration options for each service.

Flux bootstrap

scripts/flux-bootstrap.sh is the first script we have to run to put our cluster under fluxcd control and create the foundation of our cluster git repository.

Please note: after running this command, a repository folder will be created on your local disk, a new repository will be created on GitHub if it does not exist, and the content from your local repository folder will be committed and pushed to GitHub.

As we should already have all the cluster settings prepared, we can run the script right away. However, before we do, let’s check the current cluster state so we can see what changed after running the script:

~/temp/k8s-scripts$ kubectl get all -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   pod/coredns-5f5f66466b-542hd               1/1     Running   0             26h
kube-system   pod/coredns-5f5f66466b-55w4l               1/1     Running   0             25h
kube-system   pod/coredns-5f5f66466b-t6jkq               1/1     Running   0             26h
kube-system   pod/csi-oci-node-2zq2z                     1/1     Running   1 (26h ago)   26h
kube-system   pod/csi-oci-node-5kssj                     1/1     Running   1 (26h ago)   26h
kube-system   pod/csi-oci-node-qkwsb                     1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-dns-autoscaler-6ff58db97d-456xg   1/1     Running   0             25h
kube-system   pod/kube-flannel-ds-c9jvq                  1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-flannel-ds-jmj64                  1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-flannel-ds-zxbq8                  1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-proxy-5tgbr                       1/1     Running   0             26h
kube-system   pod/kube-proxy-lhg79                       1/1     Running   0             26h
kube-system   pod/kube-proxy-nqppb                       1/1     Running   0             26h
kube-system   pod/proxymux-client-2q8j9                  1/1     Running   0             26h
kube-system   pod/proxymux-client-9d6z5                  1/1     Running   0             26h
kube-system   pod/proxymux-client-vmbdv                  1/1     Running   0             26h

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  7d19h
kube-system   service/kube-dns     ClusterIP   10.96.5.5    <none>        53/UDP,53/TCP,9153/TCP   7d19h

NAMESPACE     NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                       AGE
kube-system   daemonset.apps/csi-oci-node               3         3         3       3            3           <none>                              7d19h
kube-system   daemonset.apps/kube-flannel-ds            3         3         3       3            3           <none>                              7d19h
kube-system   daemonset.apps/kube-proxy                 3         3         3       3            3           beta.kubernetes.io/os=linux         7d19h
kube-system   daemonset.apps/nvidia-gpu-device-plugin   0         0         0       0            0           <none>                              7d19h
kube-system   daemonset.apps/proxymux-client            3         3         3       3            3           node.info.ds_proxymux_client=true   7d19h

NAMESPACE     NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns               3/3     3            3           7d19h
kube-system   deployment.apps/kube-dns-autoscaler   1/1     1            1           7d19h

NAMESPACE     NAME                                             DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-5f5f66466b               3         3         3       7d19h
kube-system   replicaset.apps/kube-dns-autoscaler-6ff58db97d   1         1         1       7d19h

We can see here a fairly basic, “empty” k8s cluster.

And also check the flux status on the cluster:

~/temp/k8s-scripts$ flux get all -A
~/temp/k8s-scripts$ 

The flux output is empty, which is correct, because flux is not installed on the cluster yet.

Now, let’s run “the command”:

~/temp/k8s-scripts$ ./scripts/flux-bootstrap.sh 
► connecting to github.com
► cloning branch "master" from Git repository "https://github.com/a/cluster-name.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed sync manifests to "master" ("b9250feb480dd965da3879e05a6a2f2636c331e8")
► pushing component manifests to "https://github.com/a/cluster-name.git"
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
► generating source secret
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ committed sync manifests to "master" ("3a7eb1b37d019d30993cf88b1d22b36898e2db64")
► pushing sync manifests to "https://github.com/a/cluster-name.git"
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
Cloning into 'cluster-name'...
remote: Enumerating objects: 1737, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 1737 (delta 1), reused 6 (delta 0), pack-reused 1730
Receiving objects: 100% (1737/1737), 323.92 KiB | 1.12 MiB/s, done.
Resolving deltas: 100% (644/644), done.
[master ecd4e63] Initial deployment
 6 files changed, 48 insertions(+)
 create mode 100644 clusters/cluster-name/apps.yaml
 create mode 100644 clusters/cluster-name/common.yaml
 create mode 100644 infra/apps/kustomization.yaml
 create mode 100644 infra/common/kustomization.yaml
 create mode 100644 infra/common/sources/chartmuseum.yaml
 create mode 100644 infra/common/sources/kustomization.yaml
Enumerating objects: 17, done.
Counting objects: 100% (17/17), done.
Delta compression using up to 16 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (14/14), 1.29 KiB | 1.29 MiB/s, done.
Total 14 (delta 2), reused 10 (delta 2), pack-reused 0
remote: Resolving deltas: 100% (2/2), done.
To https://github.com/a/cluster-name
   3a7eb1b..ecd4e63  master -> master
► annotating GitRepository flux-system in flux-system namespace
✔ GitRepository annotated
◎ waiting for GitRepository reconciliation
✔ fetched revision master/ecd4e63d6493f9356d05afe548c1cd7b3541b487

No errors or warnings; the deployment was successful. You can now check the content of your GitHub repository to see what has changed.

We can also check the fluxcd status on the cluster:

~/temp/k8s-scripts$ flux get all -A
NAMESPACE  	NAME                     	READY	MESSAGE                                                          	REVISION                                       	SUSPENDED 
flux-system	gitrepository/flux-system	True 	Fetched revision: master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	False    	

NAMESPACE  	NAME                      	READY	MESSAGE                                                                           	REVISION                                                        	SUSPENDED 
flux-system	helmrepository/chartmuseum	True 	Fetched revision: d5596b841d07f4ca4ec3e7098ad5d5f5bae528c556cf28aa3584ecf0a5a8d43a	d5596b841d07f4ca4ec3e7098ad5d5f5bae528c556cf28aa3584ecf0a5a8d43a	False    	

NAMESPACE  	NAME                     	READY	MESSAGE                                                          	REVISION                                       	SUSPENDED 
flux-system	kustomization/apps       	True 	Applied revision: master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	False    	
flux-system	kustomization/common     	True 	Applied revision: master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	False    	
flux-system	kustomization/flux-system	True 	Applied revision: master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	master/ecd4e63d6493f9356d05afe548c1cd7b3541b487	False    	

This time the output is not empty. We can see several components of the flux system installed. Each item on the list has an associated git repository revision, and you can check whether it matches the current revision in your repository.

Similarly, we can check the cluster state with kubectl:

~/temp/k8s-scripts$ kubectl get all -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS      AGE
flux-system   pod/helm-controller-779b58df6b-8w59r           1/1     Running   0             47s
flux-system   pod/kustomize-controller-5db6bfc56d-jpnrz      1/1     Running   0             47s
flux-system   pod/notification-controller-7ccfbfbb98-6grkk   1/1     Running   0             47s
flux-system   pod/source-controller-565f8fbbff-ptfh8         1/1     Running   0             46s
kube-system   pod/coredns-5f5f66466b-542hd                   1/1     Running   0             26h
kube-system   pod/coredns-5f5f66466b-55w4l                   1/1     Running   0             25h
kube-system   pod/coredns-5f5f66466b-t6jkq                   1/1     Running   0             26h
kube-system   pod/csi-oci-node-2zq2z                         1/1     Running   1 (26h ago)   26h
kube-system   pod/csi-oci-node-5kssj                         1/1     Running   1 (26h ago)   26h
kube-system   pod/csi-oci-node-qkwsb                         1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-dns-autoscaler-6ff58db97d-456xg       1/1     Running   0             25h
kube-system   pod/kube-flannel-ds-c9jvq                      1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-flannel-ds-jmj64                      1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-flannel-ds-zxbq8                      1/1     Running   1 (26h ago)   26h
kube-system   pod/kube-proxy-5tgbr                           1/1     Running   0             26h
kube-system   pod/kube-proxy-lhg79                           1/1     Running   0             26h
kube-system   pod/kube-proxy-nqppb                           1/1     Running   0             26h
kube-system   pod/proxymux-client-2q8j9                      1/1     Running   0             26h
kube-system   pod/proxymux-client-9d6z5                      1/1     Running   0             26h
kube-system   pod/proxymux-client-vmbdv                      1/1     Running   0             26h

NAMESPACE     NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                ClusterIP   10.96.0.1       <none>        443/TCP                  7d19h
flux-system   service/notification-controller   ClusterIP   10.96.203.206   <none>        80/TCP                   48s
flux-system   service/source-controller         ClusterIP   10.96.129.209   <none>        80/TCP                   48s
flux-system   service/webhook-receiver          ClusterIP   10.96.131.167   <none>        80/TCP                   48s
kube-system   service/kube-dns                  ClusterIP   10.96.5.5       <none>        53/UDP,53/TCP,9153/TCP   7d19h

NAMESPACE     NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                       AGE
kube-system   daemonset.apps/csi-oci-node               3         3         3       3            3           <none>                              7d19h
kube-system   daemonset.apps/kube-flannel-ds            3         3         3       3            3           <none>                              7d19h
kube-system   daemonset.apps/kube-proxy                 3         3         3       3            3           beta.kubernetes.io/os=linux         7d19h
kube-system   daemonset.apps/nvidia-gpu-device-plugin   0         0         0       0            0           <none>                              7d19h
kube-system   daemonset.apps/proxymux-client            3         3         3       3            3           node.info.ds_proxymux_client=true   7d19h

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
flux-system   deployment.apps/helm-controller           1/1     1            1           48s
flux-system   deployment.apps/kustomize-controller      1/1     1            1           48s
flux-system   deployment.apps/notification-controller   1/1     1            1           48s
flux-system   deployment.apps/source-controller         1/1     1            1           48s
kube-system   deployment.apps/coredns                   3/3     3            3           7d19h
kube-system   deployment.apps/kube-dns-autoscaler       1/1     1            1           7d19h

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
flux-system   replicaset.apps/helm-controller-779b58df6b           1         1         1       48s
flux-system   replicaset.apps/kustomize-controller-5db6bfc56d      1         1         1       48s
flux-system   replicaset.apps/notification-controller-7ccfbfbb98   1         1         1       48s
flux-system   replicaset.apps/source-controller-565f8fbbff         1         1         1       48s
kube-system   replicaset.apps/coredns-5f5f66466b                   3         3         3       7d19h
kube-system   replicaset.apps/kube-dns-autoscaler-6ff58db97d       1         1         1       7d19h

As expected, we can now see several items listed for the flux-system namespace.

From now on, the k8s cluster state and running services are controlled by the content of the git repository. To change the cluster configuration or to add/remove/upgrade any service, we change the content of the git repository: add, remove or modify manifest and configuration files.

And this is precisely what we will be doing for the rest of the guide. All the other scripts in the k8s-scripts collection just modify the git repository for the cluster and push the changes to GitHub. After that, you can wait until fluxcd detects the changes in the repository and runs a reconciliation, or you can force a reconciliation to have the changes applied sooner. All the scripts force a reconciliation, so we do not have to wait.
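From here on, a typical change to the cluster is just a git workflow; a sketch (the repository path, file name and commit message are placeholders):

```shell
# Hypothetical day-to-day workflow: edit a manifest in the cluster repo,
# push it, then ask flux to pick the change up immediately.
cd ~/.tigase-flux/projects/my-cluster      # $CLUSTER_REPO_DIR
$EDITOR infra/common/monitoring/some-service.yaml
git add -A && git commit -m "Tune some-service settings"
git push
flux reconcile source git flux-system      # instead of waiting for DEF_INTERVAL
```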

Uninstallation

To completely remove fluxcd from the k8s cluster, along with all the services installed by FluxCD, all you need to do is run a simple command:

~/temp/k8s-scripts$ flux uninstall
? Are you sure you want to delete Flux and its custom resource definitions? [y/N] y█
► deleting components in flux-system namespace
✔ Deployment/flux-system/helm-controller deleted 
✔ Deployment/flux-system/kustomize-controller deleted 
✔ Deployment/flux-system/notification-controller deleted 
✔ Deployment/flux-system/source-controller deleted 
✔ Service/flux-system/notification-controller deleted 
✔ Service/flux-system/source-controller deleted 
✔ Service/flux-system/webhook-receiver deleted 
✔ NetworkPolicy/flux-system/allow-egress deleted 
✔ NetworkPolicy/flux-system/allow-scraping deleted 
✔ NetworkPolicy/flux-system/allow-webhooks deleted 
✔ ServiceAccount/flux-system/helm-controller deleted 
✔ ServiceAccount/flux-system/kustomize-controller deleted 
✔ ServiceAccount/flux-system/notification-controller deleted 
✔ ServiceAccount/flux-system/source-controller deleted 
✔ ClusterRole/crd-controller-flux-system deleted 
✔ ClusterRoleBinding/cluster-reconciler-flux-system deleted 
✔ ClusterRoleBinding/crd-controller-flux-system deleted 
► deleting toolkit.fluxcd.io finalizers in all namespaces
✔ GitRepository/flux-system/flux-system finalizers deleted 
✔ HelmRepository/flux-system/chartmuseum finalizers deleted 
✔ HelmRepository/flux-system/sealed-secrets finalizers deleted 
✔ HelmChart/flux-system/flux-system-sealed-secrets finalizers deleted 
✔ Kustomization/flux-system/apps finalizers deleted 
✔ Kustomization/flux-system/common finalizers deleted 
✔ Kustomization/flux-system/flux-system finalizers deleted 
✔ HelmRelease/flux-system/sealed-secrets finalizers deleted 
► deleting toolkit.fluxcd.io custom resource definitions
✔ CustomResourceDefinition/alerts.notification.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/buckets.source.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/gitrepositories.source.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/helmcharts.source.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/helmreleases.helm.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/helmrepositories.source.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/kustomizations.kustomize.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/providers.notification.toolkit.fluxcd.io deleted 
✔ CustomResourceDefinition/receivers.notification.toolkit.fluxcd.io deleted 
✔ Namespace/flux-system deleted 
✔ uninstall finished

This reverts the k8s cluster to an “empty” cluster, that is, a cluster with only basic k8s resources. There will be no flux and no services installed by flux on the cluster.

Please note: uninstalling flux does not clean the git repository. All the content, manifest files and configuration are left untouched in the git repository, so you can revert the “uninstallation” by bootstrapping flux with the repository again.