
Hello! A feature branch (also known as a deploy preview or review app) is when not only the master branch is deployed, but every pull request as well, each to its own unique URL. You can check whether the code works in a production environment, and the feature can be shown to other programmers or to product managers. While you are working on a pull request, every new commit deletes the current deployment of the old code and rolls out a new deployment for the new code. But questions arise once you merge the pull request into the master branch: the feature branch is no longer needed, yet its Kubernetes resources are still in the cluster.


More about feature branches


One approach to implementing feature branches in Kubernetes is to use namespaces. In short, the production configuration looks like this:


kind: Namespace
apiVersion: v1
metadata:
  name: habr-back-end
...

kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: habr-back-end
spec:
  replicas: 3
...

For a feature branch, a namespace is created with the branch's identifier (for example, the pull request number) and some kind of prefix/postfix (for example, -pr-):


kind: Namespace
apiVersion: v1
metadata:
  name: habr-back-end-pr-17
...

kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: habr-back-end-pr-17
spec:
  replicas: 1
...

To solve this, I wrote a Kubernetes operator (an application with access to cluster resources); the project is on GitHub: github.com/dmytrostriletskyi/stale-feature-branch-operator. It removes namespaces that belong to old feature branches. In Kubernetes, when you delete a namespace, the other resources in that namespace are deleted automatically as well. For example, stale feature branch namespaces can pile up like this:


$ kubectl get pods --all-namespaces | grep -e "-pr-"
NAMESPACE              ... AGE
habr-back-end-pr-264   ... 4d8h
habr-back-end-pr-265   ... 5d7h
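
Deleting such a namespace by hand also shows why this approach makes cleanup simple: removing the namespace removes everything inside it. A quick check (the namespace name is taken from the listing above):

$ kubectl delete namespace habr-back-end-pr-264
namespace "habr-back-end-pr-264" deleted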

You can read more about how to implement feature branches in a cluster here and here.


Motivation


Let's look at a typical pull request life cycle with continuous integration (CI):


  1. A new commit is pushed to the branch.
  2. Linters and/or tests are run as part of the build.
  3. Kubernetes configurations for the pull request are generated on the fly (for example, its number is substituted into a ready-made template); a minimal sketch of this step follows the list.
  4. The configurations are applied to the cluster with kubectl apply, deploying the code.
  5. The pull request is merged into the master branch.
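
To make steps 3 and 4 concrete, here is a minimal sketch of such a CI step. It is only an illustration: the template file, the {{PR_NUMBER}} placeholder, and the PR_NUMBER variable are hypothetical and depend on your CI system.

$ export PR_NUMBER=17
$ sed "s|{{PR_NUMBER}}|$PR_NUMBER|g" feature-branch.template.yml > feature-branch.yml
$ kubectl apply -f feature-branch.yml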

While you are working on the pull request, every new commit deletes the current deployment of the old code and rolls out a new deployment for the new code. But once the pull request is merged into the master branch, only the master branch is built. The pull request is forgotten, and its Kubernetes resources are still sitting in the cluster.


How to use


Install the project with the command below:


$ kubectl apply -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/configs/production.yml 

Create a file with the following contents and install it via kubectl apply -f:


apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 3
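
For example, assuming you saved the file as stale-feature-branch.yml (the filename is arbitrary):

$ kubectl apply -f stale-feature-branch.yml
stalefeaturebranch.feature-branch.dmytrostriletskyi.com/stale-feature-branch created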

The namespaceSubstring parameter is needed to separate pull request namespaces from the other namespaces. For example, if the cluster has the namespaces habr-back-end, habr-front-end, habr-back-end-pr-17, and habr-back-end-pr-33, the candidates for deletion will be habr-back-end-pr-17 and habr-back-end-pr-33.
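
To preview which namespaces match a given substring, you can run the same kind of filter by hand (the output below is illustrative):

$ kubectl get namespaces | grep -e "-pr-"
habr-back-end-pr-17   Active   4d
habr-back-end-pr-33   Active   5d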


The afterDaysWithoutDeploy parameter is needed to remove old namespaces. For example, if a namespace was created 3 days and 1 hour ago, and the parameter specifies 3 days, the namespace will be deleted. It also works in the opposite direction: if a namespace was created 2 days and 23 hours ago, and the parameter specifies 3 days, the namespace will not be deleted.
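
Judging by this description (and by the namespaceCreationTimestamp field in the operator's logs shown later in the article), the age being compared is the namespace's creation time, which you can inspect yourself; the timestamp below is just an example:

$ kubectl get namespace habr-back-end-pr-17 -o jsonpath='{.metadata.creationTimestamp}'
2020-06-13T15:41:58Z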


There is one more parameter, checkEveryMinutes, which is responsible for how often to scan all namespaces and check them for days without deployments. By default, it is equal to 30 minutes.
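
Putting all three parameters together, a complete StaleFeatureBranch resource can also be created straight from stdin; the values here are only an example:

$ kubectl apply -f - <<EOF
apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 3
  checkEveryMinutes: 30
EOF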


How it works


In practice, you will need:


  1. Docker, for working in isolated environments.
  2. Minikube, to raise a Kubernetes cluster locally.
  3. kubectl, the command-line interface for managing the cluster.

Raise the Kubernetes cluster locally:


$ minikube start --vm-driver=docker
minikube v1.11.0 on Darwin 10.15.5
Using the docker driver based on existing profile.
Starting control plane node minikube in cluster minikube.

Tell kubectl to use the local cluster by default:


$ kubectl config use-context minikube
Switched to context "minikube".

Download the configurations for the production environment:


$ curl https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/configs/production.yml > stale-feature-branch-production-configs.yml 

The production configurations are set up to check for old namespaces, but there are none in our freshly raised cluster, so we replace the environment variable IS_DEBUG with true. With this value, the afterDaysWithoutDeploy parameter is not taken into account and namespaces are not checked for days without deployments, only for the occurrence of the substring (-pr-).


If you are on Linux:


$ sed -i 's|false|true|g' stale-feature-branch-production-configs.yml 

If you are on macOS:


$ sed -i "" 's|false|true|g' stale-feature-branch-production-configs.yml 
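
Optionally, before applying, you can double-check that the flag was flipped. This assumes the variable appears as IS_DEBUG in the manifest, which matches the isDebug field in the operator's logs shown later; the exact output depends on how the manifest is formatted, but the value printed next to it should now be "true":

$ grep -A 1 IS_DEBUG stale-feature-branch-production-configs.yml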

Install the project:


$ kubectl apply -f stale-feature-branch-production-configs.yml 

Check that the StaleFeatureBranch resource type has appeared in the cluster:


$ kubectl api-resources | grep stalefeaturebranches
NAME                   ... APIGROUP                               ... KIND
stalefeaturebranches   ... feature-branch.dmytrostriletskyi.com   ... StaleFeatureBranch

Check that the operator has appeared in the cluster:


$ kubectl get pods --namespace stale-feature-branch-operator
NAME                                             ... STATUS  ... AGE
stale-feature-branch-operator-6bfbfd4df8-m7sch   ... Running ... 38s

If you look at its logs, it is ready to process StaleFeatureBranch resources:


$ kubectl logs stale-feature-branch-operator-6bfbfd4df8-m7sch -n stale-feature-branch-operator
... "msg":"Operator Version: 0.0.1"} ...
... "msg":"Starting EventSource",..., "source":"kind source: /, Kind="}
... "msg":"Starting Controller",...}
... "msg":"Starting workers",..., "worker count":1}

Install a ready-made fixture (prepared configurations for modeling cluster resources) for the StaleFeatureBranch resource:


$ kubectl apply -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/stale-feature-branch.yml 

The configuration tells the operator to look for namespaces containing the substring -pr- once every minute:


apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 1
  checkEveryMinutes: 1

The operator has responded and is ready to check namespaces:


$ kubectl logs stale-feature-branch-operator-6bfbfd4df8-m7sch -n stale-feature-branch-operator
... "msg":"Stale feature branch is being processing.","namespaceSubstring":"-pr-","afterDaysWithoutDeploy":1,"checkEveryMinutes":1,"isDebug":"true"}

We install fixtures containing two namespaces (project-pr-1, project-pr-2) and their deployments, services, secrets, configmaps, ingresses, and so on:


$ kubectl apply -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/first-feature-branch.yml -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/second-feature-branch.yml
...
namespace/project-pr-1 created
deployment.apps/project-pr-1 created
service/project-pr-1 created
horizontalpodautoscaler.autoscaling/project-pr-1 created
secret/project-pr-1 created
configmap/project-pr-1 created
ingress.extensions/project-pr-1 created
namespace/project-pr-2 created
deployment.apps/project-pr-2 created
service/project-pr-2 created
horizontalpodautoscaler.autoscaling/project-pr-2 created
secret/project-pr-2 created
configmap/project-pr-2 created
ingress.extensions/project-pr-2 created

Check that all the resources above have been successfully created:


$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2
...
NAME                              ... READY ... STATUS  ... AGE
pod/project-pr-1-848d5fdff6-rpmzw ... 1/1   ... Running ... 67s

NAME                           ... READY ... AVAILABLE ... AGE
deployment.apps/project-pr-1   ... 1/1   ... 1         ... 67s
...

Since we turned on debug mode, the namespaces project-pr-1 and project-pr-2 and all their resources will be deleted immediately, without the afterDaysWithoutDeploy parameter being taken into account. This can be seen in the operator's logs:


$ kubectl logs stale-feature-branch-operator-6bfbfd4df8-m7sch -n stale-feature-branch-operator
... "msg":"Namespace should be deleted due to debug mode is enabled.","namespaceName":"project-pr-1"}
... "msg":"Namespace is being processing.","namespaceName":"project-pr-1","namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
... "msg":"Namespace has been deleted.","namespaceName":"project-pr-1"}
... "msg":"Namespace should be deleted due to debug mode is enabled.","namespaceName":"project-pr-2"}
... "msg":"Namespace is being processing.","namespaceName":"project-pr-2","namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
... "msg":"Namespace has been deleted.","namespaceName":"project-pr-2"}

If you check the availability of the resources, they will either be in Terminating status (deletion in progress) or already deleted (the command output is empty).


$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2
...

You can repeat the process of creating the fixtures several times and make sure that they are deleted within a minute.
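
For example, you can re-create the fixtures and watch the namespaces disappear in real time:

$ kubectl apply -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/first-feature-branch.yml -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/second-feature-branch.yml
$ kubectl get namespaces --watch | grep -e "-pr-"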


Alternatives


What can be done instead of an operator running in the cluster? There are several approaches, all imperfect (and their shortcomings are subjective), and everyone decides what best suits their particular project:


  1. Delete the feature branch's resources during the continuous integration build of the master branch.


    • To do this, you need to know which pull request the commit being built belongs to. Since the feature branch namespace contains the pull request identifier (its number or the branch name), the identifier would always have to be specified in the commit.
    • Master branch builds can fail. For example, suppose the build has the following steps: download the project, run the tests, build the project, make a release, send notifications, and clear the feature branch of the last pull request. If the build fails while sending a notification, you will have to delete all the resources in the cluster by hand.
    • Without the proper context, deleting a feature branch in the master build is not obvious.

  2. Using webhooks (example).


    • This may not suit your setup. For example, in Jenkins, only one type of pipeline supports saving its configuration in source code. When using webhooks, you need to write your own script to process them, and this script has to live in the Jenkins interface, which is hard to maintain.

  3. Write a CronJob and add it to the Kubernetes cluster (a rough sketch follows this list).


    • It takes time to write and maintain.
    • The operator already works in this style, and it is documented and maintained.
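
For completeness, here is a rough sketch of what such a CronJob might look like. It is entirely hypothetical: the schedule, image, and service account are placeholders, the job would need RBAC permissions to list and delete namespaces, and unlike the operator it only checks the name substring, not the age:

apiVersion: batch/v1beta1  # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: cleanup-stale-feature-branches
spec:
  schedule: "0 * * * *"  # once an hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: feature-branch-cleaner  # needs rights to list/delete namespaces
          restartPolicy: Never
          containers:
            - name: cleanup
              image: bitnami/kubectl  # any image that ships kubectl
              command:
                - /bin/sh
                - -c
                # delete every namespace whose name contains "-pr-";
                # a real job would also check the age, as the operator does
                - kubectl get namespaces -o name | grep -e "-pr-" | xargs -r kubectl delete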


Thank you for your attention to the article. Link to the project on GitHub: github.com/dmytrostriletskyi/stale-feature-branch-operator.

