Manage Kubernetes Resources with Node.js

Talha Khalid
GitStacks

--

Normally, when you want to interact with the resources of a Kubernetes cluster, you use kubectl, often integrating it into your CI/CD pipelines as well. However, there are scenarios where you need to manage Kubernetes resources programmatically, such as Jobs, Deployments, etc. For this, there are libraries, both official and community-maintained, that let you communicate with the Kubernetes API server in a simple way. In this article, I show you an example of how to do it with Node.js.

The code

In this article I am going to carry out four actions: create a Job resource, list the pods associated with it, delete the Job and its pods, and finally try to list the secrets in the namespace. The goal is to have permission for all of them except the last one, which must be denied through the mechanisms that Kubernetes offers. This gives you a clear example of both an allowed operation and a denied one.
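The four actions can be sketched with the official Kubernetes JavaScript client, @kubernetes/client-node (0.x positional API). The job name, container image, and `default` namespace below are illustrative assumptions, not values from the original code:

```javascript
// Sketch of the four actions with @kubernetes/client-node (0.x positional API).
// The require is guarded so the manifest below stays readable without the package.
let k8s = null;
try {
  k8s = require('@kubernetes/client-node'); // npm install @kubernetes/client-node
} catch (e) { /* package not installed */ }

const namespace = 'default';        // assumption
const jobName = 'node-client-job';  // assumption

// The Job we will create, expressed as a plain JavaScript object.
const jobManifest = {
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: { name: jobName },
  spec: {
    template: {
      spec: {
        containers: [
          { name: 'hello', image: 'busybox', command: ['echo', 'hello from the job'] },
        ],
        restartPolicy: 'Never',
      },
    },
  },
};

async function run() {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // $HOME/.kube/config locally, Service Account in-cluster

  const batchApi = kc.makeApiClient(k8s.BatchV1Api);
  const coreApi = kc.makeApiClient(k8s.CoreV1Api);

  // 1. Create the Job.
  await batchApi.createNamespacedJob(namespace, jobManifest);

  // 2. List its pods: the Job controller labels them with job-name=<name>.
  const pods = await coreApi.listNamespacedPod(
    namespace, undefined, undefined, undefined, undefined, `job-name=${jobName}`);
  pods.body.items.forEach((p) => console.log('Pod:', p.metadata.name));

  // 3. Delete the Job and then its pods.
  await batchApi.deleteNamespacedJob(jobName, namespace);
  for (const p of pods.body.items) {
    await coreApi.deleteNamespacedPod(p.metadata.name, namespace);
  }

  // 4. Try to list the namespace secrets; under the restricted Service
  // Account this should be rejected with 403 Forbidden.
  try {
    const secrets = await coreApi.listNamespacedSecret(namespace);
    console.log('Secrets found:', secrets.body.items.length);
  } catch (err) {
    console.error('Listing secrets was denied:', err.statusCode || err.message);
  }
}

// Call run() while connected to a cluster, e.g.: run().catch(console.error);
```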

In order to execute this code, I have two options:

I can use the kubeconfig file located at $HOME/.kube/config. In this example, it points to an AKS cluster, with a user who has all the permissions, including access to the secrets. If I run it, it should produce output like the following:

If, on the other hand, I want to execute this code within a Kubernetes cluster, the ideal approach is to use a Service Account that has exactly the permissions needed to carry out (or be denied) these operations. For this second case, we will have to create additional resources in the cluster before executing this code.
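In code, these two options map to two loading methods of the client's KubeConfig class. A small sketch, where the `buildConfig` helper name is my own:

```javascript
// Sketch: choosing between the two configuration sources of
// @kubernetes/client-node. The require is guarded so the snippet
// loads even without the package installed.
let k8s = null;
try {
  k8s = require('@kubernetes/client-node');
} catch (e) { /* package not installed */ }

// buildConfig is a hypothetical helper, not part of the library.
function buildConfig(inCluster) {
  const kc = new k8s.KubeConfig();
  if (inCluster) {
    // Reads the Service Account token and CA certificate mounted into the pod.
    kc.loadFromCluster();
  } else {
    // Reads $HOME/.kube/config (or the file pointed to by $KUBECONFIG).
    kc.loadFromDefault();
  }
  return kc;
}
```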

Create a Service Account with the appropriate permissions

The idea here is that this code runs as an application that is part of the cluster, so I want the Pod in charge of doing this work to have the minimum permissions needed to execute what is strictly necessary. To do this, the first thing I have to do is create a Service Account, a Role with the permissions I want to grant, and a RoleBinding that associates the account with those permissions:
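A sketch of what these RBAC resources look like: the kube-client name comes from the article, while the Role and RoleBinding names are my own reconstruction based on the permissions described in the next paragraph:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-client
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-client-role
rules:
  - apiGroups: [""]          # core API group
    resources: ["pods"]
    verbs: ["list", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-client-binding
subjects:
  - kind: ServiceAccount
    name: kube-client
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-client-role
```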

As you can see, in theory, I can list and delete pods and create and delete jobs. Anything other than this should be denied for the kube-client Service Account.

Once this is done, I created a Dockerfile for the code above with the help of Visual Studio Code, built the image, and pushed it to Docker Hub:
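A Dockerfile for a small Node.js client like this typically looks like the following sketch; the base image and file names are assumptions, not the article's actual file:

```dockerfile
# Hypothetical Dockerfile for the Node.js Kubernetes client.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY index.js ./
CMD ["node", "index.js"]
```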

# Build the image
docker build -t 0gis0/kubenodejs .
# Push the image to Docker Hub
docker push 0gis0/kubenodejs

Finally, I have created a resource of type Pod to test both the code in the cluster and the permissions acquired through the associated Service Account:
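A sketch of what k8s/pod.yaml would contain: the image and Service Account names come from the article, while the pod name is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubenodejs
spec:
  serviceAccountName: kube-client   # the account created earlier
  containers:
    - name: kubenodejs
      image: 0gis0/kubenodejs
  restartPolicy: Never
```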

As you can see, it is important to use the serviceAccountName property to tell the pod that it must use the Service Account we just created, which is the one with the permissions our client needs. We can now create this pod in our cluster:

kubectl apply -f k8s/pod.yaml

You will see that it runs correctly. If we retrieve its logs, we will see that all the tasks finished, just as they did locally, except the last one, for which we do not have permission inside the cluster:

If you enjoyed this blog post, you can encourage me to produce more content by buying me a coffee.

--


A full-stack developer and DevOps engineer. Available for technical writing gigs! Contact: talhakhalid101[at]pm.me