Persistently Storing loklak Server Dumps on Kubernetes
In an earlier blog post, I discussed setting up loklak on Kubernetes. That deployment was meant for testing the development branch. Next, we needed a deployment in which all collected messages are dumped into text files that can be reused later.
In this blog post, I will discuss the challenges with such a deployment and the approach we took to tackle them.
Volatile Disk in Kubernetes
The pods that hold deployments in Kubernetes have ephemeral disk storage. Any data written by the application persists only as long as the same version of the deployment is running. As soon as the deployment is updated or the pod is relocated, the data written by the application is cleaned up.
Due to this, dumps are written while loklak is running, but they are wiped out whenever the deployment image is updated. In other words, every image update loses all dumps collected so far. We needed permanent storage for the collected dumps.
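This ephemerality is easy to observe by hand. A quick sketch of the effect (the pod name here is hypothetical; a Deployment's controller will immediately replace a deleted pod):

```
# Write a marker file inside a running pod (pod name is hypothetical)
kubectl exec loklak-master-pod -- sh -c 'echo marker > /loklak_server/data/marker.txt'

# Delete the pod; the Deployment controller starts a fresh replacement
kubectl delete pod loklak-master-pod

# In the replacement pod, /loklak_server/data/marker.txt no longer exists,
# because the new pod starts with a fresh filesystem from the image
```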
To get storage that holds data permanently, we can mount persistent disk(s) on a pod at the appropriate location. This ensures that the data that is important to us stays with us, even when the deployment goes down.
To mount a persistent disk, we first need to create one. On Google Cloud Platform, we can use the gcloud CLI to create a disk in a given zone –
```
gcloud compute disks create --size=<required size> --zone=<same as cluster zone> <unique disk name>
```
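For instance, a 100 GB standard disk matching the persistent volume used later in this post could be created like this (the disk name and zone here are illustrative; use values that match your own cluster):

```
# Hypothetical values: the zone must be the same as the cluster's zone
gcloud compute disks create data-dump-disk \
    --size=100GB \
    --zone=us-central1-a
```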
After this, we can mount it as a volume in the Kubernetes deployment configuration –
```
...
volumeMounts:
- mountPath: /path/to/mount
  name: volume-name
volumes:
- name: volume-name
  gcePersistentDisk:
    pdName: disk-name
    fsType: fileSystemType
```
But this setup can’t be used for storing loklak dumps. Let’s see why in the next section.
Rolling Updates and Persistent Disk
The Kubernetes deployment needs to be updated whenever the master branch of loklak server is updated. Such an update of the master deployment creates a new pod and tries to start loklak server on it. During all this, the older pod keeps running and serving requests.
Control is not transferred to the newer pod until it is ready and all its probes are passing. The newer pod tries to mount the disk mentioned in the configuration, but it fails to do so because the older pod has already mounted the disk (a GCE persistent disk can be attached in read-write mode to only one node at a time).
Therefore, every new deployment would simply fail to start because the disk is unavailable. To overcome such issues, Kubernetes provides persistent volume claims. Let’s see how we used them for the loklak deployment.
Persistent Volume Claims
Kubernetes provides Persistent Volume Claims (PVCs), which claim resources (storage) from a Persistent Volume, just as a pod claims resources from a node. The higher-level APIs are provided by Kubernetes (configurations and the kubectl command line). In the loklak deployment, the persistent volume is backed by a Google Compute Engine disk –
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dump
  namespace: web
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  gcePersistentDisk:
    pdName: "data-dump-disk"
    fsType: "ext4"
```
It must be noted here that a persistent disk by the name of data-dump-disk must already have been created in the same zone as the cluster.
The storage class defines how the PV should be handled, along with the provisioner for the service –
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  namespace: web
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
```
With the StorageClass and PersistentVolume in place, we can create a claim for the volume using an appropriate configuration –
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dump
  namespace: web
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: slow
```
After this, we can mount this claim on our Deployment –
```
...
volumeMounts:
- name: dump
  mountPath: /loklak_server/data
volumes:
- name: dump
  persistentVolumeClaim:
    claimName: dump
```
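Assuming the three manifests above are saved as storage-class.yml, pv.yml and pvc.yml (the file names are hypothetical), they can be applied in order and the binding verified with kubectl:

```
# Hypothetical file names; apply the StorageClass, PV and PVC manifests
kubectl apply -f storage-class.yml
kubectl apply -f pv.yml
kubectl apply -f pvc.yml

# The claim should show STATUS "Bound" once it has been matched to the PV
kubectl get pvc dump --namespace web
```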
Verifying persistence of Dumps
To verify this, we can redeploy the cluster using the same persistent disk and check whether the earlier dumps are still present there –
```
$ http http://link.to.deployment/dump/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Type: text/html;charset=utf-8
...
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
...
<h1>Index of /dump</h1>
<pre>
Name
[gz ] <a href="messages_20170802_71562040.txt.gz">messages_20170802_71562040.txt.gz</a> Thu Aug 03 00:07:21 GMT 2017  132M
[gz ] <a href="messages_20170803_69925009.txt.gz">messages_20170803_69925009.txt.gz</a> Mon Aug 07 15:40:04 GMT 2017  532M
[gz ] <a href="messages_20170807_36357603.txt.gz">messages_20170807_36357603.txt.gz</a> Wed Aug 09 10:26:24 GMT 2017  377M
[txt] <a href="messages_20170809_27974404.txt">messages_20170809_27974404.txt</a> Thu Aug 10 08:51:49 GMT 2017 1564M
<hr></pre>
...
```
In this blog post, I discussed the process of deploying loklak with persistent dumps on Kubernetes. This deployment is intended to serve as root.loklak.org in the near future. The changes were proposed in loklak/loklak_server#1377 by @singhpratyush (me).
Deployment loklak image – https://github.com/loklak/loklak_server/blob/e896ee9e77f818e5c1c872ab0e623af71fa396b8/kubernetes/images/master/Dockerfile.
Kubernetes Volumes – https://kubernetes.io/docs/concepts/storage/volumes/.
Dynamic Provisioning and Storage Classes in Kubernetes – http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html.
Elasticsearch deployment in loklak – https://github.com/loklak/loklak_server/tree/e896ee9e77f818e5c1c872ab0e623af71fa396b8/kubernetes/yamls/development/elasticsearch.