Deploying loklak Server on Kubernetes with External Elasticsearch
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes is a great cloud platform that ensures cloud applications run reliably. It offers automated health checks, smooth updates, smart rollouts and rollbacks, simple scaling and a lot more.
So as a part of GSoC, I worked on taking the loklak server to Kubernetes on the Google Cloud Platform. In this blog post, I will discuss the approach followed to deploy the development branch of loklak on Kubernetes.
New Docker Image
Since Kubernetes deployments work on Docker images, we needed one for the loklak project. The existing image was not well suited for Kubernetes, as it declared volumes and exposed ports inside the Dockerfile itself. So I wrote a new Docker image that could be used in Kubernetes.
The image simply clones the loklak server repository, builds the project and starts the server as the CMD –
FROM alpine:latest

ENV LANG=en_US.UTF-8
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8

WORKDIR /loklak_server

RUN apk update && apk add openjdk8 git bash && \
    git clone https://github.com/loklak/loklak_server.git /loklak_server && \
    git checkout development && \
    ./gradlew build -x test -x checkstyleTest -x checkstyleMain -x jacocoTestReport && \
    # Some Configurations and Cleanups

CMD ["bin/start.sh", "-Idn"]
[SOURCE]
This image doesn't declare any volumes or exposed ports, so we are free to configure them in the Kubernetes configuration files (discussed in a later section).
Building and Pushing Docker Image using Travis
To automatically build and push the image on a commit to the development or master branch, the Travis CI build is used. In the after_success section, a call to push the Docker image is made.
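As a rough sketch (the script path below is an assumption for illustration, not necessarily the repository's actual layout), the relevant part of .travis.yml would look something like this –
# .travis.yml (excerpt) - push the Docker image only after a successful build
after_success:
  - bash kubernetes/travis/docker_push.sh  # hypothetical script holding the branch checks shown below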
Travis environment variables hold the username and password for Docker hub and are used for logging in –
docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
[SOURCE]
We needed checks there to ensure that we are on the right branch for the push and that we are not building a pull request –
# Build and push Kubernetes Docker image
KUBERNETES_BRANCH=loklak/loklak_server:latest-kubernetes-$TRAVIS_BRANCH
KUBERNETES_COMMIT=loklak/loklak_server:kubernetes-$TRAVIS_COMMIT

if [ "$TRAVIS_BRANCH" == "development" ]; then
    docker build -t loklak_server_kubernetes kubernetes/images/development
    docker tag loklak_server_kubernetes $KUBERNETES_BRANCH
    docker push $KUBERNETES_BRANCH
    docker tag $KUBERNETES_BRANCH $KUBERNETES_COMMIT
    docker push $KUBERNETES_COMMIT
elif [ "$TRAVIS_BRANCH" == "master" ]; then
    : # Build and push master (elided here)
else
    echo "Skipping Kubernetes image push for branch $TRAVIS_BRANCH"
fi
[SOURCE]
Kubernetes Configurations for loklak
A Kubernetes cluster can be completely configured using configuration files written in YAML format. The deployment of loklak uses the previously built image. Initially, the image tagged as latest-kubernetes-development is used –
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: loklak/loklak_server:latest-kubernetes-development
        ...
[SOURCE]
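These configuration files can then be applied to the cluster with kubectl; the directory path below is illustrative only, the actual files are linked as SOURCE –
# Apply all the YAML configurations for the development deployment
# (path is a placeholder; use the directory where the configs live in the repo)
kubectl apply -f kubernetes/yamls/development/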
Readiness and Liveness Probes
Probes act as the primary health checks for a deployment in Kubernetes. They are performed periodically to ensure that things are working fine, and appropriate steps are taken if they fail.
When the image is updated, the older pod keeps running and serving requests. It is replaced by the new one only when the probes on the new pod succeed; otherwise, the update is rolled back.
In loklak, the /api/status.json endpoint gives information about the status of the deployment and hence is a good target for probes –
livenessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
readinessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
[SOURCE]
These probes are performed periodically and the server is restarted if they fail (a non-success HTTP status code or a response taking more than 3 seconds).
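If a probe keeps failing, the pod's restart count and events make it visible; a quick way to check, using the namespace and labels from the Deployment above –
# List the loklak pods with their restart counts, then inspect recent probe events
kubectl --namespace web get pods -l app=server
kubectl --namespace web describe pods -l app=server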
Ports and Volumes
In the configurations, port 80 is exposed, as this is the port on which Jetty serves loklak inside the container –
ports:
- containerPort: 80
  protocol: TCP
[SOURCE]
Notice that this is the same port we used for running the probes. Since the development branch deployment holds no dumps, we didn't need to specify any explicit volumes for persistence.
Load Balancer Service
In the configurations, a Service of type LoadBalancer assigns a new public IP to the deployment using Google Cloud Platform's load balancer and starts listening on port 80 –
ports:
- containerPort: 80
  protocol: TCP
[SOURCE]
Since this service creates a new public IP, it is recommended not to replace or recreate it, as doing so would result in a new public IP being assigned. Other components can be updated individually.
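For reference, a minimal sketch of such a LoadBalancer Service is shown below; the metadata and selector values are assumptions based on the Deployment above, and the actual file is linked as SOURCE –
apiVersion: v1
kind: Service
metadata:
  name: server            # assumed name
  namespace: web
spec:
  type: LoadBalancer      # asks GCP to provision a public IP
  selector:
    app: server           # routes traffic to the loklak pods
  ports:
  - port: 80
    protocol: TCP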
Kubernetes Configurations for Elasticsearch
To maintain a persistent index, this deployment requires an external Elasticsearch cluster. loklak can connect to an external Elasticsearch cluster by changing a few configuration values.
Docker Image and Environment Variables
The image used for Elasticsearch is taken from pires/docker-elasticsearch-kubernetes. It allows Elasticsearch properties to be configured easily through environment variables in the Kubernetes configurations. Out of the full list of configurable variables, we needed just a few of them for our task –
image: quay.io/pires/docker-elasticsearch-kubernetes:2.0.0
env:
- name: KUBERNETES_CA_CERTIFICATE_FILE
  value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- name: NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: "CLUSTER_NAME"
  value: "loklakcluster"
- name: "DISCOVERY_SERVICE"
  value: "elasticsearch"
- name: NODE_MASTER
  value: "true"
- name: NODE_DATA
  value: "true"
- name: HTTP_ENABLE
  value: "true"
[SOURCE]
Persistent Index using Persistent Cloud Disk
To make the index last even after the deployment is stopped, we needed a stable place to store all that data. Here, Google Compute Engine's standard persistent disk was used. The disk can be created using the GCP web console or the gcloud CLI.
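For example, creating such a disk with the gcloud CLI could look like this (the size and zone are assumptions; the name matches the pdName used below) –
# Create a standard persistent disk for the Elasticsearch index
# (size and zone are placeholders; the zone must match the cluster's zone)
gcloud compute disks create data-index-disk --size=100GB --type=pd-standard --zone=us-central1-a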
Before attaching the disk, we need to declare a volume mount, i.e. the path inside the container where the volume will be mounted –
volumeMounts:
- mountPath: /data
  name: storage
[SOURCE]
Now that we have a mount point, we can back the volume with the persistent disk –
volumes:
- name: storage
  gcePersistentDisk:
    pdName: data-index-disk
    fsType: ext4
[SOURCE]
Now, whenever we deploy these configurations, we can reuse the previous index.
Exposing Elasticsearch to the Cluster
The HTTP and transport clients are enabled on ports 9200 and 9300 respectively. They can be exposed to the rest of the cluster using the following service –
apiVersion: v1
kind: Service
...
spec:
  ...
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP
[SOURCE]
Once deployed, other deployments in the cluster can access the Elasticsearch APIs on ports 9200 and 9300.
Connecting loklak to Kubernetes
To connect loklak to the external Elasticsearch cluster, the TransportClient Java API is used. To enable it, we simply need to make a few changes in the configuration.
Since we enable the service named "elasticsearch" in the namespace "elasticsearch", the cluster can be accessed at elasticsearch.elasticsearch:9200 (HTTP) and elasticsearch.elasticsearch:9300 (transport).
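To sanity-check that this DNS name resolves from inside the cluster, one can run a throwaway pod and hit the HTTP endpoint (a quick, assumed check using busybox wget from an alpine image) –
# Start a temporary pod, query the Elasticsearch HTTP API via the service DNS name,
# and remove the pod once the command exits
kubectl run es-check --rm -it --restart=Never --image=alpine -- \
    wget -qO- http://elasticsearch.elasticsearch:9200/_cluster/health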
To confine these changes only to the Kubernetes deployment, we can use the sed command while building the image (in the Dockerfile) –
sed -i.bak 's/^\(elasticsearch_transport.enabled\).*/\1=true/' conf/config.properties && \
sed -i.bak 's/^\(elasticsearch_transport.addresses\).*/\1=elasticsearch.elasticsearch:9300/' conf/config.properties && \
[SOURCE]
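After these substitutions, the relevant lines in conf/config.properties read –
elasticsearch_transport.enabled=true
elasticsearch_transport.addresses=elasticsearch.elasticsearch:9300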
Now when we create the deployments in the Kubernetes cluster, loklak automatically connects to the external Elasticsearch cluster and creates the indices if needed.
Verifying persistence of the Elasticsearch Index
To verify that the data persists, we can completely delete the deployment, or even the whole cluster if we want. Later, when we recreate the deployment, we can see that all the messages are already present in the index.
[2017-07-29 09:42:51,804][INFO ][node ] [Hellion] initializing ...
[2017-07-29 09:42:52,024][INFO ][plugins ] [Hellion] loaded [cloud-kubernetes], sites []
[2017-07-29 09:42:52,055][INFO ][env ] [Hellion] using [1] data paths, mounts [[/data (/dev/sdb)]], net usable_space [84.9gb], net total_space [97.9gb], spins? [possibly], types [ext4]
[2017-07-29 09:42:53,543][INFO ][node ] [Hellion] initialized
[2017-07-29 09:42:53,543][INFO ][node ] [Hellion] starting ...
[2017-07-29 09:42:53,620][INFO ][transport ] [Hellion] publish_address {10.8.1.13:9300}, bound_addresses {10.8.1.13:9300}
[2017-07-29 09:42:53,633][INFO ][discovery ] [Hellion] loklakcluster/cJtXERHETKutq7nujluJvA
[2017-07-29 09:42:57,866][INFO ][cluster.service ] [Hellion] new_master {Hellion}{cJtXERHETKutq7nujluJvA}{10.8.1.13}{10.8.1.13:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2017-07-29 09:42:57,955][INFO ][http ] [Hellion] publish_address {10.8.1.13:9200}, bound_addresses {10.8.1.13:9200}
[2017-07-29 09:42:57,955][INFO ][node ] [Hellion] started
[2017-07-29 09:42:58,082][INFO ][gateway ] [Hellion] recovered [8] indices into cluster_state
In the last line of the logs, we can see that the indices already present on the disk were recovered. Now if we head to the public IP assigned to the cluster, we can see that the message count is restored.
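A quick way to check this is to query the status endpoint on the assigned public IP; the placeholder below stands for the address handed out by the load balancer, and the response includes the message counts mentioned above –
# Fetch loklak's status from the public IP assigned by the load balancer
curl http://<PUBLIC_IP>/api/status.json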
Conclusion
In this blog post, I discussed how we used Kubernetes to move loklak to the Google Cloud Platform. The deployment is active and can be accessed from the link provided under the wiki section of the loklak/loklak_server repo.
I introduced these changes in pull request loklak/loklak_server#1349 with the help of @niranjan94, @uday96 and @chiragw15.
Resources
- Docker Tutorial Series : Writing a Dockerfile – https://rominirani.com/docker-tutorial-series-writing-a-dockerfile-ce5746617cd.
- Introduction to YAML: Creating a Kubernetes deployment – https://www.mirantis.com/blog/introduction-to-yaml-creating-a-kubernetes-deployment/.
- Configure Liveness and Readiness Probes – https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/.
- Setting up HTTP Load Balancing with Ingress – https://cloud.google.com/container-engine/docs/tutorials/http-balancer.
- Persistent Volumes in Kubernetes – https://kubernetes.io/docs/concepts/storage/persistent-volumes/.
- Rolling updates with Kubernetes: Replication Controllers vs Deployments – https://ryaneschinger.com/blog/rolling-updates-kubernetes-replication-controllers-vs-deployments/.
- DNS Pods and Services – https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/.