Utilizing Readiness Probes for loklak Dependencies in Kubernetes
When an application fails to connect to a service, we usually do not give up; we retry again and again. In reality, though, applications often break when a dependency such as an API or database is not ready yet: the app gets upset and refuses to continue working. Something similar was happening with api.loklak.org.
In such cases we can't rewrite the whole application every time the problem occurs. Instead, we need to define these dependencies in a way that handles the situation gracefully rather than disappointing the users of the loklak app.
Solution:
We simply wait until a dependent API or backend of loklak is ready, and only then start the loklak app. To achieve this, we used Kubernetes health checks.
Kubernetes health checks are divided into liveness and readiness probes.
The purpose of a liveness probe is to indicate that your application is running.
Readiness probes are meant to check if your application is ready to serve traffic.
The right combination of liveness and readiness probes used with Kubernetes deployments can:
Enable zero downtime deploys
Prevent deployment of broken images
Ensure that failed containers are automatically restarted
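To illustrate the difference between the two probe types, here is a minimal sketch of a container spec defining both. The pod name, image, and endpoint paths are illustrative assumptions, not taken from the actual loklak deployment:

```yaml
# Sketch of a Pod with both probe types (names, image, and paths are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: example-app              # hypothetical pod name
spec:
  containers:
  - name: app
    image: example/app:latest    # placeholder image
    ports:
    - containerPort: 80
    livenessProbe:               # fails -> Kubernetes restarts the container
      httpGet:
        path: /healthz           # assumed liveness endpoint
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:              # fails -> Kubernetes withholds traffic
      httpGet:
        path: /ready             # assumed readiness endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
```

Note the different consequences: a failing liveness probe restarts the container, while a failing readiness probe only removes the pod from the service endpoints until it recovers.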
Pod Ready to be Used?
A Pod with a defined readiness probe won't receive any traffic until the defined request can be fulfilled successfully. These health checks are defined through Kubernetes itself, so no changes need to be made to our services (APIs). We just need to set up a readiness probe for the APIs that the loklak server depends on. Here you can see the relevant part of the container spec you need to add (in this example we want to know when loklak is ready):
readinessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
Readiness Probes
Updating a loklak deployment without readiness probes, whenever something is pushed to the development branch, can result in downtime, as old pods are replaced by new pods during the Kubernetes deployment. If the new pods are misconfigured or somehow broken, that downtime extends until you detect the problem and roll back.
With readiness probes, Kubernetes will not send traffic to a pod until its readiness probe succeeds. When updating a loklak deployment, it will also leave the old pods running until the probes have succeeded on the new copies. That means if the new loklak server pods are broken in some way, they will never see traffic; instead, the old loklak server pods will continue to serve all traffic for the deployment.
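Readiness probes pair naturally with a Deployment's rolling-update strategy. The sketch below shows how the two fit together; the deployment name, labels, image tag, and replica counts are illustrative assumptions rather than the actual loklak configuration:

```yaml
# Hypothetical Deployment fragment pairing the readiness probe with a
# zero-downtime rolling update (names and values are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loklak-server            # assumed deployment name
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never remove an old pod before a new one is ready
      maxSurge: 1                # bring up one new pod at a time
  selector:
    matchLabels:
      app: loklak
  template:
    metadata:
      labels:
        app: loklak
    spec:
      containers:
      - name: loklak
        image: loklak/loklak_server:latest   # placeholder tag
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /api/status.json
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 3
```

With `maxUnavailable: 0`, the rollout only proceeds one pod at a time, and each new pod must pass its readiness probe before an old pod is terminated.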
Conclusion
Readiness probes are a simple solution to ensure that Pods with dependencies (in this case, the loklak server) do not receive traffic before their dependencies are ready. This also works with more than one dependency.
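If you need to go one step further and block a Pod from even starting (not just from receiving traffic) until a backend answers, an init container can poll the dependency first. A minimal sketch, assuming a hypothetical backend URL and using a stock busybox image:

```yaml
# Sketch: an init container that waits for a dependent API before the
# main container starts (the backend URL and names are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: loklak-with-dependency   # hypothetical pod name
spec:
  initContainers:
  - name: wait-for-backend
    image: busybox:1.36
    # Loop until the dependent API responds; Kubernetes starts the main
    # container only after this init container exits successfully.
    command:
    - sh
    - -c
    - until wget -q -O- http://backend.example/api/status.json; do echo waiting for backend; sleep 5; done
  containers:
  - name: loklak
    image: loklak/loklak_server:latest   # placeholder
    readinessProbe:
      httpGet:
        path: /api/status.json
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 3
```

The readiness probe then continues to guard the pod after startup, covering the case where the dependency becomes unavailable later.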
Resources
- Code in loklak server: https://github.com/loklak/loklak_server
- More about readiness and liveness probes at: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
- Kubernetes Health checks: https://kubernetes-v1-4.github.io/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks
- Blog posts related to Kubernetes: https://www.ianlewis.org/en/tag/kubernetes
- Tutorial for creating and using Pods and probes: https://github.com/googlecodelabs/orchestrate-with-kubernetes/blob/master/labs/monitoring-and-health-checks.md