Active-Passive Jenkins Setup in Kubernetes


We are planning to set up a highly available Jenkins installation on a container platform using Kubernetes. We are looking at running one active master instance and another in standby mode. The Jenkins data volume will be stored on global storage shared between the two master containers.

If the active master instance becomes unavailable, requests should fail over to the other master instance, and the agents should communicate only with the active master.

How do we accomplish a Jenkins HA setup in active/passive mode in Kubernetes? Please share your suggestions.

We would like to achieve the setup shown in the diagram at the link below:

https://endocode.com/img/blog/jenkins-ha-setup_concept.png


3 Answers

StephenKing

This contradicts how, IMHO, one should run applications in Kubernetes. Active/passive is a concept from the past century.

Instead, configure a health check for the Jenkins Deployment. If it fails, Kubernetes will automatically kill the container and start a replacement (which will be available only a few seconds after the active one is detected as unhealthy).
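For illustration, a minimal liveness probe on the Jenkins container could look like the sketch below. The container name, port, and the use of Jenkins' `/login` endpoint (which responds without authentication on a default install) are my assumptions, not part of this answer:

```yaml
# Fragment of a Jenkins Deployment spec -- a sketch, not a full manifest.
# Container name, image tag, port, and timings are illustrative assumptions.
spec:
  template:
    spec:
      containers:
      - name: jenkins                 # assumed container name
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /login              # served without auth on a default install
            port: 8080
          initialDelaySeconds: 120    # give Jenkins time to boot
          periodSeconds: 10
          failureThreshold: 5         # ~50s of failures before a restart
```

For the replacement pod to pick up where the old one left off, `JENKINS_HOME` would need to live on a PersistentVolume, much like the shared storage described in the question.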

arun

If you are OK with an opinionated framework, then Jenkins X may help you. It ships by default with the features you are asking for.

Raunak Jhawar

There have been active discussions about emulating an active/passive setup for containers, but note that Kubernetes does not treat this as a must-have product feature, so it is not built in. It can be implemented as an out-of-band integration, in which case you have to craft your application to do at least the following:

  1. Implement leader election (for controller selection and traffic routing, perhaps via a sidecar container that handles the election and message routing); see the sketch after this list.
  2. Wire the liveness/readiness probe detection routines (and the failover logic) so that pods on the failed side are patched to no longer match any pod selector.
  3. On each subsequent failover, apply another round of label patches (this time across both the old and the new pods) to update the pods' metadata, i.e. their labels.
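A rough sketch of point 1, under my own assumptions: it pairs each Jenkins replica with the old community leader-elector sidecar image and gates readiness on winning the election, so only the elected pod becomes Ready and receives Service traffic. The image, its `:4040` JSON status endpoint, and all names here are assumptions based on that dated sidecar; verify them before relying on this:

```yaml
# Sketch: two Jenkins master replicas, only the elected leader becomes Ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 2
  selector:
    matchLabels:
      app: jenkins-master
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        readinessProbe:               # Ready only while this pod is the leader,
          exec:                       # so the Service routes only to the active master
            command:
            - sh
            - -c
            - 'curl -s http://localhost:4040 | grep -q "$(hostname)"'
          periodSeconds: 5
      - name: leader-elector          # sidecar that runs the election
        image: gcr.io/google_containers/leader-elector:0.5   # assumed, dated image
        args: ["--election=jenkins-master", "--http=0.0.0.0:4040"]
```

Note that the readiness gate only controls which pod receives Service traffic; with a shared `JENKINS_HOME`, both Jenkins processes would still be running against the same volume, which you would have to handle separately.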

If you are looking for something minimal, configuring liveness/readiness probes (as in the first answer above) may just do the trick for you. As always, you should avoid making a practice of mass-mutating pod labels with ad-hoc patches for role selection.
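For concreteness, such an ad-hoc label patch (the pattern this answer advises against making a habit of) would look roughly like the strawman below; the label key and value are made up for illustration:

```yaml
# demote.yaml -- strawman patch that pulls a pod out of a Service whose
# selector requires role: active. Label key/value are illustrative only.
# Applied with something like:
#   kubectl patch pod <pod-name> -p "$(cat demote.yaml)"
metadata:
  labels:
    role: passive
```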

Related upstream discussion: https://github.com/kubernetes/kubernetes/issues/45300