
Taming the herd: using Zookeeper and Exhibitor on Google Container Engine

April 27, 2016
Vic Iglesias

Senior Product Manager

Zookeeper is the cornerstone of many distributed systems, maintaining configuration information, providing naming, and offering distributed synchronization and group services. But using Zookeeper in a dynamic cloud environment can be challenging because of the way it keeps track of members in the cluster. Luckily, there’s an open source package called Exhibitor that makes it simple to use Zookeeper with Google Container Engine.

With Zookeeper, it’s easy to implement the primitives for coordinating hosts, such as locking and leader election. To provide these features in a highly available manner, Zookeeper uses a clustering mechanism that requires a majority of the cluster members to acknowledge a change before the cluster’s state is committed. In a cloud environment, however, Zookeeper machines come and go from the cluster (or “ensemble” in Zookeeper terms), changing names and IP addresses, and losing state about the ensemble. These sorts of ephemeral cloud instances are not well suited to Zookeeper, which requires all hosts to know the addresses or hostnames of the other hosts in the ensemble.

https://storage.googleapis.com/gweb-cloudblog-publish/images/zookeeper-2dw26.max-700x700.PNG

Netflix tackled this issue by creating Exhibitor, a supervisor process that coordinates the configuration and execution of Zookeeper processes across many hosts, making Zookeeper easier to configure, operate, and debug in cloud environments. Exhibitor also provides the following features for Zookeeper operators and users:

  • Backup and restore
  • A lightweight GUI for Zookeeper nodes
  • A REST API for getting the state of the Zookeeper ensemble
  • Rolling updates of configuration changes
Let’s take a look at using Exhibitor on Container Engine to run Zookeeper as a shared service for our other applications running in a Kubernetes cluster.

To start, provision a file server to host the configuration file shared between your cluster hosts. The easiest way to get a file server up and running on Google Cloud Platform is to create a Single Node File Server using Google Cloud Launcher. For horizontally scalable and highly available shared filesystem options on Cloud Platform, have a look at Red Hat Gluster Storage and Avere. The resulting file server will expose both NFS and SMB file shares.

  1. Provision your file server
  2. Select the Cloud Platform project in which you’d like to launch it
  3. Choose the following parameters for the dimensions of your file server
    • Zone — must match the zone of your Container Engine cluster (to be provisioned later)
    • Machine type — for this tutorial we chose an n1-standard-1, as throughput for our hosted files will not be very high
    • Storage disk size — for this tutorial, we chose a 100GB disk, as we will not need high throughput or IOPS
  4. Click Deploy
Next, create a Container Engine cluster in which to deploy your Zookeeper ensemble.

  1. Create a new Kubernetes cluster using Google Container Engine
  2. Ensure that the zone setting is the same as the one you used to deploy your file server (for the purposes of this tutorial, leave the defaults for the other settings)
  3. Click Create
Once your cluster is created, open up the Cloud Shell in the Cloud Console by clicking the button in the top right of the interface. It looks like:

https://storage.googleapis.com/gweb-cloudblog-publish/images/zookeeper-1479c.max-100x100.PNG

Now set up your Zookeeper ensemble in Kubernetes.

  • Set the default zone for the gcloud CLI to the zone in which you created your file server and Kubernetes cluster:

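For example (the zone shown here is only an illustration; use whichever zone you chose above):

    gcloud config set compute/zone us-central1-f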

  • Download your Kubernetes credentials via the Cloud SDK:

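For example, assuming your cluster is named cluster-1 (substitute the name you gave it when you created the cluster):

    gcloud container clusters get-credentials cluster-1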

  • Create a file named exhibitor.yaml with the following content. This will define our Exhibitor deployment as well as the service that other applications can use to communicate with it:

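The exact manifest isn’t reproduced here, but a minimal sketch looks something like the following. It assumes the community mbabineau/zookeeper-exhibitor image, a file server VM named singlefs-1-vm exporting /data over NFS, and /opt/exhibitor/shared as Exhibitor’s shared-configuration directory; adjust all of these to match your environment.

    # exhibitor.yaml (illustrative sketch; the image, NFS server, export path and
    # mount point below are assumptions, not values from the original tutorial)
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: exhibitor
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: exhibitor
        spec:
          containers:
          - name: exhibitor
            image: mbabineau/zookeeper-exhibitor:latest
            ports:
            - containerPort: 2181   # Zookeeper client port
            - containerPort: 8181   # Exhibitor REST API and web UI
            volumeMounts:
            - name: shared-config
              mountPath: /opt/exhibitor/shared   # folder where Exhibitor reads its shared config
          volumes:
          - name: shared-config
            nfs:
              server: singlefs-1-vm   # hostname of the Single Node File Server
              path: /data             # NFS export on that server
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: exhibitor
    spec:
      selector:
        app: exhibitor
      ports:
      - name: zookeeper
        port: 2181
      - name: exhibitor
        port: 8181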

In this manifest we’re configuring the NFS volume to be attached on each of the pods and mounted in the folder where Exhibitor expects to find its shared configuration.
  • Create the artifacts in that manifest with the kubectl CLI
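For example, assuming the manifest was saved as exhibitor.yaml:

    # kubectl create -f also works; apply keeps later scaling edits simple
    kubectl apply -f exhibitor.yaml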

  • Monitor the pods until they all enter the Running state:

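For example:

    # -w watches for changes; press Ctrl+C once every pod shows STATUS Running
    kubectl get pods -w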

  • Query the Exhibitor API using curl, then use jq to format the JSON response to be more human-readable

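One way to do this from Cloud Shell is to forward the Exhibitor port from one of the pods and query its REST API locally. The pod name below is illustrative (take one from kubectl get pods); the endpoint shown is Exhibitor’s cluster status call:

    # Forward port 8181 from one Exhibitor pod to Cloud Shell (runs in the background)
    kubectl port-forward exhibitor-1234567890-abcde 8181:8181 &

    # Query the ensemble status and pretty-print the JSON with jq
    curl -s http://localhost:8181/exhibitor/v1/cluster/status | jq '.'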

  • After a few minutes, the cluster state will settle and a stable leader will be elected.

You have now completed the tutorial and can use your Exhibitor/Zookeeper cluster just like any other Kubernetes service, by accessing its exposed ports (2181 for Zookeeper and 8181 for Exhibitor) and addressing it via its DNS name, or by using environment variables.
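
For example, assuming the service is named exhibitor and lives in the default namespace, clients running inside the cluster could use addresses like these:

    # Zookeeper connection string for in-cluster clients
    exhibitor.default.svc.cluster.local:2181

    # Exhibitor REST API, e.g. the cluster status endpoint
    http://exhibitor.default.svc.cluster.local:8181/exhibitor/v1/cluster/status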

In order to access the Exhibitor web UI, run the kubectl proxy command on a machine with a browser, then open the Exhibitor service through the proxy.
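
For example (the proxy URL pattern varies with the Kubernetes version; the path below assumes a service named exhibitor exposing port 8181):

    # Start a local proxy to the Kubernetes API server
    kubectl proxy

    # Then browse to the Exhibitor UI through the proxy, at a URL of roughly this form:
    #   http://localhost:8001/api/v1/proxy/namespaces/default/services/exhibitor:8181/exhibitor/v1/ui/index.html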

To scale up your cluster size, simply edit the exhibitor.yaml file and change the replicas to an odd number greater than 3 (e.g., 5), then run “kubectl apply -f exhibitor.yaml” again. Cause havoc in the cluster by killing pods and nodes in order to see how it responds to failures.
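
For example, after changing replicas from 3 to 5 in exhibitor.yaml:

    kubectl apply -f exhibitor.yaml
    kubectl get pods -w   # watch the new pods start and join the ensemble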

https://storage.googleapis.com/gweb-cloudblog-publish/images/zookeeper-3a1cl.max-700x700.PNG