
Deploying Memcached on Kubernetes Engine: tutorial

November 17, 2017
Julien Phalip

Solutions Architect, Google Cloud

Memcached is one of the most popular open source, multi-purpose caching systems. It usually serves as a temporary store for frequently used data to speed up web applications and lighten database loads. We recently published a tutorial that shows how to deploy a cluster of distributed Memcached servers on Kubernetes Engine using Kubernetes and Helm.

Memcached has two main design goals:

  • Simplicity: Memcached functions like a large hash table and offers a simple API to store and retrieve arbitrarily shaped objects by key. 
  • Speed: Memcached holds cache data exclusively in random-access memory (RAM), making data access extremely fast.

Memcached is a distributed system that allows its hash table’s capacity to scale horizontally across a pool of servers. Each Memcached server operates in complete isolation and is unaware of the other servers in the pool. Therefore, the routing and load balancing between the servers must be done at the client level.
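
To make this concrete, here is a minimal sketch of what such a client-side setup might look like in Python with the pymemcache library. The library choice, the pod IP addresses, and the port are illustrative assumptions, not details taken from the tutorial:

```python
# Minimal sketch: client-side sharding across a pool of Memcached servers.
from pymemcache.client.hash import HashClient

# Each Memcached server is unaware of the others, so the client receives
# the full pool and routes every key to one server by hashing it.
servers = [
    ('10.0.0.1', 11211),  # hypothetical pod IPs of the Memcached servers
    ('10.0.0.2', 11211),
    ('10.0.0.3', 11211),
]
client = HashClient(servers)

# The API behaves like a large hash table: store and retrieve by key.
client.set('greeting', 'hello, cache')
print(client.get('greeting'))  # -> b'hello, cache'
```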

The tutorial explains how to effectively deploy Memcached servers to Kubernetes Engine, and describes how Memcached clients can discover the server endpoints and set up load balancing.
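
For example, if the Memcached pods are exposed through a headless Kubernetes service, a client running inside the cluster can discover every pod IP through a single DNS lookup and shard keys across them. The sketch below assumes a service named mycache-memcached in the default namespace and the standard Memcached port; adjust these names to your own deployment:

```python
# Sketch: discover Memcached pod IPs via the headless service's DNS record,
# then build a sharded client over all of them.
import socket
from pymemcache.client.hash import HashClient

# A headless service resolves to the IPs of all backing pods.
_, _, ips = socket.gethostbyname_ex(
    'mycache-memcached.default.svc.cluster.local')

client = HashClient([(ip, 11211) for ip in sorted(ips)])
client.set('discovered', 'yes')
print(client.get('discovered'))
```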

The tutorial also explains how to improve the system by enabling connection pooling with Mcrouter, a powerful open source Memcached proxy, and discusses advanced optimization techniques for reducing latency between the Memcached clients and the proxies.
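
As a rough illustration, once a proxy such as Mcrouter is in place the client no longer needs to do its own sharding; it simply speaks the standard Memcached protocol to a nearby proxy, which handles routing and keeps pooled connections to the servers. The sketch below assumes Mcrouter runs on every node and listens on port 5000, and that the node's IP is exposed to the pod through a NODE_IP environment variable; these details are assumptions for illustration:

```python
# Sketch: talk to a node-local Mcrouter proxy instead of the servers directly.
import os
from pymemcache.client.base import Client

# Mcrouter speaks the Memcached ASCII protocol, so a plain (non-sharding)
# client works; the proxy routes keys and pools connections to the servers.
proxy = Client((os.environ.get('NODE_IP', 'localhost'), 5000))
proxy.set('via-proxy', 'hello')
print(proxy.get('via-proxy'))  # -> b'hello'
```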

Check out the step-by-step tutorial for all the details on this solution. We hope this will inspire you to deploy caching servers to speed up your applications!
