
Guest post: Loot Crate unboxes Google Container Engine for new Sports Crate venture

July 12, 2017
Greg Brown

Director of DevOps, Loot Crate

[Editor’s note: Gamers and superfans know Loot Crate, which delivers boxes of themed swag to 650,000 subscribers every month. Loot Crate built its back end on Heroku, but for its next venture, Sports Crate, the company decided to containerize its Rails app with Google Container Engine and add continuous deployment with Jenkins. Read on to learn how they did it.]

Founded in 2012, Loot Crate is the worldwide leader in fan subscription boxes, partnering with entertainment, gaming and pop culture creators to deliver monthly themed crates, produce interactive experiences and digital content, and film original video productions. In our first five years, we’ve delivered over 14 million crates to fans in 35 territories across the globe.


In early 2017, we were tasked with launching an offering for Major League Baseball fans called Sports Crate. There were only a couple of months until the 2017 MLB season started on April 2nd, so we needed the site up and capturing emails from interested parties as fast as possible. Our wish list also included the ability to scale the site as traffic increased, automated zero-downtime deployments, effective secret management and the benefits of Docker images. Our other Loot Crate properties are built on Heroku, but for Sports Crate, we decided to try Container Engine, which we suspected would allow our app to scale better during peak traffic, let us manage our resources with a single Google login and help us better manage our costs.

Continuous deployment with Jenkins

Our goal was to be able to successfully deploy an application to Container Engine with a simple git push command. We created an auto-scaling, dual-zone Kubernetes cluster on Container Engine, and tackled how to do automated deployments to the cluster. After a lot of research and a conversation with Google Cloud Solutions Architect Vic Iglesias, we decided to go with Jenkins Multibranch Pipelines. We followed this guide on continuous deployment on Kubernetes and soon had a working Jenkins deployment running in our cluster ready to handle deploys.
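
For context, a cluster along those lines can be created with a single gcloud command. The cluster name, zones, machine type and node counts below are placeholders rather than our actual settings:

```bash
# Sketch: auto-scaling, dual-zone Container Engine cluster (placeholder values).
gcloud container clusters create sports-crate \
  --zone us-central1-a \
  --additional-zones us-central1-b \
  --machine-type n1-standard-2 \
  --num-nodes 2 \
  --enable-autoscaling --min-nodes 2 --max-nodes 10
```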

Our next task was to create a Dockerfile for our Rails app to deploy to Container Engine. To speed up build time, we created our own base image with Ruby and our gems already installed, as well as a rake task that precompiles assets and uploads them to Google Cloud Storage when Jenkins builds the Docker image.
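
A stripped-down version of that Dockerfile looks roughly like the sketch below; the base image name is a placeholder and the asset step is simplified:

```dockerfile
# Base image with Ruby and our gems preinstalled (placeholder image name).
FROM gcr.io/our-project/rails-base:latest

WORKDIR /app
COPY . /app

# Only gems added since the base image was built need to be installed here.
RUN bundle install --jobs 4

# Precompile assets; a companion rake task (not shown) uploads them to
# Google Cloud Storage during the Jenkins build.
RUN RAILS_ENV=production bundle exec rake assets:precompile

CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```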

Dockerfile in hand, we set up the Jenkins Pipeline to build the Docker image, push it to Google Container Registry and deploy the Kubernetes deployments and services to our environment. We put a Jenkinsfile in our GitHub repo that uses a switch statement on the GitHub branch name to choose which Kubernetes namespace to deploy to. (We have three QA environments, a staging environment and a production environment.)

The Jenkinsfile checks out our code from GitHub, builds the Docker image, pushes the image to Container Registry, runs a Kubernetes job that performs any database migrations (checking for success or failure) and runs tests. It then deploys the updated Docker image to Container Engine and reports the status of the deploy to Slack. The entire process takes under 3 minutes.
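
Pared down, the Jenkinsfile looks something like the sketch below. The branch-to-namespace mapping, image names and stage contents are illustrative rather than our production pipeline, and the notification step assumes the Jenkins Slack plugin:

```groovy
// Illustrative sketch of the Multibranch Pipeline (placeholder names throughout).
def namespace
switch (env.BRANCH_NAME) {
  case 'master':  namespace = 'production'; break
  case 'staging': namespace = 'staging';    break
  default:        namespace = 'qa1'          // one of the three QA namespaces
}

node {
  checkout scm
  def image = "gcr.io/our-project/sports-crate:${env.BUILD_NUMBER}"

  stage('Build and push image') {
    sh "docker build -t ${image} ."
    sh "gcloud docker -- push ${image}"
  }
  stage('Migrate database') {
    // A Kubernetes Job runs the database migrations; the pipeline checks it for success.
    sh "kubectl --namespace=${namespace} apply -f k8s/migrate-job.yaml"
  }
  stage('Deploy') {
    sh "kubectl --namespace=${namespace} set image deployment/web web=${image}"
  }
  stage('Notify') {
    slackSend channel: '#deploys', message: "Deployed ${env.BRANCH_NAME} to ${namespace}"
  }
}
```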

Improving secret management in the local development environment

Next, we focused on making local development easier and more secure. We do our development locally, and our Heroku-based applications are configured with environment variables set through the Heroku config command or the web UI, which means anyone with a Heroku login and the right permissions can see them. For Sports Crate, we wanted the environment variables to be more secure, so we put them in a Kubernetes Secret that the applications can easily consume; this also keeps the secrets out of the codebase and off developer laptops.
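
As a sketch, the configuration lives in a Secret like the one below, with placeholder keys and values:

```yaml
# Placeholder Secret; real values never live in the codebase.
apiVersion: v1
kind: Secret
metadata:
  name: sports-crate-env
type: Opaque
stringData:
  DATABASE_URL: "postgres://example"
  SECRET_KEY_BASE: "changeme"
```

The deployment's pod spec can then reference the Secret (for example through envFrom with a secretRef, or individual secretKeyRef entries) so the keys show up in the containers as ordinary environment variables.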

The local development environment consumes those environment variables using a railtie that reaches out to Kubernetes, retrieves the secrets for the development environment, parses them and loads them into the Rails environment. This lets our developers "cd" into a repo and run "rails server" or "rails console" with the Kubernetes secrets pulled down before the app starts.
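
In rough outline, the railtie shells out to kubectl, decodes the secret data and merges it into ENV before the app boots. The snippet below is a simplified sketch with hypothetical secret and class names, and it assumes kubectl is already configured on the developer's machine:

```ruby
# Simplified sketch of the secrets railtie (hypothetical names),
# required from config/application.rb.
require "json"
require "base64"

class KubeSecrets < Rails::Railtie
  # Runs before the app's configuration loads, so ENV is populated in time
  # for `rails server` and `rails console`.
  config.before_configuration do
    next unless Rails.env.development?

    raw = `kubectl get secret sports-crate-dev -o json`
    next if raw.strip.empty?

    JSON.parse(raw).fetch("data", {}).each do |key, value|
      ENV[key] ||= Base64.decode64(value)
    end
  end
end
```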

TLS termination and load balancing

Another requirement was effective TLS termination and load balancing. We used a Kubernetes Ingress resource with an Nginx Ingress controller, which gives us the automatic HTTP-to-HTTPS redirection that Google Cloud Platform's (GCP) built-in Ingress controller doesn't provide. Once the Ingress resource was configured with our certificate and the Nginx Ingress controller was running behind a service with a static IP, we could reach our application from the outside world. Things were starting to come together!
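
A minimal Ingress along those lines is sketched below. The host, TLS Secret and service names are placeholders, the annotations assume the Nginx Ingress controller, and the API version reflects Kubernetes as of 2017:

```yaml
# Illustrative Ingress handled by the Nginx controller (placeholder names).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sports-crate
  annotations:
    kubernetes.io/ingress.class: "nginx"        # skip GCP's default controller
    ingress.kubernetes.io/ssl-redirect: "true"  # force HTTP -> HTTPS
spec:
  tls:
    - hosts:
        - sportscrate.example.com
      secretName: sports-crate-tls
  rules:
    - host: sportscrate.example.com
      http:
        paths:
          - backend:
              serviceName: web
              servicePort: 80
```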

Auto-scaling and monitoring

With all of the basic pieces of our infrastructure on GCP in place, we turned to auto-scaling, monitoring and educating our QA team on deployment practices and logging. For pod auto-scaling, we implemented a Kubernetes Horizontal Pod Autoscaler on our deployment (sketched below); it watches CPU utilization and scales the pods up when traffic to the app picks up. For monitoring, we implemented Datadog's Kubernetes Agent and set up metrics that check for critical issues and send alerts to PagerDuty. For logging, we use Stackdriver, and we taught our team how to use the Stackdriver Logging console to drill down to the app, namespace and pod they want information about.
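
The autoscaler itself is a small object; here is a sketch with placeholder replica counts and CPU target:

```yaml
# Illustrative Horizontal Pod Autoscaler for the web deployment (placeholder limits).
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
```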

Net-net

With launch day around the corner, we ran load tests on our new app and were amazed at how well it handled large amounts of traffic. The pods auto-scaled exactly as we needed them to and our QA team fell in love with continuous deployment with Jenkins Multibranch Pipelines. All told, Container Engine met all of our requirements, and we were up and running within a month.


Our next project is to move our other monolithic Rails apps off Heroku and onto Container Engine as decoupled microservices that can take advantage of the newest Kubernetes features. We look forward to building on what has already been an extremely powerful tool.
