Google Cloud Platform Blog
Product updates, customer stories, and tips and tricks on Google Cloud Platform
In case you missed it in November: Google Cloud Platform Live unveils new products, and Google Compute Engine welcomes new features
Wednesday, November 26, 2014
November has gone by in a flash. So let’s rewind a bit and see what we’ve gotten up to this month.
Google Cloud Platform Live introduces Container Engine, Cloud Networking, Cloud Debugger, and more
What do you get when tens of thousands of developers from around the world join in for one event? Quite a show. On November 4, Google Cloud Platform Live featured keynotes, highlights from our customers, sessions covering topics from mobile apps to cloud computing, and more. We also announced new features that are now available, including:
Google Container Engine
The alpha release of Google Container Engine lets you move from managing application components running on individual virtual machines to launching portable Docker containers that are scheduled into a managed compute cluster for you. Want to learn more? Check out the full announcement below.
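As a rough sketch of that workflow, creating a cluster and scheduling a container onto it looks like the following. The cluster name, zone, and image are placeholders, and the exact commands have evolved since the alpha release, so treat this as illustrative rather than the alpha syntax:

```shell
# Create a managed Kubernetes cluster of three nodes (names and zone are placeholders).
gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3

# Fetch credentials so kubectl talks to the new cluster.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# Schedule a Docker container onto the cluster; the scheduler picks the node for you.
kubectl run hello-web --image=gcr.io/my-project/hello-app:1.0 --port=8080
```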
Google Cloud Networking
We’ve been investing in networking for over a decade at Google to ensure that you always get the best experience. With Google Cloud Platform, we’re focused on bringing our customers that same scale, performance and capability. So with
Google Cloud Interconnect
, it’s now easier for you to connect your network to us.
Firebase = building faster mobile apps
During the event, we demonstrated how Firebase helps you build mobile apps quickly and also gave a sneak preview of some new features the team has been hard at work on. Read more on what Firebase and Google Cloud Platform are getting up to together below.
Cloud Debugger
Cloud Debugger is available in beta and makes it easier to troubleshoot applications in production. Now you can simply pick a line of code, set a watchpoint, and the debugger will return local variables and a full stack trace from the next request that executes that line on any replica of your service. There is zero setup time, no complex configuration, and no performance impact noticeable to your users. Ready to get started? Check out more info below and try it for yourself.
To get a full recap of the event, check out our detailed posts covering all our announcements, or re-live the event with the session videos or a full replay of the whole day.
Curated Ubuntu Images now available on Google Cloud Platform
Our customers spoke and we listened. Google Cloud Platform announced this month that we are now offering Ubuntu 14.04 LTS, 12.04 LTS, and 14.10 guest images in beta, at no additional charge. We’d love to have you take these images for a trial run. Try it and let us know what you think.
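For example, booting a VM from one of these curated images is a one-liner. The instance name and zone are placeholders, and the image flags shown follow current gcloud conventions, which may differ slightly from the beta-era syntax:

```shell
# Launch an instance from the curated Ubuntu 14.04 LTS image
# (instance name and zone are illustrative placeholders).
gcloud compute instances create ubuntu-test \
    --zone us-central1-a \
    --image-family ubuntu-1404-lts \
    --image-project ubuntu-os-cloud
```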
Compute Engine welcomes Autoscaling and Click to Deploy Percona XtraDB Cluster
This month we introduced two highly anticipated features to Google Compute Engine: Autoscaling and Click to Deploy Percona XtraDB Cluster. Autoscaling allows customers to build more cost-effective and resilient applications, while Click to Deploy Percona XtraDB Cluster helps you launch a preconfigured, ready-to-use cluster in just a few clicks.
Free IPv6 address with Cloud SQL instances
It’s no secret that the Internet is quickly running out of IPv4 address space. Google has been at the forefront of IPv6 adoption -- the newer, larger address space -- and we are now assigning an immutable IPv6 address to each and every Cloud SQL instance, and making these addresses available for free. Full details can be found in the announcement below.
Affini-Tech and Framestore: Saving money and creating great customer experiences through Google Cloud Platform
We heard first hand from our customers Affini-Tech and Framestore on how they use our products. Affini-Tech helps their customers make data-driven decisions, while Framestore creates unforgettable visual effects (think “Gravity”). Both companies have seen incredible impacts to their bottom line and with their customers. But don’t just take our word for it -- hear it directly from them in their blog posts.
November, and this year, has given us a lot to be thankful for, so we want to say thank you to all of you who’ve helped us hack, hustle, and go on this journey. To all those outside of the US, thank you for your support, and to those stateside, have a great Thanksgiving.
-Posted by Charlene Lee, Product Marketing Manager
Deploy Percona XtraDB Cluster on Google Compute Engine
Friday, November 21, 2014
Achieving high availability for MySQL can easily become complicated with so many options, including some that go a step further in combining high availability with multi-master replication in a cluster format and more. Not to mention that besides MySQL’s own replication functionality, separate open source projects offer solutions of their own. It’s clear that
each option has pros and cons
, and selecting the best option for your specific use case can require a lot of research and testing.
That's why today we’re making this process easier by introducing Click to Deploy Percona XtraDB Cluster, which helps you launch a preconfigured, ready-to-use cluster in just a few clicks. Percona is an open source company supporting MySQL that provides a convenient package called Percona XtraDB Cluster, which combines Percona Server with Galera replication software. Using Percona XtraDB Cluster, developers can set up and configure a multi-master MySQL cluster with
additional performance, scalability and diagnostic improvements
over standard MySQL. Plus, Percona software is open source and compatible with existing MySQL environments.
Along with Percona XtraDB Cluster, each server node includes Percona Toolkit, a set of command-line tools for administering MySQL, giving developers more flexibility in their options for MySQL clustering. Our goal with this Click to Deploy application is to provide an environment to evaluate Percona XtraDB Cluster as a solid option for high-availability MySQL. We want to help illustrate how Google Compute Engine can serve a team already using MySQL, so you can build your applications however you like.
Learn more about running Percona on Google Compute Engine and deploy a Percona MySQL cluster today with our free trial credit. Please let us know what you think about this feature and the challenges with scaling MySQL in the cloud. You can also contact Percona for Consulting, Support, Managed Services, and Training. Deploy away!
-Posted by Chris Pomeroy, Program Manager
Framestore Frees Up Designers to Create Unforgettable Visual Effects, with Help from Google Compute Engine
Wednesday, November 19, 2014
Today’s guest blog comes from Steve MacPherson, chief technology officer for Framestore, a visual effects production firm headquartered in London. Framestore’s visual effects work has been seen in films like “Avatar,” “Gravity,” and “Guardians of the Galaxy,” winning the company numerous Academy Awards, BAFTAs, and Cannes Lions awards.
If you saw “Gravity,” hopefully you enjoyed the movie’s depiction of space travel and weightlessness. It was a brilliant experience to take on such a daunting project, and very rewarding to be part of the collective effort.
A lot of planning and effort goes into every visual effect, as well as a fairly large amount of computing power. At peak times – when we’re rendering images on several projects at once for advertisers and film studios – we’ll consume the processing power of up to 15,000 Intel cores. Managing peak provisioning and matching resources to projects is central to keeping the production pipeline moving toward various deliveries. The challenge that returns regularly is when demand for resources conflicts with capacity – usually during periods when the stress of delivery is at its highest, and the focus is on realizing the creative goals of our clients in time for the immovable object that is a major release date.
Historically, this boiled down to a simple scenario: purchase additional equipment. A design maxim I've long held is that we've never built a machine room that doesn't eventually run out of space, cooling or power. This “computational load bubble” is a result of both the scale of modern studio films, and the fact that we run multiple films through the facility in parallel. This is the peak provisioning problem that we were looking to address for a number of years. Once films are delivered, the demands recede, and we have an excess capacity for some period of time until we reach the next set of deadlines.
For the past few years, we’ve kept a close eye on the potential for using external resources as an overflow valve. Google, through its Google Compute Engine service, is the first company we've worked with that was able to combine raw resources with a team that understood our requirements in detail and a business model that helped us manage the economics. Within a day of firing up the network, we built our image inside the Compute Engine environment.
At Framestore, we’ve developed a sophisticated in-house job submission system based on our render queue manager, fQ – it’s extremely efficient at juggling various job types to match them accurately to rendering nodes available at any given time. This workflow is central to the sustained high levels of efficiency for our render farm, giving us up to 95% of overall capacity for weeks on end.
Having Google Compute Engine on the back end opened a number of opportunities for us to siphon off a certain class of work during a period of peak production. The load reduction on our farm allowed us to be much more specific in how we prioritized our deliverables, ultimately leading to a much more focused and predictable delivery schedule – great for production and great for maintaining confidence with the studios.
Google gives us breathing room during periods of peak capacity, allowing our artists more flexibility around creative refinement. We do many iterations of an image or a visual effect, making minor technical tweaks and submitting it to the render farm to see if it works. During periods of high load, if all of our rendering is in-house, the creative team might have to wait more than a day to see results when the in-house farm is at capacity. This introduces stress and management overhead around which shots get priority.
By adding Compute Engine to our workflow and allowing our in-house capacity to focus on the studio work, everyone’s project gets computing time – and the creative team can get as imaginative as they want to, with fast views of new iterations.
The results: fewer bottlenecks, more creativity and more predictability, not to mention saving about £200,000 (more than $300,000 USD) on the cores we didn’t need to buy. We can now confidently move into final stages of production on our biggest projects, knowing we have a reserve of computational ability on tap. When you check out new movies this year and next like “Dracula Untold” and “Jupiter Ascending,” you’ll be looking at our visual effects work, created using all the computing power at our fingertips.
Autoscaling, welcome to Google Compute Engine
Monday, November 17, 2014
The true power of cloud computing is unlocked when developers can build resilient and cost-efficient applications that use just the right amount of resources necessary at any given time. So the same team that designed the scaling infrastructure for products like Google Search and Gmail has brought a highly anticipated feature to Google Compute Engine: intelligent horizontal Autoscaling. Today we are releasing the service into Beta, which means it is now available for everyone to start using.
Autoscaling allows customers to build more cost effective and resilient applications. Using Compute Engine Autoscaling, you can ensure that exactly the right number of Compute Engine instances are available at any given time to handle your application’s workload. This saves you money when your application’s usage is low, and ensures your application is responsive when utilization is high.
The Compute Engine Autoscaler is able to intelligently and dynamically scale the number of instances in response to different load conditions by defining the ideal utilization level of a group of Compute Engine instances. This means that when the actual utilization of your service increases or decreases, Autoscaler will detect the change and adjust the number of running instances to match. Autoscaler can respond to a number of different metrics such as CPU load, QPS on an HTTP Load Balancer, and custom metrics defined using the Cloud Monitoring service.
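As a sketch of how such a utilization target is configured, the following enables autoscaling on a managed instance group. The group name, zone, and limits are placeholders, and the flags follow the current gcloud syntax, which may differ from the beta release described here:

```shell
# Keep the group sized so average CPU utilization stays near 75%,
# scaling between 2 and 10 instances as load changes.
gcloud compute instance-groups managed set-autoscaling web-group \
    --zone us-central1-a \
    --min-num-replicas 2 \
    --max-num-replicas 10 \
    --target-cpu-utilization 0.75
```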
One early customer of Compute Engine’s Autoscaler was Wix.com, the popular website-building service. Golan Parashi, Wix.com's Infrastructure Team Lead, commented on how Google uses heuristics to determine how many instances to add at one time to meet demand, “reducing [our] expenses, while giving us confidence that Google will manage the appropriate number of machines, even when a spike occurs."
Autoscaler not only chooses the right number of instances but also adapts automatically based on how far the current state is from the desired target. This means Autoscaler performs well even in unexpected scenarios such as sudden traffic spikes. At Google Cloud Platform Live, we demonstrated how an application could scale from zero to handling over 1.5 million requests per second using Autoscaler.
Here are some additional resources to get you up to speed on Compute Engine’s Autoscaler:
Watch us automatically scale up to 1 million queries per second while on stage talking about Autoscaler at Google Cloud Platform Live
Learn more about
Autoscaling on Google Compute Engine
Learn more about
HTTP Load Balancing
We can’t wait to see what you build - and scale - next on our platform.
-Posted by Filip Balejko, Software Engineer
Affini-Tech Brings Affordable and User-Friendly Big Data Analysis to Retailers, Using Google Cloud Platform
Friday, November 14, 2014
Today’s guest blog comes from Vincent Heuschling, founder and CEO of Affini-Tech, a creator of data platforms that help businesses make data-driven decisions. Affini-Tech is based in Meudon, France and was founded in 2003.
Affini-Tech’s primary goal is helping our customers make data-driven decisions, regardless of the industry they work in. This means giving them easy workflows and web-based interfaces for analyzing and managing Big Data – all at a cost that makes sense for their businesses.
Google Cloud Platform
has become the foundation for everything we do.
When we launched Affini-Tech, we knew we needed a scalable solution for building applications and analyzing information. We explored a number of cloud vendors, including doing an initial storage deployment on another large public cloud. However, after trying Cloud Platform and speaking with the Cloud Platform team, we decided to go all-in with Google. Google's pricing was the most competitive, but we also found it to be the best platform for our developers.
Google App Engine and Google Compute Engine provided us with an integrated technology stack that worked better than anything else on the market, making it easier for us to build complex applications.
We use App Engine to build applications that help our customers to control their data collections, model data sets and filter and group data. We sell these applications to marketing companies that want to run their software on top of our platform. App Engine is flexible enough to allow developers at marketing companies to customize our stack for their own needs.
Cloud Platform also helps us generate data findings at a faster rate. We’re storing data in
Google Cloud Storage
, creating ephemeral Hadoop and Apache Spark clusters, then pushing the data into BigQuery for analysis. Ephemeral clusters provide a more efficient, flexible, and cost-effective processing model than old-fashioned static clusters and take full advantage of the cloud model of computing. The key enablers are the Google Cloud Storage connector, which lets us directly access data on Cloud Storage using standard Hadoop interfaces, and bdutil, which helps us automate cluster deployment. Our customers only have to “pay as they process,” which saves them money. Not to mention, setup takes less time: traditional clusters can take days to install, whereas we can get ephemeral clusters up and running in just minutes.
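The ephemeral pattern described above boils down to: create a cluster, run the job against data in Cloud Storage, and tear the cluster down. A rough sketch follows; the bucket name, job script, and flags are placeholders rather than exact bdutil syntax:

```shell
# 1. Deploy a short-lived Hadoop/Spark cluster staged against a GCS bucket
#    (bucket name and flags are illustrative placeholders).
./bdutil -b my-staging-bucket deploy

# 2. Run the processing job; inputs and outputs live in Cloud Storage,
#    so nothing of value is stored on the cluster itself.
./bdutil shell < run_spark_job.sh

# 3. Delete the cluster -- billing stops, and the results stay in GCS/BigQuery.
./bdutil -b my-staging-bucket delete
```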
We can pass these cost savings onto our customers, which makes our products and services more competitive. Many of our users are used to spending more than $250,000 to build a data analytics platform. We can often provide the same service for $2,000 per month. This saves our customers money and allows them to go deeper with their data analytics. This access allows our customers to create things like micro segments in their customer base so they can do better targeting for their marketing campaigns.
In a way, Google is helping to democratize data, since more businesses can afford to study it. If a customer is already using Google Apps – and many of them are – we can integrate our data platforms into Google Apps, making these tools even easier to use and understand.
As a small company, the support we receive from the Cloud Platform team is helping us think bigger. It enables us to build new tools and platforms that take advantage of big data. We plan to make a push for business beyond France and the retail sector – and we’re confident about our expansion, with Cloud Platform doing the heavy lifting.
- Contributed by Vincent Heuschling, founder and CEO of Affini-Tech
Cloud SQL instances now come with a free IPv6 address
Wednesday, November 12, 2014
The Internet is quickly running out of IPv4 addresses (the traditional IP addresses, e.g. 220.127.116.11), which has led to rationing strategies such as having to pay for reserved addresses. The industry response to this issue has been to develop a new, much larger, address space: IPv6 offers an abundance of available IPs (of the form 2001:4860:4864:1:329a:211f:1d19:a258). Google has been
at the forefront of IPv6 adoption
, and we are now assigning an immutable IPv6 address to each and every Cloud SQL instance and making these addresses available for free.
Everything is ready for you to benefit from IPv6: you can see the IPv6 address of your instance using the gcloud command-line tool. With the current version of the tool, the command is:

gcloud sql instances describe my-instance

The response will be of this form (we’ve shortened the output for readability; the ipv6Address field carries the new address):

currentDiskSize: 82.4 MB
ipv6Address: 2001:4860:4864:1:329a:211f:1d19:a258
maxDiskSize: 250.0 GB
Alternatively, you can see it in the Google Developers Console (in the Properties section on the instance overview page):
All new and existing Cloud SQL instances will be assigned an immutable IPv6 address. Please note however that in order to use it, you need to explicitly authorize your Cloud SQL instances to receive IPv6 traffic. You can do this using the same access control mechanism you use to control IPv4 traffic, by explicitly authorizing the external IPv6 addresses that are allowed to connect to your instance. For details, see the instructions on configuring access control for IP connections.
If you currently connect to your instance over IPv4, you can continue to do so, or you can switch to connect over IPv6. If you make the change, you can then choose to release your IPv4 address and stop paying for it. To do so, follow the instructions to edit the properties of a Cloud SQL instance.
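Putting the pieces together, a minimal flow for moving a client to IPv6 might look like this. The instance name and address range are placeholders, and `--authorized-networks` follows current gcloud syntax, which may differ from the tool version shown above:

```shell
# 1. Look up the instance's immutable IPv6 address.
gcloud sql instances describe my-instance

# 2. Authorize the external IPv6 range your clients connect from
#    (the instance rejects IPv6 traffic until you do this explicitly).
gcloud sql instances patch my-instance \
    --authorized-networks "2001:db8::/32"
```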
We understand how frustrating it is to optimize for artificial constraints, like a limited address space, which is why we’re bringing IPv6 to Google Cloud SQL so you can focus on your applications instead. For any questions, please join us on Stack Overflow using the google-cloud-sql tag.
-- Posted by Easwar Swaminathan, Software Engineer
Building mobile apps faster with Firebase
Tuesday, November 11, 2014
Last week at Google Cloud Platform Live, we showed how Firebase could be used to build mobile apps quickly and also gave a sneak preview of some new features the team has been hard at work on.
One highlight onstage was showing how you could build a fully functional collaborative office planning application in a matter of minutes. Building this app so quickly was possible because Firebase provided the common backend infrastructure needed to get up and running, allowing us to focus on the application’s front-end code and user experience.
We’ve been busy
We’re proud of what Firebase already can do to speed up your development time, but we’re working harder than ever to improve the platform. At last week's event, we announced a new Firebase feature called
enhanced query support
. You can now query data by any child key and have that query update in realtime as your data changes. These improvements make it easier for developers to sort and filter their data and will make many common use cases much easier to implement.
Last week, we also previewed a new type of integration with server-side code that makes it especially easy to connect Firebase with Google App Engine. This new feature, called Triggers, is simple to implement: in your Firebase’s rules, you define a Trigger specification that includes criteria for when the trigger should fire. When the criteria are met, an outbound HTTP request is sent to an external service you specify, along with any data and headers you provide. Use Triggers to submit a payment, send an SMS, ping a server for logging and analytics, or invoke some code on an external server -- all without needing to write your own server-side code.
We used Triggers to connect the office planning application to an analytics backend running on Google App Engine, and then used it to generate a report about how the furniture moved around the room -- all in just a few minutes with a couple of lines of code. Our Triggers feature hasn’t launched yet, so stay tuned for our beta announcement soon.
And are going to get even busier...
We’re continuing to improve Firebase along every dimension, and a big part of that effort will go towards improved integration with the rest of Google Cloud Platform. Although developers building Firebase applications love the simplicity of focusing on client-side development, about half of our developers also run their own server-side code. These developers need to do computationally intensive tasks like video or image processing, perform complex analysis on their data, and keep proprietary business logic on trusted servers. With Firebase joining Google Cloud Platform, developers can power their entire product using a single platform.
This is just the beginning. We’re focused on finding all of the ways Firebase can work seamlessly with other Google Cloud Platform products to bring you the best developer experience possible. To get these updates first, please follow us on Twitter.
As always, we look forward to seeing what you build.
-Posted by Andrew Lee, Product Manager
Open Source + Hosted Containers: A recipe for workload mobility
Monday, November 10, 2014
Last week we announced the availability of the Google Container Engine Alpha, our new service offering based on Kubernetes,
, the open source project we announced in June. One of the advantages of using Kubernetes and Docker containers as the underpinnings of Google Container Engine is the level of portability they offer our customers, as both are designed to run in multiple clouds.
We listened to our customers explain their needs for a multi-cloud strategy, with either mixed public and private deployments or in multiple public clouds, so we decided to focus on mobility as a design goal for our next generation computing service. We also wanted to make sure these advantages would benefit both developers needing to run their workloads in multiple clouds indefinitely, as well as those just getting started and looking to move to the cloud. That’s why Google Cloud Platform is an ideal environment for customers who are in the process of moving to the cloud, want to run only part of an application in the cloud, or need to run an application in multiple clouds. Here are some common hybrid cloud use cases we hear from our customers:
Develop and perform scale out testing in the cloud, but deploy to an on-premises production data center.
A huge benefit to many of our customers is being able to do high throughput scale out testing of an application on resources that are paid for by the minute, because it reduces iteration time and improves team productivity. Because Google Compute Engine is billed in minute quanta, the incremental cost of accelerated scale out testing is low. You pay what you would for sequential testing -- it just happens on more cores and finishes much more quickly. This only works if the framework that runs your application is available in both your production and test environment. It also helps to have a management framework that makes it easy to deploy, orchestrate, and wire together individual tests. Google Container Engine with Docker containers provides a framework that supports the easy deployment and management of an app, and also lets you easily integrate test management and orchestration frameworks.
Migrate a new piece of an application to the cloud, but have parts of it stay on-premises.
With Google Cloud Platform’s newly announced
direct peering and carrier interconnect
network features, it’s now easier to connect a part of an application deployed in the cloud to Google’s data centers with the on-premises parts. With 70+ peering locations in 33 countries, it’s possible to get unprecedented levels of throughput and low latency access to your cloud resources. Many of our customers also highly value a common toolchain and management paradigm, as it makes sense to build modern applications using the same tools, packaging format and management services, but let the pieces that need to stay on-premises remain there.
Burst to the cloud during peak load.
The cloud offers the ability to quickly and easily spin up a large number of VM instances that are charged on a per minute basis. Compute Engine instances tend to boot in around 30 seconds, giving our customers the ability to react quickly to unexpected demand spikes.
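Because instances bill per minute and boot in seconds, bursting can be as simple as creating, and later deleting, a batch of workers on demand. A sketch, where the worker names, count, and zone are placeholders:

```shell
# Spin up ten identical burst workers in one command
# (gcloud accepts multiple instance names; {1..10} is bash brace expansion).
gcloud compute instances create burst-worker-{1..10} --zone us-central1-a

# Delete them the moment the spike subsides, stopping the per-minute billing.
gcloud compute instances delete burst-worker-{1..10} --zone us-central1-a --quiet
```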
Kubernetes and Container Engine were designed from the ground up to meet the needs of those looking to benefit from application mobility. The following properties ensure our customers receive high levels of portability:
Docker has created a highly portable application container framework and is committed to the vision of making it run everywhere. The natural decoupling of application pieces from the OS and infrastructure environment is a really important ingredient in achieving high levels of portability.
Modular to the core.
To become broadly adopted, it was important to allow providers to adapt and extend pieces of the stack without invalidating the core API. We focused on rigorous and principled modularity in the design, and pretty much everything in Kubernetes can be unplugged and replaced by other technologies.
A key insight into what allows portability and mobility is the idea that different pieces of an application may be moved to a different cloud at different times. Kubernetes is built with a focus on micro-services based architecture and ensuring that the pieces of an application are not tied together. The beauty of Kubernetes is that its naturally decoupled model creates the feeling that the pieces are co-deployed. You don’t have to jump through hoops to get a decoupled deployment.
To achieve unprecedented levels of portability of applications, the community has pulled together to support integration from the start. Some of the biggest names in technology have stepped up to help bring Kubernetes to their technology stacks, including Microsoft, IBM, VMware, and HP. Beyond basic integration, a set of our partners have been working hand-in-glove with us on the core product to strengthen the platform, and add new capabilities and abstractions that offer even higher levels of portability. For example, Red Hat has contributed tirelessly to almost every component of the stack and has been instrumental in shaping and improving the overall production readiness of Kubernetes.
In addition, we have relied on CoreOS technologies in Kubernetes for some time, such as using etcd for distributed state management. Looking forward, they are working to deliver new technology to achieve high levels of portability for Kubernetes and have also started developing new capabilities for the platform, most prominently Flannel. Because Kubernetes relies on virtualized networking capabilities, some of our earlier customers indicated that it was challenging to move to environments that were not running on the same virtualized network technologies that Google offers (Andromeda based). With Flannel, we now have a more portable network layer for Kubernetes.
CoreOS also just contributed code to ensure that Kubernetes works well on Amazon Web Services and has signed up to qualify our binary releases and ensure high levels of mobility between Google and Amazon. Alex Polvi, CEO of CoreOS, said, “We really respect the architecture behind Kubernetes. CoreOS stands behind the project and is working to provide support across cloud and on-premises environments to encourage interoperability. You can run Kubernetes in any environment CoreOS supports, which includes AWS and all other major cloud providers.”
We openly invite others to join us on this journey. You can find us on IRC, and the open source project is hosted on GitHub. You can take Container Engine for a free test drive and get all the details you need to get started in our documentation.
-Posted by Craig McLuckie, Product Manager
End printf debugging in the cloud with Google Cloud Debugger Beta
Friday, November 7, 2014
Every developer has experienced the stress of troubleshooting an issue in production. The process usually starts with a customer complaint or an exceeded threshold alert. You immediately jump into “fix it mode” and start searching through application logs trying to find any data that might hint at the underlying cause. Then you trace through the code for a while and realize that the information you need to debug isn’t logged, so you add more logic to capture variables, and redeploy. This ends up being a vicious, repeated cycle -- and you still haven’t identified the root issue.
Earlier this week at
Google Cloud Platform Live
, we launched the beta release of Cloud Debugger which makes it easier to troubleshoot applications in production. Now you can simply pick a line of code, set a watchpoint and the debugger will return local variables and a full stack trace from the next request that executes that line on any replica of your service. There is zero setup time, no complex configurations and no performance impact noticeable to your users.
Back when developers were building client applications that ran on a single thread on a single processor on a single machine, it was much easier to troubleshoot what was going on. Developers would still start with a problem and stare at the code, but they could reproduce the problem, set a breakpoint, inspect the stack and local variables, and quickly find the solution. Cloud Debugger brings this productive style of debugging to modern cloud production troubleshooting.
So why is this style of debugging so hard in the cloud? First, cloud based services often have highly interdependent systems. Stopping one process to debug changes the overall system, which may make the problem harder to reproduce. Second, cloud services are often replicated across many virtual machines, so it is impossible to know on which one to set a breakpoint. Finally, by definition, this is production traffic; you can’t just stop a service in production, giving multiple customers a bad experience. The good news is we have solved each of these problems with Cloud Debugger.
After setting a watchpoint on the line in question, Cloud Debugger simultaneously debugs all instances of your service in production, whether that is a single instance or 10,000 replicas. Cloud Debugger watches execution on all instances, and as soon as one hits the condition, the debugger stops watching on all other instances.
When the watchpoint is hit, the locals and stack are returned. Cloud Debugger does not stop the thread, process or service it is debugging: it pauses execution at the watched line, snapshots the stack and local variables, then returns execution to the normal flow. The overhead is minimal and limited:
negligible overhead on services without active debugging
a small, bounded cost for having an active debugging session
a brief pause to capture the stack and locals
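The one-shot, non-blocking capture described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not the Cloud Debugger agent: each "replica" is a thread that reports its locals at the watched line, and the first one to arrive wins.

```python
import threading

class Watchpoint:
    """One-shot watchpoint shared by every replica of a service.

    Hypothetical sketch: the real Cloud Debugger agent hooks the runtime;
    here each replica explicitly calls maybe_capture() at the watched line.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self.snapshot = None  # (replica_id, locals) captured exactly once

    def maybe_capture(self, replica_id, local_vars):
        # The first replica to reach the watched line takes the snapshot;
        # the watchpoint then deactivates for all other replicas.
        # Execution is never stopped, only briefly paused to copy state.
        with self._lock:
            if self.snapshot is None:
                self.snapshot = (replica_id, dict(local_vars))

wp = Watchpoint()

def handle_request(replica_id, order_total):
    discount = order_total * 0.1            # <- the "watched" line
    wp.maybe_capture(replica_id, locals())  # snapshot, then continue normally
    return order_total - discount

# Simulate four replicas serving traffic concurrently.
replicas = [threading.Thread(target=handle_request, args=(i, 100.0 * (i + 1)))
            for i in range(4)]
for t in replicas:
    t.start()
for t in replicas:
    t.join()

rid, snap = wp.snapshot
print(rid in range(4), snap["discount"] == snap["order_total"] * 0.1)
```

Whichever replica hits the line first, the captured snapshot contains its full local state, and every request still returns through the normal flow.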
Ready to get started? Try it for yourself -- there's no setup required. All you need is a Java Managed VM-based project with its source code in a Cloud repository or in a connected repo. Stay tuned for support for other programming languages and environments. We'd love to hear your feedback.
- Posted by Brad Abrams, Group Product Manager
Cloud Networking: More connectivity choices, better performance, and lower prices
Wednesday, November 5, 2014
At Google, we’ve been investing in networking for over a decade to ensure that our customers always get the best experience. With Google Cloud Platform, we’re focused on bringing our customers that same scale, performance and capability.
Yesterday we announced that we're making it easier for you to connect your network to us with Google Cloud Interconnect. We've also doubled the TCP throughput performance of our network with Andromeda 1.5, and we're substantially lowering egress prices to most Asia-Pacific countries.
Announcing Google Cloud Interconnect
Google Cloud Interconnect is a suite of connectivity options, enabling Cloud Platform customers to connect their network to Google. Today, we are announcing three options:
Carrier Interconnect enables you to connect your network to a service provider with a direct connection to Google. This connection helps provide higher availability and lower latency for your traffic as it travels from your systems to Google.
Our initial launch service providers are Equinix, IX Reach, Level 3, TATA Communications, Telx, Verizon, and Zayo. This diverse set of service providers, along with our globally distributed network edge, will allow you to tailor your connectivity to your business needs.
Direct Peering lets you, if you meet certain requirements, establish a direct peering connection between your business network and Google, and exchange Internet traffic at any of our 70 points of presence across 30 countries.
Carrier Interconnect and Direct Peering customers receive special pricing for local Internet egress in regions that include cloud connections: up to a 50% discount off regular egress rates, with no additional per-port fees.
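For illustration, the discount arithmetic works out as follows. The regular per-GB rate below is an assumed placeholder, since the post states only the discount percentage, not the underlying rates.

```python
def discounted_egress_cost(gb, regular_rate_per_gb, discount=0.50):
    # Local Internet egress cost for a Carrier Interconnect / Direct
    # Peering customer: up to 50% off the regular rate, no per-port fees.
    return gb * regular_rate_per_gb * (1 - discount)

# 10 TB of egress at an assumed (hypothetical) $0.12/GB regular rate,
# with the full 50% discount applied:
print(round(discounted_egress_cost(10_000, 0.12), 2))  # -> 600.0
```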
VPN (alpha coming soon): We're also excited to announce that our VPN service will be available in alpha next month. VPN seamlessly bridges the gap between your on-premises networks and Cloud Platform, providing encrypted tunnels directly into your virtual network, without being bound by the limitations of a single virtual machine or hardware appliance. Whichever way you choose to connect to Cloud Platform, you'll be able to encrypt your traffic if you choose to do so. VPN will become generally available in Q1 2015.
Andromeda 1.5 Dramatically Improves Performance
In April 2014, we introduced Andromeda, Google's network virtualization stack. Now we are rolling out Andromeda 1.5 across Cloud Platform, substantially increasing TCP throughput and connections-per-second limits. With Andromeda, we will continue to innovate and bring you improvements in throughput and latency without the need for new hardware.
Lower APAC Prices
We’re continuing to evolve our pricing structure, reducing Internet egress for APAC, excluding China and Australia, by up to 47%. Due to their particular costs, egress to China and Australia will now be priced separately from APAC. Australia Internet egress is dropping by 10% in the 0-1TB volume tier. China egress rates are staying at their current levels until March 1st, 2015, when new rates will go into effect to reflect higher costs in that region.
Envisioning a world of choice
This year you saw us launch advanced routing capabilities and other core networking infrastructure. But that was just the beginning.
In addition to expanding the breadth of options we offer, we want to enable our partners to offer better service to their customers. Yesterday, Fastly announced a new offering, Cloud Accelerator, that interconnects with Cloud Platform. Fastly reports that in most use cases, Cloud Accelerator customers on Cloud Platform see response times for content requests that are more than 4x faster than on other cloud providers.
We want to continue working to drive innovation in networking and bring you increased flexibility, features, and performance.
-Posted by Morgan Dollard, Product Manager
Unleashing Containers and Kubernetes with Google Container Engine
Tuesday, November 4, 2014
Linux container technologies are changing the way people deploy and manage applications. Google has long relied on these technologies to run our internal workloads, and we are excited to see the growing community momentum around technologies like Docker and Kubernetes.
Today we are announcing the alpha release of a new service: Google Container Engine. Powered by Kubernetes, Container Engine delivers a fully-managed cluster manager for Docker containers. Container Engine lets you get a Docker-packaged application up and running quickly in a logical computing cluster. It frees you from deploying to individual virtual machines, significantly reduces your operational burden, and lets you develop in a far more agile fashion.
Container Engine has taken inspiration from the systems that run Google’s internal workloads. While these systems were originally built to operate at unprecedented levels of scale and efficiency, the patterns they introduced are relevant to everyone. Container Engine allows you to break your application into smaller, atomic units that can be easily organized, managed and wired together. This enables all parts of your application to effortlessly scale to any level you need. By adding the concept of a cluster (a large logical computer that stitches together lots of individual machines) and using Docker to package your application, Container Engine allows you to rapidly test and deploy your application. It takes care of scaling, monitoring and the health of your containers.
Google Container Engine groups together a set of
Google Compute Engine
VMs into a compute cluster that is managed for you. It lets you get the most out of your compute infrastructure and simplifies the development and deployment of distributed applications. Container Engine lets you:
spin up a Kubernetes cluster in minutes on fast-booting Google Compute Engine VMs
deploy a composite or distributed Docker packaged application quickly and easily
make containers accessible to one another and the outside world, with deep integration into the powerful Andromeda-based virtual network
easily manage and monitor the health of your applications
deploy services easily and make them discoverable to other parts of your application
organize your complex systems with a powerful label based management system
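The label-based management mentioned in the last point can be sketched as a simple selector over labeled objects. This is a hypothetical Python illustration of the Kubernetes-style idea, not Container Engine's actual API:

```python
def select(objects, **selector):
    # Return the objects whose labels match every key=value pair in the
    # selector -- the core of label-based organization.
    return [o for o in objects
            if all(o["labels"].get(k) == v for k, v in selector.items())]

# Hypothetical labeled containers in a cluster.
pods = [
    {"name": "web-1", "labels": {"app": "shop", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "shop", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "shop", "tier": "backend"}},
]

frontends = select(pods, app="shop", tier="frontend")
print([p["name"] for p in frontends])  # -> ['web-1', 'web-2']
```

Because selection is by label rather than by machine or instance name, the same query keeps working as containers are rescheduled across the cluster.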
One of the unique things about Google Container Engine is that it was designed from the start with workload mobility in mind, making it easy to create portable applications that can be moved from one hosting environment to another. We understand that our larger customers live in a multi-cloud world (either on-premises plus public cloud, or multiple public clouds). Because Container Engine is based on Kubernetes, and we have worked with a broad array of partners including Microsoft, IBM, Red Hat, and VMware to make Kubernetes work everywhere, you are not locked into Google Cloud Platform. You can build your application in a local development environment and then deploy to Google Cloud Platform, or create applications that can run in multi-cloud environments. The choice is yours.
This is an Alpha release and although it isn’t yet production-ready, Container Engine will continue to become richer and more powerful in the coming weeks and months. Because of the intense interest in containers and Kubernetes, we decided to open Container Engine up for everyone to give us feedback and help guide its development.
Take Container Engine for a free test drive and get the details you need from our documentation. If you want to participate in one of our early customer programs for Container Engine, please sign up here. These programs will directly connect you with our engineering team to help shape the future of the product, and give early support for production use. Stay tuned for more updates.
- Posted by Craig McLuckie, Product Manager
Google Cloud Platform Live: Introducing Container Engine, Cloud Networking and much more
Tuesday, November 4, 2014
Today we are at Google Cloud Platform Live. One of the things we are discussing is that the cloud of today is not yet where developers need it to be. The promise of cloud computing is only partly realized; too many of the headaches of on-premise development and deployment remain. We want to do better. Today, we get one step closer with some important updates to Cloud Platform:
Simple, Flexible Compute Options
Development in the cloud today is by and large a fragmented experience. You need to decide up front whether you want to work with virtual machines — and therefore build everything yourself, either from scratch or by wiring together open source components — or to adopt a managed platform, and give up the ability to control the underlying infrastructure. At Google, we think about compute in the cloud differently: as a continuum which allows you to pick and choose the level of abstraction that is right for your application, or even for a component of your application. Today, we’re happy to announce two important steps towards that vision.
Google Container Engine: run Docker containers in compute clusters, powered by Kubernetes
Google Container Engine
lets you move from managing application components running on individual virtual machines to launching portable Docker containers that are scheduled into a managed compute cluster for you. Create and wire together container-based services, and gain common capabilities like logging, monitoring and health management with no additional effort. Based on the open source Kubernetes project and running on Google Compute Engine VMs, Container Engine is an optimized and efficient way to build your container-based applications. Because it uses the open source project, it also offers a high level of workload mobility, making it easy to move applications between development machines, on-premise systems, and public cloud providers. Container-based applications can run anywhere, but the combination of fast booting, efficient VM hosts and seamless virtualized network integration make Google Cloud Platform the best place to run them.
Managed VMs in App Engine: PaaS - Evolved
App Engine was born of our vision to enable customers to focus on their applications rather than the plumbing. Earlier this year, we gave you a sneak peek at the next step in the evolution of App Engine —
— which will give you all the benefits of App Engine in a flexible virtual machine environment. Today, Managed VMs goes beta and adds auto-scaling support, Cloud SDK integration and support for runtimes built on Docker containers. App Engine provisions and configures all of the ancillary services that are required to build production applications — network routing, load balancing, auto scaling, monitoring and logging — enabling you to focus on application code. Users can run any language or library and customize or replace the entire runtime stack (want to run Node.js on App Engine? Now you can). Furthermore, you have access to the broader array of machine types that Compute Engine offers.
Google Cloud Interconnect: better network connectivity to support global architectures
A flexible, high performance and secure network is the backbone of any Internet-scale application or enterprise IT architecture. Today, we’re making it easier for you to get the benefits of Google’s worldwide fiber network by introducing three new connectivity options:
Direct Peering gives you a fast network pipe directly to Google in any of over 70 points of presence in 33 countries around the world.
Carrier Interconnect enables you to connect your network to Google with our carrier partners, including Equinix, IX Reach, Level 3, TATA Communications, Telx, Verizon, and Zayo.
Next month, we will introduce VPN, which will provide encrypted tunnels directly into your virtual network.
We'll follow up with a deeper look at these new connectivity options.
Firebase: it’s easier to build mobile and web real-time applications
Two weeks ago, we announced that Firebase joined Google. Today, we are demonstrating a hint of what makes their platform so powerful. Users of today's mobile apps expect real-time communication such as chat, presence, commenting and location. However, current developer tools make it cumbersome to manage the relationship between multiple devices and the underlying database and storage layer in real time. Firebase makes this easier, which is why it powers over 60,000 applications. We'll follow up with a deeper look at their technology.
Google Cloud Debugger: ending printf-style debugging
At Google I/O, we gave you a sneak peek at how Cloud Debugger makes it easier to troubleshoot applications in production. Today, this service is publicly available in beta. With Cloud Debugger there's no more hunting through logs to guess at what is going on with your services. Now you can simply pick a line of code, set a watchpoint, and the debugger will return locals and a full stack trace from the next request that executes that line on any replica of your service. There is zero setup time, no complex configuration, and no performance impact noticeable to your users.
Google Compute Engine Autoscaler
Today we are launching Compute Engine Autoscaler. It uses the same technology Google uses to seamlessly handle huge spikes in load, and it gives developers the ability to dynamically resize a VM fleet in response to a wide array of signals, from the QPS of an HTTP Load Balancer, to VM CPU utilization, to custom metrics from the Cloud Monitoring service.
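The core resize logic behind such an autoscaler can be sketched as target tracking: scale the fleet so that the observed signal per VM returns to a target level. This is a hypothetical illustration of the idea, not the actual Autoscaler algorithm:

```python
import math

def autoscale(current_vms, observed_util, target_util,
              min_vms=1, max_vms=1000):
    # Target tracking: pick the fleet size that brings per-VM utilization
    # (or QPS, or a custom metric) back to the target level, clamped to
    # the configured fleet-size bounds.
    if observed_util <= 0:
        return min_vms
    desired = math.ceil(current_vms * observed_util / target_util)
    return max(min_vms, min(max_vms, desired))

# A load spike pushes CPU utilization to 90% against a 60% target:
print(autoscale(10, observed_util=0.90, target_util=0.60))  # -> 15
```

The same rule shrinks the fleet when load falls, so you pay only for the capacity the current traffic actually needs.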
Cloud Platform Free Trial
New customers can now sign up for a free trial and receive $300 in credits that you can spend on all Cloud Platform products and services. There are no ongoing commitments — we will never charge your credit card until you upgrade your account. With $300 you can run two n1-standard-2 VMs 24x7 for 60 days, store over 11TB of data, or process over 60TB of data with BigQuery.
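As a sanity check on that claim, the implied per-VM hourly rate can be worked out from the numbers in the post (back-of-envelope arithmetic only; no official pricing is stated here):

```python
# Two n1-standard-2 VMs running 24x7 for 60 days on a $300 credit:
budget = 300.0
vm_count = 2
hours = 24 * 60                    # 60 days of round-the-clock use
total_vm_hours = vm_count * hours  # 2880 VM-hours
implied_rate = budget / total_vm_hours
print(total_vm_hours, round(implied_rate, 4))  # -> 2880 0.1042
```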
Learn more about the free trial and start building something for free.
A growing partner ecosystem
Our Partner Lounge at the SF event features many of our partners. Bitnami announced its offering for Google Cloud Platform featuring almost 100 cloud images, enabling our users to deploy common open source applications and development environments on our infrastructure in one click. Fastly announced a new offering called Cloud Accelerator, a collaboration with Google Cloud Platform that improves content delivery and performance at the edge.
Over the past months, thousands of new companies have moved to Cloud Platform and adopted it as their development platform of choice. Kevin Baillie took the stage to talk about how Atomic Fiction is able to use thousands of Compute Engine cores to produce high-quality visual effects for Hollywood studios. We also spoke about Wix, one of the most popular consumer website builders, whose media services are built entirely on Cloud Platform; we're happy to support the launch of their media services platform today. Finally, Office Depot moved its entire printing service from a hosted storage solution to one powered by Cloud Platform — helping them reduce cost, develop with greater agility and power their in-store and online printing service for over 2000 locations.
New price reductions: continued leadership in price-performance
As always, we have an enduring commitment to passing along the savings we receive from Moore's Law to our users. That is why today we're announcing price reductions on network egress (47%), BigQuery storage (23%), Persistent Disk Snapshots (79%), Persistent Disk SSD (48%), and Cloud SQL (25%). These are in addition to the 10% reduction on Google Compute Engine that we announced at the beginning of October, and they reflect our commitment to make sure you benefit from increased efficiency and falling hardware prices.
A personal note
I want to end on a personal note. I joined Google just two months ago, and during this time I’ve been floored by what our teams are doing to create the world’s best cloud. We are committed to not just evolving technology for technology’s sake but to staying focused on the user and delivering real value. From what I’ve seen at Google I don’t think the combination of world class technology, innovation and user-focus exists anywhere else in the world today. Today’s announcements are representative of that, and we have so much more in store.
-Posted by Brian Stevens, VP of Product Management
Curated Ubuntu Images now available on Google Cloud Platform
Monday, November 3, 2014
We've heard consistent requests from our customers for curated, optimized Ubuntu images for Google Cloud Platform, which is why we are now offering Ubuntu 14.04 LTS, 12.04 LTS and 14.10 guest images in beta at no additional charge. These images are maintained by Canonical, the company behind Ubuntu, so they are always up to date and secure from first boot. They include Canonical- and Google-authored optimizations that improve performance on Google Compute Engine without breaking compatibility.
Canonical-maintained images are continually tested and updated, following Ubuntu's best practices, and deliver these attributes on Google Cloud Platform:
Short instance boot times let you quickly scale server fleets
Ubuntu images are optimized for the Google Cloud Platform environment
Up-to-date images deliver security from first boot, with rapid, automated patching in case of vulnerabilities
Image deployment automation streamlines Ubuntu usage
Rapid and flexible: mirror repositories within Google Compute Engine will enable customers to easily and quickly add software to customize their instances with near-zero download times
We'd love to have you take these images for a trial run. Try them out and give us your feedback.
- Posted by Martin Buhr, Product Manager