The new Persistent Disk - faster, cheaper and more predictable for Google Compute Engine
Monday, December 2, 2013
High-performing disk IO is critical to most virtual machine workloads, but developers face a big challenge today: they want great performance at prices that are predictable. No one wants a big surprise when they see their bill.
Today, we are launching significant improvements to Persistent Disk (PD), unifying our block storage offerings. First, we are significantly lowering PD pricing to Scratch Disk levels, reducing costs by 60% to 92%. In addition, PD performance caps are being raised so that they scale linearly with volume size, up to 8x the previous limits for random reads and 4x for random writes. Finally, persistent disks can now meet the price and performance requirements of scratch data with better reliability, functionality, and flexibility.
New lower and more predictable Persistent Disk price model
Due to advancements in how we store PD data, we are able to cut PD prices significantly without sacrificing reliability or performance.
          Old Price                New Price
Space     $0.10 / GB / month       $0.04 / GB / month
IO        $0.10 / million IOs      Included in price of the space
Our customers wanted more predictable pricing. So in addition to being lower, the new prices are also predictable and consistent from month to month, because IO charges are now included in the price of the space.
For example, previously, if you created a 400 GB volume, the costs would vary depending on your IOPS. At a minimum, it would have been $40/month assuming no IO for the entire month. At a maximum, it would have been $197.80/month assuming 600 write IOs every second of the month.
With the new PD pricing, a 400 GB volume will cost $16/month no matter how much IO your application performs, resulting in savings ranging from 60% to 92%!
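To see where these numbers come from, here is a quick Python sketch of the arithmetic. The only assumption of ours is the month length (an average month of 365.25/12 days), which the post does not state; it lands within a few cents of the $197.80 figure above.

```python
# Worked example: old vs. new Persistent Disk pricing for a 400 GB volume.
# Assumption (ours): an "average" month of 365.25 / 12 days; the post does not
# state the exact month length behind its $197.80 figure.
GB = 400
SECONDS_PER_MONTH = 365.25 / 12 * 24 * 3600      # ~2.63 million seconds

# Old model: charged per GB of space plus per million IOs.
old_space = GB * 0.10                                  # $40.00 / month
old_io_max = 600 * SECONDS_PER_MONTH / 1e6 * 0.10      # 600 write IOPS, all month
print(f"old, no IO:   ${old_space:.2f}/month")                  # $40.00
print(f"old, max IO:  ${old_space + old_io_max:.2f}/month")     # ~$197.79

# New model: you pay for space only; IO is included.
new = GB * 0.04
print(f"new:          ${new:.2f}/month")                        # $16.00
print(f"savings: {1 - new / old_space:.0%} to "
      f"{1 - new / (old_space + old_io_max):.0%}")              # 60% to 92%
```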
New Persistent Disk performance model
PD is introducing a new performance model that increases top volume IOPS caps 4x for random writes and 8x for random reads without sacrificing the performance consistency that has been PD’s distinguishing characteristic since its original release.
In the new model, PD performance caps scale linearly with the size of the volume — larger volumes can perform more IO than smaller volumes up to absolute limits described in the product documentation. This model is designed to:
Raise the overall caps: The highest-performing volumes now have the following limits:
- 2000 random read IOPS (up from 250)
- 2400 random write IOPS (up from 600)
- 180 MB/s of streaming reads (up from 120 MB/s)
- 120 MB/s of streaming writes (same as previous limit)
Simplify scaling of IO: As we've publicly discussed, PD volumes are striped across hundreds or even thousands of physical devices. With PD, we manage the RAID for you, so you never have to stripe multiple small volumes together inside a VM to increase IO. You get the same performance for a single 1 TB volume as for 10 x 100 GB volumes.
The new limits are discussed in detail in the product documentation.
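To make the linear scaling model concrete, here is a minimal Python sketch. The 2000 random read and 2400 random write IOPS caps are the ones quoted above; the per-GB scaling rates are hypothetical placeholders for illustration only, so check the product documentation for the actual values.

```python
# Sketch of linearly scaling performance caps. The absolute caps come from this
# post; the per-GB scaling rates below are hypothetical placeholders, not the
# documented values (see the product documentation for those).
READ_IOPS_PER_GB = 0.3       # placeholder rate
WRITE_IOPS_PER_GB = 1.5      # placeholder rate
MAX_READ_IOPS = 2000         # cap quoted in this post
MAX_WRITE_IOPS = 2400        # cap quoted in this post

def iops_caps(volume_gb: float) -> tuple[float, float]:
    """Return (read_iops_cap, write_iops_cap) for a volume of the given size."""
    return (min(volume_gb * READ_IOPS_PER_GB, MAX_READ_IOPS),
            min(volume_gb * WRITE_IOPS_PER_GB, MAX_WRITE_IOPS))

# A single 1 TB volume has the same aggregate caps as ten 100 GB volumes,
# so there is no need to stripe small volumes together inside the VM.
print(iops_caps(1000))                                  # one 1 TB volume
print(tuple(10 * cap for cap in iops_caps(100)))        # ten 100 GB volumes combined
```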
PD is your new scratch disk
PD is now a great choice for storing scratch data. In most cases your applications will run as well as they did on scratch disk, while becoming more reliable and easier to manage. PD volumes remain available through planned maintenance and hardware failure. This is the key enabler for live migration and allows us to keep data centers up to date without customer disruption, a unique feature among public clouds.
PD volumes can be unmounted from one VM and remounted to another. This radically simplifies and speeds up the processes of upgrading applications and resizing VMs. In addition, PD volumes can be snapshotted to our global object store, Google Cloud Storage, for simple zone migration, backup and recovery, and disaster recovery. PD volumes can be up to 10 TB in size.
With the drop in PD prices and the improvements in performance, you can now use PD, with all its benefits, to replace your scratch disk. To further improve scratch data costs, PD volumes are not restricted to predefined sizes: buy only as much space as you need to hold your data and access it with the right performance. Larger volumes can be mounted by smaller VMs if high CPU and memory are not required.
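As a rough illustration of sizing a volume for scratch data under the new model, the sketch below picks the smallest volume that covers both the space and the IOPS you need. It reuses the hypothetical per-GB rates from the previous sketch; only the $0.04/GB/month price comes from this post.

```python
# Sketch: pick the smallest volume that covers both the data you need to store
# and the IOPS you need. The per-GB rates are the same hypothetical placeholders
# as in the previous sketch, not the documented values.
READ_IOPS_PER_GB = 0.3       # placeholder rate
WRITE_IOPS_PER_GB = 1.5      # placeholder rate
PRICE_PER_GB_MONTH = 0.04    # new PD price from this post

def size_for(data_gb: float, read_iops: float, write_iops: float) -> float:
    """Smallest volume (in GB) satisfying space, read-IOPS, and write-IOPS needs."""
    return max(data_gb,
               read_iops / READ_IOPS_PER_GB,
               write_iops / WRITE_IOPS_PER_GB)

# e.g. 200 GB of scratch data that needs ~300 random read IOPS:
size_gb = size_for(data_gb=200, read_iops=300, write_iops=100)
print(f"{size_gb:.0f} GB at ${size_gb * PRICE_PER_GB_MONTH:.2f}/month")  # 1000 GB, $40.00/month
```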
For more details, please see the product documentation and our technical article, which includes helpful best practices. We hope you enjoy the new offering and look forward to your feedback at the Compute Engine discussion mailing list.
- Posted by Jay Judkowitz, Senior Product Manager