New in Google Cloud Storage: auto-delete, regional buckets and faster uploads
Monday, July 22, 2013
We’ve launched new features in Google Cloud Storage that make it easier to manage objects and faster to access and upload data. With a tiny bit of upfront configuration, you can take advantage of these improvements with no changes to your application code. After all, the only thing better than improving your app is improving it transparently!
Today we’re announcing:
Object Lifecycle Management - Configure auto-deletion policies for your objects
Regional Buckets - Granular location specifications to keep your data near your computation
gsutil automatic parallel composite uploads - Faster uploads of large files with gsutil
Object Lifecycle Management
Object Lifecycle Management lets you define policies that Cloud Storage uses to automatically delete objects based on conditions you specify. For example, you could configure a bucket so that objects older than 365 days are deleted, or so that only the 3 most recent versions of each object in a versioned bucket are kept. Once you have configured Lifecycle Management, the expected expiration time is added to object metadata when possible, and all lifecycle operations are logged in the access log.
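For example, here is a minimal sketch of a lifecycle configuration that deletes objects more than 365 days old, applied with gsutil's lifecycle command (the file and bucket names are placeholders):

    {
      "rule": [
        {
          "action": {"type": "Delete"},
          "condition": {"age": 365}
        }
      ]
    }

    gsutil lifecycle set lifecycle.json gs://your-bucket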
Object Lifecycle Management can be used with Object Versioning to limit the number of older versions of your objects that are retained. This can help keep your apps cost-efficient while maintaining a level of protection against accidental data loss due to application bugs or user error.
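As a sketch, a rule that keeps only the 3 most recent versions of each object in a versioned bucket uses the numNewerVersions condition (the file and bucket names are again placeholders):

    {
      "rule": [
        {
          "action": {"type": "Delete"},
          "condition": {"numNewerVersions": 3}
        }
      ]
    }

    gsutil lifecycle set keep-3-versions.json gs://your-versioned-bucket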
Regional Buckets
Regional Buckets allow you to co-locate your Durable Reduced Availability data in the same region as your Google Compute Engine instances. Since Cloud Storage buckets and Compute Engine instances within a region share the same network fabric, this can reduce latency and increase bandwidth to your virtual machines, and may be particularly appropriate for data-intensive computations. You can still specify the less granular United States or European datacenter locations if you'd like your data spread over multiple regions, which may be a better fit for content distribution use cases.
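As a sketch, you can create a Durable Reduced Availability bucket in a specific region with gsutil's mb command. The bucket name is a placeholder, and US-CENTRAL1 stands in for whichever region the locations documentation lists as available:

    gsutil mb -c DRA -l US-CENTRAL1 gs://your-dra-bucket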
gsutil - Automatic Parallel Composite Uploads
gsutil version 3.34 now automatically uploads large objects in parallel for higher throughput. Achieving maximum TCP throughput on most networks requires multiple connections, and this feature makes that easy and automatic. The support is built using Composite Objects. For details about temporary objects and a few caveats, see the Parallel Composite Uploads documentation. To get started, simply use 'gsutil cp' as usual; large files are automatically uploaded in parallel.
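For example (the file and bucket names are placeholders):

    gsutil cp large-archive.tar gs://your-bucket/

If you want to tune when parallel uploads kick in, the size cutoff is controlled by the parallel_composite_upload_threshold setting in your .boto configuration file; 150M below is an illustrative value, not a recommendation:

    [GSUtil]
    parallel_composite_upload_threshold = 150M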
We think there’s a little something here for everyone: If you’re managing temporary or versioned objects, running compute jobs over Cloud Storage data, or using gsutil to upload data, you’ll want to take advantage of these features right away. We hope you enjoy them!
-Posted by Brian Dorsey, Developer Programs Engineer