3D imagery rendering in the cloud with Industriromantik and Compute Engine
Friday, May 1, 2015
Today’s guest blogger is Fredrik Averpil, Technical Director at
Industriromantik
. Fredrik develops the custom computer graphics pipeline at Industriromantik,
a digital production company specializing in computer generated still and moving imagery.
As a small design and visualization studio, we focus on creating beautiful 3D imagery – be it high-resolution product images or TV commercials. To successfully do this, we need to ensure we have access to enough rendering power, and at times, we find ourselves in a situation where our in-house render farm's capacity isn’t cutting it. That’s where
Google Compute Engine
comes in.
By taking our 3D graphics pipeline, applications, and project files to Compute Engine, we expand and contract available rendering capacity on-demand, in bursts. This enables us to increase project throughput, deliver on client requests, and handle render peak times with ease while remaining cost efficient – with the added bonus of getting us home in time for supper.
Figure 1. We created and rendered these high resolution interiors using our custom computer graphics production pipeline.
The setup
We use the very robust
Pixar Tractor
as our local render job manager, as it's designed for scaling and can handle a large number of tasks simultaneously. Our local servers, which serve applications, custom tools, and project files, are mirrored to Compute Engine ahead of render time. This makes cloud rendering just as responsive as a local render. Because the Compute Engine instances run the Tractor client, they seamlessly pop up in the Tractor management dashboard in our local office. Pouring 1,600 cores worth of instances into your local 800-core render farm is a good reminder of how powerful this technology is.
Figure 2. Google Compute Engine instances access the local office network through a VPN tunnel.
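As a rough sketch of how render instances join the farm, each instance can start the Tractor client at boot via a startup script. The hostnames and paths below, and the assumption that the file server exports its storage over NFS, are examples rather than our exact configuration, and the tractor-blade flags depend on your Tractor version:
#!/bin/bash
# Hypothetical render-instance startup script.
# Mount projects and applications from the Compute Engine file server...
mkdir -p /projects /apps
mount -t nfs fileserver-1:/projects /projects
mount -t nfs fileserver-1:/apps /apps
# ...then start the Tractor client, pointing it at the Tractor engine in the
# local office (reachable through the VPN tunnel shown in figure 2).
/apps/pixar/tractor/bin/tractor-blade --engine=tractor-engine.example.local:80 &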
The basic file server setup is an instance equipped with enough RAM to allow for good file caching performance. We use an
n1-highmem-4
instance as a file server to serve 50
n1-standard-32
rendering instances. Then we attach additional persistent disk storage (in increments of 1.5TB for high IOPS) to the file server instance to hold projects and applications. Using
ZFS
for this pool of persistent disks, the file server's storage can be increased on-demand, even while rendering is in progress. For increased ZFS caching performance, local SSD disks can be attached to the file server instance (a feature in beta). It all comes down to what your specific project needs: the setup will vary based on how many instances you plan to use and what kind of performance you're looking for.
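To make the storage side concrete, here's a minimal sketch of attaching an extra 1.5TB persistent disk to the file server and growing the ZFS pool with it; the disk and instance names, zone, and pool name are placeholders:
# Create a 1.5TB persistent disk and attach it to the file server (placeholder names and zone)
gcloud compute disks create projects-disk-2 --size 1500GB --zone europe-west1-b
gcloud compute instances attach-disk fileserver-1 --disk projects-disk-2 --device-name projects-disk-2 --zone europe-west1-b
# On the file server: add the new disk to the existing ZFS pool, on-demand,
# even while renders are in progress (zpool create was used for the first disk)
sudo zpool add projects /dev/disk/by-id/google-projects-disk-2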
Operations on the file server and file transfers can be performed over SSH from a Google Compute Engine-authenticated session, and ultimately be automated through Tractor:
# Create folder on GCE file server running on public IP address 1.2.3.4 over SSH port 22
ssh -p 22 -t -t -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine 1.2.3.4 "sudo mkdir -p /projects/projx/"
# Upload project files to GCE file server running on public IP address 1.2.3.4 over SSH port 22
rsync -avuht -r -L --progress -e "ssh -p 22 -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine" /projects/projx/ 1.2.3.4:/projects/projx/
If you store your project data in a bucket, you could also retrieve it from there:
# Copy files from bucket onto file server running on public IP address 1.2.3.4 over SSH port 22
ssh -p 22 -t -t -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine 1.2.3.4 "gsutil -m rsync -r gs://your-bucket/projects/projx/ /projects/projx/"
Software executing on Compute Engine (managed by Tractor) accesses software licenses served from our local office via the Internet, and the instances running the Tractor client also need to be able to contact the local Tractor server. All of this can be achieved by using the beta of
VPN
, as seen in figure 2 above.
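For reference, wiring up the VPN side on Compute Engine involves roughly the following gcloud calls; the names, region, addresses, subnet, and shared secret are placeholders, and since the feature is in beta the exact flags may change:
# Reserve a static IP and create the VPN gateway (placeholder names and region)
gcloud compute addresses create office-vpn-ip --region europe-west1
VPN_IP=$(gcloud compute addresses describe office-vpn-ip --region europe-west1 --format 'value(address)')
gcloud compute target-vpn-gateways create office-gw --region europe-west1 --network default
# Forward ESP and IKE traffic (UDP 500/4500) to the gateway
gcloud compute forwarding-rules create fr-esp --region europe-west1 --ip-protocol ESP --address $VPN_IP --target-vpn-gateway office-gw
gcloud compute forwarding-rules create fr-udp500 --region europe-west1 --ip-protocol UDP --ports 500 --address $VPN_IP --target-vpn-gateway office-gw
gcloud compute forwarding-rules create fr-udp4500 --region europe-west1 --ip-protocol UDP --ports 4500 --address $VPN_IP --target-vpn-gateway office-gw
# Create the tunnel to the office firewall and route the office subnet through it
gcloud compute vpn-tunnels create office-tunnel --region europe-west1 --peer-address 203.0.113.10 --shared-secret MY_SECRET --target-vpn-gateway office-gw
gcloud compute routes create office-route --network default --destination-range 10.0.0.0/24 --next-hop-vpn-tunnel office-tunnel --next-hop-vpn-tunnel-region europe-west1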
Since the number of software licenses cannot be scaled on demand the way the number of instances can, we take advantage of the fastest machines available: 32-core instances, which deliver a 97–98% speed boost over 16-core instances (awesome scaling!) when rendering with
V-Ray for Maya
, our primary choice of renderer.
When a frame render completes, the files can be copied back home easily, again managed by Tractor:
# Copy files from Google Compute Engine file server "fileserver-1" onto local machine
gcloud compute copy-files username@fileserver-1:/projects/projx/render/*.exr /local_dest_dir
Figure 3. Tractor dashboard, showing queued jobs and the task tree of a standard render job.
Automation
Avoiding manual labour and micromanagement of Compute Engine rendering is highly recommended. This is also where Tractor excels: the automation of complex processes. Daisy-chaining tasks in Tractor, such as spinning up the file server, allocating storage, and transferring files, makes large and parallel jobs a breeze to manage.
Figure 4. Tractor task tree.
Figure 4 illustrates the daisy-chaining of tasks. When a project upload to the Compute Engine file server is initiated, a disk is attached to the file server and added to the ZFS pool, and then the project files, along with the specific software versions required, are uploaded. No files can be uploaded before the disk storage has been attached, so some tasks wait for others to complete before starting.
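Outside of Tractor, the same ordering constraint can be written as a plain shell chain; this is only a conceptual sketch with hypothetical helper scripts, whereas in practice each step is a Tractor task that starts only once its parent task has finished:
# Conceptual equivalent of the task tree in figure 4 (hypothetical script names):
# no upload starts until the storage has been attached and added to the pool.
./attach_disk_and_grow_zpool.sh fileserver-1 projects-disk-2 && \
./upload_project_files.sh /projects/projx/ && \
./upload_required_software.sh maya vray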
With Compute Engine and its per-minute billing approach, I’ve stopped worrying and started loving the auto-scaling of instances. By having a script check in with Tractor (using its
query Python API
) for pending tasks every once in a while, we can spin up instances (via the
Google Cloud SDK
) to crunch a render and quickly wind them down when no longer needed. Now that’s micromanagement done right.
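A heavily simplified version of that loop might look like the following; the instance names, image, zone, and the wrapper script that counts pending tasks via the Tractor query API are all placeholders:
#!/bin/bash
# Hypothetical auto-scaling loop: poll Tractor for pending tasks, then start
# or stop 32-core render instances via the Cloud SDK accordingly.
while true; do
    PENDING=$(./count_pending_tractor_tasks.py)   # placeholder wrapper around the query API
    if [ "$PENDING" -gt 0 ]; then
        # Spin up another render node; per-minute billing keeps short bursts cheap
        gcloud compute instances create "render-$(date +%s)" --machine-type n1-standard-32 --zone europe-west1-b --image rendernode-image
    else
        # Queue is empty: wind down all render nodes
        for NODE in $(gcloud compute instances list --filter 'name~^render-' --format 'value(name)'); do
            gcloud compute instances delete "$NODE" --zone europe-west1-b --quiet
        done
    fi
    sleep 300
done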
Figure 5. High resolution exterior 3D rendering for Etaget, Stockholm.
For anyone who wants to utilize Compute Engine rendering but needs a turnkey, managed solution, I’d recommend checking out the beta of
Zync Render
, which utilizes the excellent Google Cloud Platform infrastructure. Zync Render has its own front-end UI that manages the file transfer and provides the software licenses required for rendering, so you don't have to implement a Compute Engine-specific integration. This makes that part of the rendering a whole lot easier. I'm keeping my fingers crossed that Zync Render will ultimately offer its software license server to Google Compute Engine users, so that we can scale licenses seamlessly along with any number of instances.
Summary
I believe that every modern digital production company dealing with 3D rendering today, regardless of size, needs to leverage affordable cloud rendering in some shape or form in order to stay competitive. I also believe that the key to success is a focus on automation. The
Google Cloud SDK
provides excellent tools to do exactly this by pairing the powerful
Google Compute Engine
together with an advanced and highly customizable render job manager, such as
Pixar’s Tractor
. For smaller companies or individuals who do not wish to orchestrate these advanced queuing systems themselves,
Zync Render
takes advantage of the Compute Engine infrastructure.
For additional computer graphics pipeline articles, tips and tricks, check out Fredrik’s blog at
http://fredrik.averpil.com
and for more information about Industriromantik, visit
http://www.industriromantik.se