Testing and proof of concept


In this tutorial, we explore how to run a proof of concept (PoC) for building a hybrid render farm on Google Cloud. It is a companion to the solution Building a hybrid render farm and is designed to help you test and benchmark rendering for animation, film, commercials, or video games on Google Cloud.

You can run a PoC for your hybrid render farm on Google Cloud quickly if you narrow the scope of your tests to only the essential components. In contrast to architecting an entire end-to-end solution, a PoC has the following purposes:

  • Determine how to reproduce your on-premises rendering environment on the cloud.
  • Measure differences in rendering and networking performance between on-premises render workers and cloud instances.
  • Determine cost differences between on-premises and cloud workloads.

Of lesser importance are the following tasks, which you can postpone or even eliminate from a PoC:

  • Determine how assets are synchronized (if at all) between your facility and the cloud.
  • Determine how to deploy jobs to cloud render workers by using queue management software.
  • Determine the best way to connect to Google Cloud.
  • Measure latency between your facility and Google data centers.

Deploying an instance

For your PoC, you might want to recreate your on-premises render worker hardware. Google Cloud offers a number of CPU platforms that might match your own hardware, but the architecture of a cloud-based virtual machine differs from that of a bare-metal render blade in an on-premises render farm.

On Google Cloud, resources are virtualized and independent of one another. Virtual machines (instances) are composed of the following major components:

  • Virtual CPUs (vCPUs)
  • Memory (RAM)
  • Disks
    • Boot disk and guest OS
    • Additional storage disks
  • NVIDIA Tesla GPUs (optional)

Create an instance

In Cloud Shell, create your prototype render worker instance:

gcloud compute instances create [INSTANCE_NAME] \
    --machine-type [MACHINE_TYPE] \
    --image-project [IMAGE_PROJECT] \
    --image-family [IMAGE_FAMILY] \
    --boot-disk-size [SIZE]

Where:

  • [INSTANCE_NAME] is the name of your instance.
  • [MACHINE_TYPE] is either a predefined machine type or a custom machine type in the format custom-[NUMBER_OF_CPUS]-[NUMBER_OF_MB], where you define the number of vCPUs and the amount of memory (in MB) for the machine type.
  • [IMAGE_PROJECT] is the image project that the image family belongs to.
  • [IMAGE_FAMILY] is an optional flag that specifies which image family the boot disk image belongs to.
  • [SIZE] is the size of the boot disk in GB.
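Putting the flags together, a filled-in command might look like the following. The instance name, machine shape, and image values here are example placeholders, not recommendations:

```shell
# Example values only - substitute your own instance name, machine
# shape, and image. custom-24-65536 requests 24 vCPUs and 64 GB of RAM.
gcloud compute instances create poc-render-worker \
    --machine-type custom-24-65536 \
    --image-project centos-cloud \
    --image-family centos-7 \
    --boot-disk-size 200
```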

Building your default image

Unless you have custom software to test that requires something like a custom Linux kernel or an older OS version, we recommend that you start with one of Google Cloud's public disk images and add the software you're going to use. If you choose to import your own image instead, you need to configure that image by installing additional libraries that enable your guest OS to communicate with Google Cloud.
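To see which public disk images are available as a starting point, you can list them in Cloud Shell. The filter expression below is optional and the family name is only an example:

```shell
# List public boot images; optionally narrow to one image family.
gcloud compute images list --filter="family:centos-7"
```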

Set up your render worker

  • In Cloud Shell, on the instance you created earlier, set up your render worker as you would your on-premises worker by installing your software and libraries.
  • Stop the instance:

gcloud compute instances stop [INSTANCE_NAME]

Create a custom image

  • In Cloud Shell, determine the name of your VM's boot disk:

gcloud compute instances describe [INSTANCE_NAME]

The output contains the name of your instance's boot disk:

mode: READ_WRITE
source: https://www.googleapis.com/compute/v1/projects/[PROJECT]/zones/[ZONE]/disks/[DISK_NAME]

Where:
  • [PROJECT] is the name of your Google Cloud project.
  • [ZONE] is the zone where the disk is located.
  • [DISK_NAME] is the name of the boot disk attached to your instance. The disk name is typically the same as (or similar to) the instance name.

  • Create an image from your instance:

gcloud compute images create [IMAGE_NAME] \
    --source-disk [DISK_NAME] \
    --source-disk-zone [ZONE]

Where:
  • [IMAGE_NAME] is a name for the new image.
  • [DISK_NAME] is the disk from which you want to create the new image.
  • [ZONE] is the zone where the disk is located.
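With example values filled in, the two steps above might look like the following sketch. The instance, image, and zone names are placeholders, and the --format flag is an optional convenience that prints only the boot disk's source URI instead of the full instance description:

```shell
# Example values only. Print just the boot disk source URI:
gcloud compute instances describe poc-render-worker \
    --format="value(disks[0].source)"

# Create a reusable image from that disk:
gcloud compute images create render-worker-image \
    --source-disk poc-render-worker \
    --source-disk-zone us-central1-a
```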

Storing assets

  • Render pipelines can differ vastly, even within a single company. To implement your PoC quickly and with minimal configuration, you can use the boot disk of your render worker instance to store assets. Your PoC shouldn't yet evaluate data synchronization or more advanced storage solutions.
  • A number of storage options are available on Google Cloud, but we recommend testing a scalable shared storage solution in a separate PoC.
  • If you're testing multiple render worker configurations and need a shared file system, you can create a Filestore volume and mount it on your render workers by using NFS. Filestore is a managed file storage service that can be mounted read/write across many instances, acting as a file server.
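Mounting a Filestore volume over NFS on a worker might look like the following sketch. The IP address (10.0.0.2), share name (vol1), and mount point are hypothetical values; substitute the ones shown for your Filestore instance:

```shell
# Hypothetical values: 10.0.0.2 is the Filestore instance's IP address,
# vol1 is its file share name.
sudo apt-get install -y nfs-common      # NFS client, on a Debian-based guest OS
sudo mkdir -p /mnt/assets
sudo mount 10.0.0.2:/vol1 /mnt/assets
```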

Getting data to Google Cloud

To run a render PoC, you need to get your scene files, caches, and assets to your render workers. For larger (>10 GB) datasets, you can use gsutil to copy your data to Cloud Storage and then on to your render workers. For smaller (<10 GB) datasets, you can use the gcloud tool to copy data directly to a path on your render workers (Linux only).
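For the smaller-dataset path, the gcloud tool's scp command copies files over SSH directly to the worker. A sketch with hypothetical names (the local directory, worker name, and remote path are placeholders):

```shell
# Example values only: recursively copy the local ./scene-assets
# directory to /home/user/assets on the render worker over SSH.
gcloud compute scp --recurse ./scene-assets poc-render-worker:/home/user/assets
```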

Create a destination directory on your render worker

  • In Cloud Shell, connect to your render worker by using SSH:

gcloud compute ssh [WORKER_NAME]

Where [WORKER_NAME] is the name of your render worker.

  • Create a destination directory for your data:

mkdir [ASSET_DIR]

Where [ASSET_DIR] is a local directory anywhere on your render worker.

Use gsutil to copy large amounts of data

If you're transferring large datasets to your render worker, use gsutil with Cloud Storage as an intermediate step. If you're transferring smaller datasets, you can skip to the next section and use the gcloud tool instead.

  • On your local workstation, create a Cloud Storage bucket:

gsutil mb gs://[BUCKET_NAME_ASSETS]

Where [BUCKET_NAME_ASSETS] is the name of the Cloud Storage bucket for the files or directories that you want to copy.

  • Copy data from your local directory to the bucket:

gsutil -m cp -r [ASSETS] gs://[BUCKET_NAME_ASSETS]

Where [ASSETS] is a list of files or directories to copy to your bucket.

  • Connect to your render worker by using SSH:

gcloud compute ssh [WORKER_NAME]

  • Copy the contents of your bucket to your render worker:

gsutil -m cp -r gs://[BUCKET_NAME_ASSETS]/* [ASSET_DIR]
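End to end, the bucket-based transfer above might look like the following with example names filled in. The bucket and directory names are placeholders:

```shell
# On your local workstation: create a bucket and upload assets.
gsutil mb gs://my-render-assets
gsutil -m cp -r ./scene-assets gs://my-render-assets

# On the render worker (after connecting with gcloud compute ssh):
mkdir -p ~/assets
gsutil -m cp -r gs://my-render-assets/* ~/assets
```

The -m flag runs the copy with parallel transfers, which noticeably speeds up large multi-file datasets.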


Reference: Google Documentation
