Movie animation firm's big data challenges present lessons to learn

The IT staff at a computer graphics animation firm has come up with smart ways to address the archiving and backup challenges posed by its 3D data model files.

Image: Rainmaker Entertainment

In the US, box office revenues for animated films totaled $1.1 billion, having increased by 89.5% between 2001 and 2011. Revenues from animated films have continued to climb since then, and that figure doesn't even account for the widespread animation taking place in other venues, such as video gaming.

This makes movie animation and rendering a major business for computer graphics animation firms like Rainmaker Entertainment, which is based in Vancouver, British Columbia, and is internationally recognized for its work on brands like Spider-Man and Popeye and for original productions such as ReBoot, Escape from Planet Earth, and The Nutty Professor.

Rainmaker generates hundreds of millions of dollars in annual video sales. It employs several hundred people who do everything from 3D modeling and lighting to feature film work, animation, and IT. However, movie animation and rendering firms like Rainmaker can't accomplish any of this work without a well-developed strategy for managing big data, which comes in the form of 3D data model files containing modeling instructions for areas such as lighting, texture, viewpoint, and shading. Computer programs render these models into video images in a resource-intensive process. The rendered images must then be stored continuously as production proceeds and archived weekly into storage repositories.
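
To make that concrete, here is a minimal sketch, in Python, of the kind of per-shot information such a model file carries. Every field name and file path below is hypothetical and purely for illustration; it is not Rainmaker's actual schema or pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class RenderJob:
    """One shot's worth of modeling instructions, as described in the article:
    lighting, texture, viewpoint (camera), and shading. All names are illustrative."""
    model_path: str                                   # 3D model file for the shot
    lights: list = field(default_factory=list)        # e.g. key/fill/rim light setups
    texture_maps: dict = field(default_factory=dict)  # surface name -> texture file
    camera: dict = field(default_factory=dict)        # viewpoint: position, look-at, focal length
    shader: str = "physically_based"                  # shading model to apply
    resolution: tuple = (3840, 2160)                  # output frame size

job = RenderJob(
    model_path="/shows/example_feature/shot_042/model.abc",   # hypothetical path
    lights=[{"type": "key", "intensity": 1.5}],
    texture_maps={"hero_skin": "/tex/hero_skin_v07.exr"},
    camera={"position": (0, 1.7, 8.0), "look_at": (0, 1.5, 0), "focal_mm": 35},
)
# A render farm turns descriptions like this into finished frames, which is the
# resource-intensive step the article refers to.
```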

All of this places unique big data management requirements on the IT staff, because big data is at the core of the film animation business.

First, IT needs to meet a rapidly expanding production schedule, which requires managing terabytes of big data in real time. When several hundred video animation engineers and artists are working on shows simultaneously, each individual requires real-time access to multiple renderings of the project he or she is working on. The object-based files in movie animation and rendering systems can consume as much as 256 terabytes of storage, and in a daily movie production environment about 100 terabytes of big data is "in play" at any one time.

To meet these production needs, IT departments at movie rendering and animation shops rely on storage automation capable of provisioning big data storage resources in real time, in a way that is entirely transparent to the users working with them.
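
As a rough illustration of what such rule-driven automation could look like, here is a short Python sketch. The 80% threshold, the 10 TB growth step, the volume names, and the expand_volume() call are all assumptions standing in for whatever a real storage platform's policy engine and API actually provide.

```python
# Hypothetical rule-driven provisioning: grow a project volume before artists
# ever hit an "out of space" error. Thresholds, names, and the expand_volume()
# hook are illustrative stand-ins, not a specific vendor's interface.

GROWTH_THRESHOLD = 0.80   # expand when a volume reaches 80% utilization
GROWTH_STEP_TB = 10       # add 10 TB at a time

def expand_volume(volume_name, extra_tb):
    """Stand-in for a storage array or software-defined storage API call."""
    print(f"Provisioning {extra_tb} TB more on {volume_name}")

def enforce_rules(volumes):
    """Apply the IT-defined rule to every project volume (name -> usage stats)."""
    for name, stats in volumes.items():
        utilization = stats["used_tb"] / stats["size_tb"]
        if utilization >= GROWTH_THRESHOLD:
            expand_volume(name, GROWTH_STEP_TB)

# A monitoring loop would feed in current usage figures, for example:
enforce_rules({
    "show_a_renders": {"size_tb": 100, "used_tb": 85},   # 85% full -> expanded
    "show_b_assets": {"size_tb": 40, "used_tb": 12},     # 30% full -> left alone
})
```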

IT also needs a robust data archiving methodology for big data.

The main concern with big data backups parallels what IT has had to manage for years with its transactional data: the backup must be accomplished in a batch time window that ends before the next day's real-time production begins. This task can be next to impossible in real-time big data shops because of the sheer size of the data that must be backed up and brought back online before production is scheduled to start again.
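
Some back-of-the-envelope arithmetic shows why. Assuming, purely for illustration, a 100-terabyte working set and backup links of 10 Gbit/s and 40 Gbit/s (these speeds are assumptions, not Rainmaker's figures), a single full backup takes close to a full day or several hours, respectively:

```python
# Rough check on how long a single full backup of ~100 TB takes at a given
# sustained link speed. Link speeds are illustrative assumptions.

DATA_TB = 100
GB_PER_TB = 1000

for label, gbit_per_s in [("10 Gbit/s link", 10), ("40 Gbit/s link", 40)]:
    gb_per_s = gbit_per_s / 8                      # bits per second -> bytes per second
    hours = DATA_TB * GB_PER_TB / gb_per_s / 3600
    print(f"{label}: ~{hours:.1f} hours for a {DATA_TB} TB full backup")

# ~22 hours at 10 Gbit/s and ~5.6 hours at 40 Gbit/s, and that assumes the
# link and the target storage sustain full speed for the entire window.
```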

In Rainmaker's case, there was a specific problem with the month-end backup of movie and animation object files to archives: the full monthly backups were consuming so much time that they threatened to extend into production time. The solution for archiving was to divide the roughly 100 terabyte big data load into four weekly backup units of about 25 terabytes each, with the company keeping a schedule and tracking where each backup left off. The backups were performed incrementally, so that by the end of each month a full copy of the production data had been archived.
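
A short sketch of that weekly rotation, with assumed slice names and sizes, might look like the following; the point is simply that four weekly runs, each resumed from wherever the last one stopped, add up to a complete month-end copy:

```python
# Illustrative rotation: the ~100 TB working set is split into four slices of
# roughly 25 TB, one slice is archived each week, and the schedule records
# where the last run left off. Slice names are assumptions.

from itertools import cycle

SLICES = ["slice_A", "slice_B", "slice_C", "slice_D"]

def weekly_backup_plan(weeks, start_index=0):
    """Yield (week_number, slice) pairs, resuming from the last recorded position."""
    rotation = cycle(SLICES[start_index:] + SLICES[:start_index])
    for week in range(1, weeks + 1):
        yield week, next(rotation)

for week, slice_name in weekly_backup_plan(weeks=4):
    print(f"Week {week}: archive {slice_name}")
# After four weekly runs every slice has been archived once, giving the
# month-end full copy without a single marathon backup job.
```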

Best practices for real-time big data management and batch archiving are still taking shape in many other industry verticals, which is why the lessons learned by movie animation firms and other companies that already actively manage big data can provide value for future planning. These are the lessons learned:

  • Storage automation that acts on business rules predefined by IT for the real-time provisioning and scaling of big data resources lets IT deliver those resources to end users seamlessly and painlessly;
  • IT should define a workable big data archiving and backup strategy that does not interfere with when systems must be online for production, even if that means backing up data incrementally; and
  • Part of the data delivery strategy also involves deploying an effective communications infrastructure with enough bandwidth to transport big data's heavier payloads.

About Mary Shacklett

Mary E. Shacklett is president of Transworld Data, a technology research and market development firm. Prior to founding the company, Mary was Senior Vice President of Marketing and Technology at TCCU, Inc., a financial services firm; Vice President o...
