Best Practices for M&E Studios to Address Growing Storage Requirements

Author: Joe D’Amato, Senior Solutions Architect, CDW Canada

Over the last decade, there have been countless developments in the media and entertainment (M&E) industry around visual effects and animation. These exciting developments, such as the shift from 2K to 4K, and the availability of technology to support it, have pushed the boundaries of what studios can achieve. They have also presented new challenges, most notably an overconsumption of storage resources, which can significantly drive up production costs.

In this blog, we take a look at how modern production practices are driving up RAM and storage requirements, how CDW is providing solutions that meet these growing requirements, and tips to limit the impact of expanding storage footprints in your workflows.

Surging RAM footprints

Today’s industry practices around RAM and memory requirements are more about brute size and less about finesse. Visual effects shots in big-budget action movies may pack characters, planets, explosions and spaceships into a single shot, creating the need for significantly larger RAM and disk footprints. The tentpole shots that filmgoers see in these action movies may require 100+ gigabytes of RAM for a single frame. Standard video technology runs at 24 frames per second, so 10 seconds of footage equates to 240 frames. At 100 gigabytes of RAM per frame, the memory an artist workstation or render node must churn through to complete a single job escalates dramatically. For comparison, as recently as five years ago a single job typically required between 5 and 10 gigabytes of RAM.
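
To put those numbers in perspective, here’s a quick back-of-the-envelope sketch; the figures are the ones quoted above, and the script itself is purely illustrative:

```python
# Back-of-the-envelope RAM churn for a 10-second tentpole clip,
# using the figures quoted above (illustrative only).

FPS = 24                   # standard frame rate
RAM_PER_FRAME_GB = 100     # modern tentpole shot
RAM_PER_FRAME_GB_OLD = 10  # typical job roughly five years ago

def frames_for_seconds(seconds: float, fps: int = FPS) -> int:
    """Number of frames in a clip of the given duration."""
    return int(seconds * fps)

def total_ram_churn_gb(seconds: float, ram_per_frame_gb: float) -> float:
    """Aggregate RAM a workstation or render node cycles through."""
    return frames_for_seconds(seconds) * ram_per_frame_gb

print(f"{frames_for_seconds(10)} frames in 10 seconds")  # 240
print(f"today: {total_ram_churn_gb(10, RAM_PER_FRAME_GB):,.0f} GB")             # 24,000 GB
print(f"five years ago: {total_ram_churn_gb(10, RAM_PER_FRAME_GB_OLD):,.0f} GB")  # 2,400 GB
```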

To accommodate these larger RAM jobs, studios typically placed a single job on a highly resourced machine. The RAM requirement dictated this, yet the machine's 36 cores of render power sat largely underutilized, if not idle, and the network went similarly underused because the entire machine was dedicated to one job's RAM footprint. Buying large-RAM machines, at up to 512 gigabytes apiece, was simply cost-prohibitive.

To address this challenge, CDW conducted a study with Intel to determine whether adding Optane persistent memory, which has a lower price point, would allow render machines to expand to 512 gigabytes of RAM cost-effectively. We tested real-time VFX workflows on Optane memory and found it performed just as effectively as conventional RAM. This allowed CDW to build cost-effective 512-gigabyte render machines, which we now deploy to our clients.

This increase to 512 gigabytes is significant. The prior limitation was 192 gigabytes of RAM, which, at 100-plus gigabytes per frame, translates to a single frame per render machine. Ideally, studios want to put four frames on a machine; a node capped at 192 gigabytes therefore delivers only 25 percent of the desired throughput. The need to accommodate larger RAM footprints is another reason why studios must consider leveraging cloud solutions. The cloud is often the only place where studios can find a single render machine with half a terabyte of RAM without breaking the bank, which is why we now offer 512GB machines in the CDW StudioCloud. The industry may see requirements for 1 terabyte of RAM on render machines within the next couple of years.
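
A simple sketch makes the packing math plain; the per-frame figure comes from the discussion above, while the OS headroom reserve is my own assumption:

```python
# How many 100 GB frames fit on a render node at different RAM sizes?
# The per-frame figure is from the text above; the OS/software
# headroom reserve is an assumption.

FRAME_RAM_GB = 100   # per-frame requirement for a tentpole shot
OS_HEADROOM_GB = 16  # assumed reserve for the OS and render software

def frames_per_node(node_ram_gb: int) -> int:
    """Whole frames that can render concurrently on one node."""
    return max(0, (node_ram_gb - OS_HEADROOM_GB) // FRAME_RAM_GB)

for ram in (192, 512, 1024):
    print(f"{ram} GB node -> {frames_per_node(ram)} concurrent frame(s)")
# 192 GB -> 1, 512 GB -> 4, 1024 GB -> 10
```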

The growing need for disk space

Like RAM, storage footprints are also increasing. Tentpole shots now require between 20 and 50 terabytes each and generate 10 to 15 terabytes of data per day, much of it transient and ultimately deleted. FX caches produce temporary files that must be regenerated each day for those renders, then deleted, because they are too large, and too low in value, to keep beyond a single render. This repeated churn of rendering and deleting is what drives shot footprints to 20-50 terabytes.
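
Because so much of that daily data is transient, many pipelines prune FX caches aggressively once a render cycle completes. Below is a minimal sketch of such a cleanup pass; the cache path, file extension and retention window are assumptions for illustration, not a studio standard:

```python
# Prune transient FX cache files older than a retention window.
# The cache path, file extension and retention period are
# illustrative assumptions, not a studio standard.

import time
from pathlib import Path

CACHE_ROOT = Path("/mnt/prod/fx_caches")  # hypothetical cache location
RETENTION_HOURS = 24                      # keep caches for one render cycle

def prune_stale_caches(root: Path, retention_hours: float) -> int:
    """Delete cache files untouched past the window; return bytes freed."""
    cutoff = time.time() - retention_hours * 3600
    reclaimed = 0
    for path in root.rglob("*.bgeo.sc"):  # Houdini-style cache files, as an example
        info = path.stat()
        if info.st_mtime < cutoff:
            reclaimed += info.st_size
            path.unlink()
    return reclaimed

if __name__ == "__main__":
    freed = prune_stale_caches(CACHE_ROOT, RETENTION_HOURS)
    print(f"reclaimed {freed / 1e12:.2f} TB")
```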

With large shots requiring only between 5 and 10 terabytes just five years ago, what's driving this increase? It's simply the progression in definition. Going from 2K to 4K, for example, quadruples the pixel count and with it the resource consumption, meaning a 10TB footprint now needs to be 40TB. This change has a multiplicative effect across the entire storage universe and will only continue as the industry ushers in 8K, 16K and beyond.
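
The multiplier is simply pixel count. A short sketch using DCI resolutions (16K is extrapolated, and the linear-scaling assumption is a simplification):

```python
# Storage scaling with resolution: footprint grows with pixel count.
# 2K/4K/8K are DCI resolutions; 16K is extrapolated. The 10TB base
# footprint comes from the text above.

RESOLUTIONS = {"2K": (2048, 1080), "4K": (4096, 2160),
               "8K": (8192, 4320), "16K": (16384, 8640)}
BASE_FOOTPRINT_TB = 10  # footprint of a large 2K shot

base_pixels = RESOLUTIONS["2K"][0] * RESOLUTIONS["2K"][1]
for name, (w, h) in RESOLUTIONS.items():
    factor = (w * h) / base_pixels
    print(f"{name}: {factor:>4.0f}x -> {BASE_FOOTPRINT_TB * factor:,.0f} TB")
# 2K: 1x -> 10 TB, 4K: 4x -> 40 TB, 8K: 16x -> 160 TB, 16K: 64x -> 640 TB
```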

The utilization of deep compositing and arbitrary output variables (AOVs) is another factor driving increased storage requirements. Deep compositing lets production teams carry every element produced by lighting and effects forward in an adjustable form, so that at the very last stage, compositing, where everything is put together, alterations are far easier to make. Traditionally, when a director requests an iteration change, such as a brighter explosion, the shot is sent back to the effects team to run through the FX caches, create new effects and bring the updated shot to compositing. Deep compositing enables such iterations to be made directly in a compositing tool, with no round trip to the effects team. However, the deep-compositing data packaged with the shot drives the disk requirement from 10TB to 40-50TB. The compositor in the final stage can swiftly tweak settings at the twist of a dial, saving time and artist hours, but it requires a significant increase in disk space.

Fortunately, actions can be taken to limit the disk requirement of deep compositing. Certain aspects can be switched off, allowing production teams to utilize only what they need. Some teams that employ deep compositing turn everything on, enabling iterations on aspects such as depth, brightness and other FX qualities, but doing so comes with a much deeper storage footprint. It's therefore imperative to use deep compositing only for the shots that need it, and to switch on only the elements that require adjustment. Computer graphics supervisors are generally keenly aware of these trade-offs. Having the right technical team on hand, aware of the business cost of flipping those switches, is critical to assessing and predicting disk requirements accurately. For most studios, as expensive as storage can be, artists are more expensive, and many studios are willing to trade storage costs to free up artist and workflow time.
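
In practice, that discipline amounts to enabling only the AOVs a shot actually needs. Here's a hypothetical sketch of how per-shot AOV selection changes the footprint; the channel names and per-channel cost weights are invented for illustration and don't come from any particular renderer:

```python
# Rough per-shot disk estimate as a function of enabled AOV channels.
# Channel names and size weights are invented for illustration; only
# the 10TB base and ~50TB "everything on" endpoints echo the text.

BASE_SHOT_TB = 10.0  # flat (non-deep) shot footprint

# Assumed extra footprint per enabled channel, as a multiple of the base.
AOV_WEIGHTS = {"depth": 1.5, "brightness": 0.5,
               "motion_vectors": 0.5, "volumetrics": 1.5}

def shot_footprint_tb(enabled_aovs: set[str]) -> float:
    """Estimate disk use for a shot with the given channels enabled."""
    extra = sum(AOV_WEIGHTS[a] for a in enabled_aovs)
    return BASE_SHOT_TB * (1 + extra)

print(shot_footprint_tb(set()))             # 10.0 TB, no deep data
print(shot_footprint_tb({"brightness"}))    # 15.0 TB, one dial enabled
print(shot_footprint_tb(set(AOV_WEIGHTS)))  # 50.0 TB, everything on
```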

How to overcome storage footprint challenges

CDW has partnered with Dell Technologies for years to provide flexible compute and storage options that allow customers to pay as they go with monthly consumption-based fees. Flex on Demand, offered by Dell Technologies, has changed the game and enabled our largest customers to effectively address their disk space challenges.

Customers no longer have to deal with massive capital expenditures and financing impacts on their balance sheets and credit. Instead, they can easily provision Optane-enabled 512-gigabyte render machines to address RAM challenges, and Dell EMC PowerScale storage through Flex on Demand to solve their disk limitations. It’s the new way to leverage disk space.

Production processes to implement

As I discussed in my previous blog on leveraging hybrid cloud environments, traditional production practices are a significant hurdle to addressing surging RAM and disk footprints. Technology solutions are available, but some industry processes must be updated to ensure these solutions are implemented into workflows efficiently and successfully. A staged approach can begin limiting the impact of growing RAM and storage footprints.

The first stage is to build a render optimization ethos, or framework, into production workflows. A small amount of well-applied labour can shrink RAM and storage footprints dramatically, delivering efficiency gains of 50 to as much as 200 percent. It also reduces overall resource consumption, as artists spend less time working and iterating on each shot.

The next stage requires standing up a team, a render optimization task force, to drive alignment with that ethos. This has often proven most effective when the task force comprises cross-department groups of team leads who can, for example, quickly determine why a job requires 100 gigabytes. The task force can work directly with render wranglers and digital resource administrators to identify jobs where the requirement can be reduced, driving a 100GB requirement down to 50GB or 25GB. It may be difficult at first, but this must be built into your production model. Only a small number of people are needed to shrink the small number of very large jobs that typically consume between 20 and 50 percent of a render farm, and doing so frees up render farm capacity immensely.
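
The mechanics of that triage can be simple. Here's a sketch of the kind of report a task force might run against farm job records; the job fields, sample data and thresholds are assumptions, and a real pipeline would pull these numbers from its scheduler:

```python
# Flag the handful of very large jobs that dominate a render farm.
# Job fields, sample data and the 100GB threshold are illustrative;
# a real pipeline would pull these records from its farm scheduler.

from dataclasses import dataclass

@dataclass
class RenderJob:
    name: str
    peak_ram_gb: float
    core_hours: float

RAM_FLAG_GB = 100  # jobs at or above this get a task-force review

def flag_heavy_jobs(jobs: list[RenderJob]) -> list[RenderJob]:
    """Return large-RAM jobs, biggest core-hour consumers first."""
    heavy = [j for j in jobs if j.peak_ram_gb >= RAM_FLAG_GB]
    return sorted(heavy, key=lambda j: j.core_hours, reverse=True)

# Two huge jobs plus fifty ordinary ones (all made-up numbers).
jobs = [RenderJob("seq010_explosion", 120, 9_000),
        RenderJob("seq030_spacebattle", 160, 14_000)]
jobs += [RenderJob(f"shot_{i:03d}", 20, 700) for i in range(50)]

heavy = flag_heavy_jobs(jobs)
share = sum(j.core_hours for j in heavy) / sum(j.core_hours for j in jobs)
print(f"{len(heavy)} heavy jobs consume {share:.0%} of the farm")  # ~40%
```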

Deviating from a standard approach can certainly be challenging, but the payoff is worth the effort. Taking a few small steps to implement updated workflows can make a world of difference, and help you avoid overtaxing your budget later.

Learn more about CDW’s solutions for the Media & Entertainment industry.

Watch this webinar to learn about Dell EMC PowerScale for Media and Entertainment workflows.

About the author: Joe D’Amato is senior solutions architect for media and entertainment at CDW Canada, where he analyzes and reports on industry trends, production needs, new service offerings and market opportunities, while also building relationships and opportunities in North America. Prior to joining CDW, Joe spent more than 20 years providing technical expertise for visual effects and animation companies, and the last 12 years working exclusively in the areas of render farms and storage for producing cinematic special effects. His expertise includes rendering, storage and other infrastructure issues.