How GPU Rendering Is Changing VFX Practices in M&E

Author: Joe D’Amato, Senior Solutions Architect, CDW Canada

In a recent blog post, I detailed how leveraging the cloud is the hottest discussion topic among studio executives across the media and entertainment (M&E) industry today. The next most pressing point of discussion is the increased utilization of GPU rendering, which, quite simply, has changed the game and even caused a bit of disruption among leading industry players.

In this article, we’ll take a look at the history of GPU rendering, how it’s being leveraged, where the technology is headed and some tips for bringing it into your workflows.

The Redshift revolution

GPU rendering has taken the M&E industry by storm within the last few years, and this trend can be traced largely to one company: Redshift, maker of GPU-accelerated 3D rendering software.

The ‘Redshift revolution’ began at a time when established renderers, like Arnold and RenderMan, were constrained by the need to make their GPU and CPU renders look identical so the two could be interwoven seamlessly into final frames. Redshift carried no such legacy constraint and was the first to focus solely on GPU rendering. The strategy proved effective: running on NVIDIA cards, Redshift was the first GPU renderer to get into the studios and eventually came to dominate the market.

For those looking to bring GPU rendering into their workflows, two approaches have proven effective.

Solution 1: Leveraging idle resources for GPU rendering

To understand the benefits of GPU rendering, it’s important to discuss its capabilities and limitations. One current limitation is image quality, which isn’t yet as high as that of a CPU render. However, if you’re doing lower-end work for a commercial that doesn’t require the image quality of a high-profile movie, you can use Redshift for the entire job, or use it for early-stage rendering. Those renders don’t have to look perfect; they let the director quickly see how the animation is going to look.

Previously, this rendering would be completed on a CPU farm, which is a finite resource. Redshift instead makes it possible to render on idle resources, such as unused desktop computers during nighttime hours. Every studio has a bevy of workstation video cards sitting idle for 12 hours a night; point Redshift at them and you have a GPU farm of extra render resources – and the renders are fast. A 20-minute CPU render often takes 20 to 40 seconds in Redshift.
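To put rough numbers on that, here’s a minimal back-of-the-envelope sketch using the figures above (12 idle hours a night, 20 to 40 seconds per Redshift frame); the desktop count is a hypothetical studio size for illustration only:

```python
# Back-of-the-envelope estimate of overnight GPU farm capacity.
# Figures from the article: ~12 idle hours per night, 20-40 seconds
# per Redshift frame. The desktop count is hypothetical.

IDLE_HOURS_PER_NIGHT = 12
SECONDS_PER_FRAME = 30       # midpoint of the 20-40 second range
NUM_IDLE_DESKTOPS = 100      # hypothetical studio size

idle_seconds = IDLE_HOURS_PER_NIGHT * 3600
frames_per_desktop = idle_seconds // SECONDS_PER_FRAME
total_frames = frames_per_desktop * NUM_IDLE_DESKTOPS

print(f"{frames_per_desktop} frames per desktop per night")    # 1440
print(f"{total_frames} frames across the farm per night")      # 144000

# The same frames at the article's 20-minute CPU render time:
cpu_hours = total_frames * 20 / 60
print(f"equivalent to {cpu_hours:.0f} CPU-hours of rendering")  # 48000
```

Even with conservative assumptions, the idle desktop fleet represents more overnight capacity than most studios could justify buying as dedicated render hardware.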

This is especially effective for pre-delivery work on animation, character development and other tasks that would otherwise have to be carved out of the CPU farm. Offloading that work reduces the load on the CPU farm and frees up capacity for everyone else. Most notably, it taps resources that were effectively idle before: all of the studio’s desktop video cards become part of the render farm at night.

This is the most common use case we’ve seen. Studios already owned the desktops and video cards, and a little research and development showed that getting Redshift renders running on them required little investment: a render farm was effectively already sitting in the studio, underneath all the desks. The catch studios discovered is that some iteration has to happen during the day, when artists are using those desktop PCs and GPUs are scarce. Fortunately, a few Dell servers with NVIDIA cards are enough to let daytime iterations take place so everyone can run test frames. At night, the full frame ranges can swarm out across all the desktops, taking advantage of all that GPU rendering.
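To make the day/night split concrete, here’s a minimal sketch of the kind of pool-routing logic a submission script might apply. The pool names, office-hours window and job structure are all hypothetical; commercial farm managers expose similar pool concepts through their own Python APIs, which you’d use rather than rolling your own:

```python
"""Route render jobs to the right GPU pool by time of day (a sketch).

Hypothetical pools: a small set of dedicated GPU servers that is always
available for daytime test frames, and the fleet of artist desktops
that is only safe to use outside office hours.
"""
from datetime import datetime, time
from typing import Optional

OFFICE_HOURS = (time(8, 0), time(19, 0))   # hypothetical studio schedule

DEDICATED_GPU_POOL = "gpu-servers"         # a few servers with NVIDIA cards
DESKTOP_GPU_POOL = "artist-desktops"       # hundreds of workstation GPUs


def pick_pool(now: Optional[datetime] = None) -> str:
    """Dedicated servers during office hours, desktop fleet at night."""
    now = now or datetime.now()
    start, end = OFFICE_HOURS
    return DEDICATED_GPU_POOL if start <= now.time() <= end else DESKTOP_GPU_POOL


def submit(job_name: str, frame_range: str) -> dict:
    """Build a submission record for the farm manager (shape is hypothetical)."""
    return {"name": job_name, "frames": frame_range, "pool": pick_pool()}


if __name__ == "__main__":
    # Daytime: a handful of test frames lands on the dedicated servers.
    # Nighttime: the full frame range swarms across the desktops.
    print(submit("shot_042_lighting", "1-1440"))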

On most days, every studio also has at least a half dozen machines idle due to workplace absences that can be added to the farm. Between those machines and a few dedicated GPUs, a studio can have a daytime farm for iterating test frames and hundreds of machines at night for full frame-range renders. That strategy of offloading to the GPU to keep the CPU farm free for final renders has been the most game-changing application of Redshift.

Solution 2: Leveraging GPU rendering for final frames

The second use case for GPU rendering is its growing adoption for producing final frames in cinematics and TV work. As GPU output quality continues to improve, people are learning how to use GPU rendering for final frames. There are already examples of streaming cinema finished on Redshift, and it is beginning to work its way into higher-profile movies. Companies that specialize in CPU rendering likely see this as a threat: they must now compete with Redshift’s speed and its ability to closely resemble CPU renders at a fraction of the cost, which is a real problem for vendors that work with, and alternate between, CPU and GPU renders.

We’ve already begun to see an increase in GPU rendering for final frames. Studios commissioning work from VFX houses are hearing about the savings Redshift allows, such as cutting the time to render a single frame from minutes to seconds. Because of those savings, studios are beginning to work with VFX vendors to make sure the cinematic look the director is seeking matches what Redshift can do. For example, Redshift’s image-quality limitations could mean the chrome in a movie won’t exactly capture the appearance the director wants. But if the client doesn’t mind capturing 85 percent of the desired image quality at a fraction of the cost, the client will undoubtedly consider this option to conserve budget.

Avoiding the pitfalls of GPU rendering adoption

While GPU rendering offers undeniable benefits, its implementation has not been without setbacks. If you’re adopting Redshift as your GPU solution, it will typically run alongside your CPU renderer and the other tools that feed into it, so make sure your R&D team is deeply involved in the integration. Redshift is not a plug-and-play solution like RenderMan or other traditional renderers that have been built to work with each other. Redshift is unique, and studios have had challenges weaving it into their pipelines.

For those considering Redshift, it’s critical to conduct R&D and testing early on. Expect it to be problematic in ways you did not foresee, and call out challenges as they arise. Don’t try to implement three other new technologies at the same time you deploy Redshift; Redshift alone is enough for one show.

For those planning to use the new Arnold or RenderMan GPU renderers, make sure to test the time savings, because they may not match Redshift’s. Because the GPU output must match the CPU output, there is a price to be paid in speed: the GPU render isn’t going to be 100 times faster than the CPU render. It will still be considerably faster, but benchmark early so your expectations are set before you plan around super-fast GPU rendering.
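A simple way to benchmark is to time the same handful of representative frames through both renderers. The sketch below assumes each renderer can be driven from the command line; the commands shown are placeholders, not real CLI invocations, so substitute your renderer’s actual executable and flags:

```python
"""Benchmark average per-frame render time for two renderers (a sketch).

The command lines below are placeholders -- substitute the real CPU and
GPU render invocations for your pipeline before running this.
"""
import subprocess
import time


def time_frames(cmd_template: list, frames: range) -> float:
    """Render each frame with the given command; return mean seconds per frame."""
    total = 0.0
    for f in frames:
        cmd = [arg.format(frame=f) for arg in cmd_template]
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        total += time.perf_counter() - start
    return total / len(frames)


if __name__ == "__main__":
    frames = range(1, 6)  # a few representative frames, not the whole shot
    # Placeholder commands -- replace with your renderer's actual CLI.
    cpu_cmd = ["my_cpu_render", "--frame", "{frame}", "shot.scene"]
    gpu_cmd = ["my_gpu_render", "--frame", "{frame}", "shot.scene"]

    cpu_avg = time_frames(cpu_cmd, frames)
    gpu_avg = time_frames(gpu_cmd, frames)
    print(f"CPU: {cpu_avg:.1f} s/frame, GPU: {gpu_avg:.1f} s/frame "
          f"({cpu_avg / gpu_avg:.1f}x speedup)")
```

Pick frames that exercise the shot’s heaviest shading and lighting; an easy frame will flatter the GPU numbers and skew the comparison.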

Again, it’s critical to test early, prior to implementation, and to use desktops as much as possible to drive down cost. Brand-new GPU render nodes are costly per node, so use your desktop farm to its maximum potential.

The benefits for studios – and how CDW can help

The ability to trade some of the desired cinematic look for cost savings is getting baked into the client-vendor discussion. CDW is now deploying complete GPU rendering solutions that bring together GPU servers and high-performance Dell EMC PowerScale NAS – either purchased or rented – for our clients. These servers are often CPU render boxes with GPUs added in, which we offer as a best-of-breed solution that goes beyond the limitations of a dedicated device. These render machines can run both GPU and CPU renders, and the package includes a couple of high-core-count Intel Cascade Lake machines and multiple NVIDIA cards per box.

Another solution being explored is a dedicated GPU render node using NVLink to pool VRAM across multiple GPUs for very large scenes or frames. Only a limited number of shops have seen success with the large 10-card arrays, as this demands more pipeline sophistication than a standard Redshift deployment. Such an application depends on the team resources you can devote to DevOps and your pipeline.
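If you’re evaluating whether a multi-GPU node can pool VRAM this way, NVIDIA’s NVML library can report per-GPU memory and NVLink status. Here’s a minimal sketch using the pynvml Python bindings; verify the call names against your installed version before relying on it:

```python
"""Report per-GPU VRAM and NVLink status via NVML (a sketch).

Uses the pynvml bindings (pip install nvidia-ml-py). Call names follow
the bindings as I understand them -- verify against your installed version.
"""
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 2**30:.0f} GiB VRAM")

        # Count active NVLink connections; GPUs without NVLink raise here.
        active = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active += 1
            except pynvml.NVMLError:
                break  # link index out of range or NVLink unsupported
        print(f"  active NVLink links: {active}")
finally:
    pynvml.nvmlShutdown()
```

A report like this is a quick sanity check before sizing scenes against pooled VRAM, since a card without active links will be limited to its own memory.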

GPU rendering continues to gain momentum within the M&E industry. It’s driving significant change and continues to be a critical point of discussion among film studios. Only time will tell how sophisticated these rendering solutions will ultimately become and what payoff they will deliver to studios.

Learn more about CDW’s solutions for the Media and Entertainment industry and Dell EMC PowerScale for Media and Entertainment workflows.

About the author: Joe D’Amato is senior solutions architect for media and entertainment at CDW Canada, where he analyzes and reports on industry trends, production needs, new service offerings and market opportunities, while also building relationships and opportunities in North America. Prior to joining CDW, Joe spent more than 20 years providing technical expertise for visual effects and animation companies, with the last 12 years focused exclusively on render farms and storage for producing cinematic special effects. His expertise includes rendering, storage and other infrastructure issues.