Research Blog – Rendering for Compositing

This research blog can’t really be applied to the work I did for my specialization, as I did the research after finishing my project. I did, however, have a really good resource while rendering, as one of my facilitators talked me through the process in detail and gave good pointers. Although I have rendered for compositing before, I had only done it for still images. It is amazing how much control you can attain in the compositing stage if you set up your renders correctly.


Render passes, or elements, are the raw parts of an image created by a render engine that ultimately comprise the final output, and this data can be saved. In a compositing program these passes can be used to adjust the final image without having to re-render. Taking advantage of these elements can also help create effects that are easier to achieve in the compositing program than within the 3D program.
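As a minimal sketch of this idea, the snippet below rebuilds a final image from additive passes and tweaks one of them in "post". The pass values are hypothetical constants standing in for data that would really be loaded from rendered files; the 1.5× specular boost is just an illustrative adjustment.

```python
import numpy as np

# Hypothetical pass data: in practice these arrays would be loaded
# from the renderer's saved elements (e.g. EXR files).
h, w = 4, 4
diffuse    = np.full((h, w, 3), 0.30, dtype=np.float32)
specular   = np.full((h, w, 3), 0.10, dtype=np.float32)
reflection = np.full((h, w, 3), 0.05, dtype=np.float32)

# Many engines split the beauty render into additive passes, so the
# final image can be rebuilt (and re-balanced) with a simple sum --
# here the speculars are boosted without any re-render.
beauty = diffuse + 1.5 * specular + reflection
```

Because the recombination is only arithmetic, any per-pass grade is effectively free compared to re-rendering the scene.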


Another iteration on this is to render separate elements of the 3D scene completely independently, each with their own lighting and shadow information. This allows missing elements to be added without re-rendering the entire scene, and lets each element be controlled with different values; this way creating a mask isn’t necessary and the image behind remains intact.
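Stacking separately rendered elements typically relies on the standard premultiplied "over" operation, since each element carries its own alpha and no hand-made mask is needed. A minimal sketch, with hypothetical single-pixel layers:

```python
import numpy as np

def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Standard premultiplied 'over' composite: fg stacked on top of bg."""
    out_rgb = fg_rgb + bg_rgb * (1.0 - fg_a)
    out_a = fg_a + bg_a * (1.0 - fg_a)
    return out_rgb, out_a

# Hypothetical elements rendered separately with their own alpha
# (premultiplied colour), e.g. a background plate and a foreground prop.
bg_rgb, bg_a = np.array([0.2, 0.2, 0.2]), np.array([1.0])
fg_rgb, fg_a = np.array([0.5, 0.0, 0.0]), np.array([0.6])

rgb, a = over(fg_rgb, fg_a, bg_rgb, bg_a)
```

A new element can be slotted between the two simply by applying `over` twice, which is exactly the "place elements in between" workflow without touching the original renders.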

Besides the elements that comprise the final image, other elements can be added to provide extra data for compositing. These elements contain data that can be read by the compositing software to create more accurate effects such as depth of field or motion blur. The more elements you have the better, as it is preferable to have unused elements than to be missing some and have to re-render to obtain them. This is even more encouraged because adding new elements increases render times by only a very small amount.



Z-depth is an added element consisting of a black-and-white image, with white indicating the objects closest to the camera and black the furthest. You can use this data to create a depth of field effect or a background fog in compositing that is accurate to the 3D scene. Although the effect produces nice results, it isn’t as accurate as calculating depth of field at render time, but this method allows for further control in post. Another useful pass is the velocity pass, used to create accurate motion blur without the crazy render times of doing it as an effect during the render. It is a colour map that indicates the areas that should be more affected by motion blur.
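The fog use of a Z-depth pass can be sketched as a simple distance-weighted blend. The beauty and depth values below are hypothetical placeholders for real passes; the convention matches the one described above (white/1.0 = near, black/0.0 = far):

```python
import numpy as np

# Hypothetical beauty and Z-depth passes (real data would come from
# the renderer); depth convention: 1.0 = nearest, 0.0 = furthest.
beauty = np.full((2, 2, 3), 0.6, dtype=np.float32)
zdepth = np.array([[1.0, 0.8],
                   [0.3, 0.0]], dtype=np.float32)

fog_colour = np.array([0.7, 0.75, 0.8], dtype=np.float32)

# Fog density grows with distance, so invert the depth pass,
# then blend each pixel toward the fog colour by that density.
density = 1.0 - zdepth
fogged = beauty * (1.0 - density)[..., None] + fog_colour * density[..., None]
```

Because the blend weight comes straight from the scene's depth, the fog falls off exactly where the 3D geometry recedes, which is what makes the effect "accurate to the 3D scene".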


Object and material ID passes are two of the most useful elements for compositing. They work in exactly the same way, except one is applied to separate objects and the other to separate materials. These maps are created by manually assigning numbers to either objects or faces, which produces a map with a separate colour for each object or material. The uses of these maps are very flexible: for example, creating separate layers for foreground and background elements so that new elements can be placed in between. An individual ID can also be masked out so that its look can be changed, such as altering its colour or reflections without affecting the rest of the image. This works similarly to the matte element, which isolates chosen objects as a black-and-white mask.
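Pulling a mask from such a pass amounts to selecting the pixels whose ID colour matches the object you want. A minimal sketch with a hypothetical 2×2 ID pass (red, green, and blue standing in for three assigned IDs):

```python
import numpy as np

# Hypothetical object-ID pass: each object was assigned a flat colour.
id_pass = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                    [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]], dtype=np.float32)

target = np.array([0.0, 1.0, 0.0])  # the ID colour of the object to isolate

# The mask is simply "where does the ID colour match", within a tolerance.
mask = np.all(np.isclose(id_pass, target), axis=-1).astype(np.float32)

# The mask can then drive an adjustment on that object alone,
# e.g. brightening it without touching the rest of the image.
beauty = np.full((2, 2, 3), 0.5, dtype=np.float32)
graded = beauty * (1.0 + 0.4 * mask[..., None])
```

The same matching step works for a material ID pass; only the source of the colours differs.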


Deep compositing involves images that contain data for each pixel at multiple depths along the Z axis. Firstly, this allows for far more accurate and reliable images involving any kind of volumetric effect. A big benefit of this workflow is that the data allows separate elements in a comp to be placed correctly without the need for holding mattes, saving the need to re-render. Effects such as fog and depth of field are far more accurate with a deep workflow. It is unlikely that I will be using deep data for my own projects, as it is overkill in many cases and I still do not fully understand how it can be applied, but I still have it on my radar.
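A heavily simplified sketch of the idea, assuming a hypothetical deep pixel stored as a list of (depth, premultiplied colour, alpha) samples: because every sample keeps its own depth, a new element can be merged into the list at the right distance, and the pixel is only flattened to a single colour at the very end.

```python
# Hypothetical deep pixel: several samples per pixel, each with its
# own depth, premultiplied colour, and alpha (a toy model of deep data).
samples = [
    (12.0, (0.10, 0.10, 0.10), 0.5),  # far, semi-transparent fog slice
    (4.0,  (0.40, 0.00, 0.00), 0.8),  # near, mostly opaque object
]

def flatten(deep_samples):
    """Collapse deep samples to one colour by compositing front to back."""
    rgb, a = [0.0, 0.0, 0.0], 0.0
    for _depth, s_rgb, s_a in sorted(deep_samples, key=lambda s: s[0]):
        # Standard 'over': each farther sample shows through what's in front.
        rgb = [c + s * (1.0 - a) for c, s in zip(rgb, s_rgb)]
        a = a + s_a * (1.0 - a)
    return rgb, a

rgb, a = flatten(samples)
```

Real deep data (e.g. in deep EXR files) is far richer than this, but the principle is the same: depth ordering lives in the data, so no holdout mattes are needed to interleave elements.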



Certain file formats maintain the uncompressed data of each pixel after the render so that it can be extracted in a program such as NUKE, giving the compositor the ability to reconstruct the beauty pass in the most accurate way. This data often has a high dynamic range, meaning pixels can contain white values higher than one, which gives the compositor more colour and value data to create higher quality images. The bits per channel are an important setting when saving and choosing the file format; 8 bits per channel often doesn’t contain enough data and can therefore produce artifacts. I learnt this the hard way, as all of my renders had random artifacts that made the image look aliased. A good setting is 16-bit floating point, while 32-bit is often overkill.
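The difference between the two storage choices can be shown directly. Using a hypothetical over-bright pixel value, 8-bit integer storage clips everything above 1.0 and quantises the rest, while a 16-bit half float keeps the extra range:

```python
import numpy as np

# A hypothetical HDR pixel value brighter than 1.0, as a renderer produces.
hdr = 1.75

# 8-bit integer storage: clip to [0, 1], then quantise to 256 levels.
as_8bit = round(min(hdr, 1.0) * 255) / 255

# 16-bit half-float storage keeps values above 1.0.
as_half = float(np.float16(hdr))
```

The clipped 8-bit copy has permanently lost the over-bright information, which is exactly the data a compositor needs for clean grades and glow/defocus effects; the half-float copy preserves it at half the file size of full 32-bit float.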



