Research Blog – Rendering
In this blog post I will be talking about rendering, focusing on Arnold for Maya, although most of the theory is relevant to any modern physically based render engine. I will go into some of the basics of how rendering and render quality work, as well as addressing modern practices.
I have always been interested in rendering, mainly because it helps me display the hard work I have put into my models. Although lighting is closely related to rendering, I will be addressing it in a separate post, so I recommend you read that one as well. I will also discuss materials, and how they work in Arnold, in one final blog post.
Most modern render engines are based on Monte Carlo ray tracing to create the final image. Essentially, millions of rays are shot from the camera into the scene, and each ray usually makes several bounces to allow for the simulation of bounced lighting. These rays eventually find their way back to the relevant light source, and this determines the colour of each pixel. How these rays are calculated depends on the render engine you are using and on each user's settings. There are more complicated calculations that help make everything more efficient, but the basics are mostly the same across the industry.
Ray tracing attempts to simulate the real-world behaviour of light bouncing off different surfaces until each light ray runs out of energy. As many light bounces and calculations would occur outside of the camera's view, this isn't an efficient method, so instead the rays are cast directly from the camera, essentially an inverse of what happens in real life, while still delivering accurate information for everything in view of the camera for that frame.
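The core idea above, a camera ray bouncing until its energy runs out, can be sketched as a toy loop. This is not a real renderer: the single `albedo` value, the energy cut-off and the bounce cap are all made-up parameters for illustration.

```python
def trace_from_camera(albedo=0.5, min_energy=0.01, max_bounces=16):
    """Follow one camera ray until it runs out of energy.

    This is the inverse of real life: instead of light leaving a lamp
    and reaching the camera, the ray starts at the camera and loses a
    fraction of its energy at every surface it bounces off.
    """
    energy = 1.0
    bounces = 0
    while energy > min_energy and bounces < max_bounces:
        energy *= albedo  # each bounce absorbs part of the ray's energy
        bounces += 1
    return bounces

# With a 50% albedo the energy halves per bounce, so the ray dies quickly;
# on a bright 90% albedo surface it survives until the bounce cap instead.
dark_bounces = trace_from_camera(albedo=0.5)
bright_bounces = trace_from_camera(albedo=0.9)
```

This also shows why render engines expose a maximum bounce depth: on bright surfaces a ray could otherwise keep bouncing for a very long time while contributing almost nothing to the pixel.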
Now that we understand what ray tracing is, I can explain sampling. The sample count determines how many rays are shot into the scene for each pixel. The computer then averages the results of all these rays and returns a single pixel colour influenced by every sample for that pixel. Shooting more rays per pixel returns a more accurate image (gradually removing noise), but more rays also mean higher render times. The main sampling value is the one for anti-aliasing, with a higher number returning more accurate results, as sometimes two objects will fall within a single pixel. In Arnold the camera (anti-aliasing) samples not only affect the overall quality of the image, they also multiply the number of rays that are shot for every other ray type (more on that in a second), so if you increase the camera samples you can often reduce the sample counts for the other ray types.
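To make that multiplication concrete, here is a small calculator for the ray budget per pixel. It follows my understanding of how Arnold counts rays (a sample setting of N fires N² rays, and the camera rays multiply every secondary ray type); the default values of 2 for the secondary types are just example numbers.

```python
def rays_per_pixel(aa, diffuse=2, specular=2, transmission=2):
    """Estimate the ray budget for one pixel, Arnold-style.

    A sample value of N means N * N rays, and each camera (AA) ray
    spawns its own full set of secondary rays, so the camera samples
    multiply the cost of every other ray type.
    """
    camera_rays = aa * aa
    secondary_per_camera_ray = diffuse**2 + specular**2 + transmission**2
    return camera_rays, camera_rays * secondary_per_camera_ray

# Doubling AA from 3 to 6 quadruples the camera rays, and with them
# the total number of secondary rays as well.
camera, total = rays_per_pixel(aa=3)
camera_hi, total_hi = rays_per_pixel(aa=6)
```

This is why blindly raising the camera samples is an expensive way to clean up noise: every extra camera ray drags a whole set of diffuse, specular and transmission rays along with it.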
Not all rays are the same: some rays are shot from the camera to detect a specific type of light interaction, such as reflectivity or indirect illumination. In Arnold the number of rays for each interaction can be controlled by the user, so reducing a ray type that isn't needed in a scene is a good way to cut render times, as it removes rays that don't affect the look of the final image. An example is a scene with no glass or refractive materials; in that case the rays that trace refractive bounces are redundant and will only make the render slower with no improvement in quality.
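In Maya these per-type sample counts live on the Arnold render options node, so the glass-free-scene optimisation above can be scripted. Note the exact attribute name for refraction samples varies between MtoA versions (older releases call it `GIRefractionSamples`), so treat this as a sketch to adapt rather than a copy-paste recipe.

```python
# Run inside Maya's script editor with the MtoA plug-in loaded.
import maya.cmds as cmds

# The scene has no glass or refractive materials, so transmission
# (refraction) rays are pure overhead: zero them out to save render time.
# On older MtoA versions the attribute is GIRefractionSamples instead.
cmds.setAttr("defaultArnoldRenderOptions.GITransmissionSamples", 0)

# The other ray types stay at whatever quality the scene actually needs.
print(cmds.getAttr("defaultArnoldRenderOptions.GIDiffuseSamples"))
```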
Have you ever heard of high dynamic range images? These are images that contain more light information than the standard images found on computers. Most images on the internet exist in the sRGB colour space, where a 2.2 gamma correction is applied so they can be properly viewed on screens. They work on a screen, but they aren't realistic in how little light information they contain, and they often have inaccurate colour. HDRI images are used to light scenes because of all the light information they hold, and they perfectly illustrate how a linear workflow works: these images exist in a linear space without gamma correction, so they look odd when displayed directly, but they are accurate. The digital space where all geometry and lights exist is not gamma corrected either, as the renderer has to calculate lighting precisely. So everything should be in linear space; the danger is inputs (textures) that arrive in the wrong colour space and create inaccurate images. In the images below, the one on the left has its inputs correct and its viewing corrected, while the one on the right is inaccurate, as its inputs are still in the sRGB colour space and return a less vibrant image.
To ensure an image is physically accurate we have to make sure the lights and all inputs in our scene are as close to reality as possible. Lights and materials are often correct, but inputs such as textures often have gamma correction baked in and create an inaccurate final output. To compensate for this, those images are linearized by applying the reverse of the gamma correction curve.
So a linear workflow can be viewed as two steps of colour management.
The input: we ensure that all elements that affect the look of the scene are linear. This includes lights, images used in lights, materials and textures. Textures are interesting, as most greyscale data maps and normal maps should be left untouched, since they store raw values rather than colour. On the other hand, any texture that contains colour information should be linearized. This allows everything in the scene to exist in a linear environment, with all light calculations being physically accurate.
The output: while it's nice to have a physically accurate scene, we have to be able to display our results on a screen, so the 2.2 gamma is reapplied to the final display. The final output file is often still linear so that compositing can be done easily, but the display is corrected.
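The two steps above boil down to a pair of power curves. The sketch below uses the simple 2.2 power law from this post; the real sRGB transfer function is piecewise and slightly different, but the idea is the same.

```python
def srgb_to_linear(c, gamma=2.2):
    """Input step: undo the ~2.2 gamma baked into a colour texture."""
    return c ** gamma

def linear_to_display(c, gamma=2.2):
    """Output step: reapply the gamma so the image looks right on screen."""
    return c ** (1.0 / gamma)

# Mid-grey in an sRGB texture (0.5) represents much less actual light
# in linear space, which is the value the renderer should compute with.
linear = srgb_to_linear(0.5)          # roughly 0.218
display = linear_to_display(linear)   # round-trips back to 0.5
```

Feeding the renderer the 0.5 instead of the 0.218 is exactly the mistake the right-hand comparison image shows: the maths is done on display values instead of light values, and the result drifts away from reality.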
Autodesk. (2016, July 28). Rendering the future: A vision for Arnold. Retrieved from https://www.youtube.com/watch?v=35morxCJOIQ
Pixar. (1986). Linear Workflow in RMS. Retrieved August 10, 2016, from https://renderman.pixar.com/view/LinearWorkflow
Pixar. (1986). Sampling. Retrieved August 10, 2016, from RenderMan, https://renderman.pixar.com/resources/current/RenderMan/risSampling.html
Ray tracing (graphics). (2016). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Ray_tracing_(graphics)
Solid Angle. (2009). Gamma Correction and Linear Workflow. Retrieved August 30, 2016, from Solid Angle Support, https://support.solidangle.com/display/AFMUG/Gamma+Correction+and+Linear+Workflow
Solid Angle. (2009). Samples. Retrieved August 10, 2016, from Solid Angle Support, https://support.solidangle.com/display/AFMUG/Samples