Research Blog – Materials

Hi, in this blog I will be discussing materials and how they work within the Arnold render engine. This post builds on the research I did for my rendering blog, so some of the information here will make more sense after reading that one.

Materials, or shaders, are sets of surface parameters that determine how lighting interacts with geometry: how much of a light ray is reflected, refracted or absorbed, and which interactions occur, such as refraction or pure reflection. Materials are often controlled by values and maps that contain alpha, colour or greyscale information.

Arnold is a physically based render engine, so its shaders and lights have simple rollouts whose values correspond to real-life lighting situations and surface behaviours.

Standard Shader

Image from Solid Angle

The standard shader is a multi-purpose shader that will be used in most cases when rendering surfaces in Arnold, as it can be used to make anything from plastic to metal to skin. As with most PBR shaders, its parameters are quite simple but not necessarily straightforward; note that it uses a specular workflow, not a metallic one.

Besides the major rollouts I will discuss, there is also access to bump mapping, sub-surface scattering and emission.


The diffuse rollout controls the colour and roughness of the base colour of the material, which is a fairly simple part of the shader. The major note concerns keeping the material physically accurate: the weight of this parameter and the weight of the specular parameter shouldn't exceed a combined value of 1, as a material shouldn't produce more light than it receives; otherwise the calculation is physically inaccurate.
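The energy-conservation rule described above can be sketched as a quick check. This is a minimal illustration of the idea, not Arnold's actual validation code, and the function and parameter names are my own:

```python
# Sketch: the combined diffuse and specular weights of a physically
# plausible material should not exceed 1, so it never reflects more
# light than it receives. (Hypothetical helper, not an Arnold API.)

def is_energy_conserving(diffuse_weight, specular_weight):
    """Return True if the material obeys the diffuse + specular <= 1 rule."""
    return diffuse_weight + specular_weight <= 1.0

print(is_energy_conserving(0.7, 0.2))  # True  -> physically plausible
print(is_energy_conserving(0.8, 0.5))  # False -> would emit more than it receives
```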

Specular Reflection and Reflection

Image from Solid Angle

The reflections for this material are controlled by two separate rollouts (specular and reflection) that seemingly do the exact same thing. The biggest difference is that specular has far more parameters and can reflect light sources.

Specular indicates how reflective the surface is; its colour and strength can be controlled. This is also where the roughness of the highlights and reflections is controlled, which is why specular should be used for blurry reflections. The last parameter is Fresnel, which indicates how close to the edge the Fresnel effect takes place and whether it takes place at all.

The reflection rollout has a limited number of parameters and is advised only for mirror-like surfaces; it does not reflect light sources, so it is usually recommended to use it in combination with the specular parameters.


Image from Solid Angle


Image from Solid Angle

Refraction helps to create glass-like materials. It is highly dependent on normal direction and will often require extra ray depth to work correctly on more complex materials. It is recommended to turn off the opaque option on any object with a refractive material, as this allows internal refractions and a more accurate shadow.

The most important parameter is the IOR (Index of Refraction), which determines how much light bends as it travels through the material; the higher it is, the more distorted light will appear when passing through. There is also an option to make this value drive the Fresnel at normal incidence. Besides this you can control the colour and weight of the refraction. There is another way of creating colour for the material: transmittance, a colour whose effect depends on how thick the object is.
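As a rough illustration of what the IOR controls, Snell's law and the normal-incidence Fresnel reflectance can be computed directly. This is a sketch of the underlying physics, not Arnold code; 1.5 is a typical IOR for glass:

```python
import math

def refracted_angle(theta_in_deg, n1=1.0, n2=1.5):
    """Snell's law: n1*sin(t1) = n2*sin(t2). Returns the bent angle
    inside the material, or None on total internal reflection."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

def fresnel_at_normal(n1=1.0, n2=1.5):
    """Reflectance at 0 degrees -- the 'Fresnel at normal' the shader exposes."""
    return ((n1 - n2) / (n1 + n2)) ** 2

print(round(refracted_angle(45.0), 1))  # 28.1 -> the ray bends toward the normal
print(round(fresnel_at_normal(), 3))    # 0.04 -> glass reflects ~4% head-on
```

A higher IOR bends (and so distorts) the transmitted light more, which matches the behaviour the parameter exposes.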



Solid Angle. (2009). Diffuse. Retrieved August 10, 2016, from Solid Angle Support.

Solid Angle. (2009). Reflection. Retrieved August 10, 2016, from Solid Angle Support.

Solid Angle. (2009). SSS layers. Retrieved August 10, 2016, from Solid Angle Support.

Solid Angle. (2009). Standard. Retrieved August 10, 2016, from Solid Angle Support.

Solid Angle. Specular. Retrieved August 10, 2016, from Solid Angle Support.




Research Blog – Lighting

Lighting Basics

Point Light


Area Light

Mesh Light

Photometric Light





Next Trimester

In this blog post I will be talking about the areas of my craft I want to improve on and how to go about it. I specifically want to improve my hard surface model production, my character modelling and my shot creation (implementing matte paintings). I mostly plan to improve my own practice by building a larger visual library, collecting and viewing more images relevant to the fields I want to improve in. Another big aspect of improving is time: having the time to perfect pieces and to process new techniques and information. Finally, as always, viewing tutorials about techniques, workflows and new software is necessary when taking on new challenges.

Hard Surface Modelling


I think I have been pushing my hard surface techniques in the past few months and I am becoming very proficient with techniques such as subdivide-and-straighten and establishing curvature. I will possibly attempt to implement ZBrush into my process, as I will be taking a tutorial for my character modelling regardless. I will also continue to watch YouTube channels such as Arrimus 3D to improve on the smaller aspects of hard surface modelling.


Character Modelling


Character by Alessandro Baldasseroni

This is the element of CG that I am most passionate about; as a future career path I want to become a character artist for high-end cinematics. I have begun venturing into this by designing and modelling a handful of characters, but I want to strip it back: I want to make a simple, fairly realistic female character and attempt to make her work as well as possible for production. I will be learning about human anatomy and proportions and taking a step into the realm of 3D sculpting. I will start by watching tutorials on how to work with ZBrush and attempting to take life drawing classes. My effort to improve my hard surface modelling will also help me create characters with armour.


Complete Images/Shots

Sea Creature - creature concept by Carlos Huante, matte painting concept by Tuomas Korpi

Shot by Beat Reichenbanch

One of the biggest problems I have is producing finalized shots that look presentable, so I want to learn how to create more complete shots. I plan to do this by learning VFX techniques such as matte painting and by creating more efficient assets that only need to look good from a certain angle. I will be studying matte painting techniques and improving my compositing. I will start by watching matte painting tutorials and a complete course on NUKE during the break.



I will admit it is pushing it to focus on all of these, but they all go hand in hand: with better hard surface skills I will be able to make better characters, and I will be able to give my characters more context and a better showcase with more complete and pleasing images as the final output.

Research Blog – Rendering

In this blog I will be talking about rendering, focusing on Arnold for Maya, although most of the theory is relevant to any modern physically based render engine. I will go into some of the basics of how rendering and render quality work, as well as addressing modern practices.

I have always been interested in rendering, mainly because it helps me display the hard work I have put into my models. Although lighting is closely related to rendering, I will be addressing it in a separate post, so I recommend you read that one as well. I will also discuss materials, and materials in Arnold specifically, in one final blog post.


Most modern render engines are based on a Monte Carlo ray tracing algorithm to create the final image. Essentially, millions of rays are shot from the camera into the scene; these rays usually take several bounces, which allows bounce lighting to be simulated. Each ray eventually finds its way back to a relevant light source and informs the pixel's colour. How these rays are calculated depends on the render engine you are using and on each user's settings. There are more complicated calculations that help make everything more efficient, but the basics are mostly the same across the industry.


Ray tracing attempts to simulate the real-world behaviour of light bouncing off different surfaces until each light ray runs out of energy. As many light bounces and calculations would occur outside of the camera's view, this isn't an efficient method, so instead the rays are cast directly from the camera, essentially an inverse of what happens in real life, but delivering accurate information for all the elements in view of the camera for that frame.
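A toy sketch of this backward-tracing idea is below. It uses a deliberately simplified stand-in for a real scene (a ray either reaches a light or hits a grey wall and bounces), so it illustrates only the recursion and the Monte Carlo averaging, nothing like Arnold's internals:

```python
import random

# Toy backward tracer: rays start at the "camera" and bounce until they
# reach a light or run out of bounces. The fake scene: on each step a ray
# has a 50% chance of hitting a light (brightness 1.0), otherwise it hits
# a grey wall (albedo 0.5) and bounces again.

def trace(depth, max_depth=3, albedo=0.5, p_hit_light=0.5):
    if depth >= max_depth:
        return 0.0                      # ray ran out of bounces (no energy)
    if random.random() < p_hit_light:
        return 1.0                      # ray reached a light source
    return albedo * trace(depth + 1)    # bounce: attenuate and recurse

random.seed(0)
# Monte Carlo: average many rays per pixel to estimate the pixel's value
pixel = sum(trace(0) for _ in range(10000)) / 10000
print(round(pixel, 2))
```

More rays per pixel make this average converge (less noise), which is exactly the sampling trade-off discussed next.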


Now that we understand what ray tracing is, I can explain sampling. The sample count determines how many rays are shot into the scene for each pixel. The computer averages the results of these rays and returns a pixel influenced by all the samples for that pixel. More rays per pixel return a more accurate image (gradually removing noise), but also mean higher render times. The main sampling value is the one for anti-aliasing; a higher number returns more accurate results, as sometimes two objects fall within one pixel. In Arnold the camera (AA) samples not only affect the overall quality of the image but also multiply the number of rays shot for every other ray type (more on that in a second), so if you increase the camera samples you should often reduce the samples for the other ray types to keep render times under control.
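The multiplying effect of the camera samples can be sketched numerically. This follows the rule of thumb from the Solid Angle docs that rays per pixel for a given type is roughly AA² × (type samples)²; the function is my own illustration, not an Arnold API:

```python
# Sketch of Arnold's sample maths (assumed formula: AA^2 * samples^2
# rays per pixel for each secondary ray type).

def rays_per_pixel(aa, diffuse=2, specular=2, refraction=0):
    camera = aa ** 2
    secondary = {
        "diffuse": camera * diffuse ** 2,
        "specular": camera * specular ** 2,
        "refraction": camera * refraction ** 2,
    }
    return camera, secondary

camera, secondary = rays_per_pixel(aa=3)
print(camera)     # 9 camera rays per pixel
print(secondary)  # every secondary type scales with the camera samples
```

Note how setting refraction samples to 0 in a scene without glass removes those rays entirely, which is the optimization described below.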

Image from Solid Angle

Not all rays are the same; some rays are shot from the camera to detect a specific type of light interaction, such as specular reflection or indirect illumination. In Arnold the number of rays for each interaction can be controlled by the user, so reducing a ray type that isn't needed in a scene is a way of cutting render times, as it removes rays that don't affect the look of the final image. For example, in a scene with no glass or refractive materials, the rays that handle refractive bounces are redundant and only make the render slower without improving quality.



Linear Workflow

Have you ever heard of high dynamic range images? These are images that contain more light information than the standard images found on computers. Most images on the internet exist in an sRGB colour space, with a 2.2 gamma correction applied so they can be properly viewed on screens. They work on a screen, but the small amount of light information they hold isn't realistic and their colour is often inaccurate. HDRI images are used to light scenes because of all their light information, and they perfectly illustrate how a linear workflow works: they exist in a linear space without gamma correction, so they look odd when displayed directly, but they are accurate. The digital space where all geometry and lights exist is likewise not gamma corrected, as lighting has to be calculated precisely. So everything should be in linear space except for inputs (textures) that arrive in the wrong colour space, which would otherwise create inaccurate images. In the images below, the one on the left has its inputs linearized and its display corrected, while the one on the right is inaccurate because its inputs are still in sRGB colour space, returning a less vibrant image.

To ensure an image is physically accurate we have to make sure the lights and all inputs in our scene are as close to reality as possible. Lights and materials are often correct, but inputs such as textures often carry gamma correction and create an inaccurate final output. To compensate for this, images are linearized by applying the inverse of the gamma correction curve.

So a linear workflow can be viewed as two steps of colour management.



Image from Pixar

The input: we ensure that all elements that affect the look of the scene are linear. This includes lights, images used in lights, materials and textures. Textures are interesting: data maps such as greyscale masks and normal maps should not be linearized, as they already contain linear data, whereas any texture that contains colour information should be. This allows everything in the scene to exist in a linear environment, with all light calculations being physically accurate.

The output: while it's nice to have a physically accurate scene, we have to be able to display our results on a screen. To do this the 2.2 gamma is reapplied to the final display. The final output file is often still linear so that compositing can be done easily, but the display is corrected.
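The two colour-management steps can be sketched with the simple 2.2 gamma approximation the post describes (real sRGB uses a slightly different piecewise curve, so treat this as an approximation):

```python
# Sketch of a linear workflow round-trip with a plain 2.2 gamma curve.

GAMMA = 2.2

def to_linear(v):
    """Input step: linearize an sRGB colour texture value (0..1)."""
    return v ** GAMMA

def to_display(v):
    """Output step: re-apply gamma so the linear result reads correctly on screen."""
    return v ** (1.0 / GAMMA)

tex = 0.5                     # mid-grey as stored in an sRGB texture
lit = to_linear(tex) * 0.8    # lighting maths happens in linear space
print(round(to_display(lit), 3))  # display-ready value, roughly 0.45
```

Skipping the `to_linear` step (the right-hand image above) means the lighting maths runs on already-gamma-corrected values, which is why the result looks washed out and less vibrant.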


Autodesk. (2016, July 28). Rendering the future: A vision for Arnold. Retrieved from

Pixar. (1986). Linear Workflow in RMS. Retrieved August 10, 2016, from

Pixar. (1986). Sampling. Retrieved August 10, 2016, from Render Man,

Ray tracing (graphics). (2016). In Wikipedia. Retrieved from

Solid Angle. (2009). Gamma correction and linear Workflow. Retrieved August 30, 2016, from Solid Angle Support,

Solid Angle. (2009). Samples. Retrieved August 10, 2016, from Solid Angle Support,


Worldbuilders – Post Mortem

In this entry I will be analysing and reflecting upon the work I did for my Worldbuilders project and seeing how I can apply what I have learnt here to future projects.

Below is a near-final version of the complete deliverable; it still needs some work, but it is very close to what it should look like. It was published on YouTube and Vimeo.



To complete the project I had to hand in a series of deliverables that helped me consider the planning stages of the project more carefully, including developing its look and planning out specific shots.

Project Plan:

Concept Art:





Art Bible:

Shot Deconstruction:




Black Knight Character

The team worked and communicated really well. This was important as the team was the smallest one, but we were always on the same page and everyone had tasks to complete, whereas in other projects there are sometimes people who have no clue what is going on. I was the lead, so I was in charge of allocating tasks and keeping the project on track.

In this project we did plenty of testing, which allowed us to avoid problems down the pipeline and to know what was achievable and what wasn't. This is a process I would like to replicate in other projects to a further extent, specifically in identifying rendering solutions, as this was one major aspect we overlooked when testing.


Ciri Character (Body)

The animatic, like most of the pre-production, went extremely well. It was extremely successful at setting the timing, as we had a near-final cut and a strong pace for what would become the final deliverable.

On an individual basis I think I could have done better, as I was somewhat disconnected from the project and not very invested in it. That said, I think I pushed my modelling to another level in this project and really went out of my comfort zone.



Rendering was the biggest issue, as we didn't test rendering a scene of the size we had in Mental Ray. Some of the materials and textures increased render times exponentially, to the point where it was no longer viable to render out any of the shots in Mental Ray. This came down purely to a lack of testing and of full knowledge of how to optimize renders in Mental Ray. With this in mind I will test render engines further in future projects and ensure a few team members are familiar with the key aspects of the chosen engine and how to optimize it. Another step is to keep rendering out shots further down the production line to get a closer estimate of the final render times and rectify any issues early on.

Another element that went wrong was the environment, as there were gaps in the environment provided and no easy fix. My last resort was to remake the environment from scratch. This came from a lack of communication between me and the team member in charge of making the environment, and from no real testing and implementation of the scene before final render time. A solution is to implement the different elements of the final scene early on and test out the final shots to identify any geometry or shading issues.



Shot Planning

The design of specific shots was an excellent experience, as it allowed the team to establish a stronger visual identity and build shots that are more informed by the story. There was a need to create tension and excitement through the camera and cinematography, as most of the action took place in a still moment in time. I think the planning of each shot to create tension went extremely well; it was the execution that went wrong. The element of this process I will take forward is the careful planning of shots, with careful consideration of the pacing and flow of a scene. What I want to improve is to possibly simplify shots and execute them as well as possible, firstly by implementing more post-production processes and allowing time to refine.

Character Design

This was one of the most important processes, as there were three characters in the project and we had to ensure they fit into the world and were of a high quality. Each team member was in charge of designing one character. All of the concept art was fairly low in quality, as the whole team concentrated on the 3D side of things, and I can't speak much about the other team members' design processes. On a personal basis, my method was to go between 2D and 3D several times to reach the final design. I am growing comfortable with this workflow and will continue to use it. I think I could improve my designs by going between 2D and 3D a few more times and taking into consideration how different elements will work when the character is animated, such as clipping armour.

Environment Design

I was not heavily involved in this process, although I did provide reference images and a rough idea of what the final environment should look like. I think this process could be improved with stronger art direction; I plan on improving it by having more focused reference and concept art to illustrate details of the environment, as well as many different views of it.

Asset Creation

Related to the previous two processes, I think the character end of things went fairly well, but the environment was a bit underdeveloped, likely because those two processes were underdeveloped. In terms of technique and process it was all successful; it could be improved with better modelling skills and by implementing more advanced techniques and software.



Strong Work Ethic

In this project I had an extremely strong work ethic, putting in long hours, partly due to my obsession with making characters. Although I maintained this work ethic, in the long run it was a bit too much, leaving me burnt out by the end of the project. In future projects I want to maintain a more balanced work-to-free-time ratio, allowing me a clearer mind when I am actually working.

Positive Attitude

Although I was relaxed and positive during the project, I am disappointed that I did not focus on it enough creatively, as I was not invested in it. This is a problem I plan to solve by working on projects and roles I am more passionate about.

Effective Communication Skills

Within the team my communication skills were very effective, with everyone understanding their roles and tasks and mostly agreeing on important decisions. Where I need to improve is in communicating the state of the project to teachers and other key stakeholders, as this will be important in my career when communicating with clients.

Time Management Abilities

This was my weakest aspect; although I did put time into this project, too much of it went to modelling. In future projects I am planning on setting a hard cap on when modelling has to stop, allowing me to concentrate on equally important aspects of the production pipeline.

Problem Solving Skills

My problem-solving skills were put to the test at the end of this project, as Mental Ray was rendering too slowly for us to produce a final deliverable. We explored many options as a team, with the final decision being to switch to the ART renderer. I am extremely happy with my problem-solving skills in general, as I often find the solution to any technical problem. When it comes to creative problems, I think I have to do more research on how to communicate with audiences and create better images through composition and creative choices.

Acting as a Team Player

I think I underperformed here; although I did help my team members, I assigned them less work than I should have. In future projects I plan to spread the workload more evenly.

Self Confidence

I pushed my skills in modelling and character design in this project, and I was very comfortable with the other elements of 3D, but I think I could have made decisions faster and been more creative in the way I approached my shots and designs, as I kept everything fairly generic. In future projects I will try to push for a stronger visual identity by iterating more and having more confidence in my abilities.

Ability to Take and Learn from Criticism

This was a big part of the project, as feedback sessions were important in building the final deliverable, with major changes being made from feedback from teachers and peers. I think where we were lacking was in taking feedback about scope, as we didn't really take to heart that we had three characters. I want to improve my ability to analyze the scope of a project and its chances of success, and to let feedback help in that evaluation.

Working Well under Pressure

This was important, as it was a small team and there were many more responsibilities on each team member. I was successful in this aspect, as I maintained a level head through the whole project, but where it related to time management I did a pretty poor job. Similar to my plan for time management, I am going to set out a clear schedule for myself in future projects, with key milestones and key deliverables.


In conclusion, I am happy with how I performed in this project, even though I wasn't satisfied with the final result. It taught me a lot about my own practice, both in terms of hard practical skills and in how to work in a team.

Cross Disciplinary Project – Served Cold

In the past few weeks I have been part of a Studio 3 games project making a mobile game called Served Cold. The game consists of moving your character (an ice cube) through environmental puzzles so he can successfully reach the glass of coke. In this project I had to learn how to create simple characters and animations that are easy to read on a small screen. I also had to learn how to deliver animations and assets to the games students, including how to create a sprite sheet. Finally, I did this project with another animation student, and we had to work together to keep the look consistent.


My major task was to create the protagonist of the game, the little ice cube. My design was mostly informed by the design Che did for her character, as we wanted them to look similar: we used the same sized brushes and values, we both used gradients in some places, and we even used the exact same eyes. This process was back and forth, and with simple characters it was important to keep an eye on every element, as any mistakes would be easily noticeable. An example of our process: I decided my ice cube should have highlights on the top left, so Che added highlights to the top left of her character.


For the animation (if you can call it that) I simply had to add the expressions the games students asked for, which included sleeping, an open mouth and an extra-happy face. I was expecting to have to draw extra frames, but the games students assured me it was enough, and it turns out they look great in game. The only expression with an extra frame is the sleeping one, with one frame having the mouth open and the other having it closed, to make him look like he is snoring.



My second task was to create the environment, which could be done by creating a series of tile sets. For the wall we had to create three separate textures that would line up perfectly when tiled together; Che designed these, and I added the appropriate variations for my seasons, summer and winter. The game is divided into seasons, each representing a new section of the game. The same had to be done for the floor; as there were more of these tiles, I had to create more variations or otherwise it would look too plain and repetitive.

There are also other obstacles in the game that allow for new mechanics and challenges. I had to create a movable block (a cake) and an animated arrow to indicate that if the player goes over it they are pushed in the direction it points. These were easy to make; for the arrow I made a simple image and a multiply layer with an animated light moving past it, done in After Effects.



In retrospect I think this is probably the best engagement with the games students I have ever had, in part because the workload was minimal and easy to iterate on. Another area of success was how well Che and I lined up our art styles and what we were doing, constantly helping each other and giving feedback. The biggest thing I could improve on is helping implement assets in game, as sometimes assets don't look as good in game as you would expect; this includes adding extra effects and correctly placing items.

Here are some gifs of the game in action.

Shot Deconstruction – Witcher 2077

Hello, in this entry I will be conducting a film deconstruction analysis to help progress my Worldbuilders project; hopefully this analysis will help me improve the final product and push the effectiveness of the shots allocated to me. Before I start the analysis I will tell you the story I'm adapting. I am doing the first chapter of the first book in the Witcher series. The passage comprises a young Ciri riding on the back of a horse with another rider, riding away from a black rider as the city around her is destroyed by a foreign army. They eventually get shot down and she is pinned by the dead rider next to her. The black rider approaches her, and before she is killed by him she wakes up, revealing it was all a nightmare. The book is set in a fantasy setting, but we have decided to adapt it to a cyberpunk setting, creating a more interesting artistic challenge.


I will be performing my analysis on the trailer for the upcoming video game Cyberpunk 2077. Analyzing more accomplished filmmakers has its merits, but the time restrictions of a trailer are more relevant to my project, considering I have to convey an emotion and story in a very short time frame.

This trailer is similar to my project in many aspects: setting, story elements, and even that most of the action takes place in a frozen moment in time. The problem with having a frozen moment and minimal animation is that the story and action have to be created with the camera, making this analysis even more relevant, as capturing what works in this trailer can greatly improve my sequence. Most of the shots have very dynamic camera movement with pans, zooms and quick cuts; this builds tension and effectively reveals the whole scene and context in under two minutes. Another aspect I noticed is the strong rim lights and very effective use of camera blur to separate the characters from the background. The last element that makes the shots effective is that they do not reveal the whole scene: through clever camera positioning they hide elements that are revealed in later shots, building tension and mystery.


My first shot shows the threat (the black rider) aiming his gun at the protagonist (Ciri) and shooting an energy charge at her. Similar to the special forces guy behind the woman in the trailer, he seems like an alien force because his face is covered, and his intention to harm is made clear by focusing the camera on his weapon. There are two shots: one focusing on his weapon and the woman, indicating what his action is, and the next focusing on his face, showing his stoic expression. This is a similar feel to what I want for my shot: showing the black rider's intentions while also focusing on his helmet to show how alien he is.

I originally had a profile shot of the black rider, panning from the tip of his gun until he is in the centre of the frame; although effective at conveying the action, it could be more stoic and dynamic. After this analysis I have opted for a mid shot of the black rider, with the camera moving along his gun toward his helmet and the reflection of the explosion on his visor. This shot, like the Cyberpunk shot, clearly conveys the action that is about to happen and makes him look alien and detached.


My second shot is an establishing shot (wide shot) revealing the outcome of him shooting at Ciri. There is no exact equivalent in the Cyberpunk trailer, but the reveal of the officer behind her has a similar role. The use of an element in front of the camera to cover the guy is clever, but I believe it doesn't work as well in the context of my shot, because there aren't many elements in the scene I could use to hide parts of the frame; instead I have opted to use a camera move to reveal the whole scene.

My original shot was a slow panning wide shot with the biker getting off his bike at the top right of the frame and the crashed bike with Ciri at the bottom left. After the analysis I have opted for a camera pan to the left, starting with only the rider in frame and moving to bring the crash site into frame.

This analysis has greatly improved the way I have approached my shots, and I believe I can now more effectively communicate story and emotion in a short time.



CD Projekt Red. (2013). Cyberpunk 2077 Teaser Trailer. Retrieved from

Guide to Film Analysis in the Classroom (1st ed.). Retrieved from

Sapkowski, A. (2009). Blood of elves. New York: Orbit.

Research Blog – Hard Surface Modelling


Post Mortem Blog – VFX

In this blog I will be reflecting on my specialization project, which involved creating a short VFX sequence, as well as on how my research helped my own practice. A few students and I attempted to create a 40-second VFX sequence with soldiers, robots and explosions. The project went really well and smoothly for the majority of its duration, but some time issues left us with an unfinished result. It is a bit disappointing, as all the elements needed to finish are there; we just needed an extra week or so to bring everything together.



I was heavily involved in the pre-production process, assisting with the original idea, contributing shot ideas and helping with the script. My first job was to assist with the storyboards; although I helped with the pacing and deciding on shots, Curtis ultimately drew them. I wanted to get the ball moving, so I went straight into creating an animatic, as it was extremely important to identify the pacing, allow everyone else to identify their tasks, and pitch the project to the film students. My research into composition in film was very helpful, especially for building tension and placing the characters in frame. I stopped myself from going too crazy with the cinematography, as I wanted most of the shots to be static to help with the VFX work; I didn't want to make it harder than it already was. The animatic was pretty faithful to the storyboards, but some of the movements and angles didn't work very well.

I got a large amount of feedback from my facilitators, especially Brett, who gave me notes on pretty much every shot, mostly about trimming down the timing so it felt sharper and was easier to work with. I was fairly efficient at applying the feedback.

I was happy with how efficiently I was working at the beginning of the project, but I was originally only meant to work on art direction and consolidating the look of the film. Having to also do the animatic, which was originally the task of another student, affected my art direction work and meant I was slower in delivering a final reference list. I did successfully gather reference and help design the final robot, but I don't think I built a consistent, polished look within the time frame. I was still happy with the final design and with the feedback I gave to keep things within the look.

I believe the pre-production stage was extremely successful, getting everything done efficiently without slowing down other parts of the pipeline. I would be happy if the pre-production stage of every project went this well.


For production I started work on my door model, as I felt it was the most prominent digital element besides the robots (which Simon took before me). I enjoyed the challenge of making something that was futuristic but still fit in with the live action environment. I attempted to follow the best practices I researched for hard surface modelling and gathered a large library of references, although looking back I probably could have used more real life images. From there I tried a non-destructive pipeline for the first time; it was very successful and made my work much faster, so I will continue to use it. I also saved any screws and vents into a separate file to start building a kit-bash library.

I started with the blocked out shapes, mostly the dimensions and the major forms of the doorway and the window slot. From there I added bigger details such as plating, and finally smaller details such as screws. I spent some time packing the UVs, as I wanted everything to fit into two 8K maps. The baking process was easy since I didn't need normals, and the material IDs were easy to assign as I had the low poly version lower in the modifier stack. The AO bake took a while, but it was important for creating masks for dirt and grime buildup. This was the first time I used the masking in Quixel to my advantage, creating smaller details and having control over the textures. I was really happy with the results, although I did have a bit of an unorthodox workflow to compensate for some mistakes I made along the way.

I also rigged the robot, which was a fairly easy task; it only involved parenting the different parts to controls, and there was no deformation. I was also present at the film shoot, where I helped Brett give feedback on the takes. The production stage was a bit slower and less focused than pre-production, as everyone was working on their separate tasks and I don't think anyone but Brett fully understood how everything would come together. During this time everyone was also working on other projects, so everything stacked up and left us a week behind.


I was heavily involved in preparing footage for the compositing stage. My first task was to do the rotoscoping for the first shot; the research I did wasn't very handy here, as the rotoscoping method we used was very different to traditional methods. This process was longer and more taxing than I expected, as I didn't want to rush it; rushing would have ruined the whole shot. Although I wasn't able to get it all done in time, I was fairly happy with the footage I did finish, as the live action elements blended in nicely apart from some lighting discrepancies. My rotoscoping research did help me better understand how mattes work and how they could be taken to the next level in bigger projects.

My final task was to prepare the scene shaders and lighting for rendering. At this point time was running out, and we were slowed down by a major hurdle: there were problems with the maps I had created for the door, which created different shading on each door, and on top of this the maps weren't properly calibrated to work with mental ray, as Quixel didn't offer a mental ray export option. Sitting next to Brett for this process was a great learning experience, as I got to see how he approached shaders, lighting and rendering. I saved everything with multiple passes: diffuse, shadows, AO, object ID and Z-depth. Although compositing was rushed, all these separate passes were really useful.
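To illustrate why separate passes are worth the render time, here is a minimal sketch (not our actual comp setup) of how a compositor might recombine them, with each pass still adjustable on its own before the multiply. Pixels are plain (r, g, b) tuples in the 0..1 range; the multiply-based recombine is a common simplification of what a real comp graph does.

```python
# Illustrative only: recombine diffuse, shadow and AO passes per channel.
# Because the passes are separate, the compositor can brighten the AO or
# soften the shadows without re-rendering the 3D scene.

def recombine(diffuse, shadow, ao):
    """Shadow and AO passes (1.0 = unshadowed/unoccluded) darken the diffuse."""
    return tuple(d * s * a for d, s, a in zip(diffuse, shadow, ao))

# A fully lit, unoccluded pixel keeps its diffuse colour unchanged.
print(recombine((0.8, 0.6, 0.4), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)))
# A half-shadowed pixel comes out half as bright.
print(recombine((0.8, 0.6, 0.4), (0.5, 0.5, 0.5), (1.0, 1.0, 1.0)))
```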



This image is a visualization of how the different pieces of the pipeline connect together. As you can see, VFX often starts before production, which means VFX is developed before, during and after the shoot of the movie.

In my case I didn't get a hand in building the pipeline, as Brett, an already experienced 3D artist with plenty of VFX and compositing experience, organized a schedule and coordinated everyone so that all the elements would come together at the end.


Research and Development

This is done before the creation of the VFX elements, and often before or while attaining a new client. Here the software that is needed is chosen, and new tools are created for the specific needs of the studio. In my case this stage was limited to learning new software and VFX techniques. This also ties into testing, where we had to learn what worked and what didn't; the whole project could be considered a test, as it was the first time I have had to do VFX work.


Modelling

This stage often spans all the production stages: first low fidelity models are made for visualization, then the models that will be used for the final shots, as well as medium fidelity models for animation work. In our case we also created a previs, and created models both before and after shooting.


On-Set Supervision

During production the VFX supervisor will often be on set to provide feedback on VFX setups and shots. They will also take high quality photographs of textures, props and any elements that will help with the production of high fidelity CG elements. Other tools used to replicate the set and its elements include HDRI capture, which allows the CG artists to closely replicate the original lighting of the set in the CG scenes. Cyber scans are often done of important set elements and props; these aren't used for the final assets but allow the modelers to replicate the elements more accurately. We replicated this on our set as well, with the whole VFX team being present and providing feedback, as well as taking reference pictures that helped with texturing and modelling. Some scans were done that also helped with modelling, although we were not able to capture HDRIs.


Footage Preparation

This is when the footage from the shoot is selected and sent to the team, often as high dynamic range images to maintain the most data. Grading is also done once the footage is in the hands of the VFX house. This is one of the stages we screwed up, as our footage was already compressed, which made tracking, grading and rotoscoping much harder. Grading was done as soon as we got the footage.


Rigging

This stage involves creating animation friendly rigs from the models provided; these are often refined during animation as they become more complex and change with the needs of the animation team. I was personally in charge of creating the one rig we needed. It was fairly simple, as there were no deforming parts on the model, and I made sure to ask the animator about any features he wanted on the rig.


Tracking

Once the footage has been prepared, it is often tracked to create a close replication of the shot as 3D tracking points, which aids with lining up CG elements with the original footage. On-screen actors or props are sometimes also tracked if any CG elements need to be added to them. This was done for all the moving shots in our production and it greatly helped.
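A toy sketch of why a camera solve makes this line-up possible: once the tracker has recovered the camera, any 3D point can be projected to the 2D position it occupies on the plate. This is just a bare pinhole projection of a camera-space point; real matchmove solves also recover rotation, translation and lens distortion.

```python
# Minimal pinhole projection: a camera-space 3D point maps to the image
# plane by scaling x and y with focal_length / depth. CG elements placed
# at the solved 3D positions therefore land on the right plate pixels.

def project(point3d, focal_length):
    """Project a camera-space 3D point (x, y, z) onto the image plane."""
    x, y, z = point3d
    return (focal_length * x / z, focal_length * y / z)

# A point twice as far from the camera lands half as far from the centre.
print(project((1.0, 0.5, 2.0), 35.0))   # (17.5, 8.75)
print(project((1.0, 0.5, 4.0), 35.0))   # (8.75, 4.375)
```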


Animation

In this stage any animated elements are key-framed, or motion tracked animation is cleaned up. This involves not only characters but any elements that move within the CG environment, although it often excludes any kind of simulation. This stage was simple for us, as we only had one animated element, and it was aided by the tracked footage, as the CG camera is animated with the tracking data.
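Under the hood, key-framing means the animator sets values at key times and the software fills in the frames between them. Real packages interpolate with spline curves; this sketch shows the simplest linear case.

```python
# Toy keyframe sampler: keys is a time-sorted list of (time, value) pairs,
# and values between two keys are linearly interpolated.

def sample(keys, t):
    """Linearly interpolate a sorted list of (time, value) keyframes at time t."""
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            blend = (t - t0) / (t1 - t0)
            return v0 + blend * (v1 - v0)
    raise ValueError("t is outside the keyframe range")

keys = [(0, 0.0), (10, 100.0)]  # two keys, ten frames apart
print(sample(keys, 5))          # halfway between the keys -> 50.0
```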


Effects Animation

Any simulated CG elements are created with particles, rigid body dynamics or fluids. This stage is often highly technical and requires separate simulation packages. In our case we had to create muzzle flashes and explosions, so we used particles; these were rendered separately and composited in, though sometimes effects animation is rendered along with the rest of the CG elements.


Texturing

With the on set photographs, several maps (diffuse, specular, normal, displacement) are created. These maps not only define the colours and reflections of the model but also the fine detail that isn't advisable to model, such as pores or scratches. We used Quixel to create our textures; it makes texture creation easy, but our maps weren't calibrated properly, which impacted the look development stage.


Look Development

This stage defines what the final CG elements will look like in the final output, calibrating and creating shaders to replicate the real equivalents of the objects; if a shader has no real life counterpart, a decision has to be made as to how it should look in the render. This stage took longer than we expected, as the maps we used didn't work properly with mental ray, but we attained a look we were happy with after many alterations to the shaders.


Lighting and Rendering

With look development completed, the lighting artist uses the HDRI captures to light the scene, as well as adding lights to highlight particular areas or elements. For rendering, the output files and quality settings have to be decided; this is important, as rendering takes a long time, especially in big VFX scenes with many very high quality elements. This stage was different for us, as we had no HDRI captures and had to replicate the original lighting as closely as possible. The render passes greatly aided the compositing stage.


Rotoscoping

This stage is crucial but often underestimated. It involves cutting the desired elements of a shot out from the undesired elements, so that the latter can be replaced with the CG footage. The process is extremely tedious, but it is what creates the seamless combination of real life elements with CG elements. I was involved in this process, and we used a different workflow to the one often used in professional productions.
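Mathematically, the roto work produces a matte: a greyscale image that is white over the element to keep and black everywhere else. Multiplying the plate by the matte holds out everything else, as in this simplified sketch (pixels are (r, g, b) tuples, the matte a single 0..1 value).

```python
# Illustrative matte application: the matte value decides how much of the
# plate pixel survives (1 = fully kept, 0 = fully removed). Edge pixels
# with fractional matte values are what make the cut-out look seamless.

def apply_matte(pixel, matte):
    """Hold out a plate pixel by its matte value."""
    return tuple(c * matte for c in pixel)

print(apply_matte((0.9, 0.7, 0.5), 1.0))  # inside the roto shape: kept
print(apply_matte((0.9, 0.7, 0.5), 0.0))  # outside the shape: removed
```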


Compositing

This is the final stage, where all the separate teams' work comes together to create the final image. It is the job of the compositor to seamlessly integrate the elements they are provided with; the more data they receive in the renders and footage, the more control and better results they can attain, and they can also fix or hide mistakes made earlier in the pipeline. This stage was extremely rushed in our pipeline, but its value cannot be overstated, as the final presentation of the film depends heavily on it.
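At the heart of this layering is the "over" operation: each element is placed on top of the image below it, weighted by its alpha. This sketch uses unpremultiplied single-channel values for brevity; production comps work with premultiplied RGBA and far more inputs.

```python
# Illustrative "over" compositing: the foreground contributes in
# proportion to its alpha, and the background fills in the rest.

def over(fg, bg, alpha):
    """Composite a foreground value over a background by the foreground's alpha."""
    return fg * alpha + bg * (1.0 - alpha)

print(over(1.0, 0.0, 0.25))  # 25%-opaque white over black -> 0.25
print(over(0.6, 0.2, 1.0))   # fully opaque foreground hides the background
```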


In conclusion, this was a big learning experience for me, as I researched topics that were really helpful for my own practice and familiarized myself with how a VFX pipeline works and how different it is to other types of projects I have worked on. I also got to work hand in hand with a teacher and observe how he approached things, which was really helpful and constructive. Finally, although the sequence wasn't finished and we only had about 4 seconds of rushed comped work, I was really happy with how the team worked, and I hope to replicate that with more success in my next VFX project.

REFERENCES

(n.d.). Andrew Whitehurst.net. [online] Available at: [Accessed 1 May 2016].

(2015). The Visual Effects Pipeline. [online] Available at: [Accessed 1 May 2016].

(2013). How to set up a VFX pipeline. [online] Creative Bloq. Available at: [Accessed 1 May 2016].

Research Blog – Hard Surface Modelling for VFX

When attempting to find specific best practices for modelling, I found there wasn't a big difference between hard surface modelling for games and film. The biggest differences are that in film most of the detail is done as geometry instead of being baked down into maps, and that obviously far more resolution and geometry is allowed. So I will mostly look at video game best practices, as high poly models are often made in a similar way and there is a larger amount of learning resources available for real time modelling. A great example of the difference between movie models and game models can be seen in this article on the new Ratchet and Clank movie.


Gathering Reference

One of the most underestimated stages of modelling, collecting reference is crucial to understanding the object you are working on. Even if concept material is available, it is always good to have real life reference, as attempting to replicate real life surfaces, shapes and proportions will yield the best results. In many cases the object being modeled has no real life counterpart; in that case it is key to have strong concept material, as well as to collect reference of objects or surfaces similar to the fictional object. Another good practice is to integrate real life elements into the model, aided by reference, so that the model is more grounded and feels more believable. Reference material can also help decide which details should be geometry and which should be created with maps. In the case of VFX, most details are geometry to create the highest fidelity results, but small and micro details can be done with displacement and normal maps.



Blocking Out

Even with great concepts and references, it is ideal to create the rough major shapes of a model first to identify whether it looks balanced and works in 3D; this allows the artist to catch problems with proportions and overlapping elements before any detail work is done. A good way of previewing what the final mesh will look like is to stack modifiers such as chamfers or OpenSubdiv on top. Mastering caging, chamfering and subdivision modelling is key to creating more complex and smooth shapes. With the big shapes made, it is advisable to add details gradually, balancing the model's detail so that focal areas with more detail are contrasted against less complex areas.


Non-Destructive Workflow

It is advisable to work in a non destructive way, allowing changes to be made to the model with more speed. Working non destructively means maintaining the geometry that is otherwise lost when chamfering and subdividing, as collapsed models are harder to work with. In 3ds Max this can be done by taking advantage of the modifier stack; unless the model has to be exported, it should keep its whole stack, as this makes changes far more flexible. On that note, chamfering sharp edges is a big part of making hard surface models look realistic, allowing for more realistic highlights. Another step towards faster, more efficient modelling is the use of custom tools and scripts, which can allow tasks that would otherwise take hours to be completed in a few minutes.


Reusing Geometry

One of the best ways of saving time when creating hard surface models, or any other kind of model, is to save any small pieces or reusable geometry into a separate file. This way, when creating new models, certain details such as screws can be reused; most 3D tutorials advocate for this, and it is a fantastic way of avoiding redundant geometry. A prime example, although old, is the work on the original Gears of War, where a large part of the Locust models were created by reusing parts from previous Locust models. Saving any kind of work that could come in handy later is always advisable, such as saving base meshes or building a library of tileable textures and masks.



(2016). 3D World, (205), 38-39, 48-52.

Asset Workflow for Game Art: 3D Modeling – Treehouse Blog. (2015). Treehouse Blog. Retrieved 24 April 2016, from

Modeling Tips. Retrieved 24 April 2016, from

The Core Skills of VFX (1st ed., pp. 49-51). London. Retrieved from