Post Mortem Blog – VFX
In this blog I will be reflecting on my specialization project, which involved creating a short VFX sequence, and on how my research helped me in my own practice. A few other students and I attempted to create a 40 second VFX sequence featuring soldiers, robots and explosions. The project went smoothly for the majority of its run, but time issues left us with an unfinished result. This is a bit disappointing, as all the elements needed to finish are there; we just needed an extra week or so to bring everything together.
I was heavily involved in the pre-production process, assisting with the original idea, contributing shot ideas and helping with the script. My first job was to assist with the storyboards; although I helped with the pacing and deciding on shots, Curtis ultimately drew them. I wanted to get the ball rolling, so I went straight into creating an animatic, as it was extremely important to establish the pacing so everyone else could identify their tasks and so we could pitch it to the film students. My research into composition in film was very helpful, especially for building tension and deciding where to place the characters in the frame. I stopped myself from going too crazy with the cinematography, as I wanted most of the shots to be static to help with the VFX work; I didn't want to make it harder than it already was. The animatic was fairly faithful to the storyboards, but some of the movements and angles didn't work very well.
I got a large amount of feedback from my facilitators, especially Brett, who gave me notes on pretty much every shot. It mostly involved trimming down the timing so it felt sharper and was easier to work with. I was fairly efficient in applying the feedback.
I was happy with how efficiently I was working at the beginning of the project, but I was originally meant to work only on art direction and consolidating the look of the film. Having to also do the animatic, which was originally another student's task, affected my art direction work, which meant I was slower in delivering a final reference list. I did successfully gather reference and help design the final robot, but I don't think I built a polished, consistent look within the time frame. I was still happy with the final design and with the feedback I gave to keep things within the look.
I believe the pre-production stage was extremely successful: we got everything done efficiently without slowing down other parts of the pipeline. I would be happy if the pre-production stage of every future project went this well.
For production I started work on my door model, as I felt it was the most prominent digital element besides the robots (which Simon took before me). I enjoyed the challenge of making something futuristic that still fit in with the live action environment. I attempted to follow the best practice I had researched for hard surface modelling and gathered a large library of references, although looking back I probably could have used more real life images. From there I tried, for the first time, to follow a non-destructive pipeline; it was very successful, and I will continue to use it as it made my work much faster. I also saved any screws and vents into a separate file to start building a kit-bash library. I started with the blocked out shapes, mostly the dimensions and the major forms of the doorway and the window slot. From there I added bigger details such as plating, and finally smaller details such as screws. I spent some time packing the UVs, as I wanted everything to fit into two 8K maps. The baking process was easy since I didn't need normals, and the material IDs were easy to assign as I had the low poly version lower in the modifier stack. The AO bake took a while, but it was important for creating masks for dirt and grime buildup. This was the first time I used the masking in Quixel to my advantage, creating smaller details and having control over the textures. I was really happy with the results, although my workflow was a bit unorthodox to compensate for some mistakes I made along the way.
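The idea behind using the AO bake for grime can be sketched in a few lines of code. This is a minimal illustration of the principle, not the actual Quixel workflow; the function name and contrast value are my own:

```python
# Minimal sketch of deriving a dirt mask from a baked AO map.
# An AO value of 1.0 means a fully open surface (no occlusion);
# values near 0.0 mean crevices, where dirt and grime accumulate.

def dirt_mask(ao_map, contrast=2.0):
    """Invert the AO map and raise it to a power so dirt
    concentrates in the deepest crevices. Values are 0..1 floats."""
    return [[(1.0 - ao) ** contrast for ao in row] for row in ao_map]

ao = [
    [1.0, 0.8],
    [0.5, 0.0],
]
mask = dirt_mask(ao)
# Fully open areas receive no dirt; fully occluded areas receive full dirt.
```

A mask like this can then drive a grime material's opacity, which is what gives that procedural, controllable buildup in the texturing stage.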
I also rigged the robot, which was a fairly easy task: it only involved parenting the different parts to controls, and there was no deformation. I was also present at the film shoot, where I helped Brett give feedback on the takes. The production stage was slower and less focused than pre-production, as everyone was working on their separate tasks and I don't think anyone but Brett fully understood how everything would come together. During this time everyone was also working on other projects, so it all added up to us falling a week behind.
I was heavily involved in preparing footage for the compositing stage. My first task was to do the rotoscoping for the first shot; the research I did wasn't very handy, as the rotoscoping method we used was very different to traditional methods. This process was longer and more taxing than I expected, as I didn't want to rush it; rushing would have ruined the whole shot. Although I wasn't able to get it all done in time, I was fairly happy with the footage I did finish, as the live action elements blended in nicely apart from some lighting discrepancies. My rotoscoping research did help me better understand how mattes work and how they could be taken to the next level in bigger projects.
My final task was to prepare the scene shaders and lighting for rendering. At this point time was running out and we were slowed down by a major hurdle: there were problems with the maps I had created for the door, which caused different shading on each door, and on top of this the maps weren't properly calibrated to work with mental ray, as Quixel didn't offer an export option for it. Sitting next to Brett for this process was a great learning experience, as I got to see how he approached shaders, lighting and rendering. I saved everything out with multiple passes: diffuse, shadows, AO, object ID and Z-depth. Although compositing was rushed, all these separate passes were really useful.
THE PIPELINE & IMPLEMENTATION
This image is a visualization of how the different pieces of the pipeline connect together. As you can see, VFX often starts before production, which means VFX is developed before, during and after the shoot of the movie.
In my case I didn't get a hand in designing the pipeline, as Brett, an already experienced 3D artist with plenty of VFX and compositing experience, handled it. He organized a schedule and coordinated the separate people so that all the elements would come together at the end.
RESEARCH AND DEVELOPMENT
This is done before the creation of the VFX elements, often before or while acquiring a new client. Here the necessary software is chosen, and new tools are created for the specific needs of the studio. In my case this stage was limited to learning new software and VFX techniques. It also ties into testing, where we had to learn what worked and what didn't; the whole project could be considered a test, as it was the first time I had done VFX work.
MODELLING
This often spans all the production stages: first by making low fidelity models for visualization, then by creating the models that will be used for the final shots, as well as medium fidelity models for animation work. In our case we also created a previs, and created models both before and after shooting.
ON SET REFERENCE
During production the VFX supervisor will often be on set to provide feedback on VFX setups and shots. They will also take high quality photographs of textures, props and any elements that will help with the production of high fidelity CG elements. Other tools used to replicate the set and its elements include HDRI capture, which allows the CG artists to closely replicate the original lighting of the set in the CG scenes. Cyber scans are often done of important set elements and props; these aren't used for the final assets but allow the modellers to replicate the elements more accurately. We replicated this on set as well, with the whole VFX team present and providing feedback, and taking reference pictures that helped with texturing and modelling. Some scans were done that also helped with modelling, although we weren't able to capture HDRIs.
FOOTAGE PREP
This is when the footage from the shoot is selected and sent to the team, often as high dynamic range images to retain the most data. Grading is also done once the footage is in the hands of the VFX house. In our case this is one of the stages we got wrong: our footage was already compressed, which made tracking, grading and rotoscoping much harder. Grading was done as soon as we got the footage.
RIGGING
This stage involves creating animation friendly rigs from the models provided; these are often refined during animation as they become more complex and change with the needs of the animation team. I was personally in charge of creating the one rig we needed. It was fairly simple, as there were no deforming parts on the model, and I made sure to ask the animator for any elements or features he wanted on the rig.
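Since the robot had no deforming parts, the rig was essentially a transform hierarchy: each piece of geometry just inherits its control's movement. A minimal sketch of that idea, using 2D translation-only transforms and hypothetical node names:

```python
# Minimal sketch of rigging by parenting: with no deformation, each
# part simply inherits its control's transform. Illustrated in 2D with
# translation-only transforms for simplicity.

class Node:
    def __init__(self, name, local=(0.0, 0.0), parent=None):
        self.name, self.local, self.parent = name, local, parent

    def world(self):
        # A child's world position is its parent's world position
        # plus its own local offset, evaluated recursively up the chain.
        if self.parent is None:
            return self.local
        px, py = self.parent.world()
        return (px + self.local[0], py + self.local[1])

root_ctrl = Node("root_ctrl", (10.0, 0.0))
arm_ctrl = Node("arm_ctrl", (2.0, 5.0), parent=root_ctrl)
arm_geo = Node("arm_geo", (0.0, 1.0), parent=arm_ctrl)
# Moving root_ctrl now moves the entire hierarchy with it.
```

In a real package the same idea holds with full 4x4 matrices, but the principle of the animator driving geometry purely through parented controls is the same.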
TRACKING
When the footage has been prepared, it is often tracked to create a close replication of the footage as 3D tracking points, which helps with lining up CG elements against the original shot. On screen actors or props are sometimes also tracked if any CG elements need to be added to them. This was done for all the moving shots in our production and it helped greatly.
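The reason tracked 3D points help with lining things up can be sketched with a simple pinhole camera: once the solver recovers the camera, any 3D point can be projected back into the frame and compared with the footage. This is an illustrative toy model, not any particular tracker's maths:

```python
# Minimal sketch of projecting a tracked 3D point into the frame.
# Simple pinhole model: camera at the origin looking down +Z.

def project(point, focal_length=35.0):
    x, y, z = point
    # Perspective divide: points farther from the camera land
    # closer to the centre of the frame.
    return (focal_length * x / z, focal_length * y / z)

# A solved 3D point 70 units in front of the camera:
u, v = project((2.0, 1.0, 70.0))
# If (u, v) sits on the matching feature in the plate, the CG camera
# and scene are lined up with the original footage.
```

When the CG camera is animated with the solved tracking data, CG elements rendered through it stay locked to the live action plate.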
ANIMATION
This stage covers key-framing any animated elements and cleaning up motion tracked animation. It involves not only characters but any elements that move within the CG environment, although it usually excludes any kind of simulation. This stage was simple for us, as we only had one animated element, and it was aided by the tracked footage, since the CG camera is animated with the tracking data.
EFFECTS ANIMATION
Any simulated CG elements are done here, with particles, rigid body dynamics or fluids. This stage is often highly technical and requires separate simulation packages. In our case we had to create muzzle flashes and explosions, so we used particles; these were rendered separately and composited in, although sometimes effects animation is rendered along with the rest of the CG elements.
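The core of a particle effect like a muzzle flash is surprisingly small: each particle carries a position, velocity and age, and is stepped forward every frame. The parameters below are illustrative, not from any particular simulation package:

```python
# Minimal sketch of a particle update step: simple Euler integration
# plus lifetime culling, the basic loop behind sparks and flashes.

GRAVITY = (0.0, -9.8, 0.0)

def step(particles, dt):
    alive = []
    for pos, vel, age, lifetime in particles:
        vel = tuple(v + g * dt for v, g in zip(vel, GRAVITY))
        pos = tuple(p + v * dt for p, v in zip(pos, vel))
        age += dt
        if age < lifetime:  # expired particles are culled
            alive.append((pos, vel, age, lifetime))
    return alive

# One spark fired upward with a very short lifetime:
particles = [((0.0, 0.0, 0.0), (0.0, 5.0, 0.0), 0.0, 0.15)]
particles = step(particles, 0.1)  # age 0.1 < 0.15, still alive
particles = step(particles, 0.1)  # age 0.2 >= 0.15, culled
```

Real packages add emitters, forces, collisions and rendering, but the per-frame integrate-and-cull loop is the same idea scaled up.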
TEXTURING
From the on set photographs, several maps (diffuse, specular, normal, displacement) are created. These maps not only define the colours and reflections of the model but also capture fine detail that isn't advisable to model, such as pores or scratches. We used Quixel to create our textures; it makes texture creation easy, but our maps weren't calibrated properly, which impacted the look development stage.
LOOK DEVELOPMENT
This stage defines what the final CG elements will look like in the final output, calibrating and creating shaders to replicate the real world equivalents of the objects; if there is no real life counterpart, a decision has to be made as to how the shader should look in the render. This stage took longer than we expected, as the maps we used didn't work properly with mental ray, but after many alterations to the shaders we attained a look we were happy with.
LIGHTING & RENDERING
With look dev completed, the lighting artist uses the HDRI captures to light the scene, as well as adding lights to highlight particular areas or elements. For rendering, the output files have to be decided, along with the quality settings needed; this matters because rendering takes a long time, especially in big VFX scenes with many very high quality elements. This stage was different for us, as we had no HDRI captures, which meant we had to replicate the original lighting as closely as possible. The rendering passes greatly aided the compositing stage.
ROTOSCOPING
This stage is crucial but often underestimated. It involves cutting out the desired elements of a shot from the undesired ones so that they can be combined with the CG footage. The process is extremely tedious, but it is what creates the seamless combination of real life elements with CG elements. I was involved in this process, and we used a different workflow to the one often used in professional productions.
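What the rotoscoped matte actually does downstream can be sketched very simply: it is a per-pixel alpha that picks the live action plate where it is 1 and the CG background where it is 0, the classic "over" blend. A minimal single-channel illustration, with made-up values:

```python
# Minimal sketch of using a rotoscoped matte: per-pixel linear blend
# between the foreground plate and the (CG) background.

def comp_over(fg, bg, matte):
    return [
        [f * a + b * (1.0 - a) for f, b, a in zip(fr, br, mr)]
        for fr, br, mr in zip(fg, bg, matte)
    ]

fg = [[1.0, 1.0]]     # live action plate (one channel for brevity)
bg = [[0.0, 0.0]]     # CG background
matte = [[1.0, 0.25]]  # rotoscoped alpha, soft edge on the right
out = comp_over(fg, bg, matte)
```

The soft 0.25 edge is why careful roto matters: a hard or jittery matte edge is immediately visible once the plate sits over CG.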
COMPOSITING
This is the final stage, where all the separate teams' work comes together to create a final image. It is the job of the compositor to seamlessly integrate the elements they are provided with; the more data they are given in the renders and footage, the more control and better results they can attain, and they can also fix and hide any mistakes made earlier in the pipeline. This stage was extremely rushed in our pipeline, but its value cannot be overstated, as the final presentation of the film depends heavily on it.
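The value of delivering separate passes to the compositor can be sketched with a toy example: with diffuse, AO and shadow kept apart, each can be rebalanced independently rather than being baked into one image. The weighting scheme here is illustrative, not a studio formula:

```python
# Minimal sketch of recombining render passes in the comp, per pixel.
# Each darkening pass (AO, shadow) is lerped toward 1.0 as its
# strength drops to 0, giving the compositor independent dials.

def combine(diffuse, ao, shadow, ao_strength=1.0, shadow_strength=1.0):
    a = 1.0 - ao_strength * (1.0 - ao)
    s = 1.0 - shadow_strength * (1.0 - shadow)
    return diffuse * a * s

# Full strength: the pixel is darkened by both AO and shadow.
full = combine(0.8, 0.5, 0.5)
# Dialling the AO pass to zero leaves only the shadow darkening,
# with no re-render required.
no_ao = combine(0.8, 0.5, 0.5, ao_strength=0.0)
```

Passes like object ID and Z-depth work the same way in spirit: they carry extra data (selection masks, distance) that gives the compositor control the flat render alone could never provide.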
In conclusion, this was a big learning experience for me: I researched topics that were really helpful for my own practice, and I familiarized myself with how a VFX pipeline works and how different it is to other types of projects I have worked on. I also got to work hand in hand with a teacher and observe how he approached things, which was really helpful and constructive. Finally, although the sequence wasn't finished and we only had about 4 seconds of rushed comped work, I was really happy with how the team worked, and I hope to replicate that with more success in my next VFX project.
Andrew-whitehurst.net. (n.d.).Andrew Whitehurst . Net. [online] Available at: http://www.andrew-whitehurst.net/pipeline.html [Accessed 1 May 2016].
Cgspectrum.edu.au. (2015). The Visual Effects Pipeline. [online] Available at: http://www.cgspectrum.edu.au/blog/the-visual-effects-pipeline [Accessed 1 May 2016].
Creative Bloq. (2013). How to set up a VFX pipeline. [online] Available at: http://www.creativebloq.com/audiovisual/how-set-vfx-pipeline-10134804 [Accessed 1 May 2016].