Some time ago, I found this “Larva” project on CG forums, and I found it very interesting, because the whole process was done in Blender! So thanks to my friend mookie for finding some time to prepare this article for us :)
The idea for this project came when I first saw an image of the head of a bluebottle fly larva taken with a scanning electron microscope. Although most of my friends found it quite disgusting, I just fell in love with this incredible, rather funny creature. I started to think about where it lives and what it eats, and soon the whole concept for my work was ready to go.
“Larva” was entirely made using Blender 2.5 beta and its internal render engine.
Like the rest of my projects, “Larva” was made with the latest version of the open source program Blender 2.5, which is slowly coming out of its beta stage. I use it mostly because of its great modelling, unwrapping, animation and post production tools. The first thing I created was a base mesh of the larva, which I slowly developed until it reached a semi-complex shape. All the details were then sculpted within Blender with the help of the Multiresolution modifier. It allowed me to switch between different subdivision levels and, if necessary, add extra details to my mesh in Edit Mode. In other words, I was able to perform basic operations such as extrude or loop cut independently of all my sculpting work. The creature was prepared in a kind of T-pose, which let me speed up the process of rigging. Building an armature and skinning took me less than half an hour thanks to automatic weight creation. I carefully set up a pose and made some final touches in Sculpt Mode.
The first few stages of modelling the scene
As always, unwrapping in Blender was nice and easy. I disabled the Armature modifier and marked all the seams at the lowest multiresolution level possible. In order to minimize unnecessary stretching of faces, I applied the pose of my larva and unwrapped the model once again. At this stage I created different materials for the different parts of my character, such as the tongue, eyes, small pimples and teeth. For each of them I prepared separate UV coordinates so that they wouldn’t have to share the same texture images.
Part of the unwrapped body of the larva
In the next step I made all the other objects in the scene – the lunchbox, the thermos and the whole environment. I unwrapped them carefully and started working on textures in Photoshop. In my workflow I always test my maps and shader settings in a simple studio setup instead of the project scene. Most of my models have at least three texture images that control their colour, specularity and surface bump. Although the phrase “the more the better” doesn’t fit every real-life situation, it describes the relationship between the number of textures and material quality perfectly.
Test renders of some of the objects from the scene
There are several things that can make editing textures easier. My favourite one is baking an Ambient Occlusion map and using it as a reference for which parts of the model are actually visible in the scene. Moreover, such an image can be mixed with other layers in Photoshop, which is really handy when you want to add some grunge to your textures. Another method involves Blender’s Texture Paint Mode, which allows us to paint directly on the model. This way we can make a whole map from scratch, fix the look of our textures at seam borders or simply paint all the masks we need.
Textures used for a shader of a lunchbox; 1 – diffuse map, 2 – specularity map, 3 – bump map, 4 – normal map
Although the speed of the Blender Internal render engine has been significantly improved since version 2.49b, the calculation of blurred mirror reflections still takes a lot of time. Instead of switching them on, I decided to enable diffuse ramps with the input set to Normal. This subtly lit up the sides of objects according to the camera position and angle. Unfortunately, this technique cannot be used with subsurface scattering, so to develop a shader for the biggest pimple in the scene I needed a slightly different approach.
Material and some texture settings for a thermos
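The idea behind a diffuse ramp driven by the Normal input can be sketched in a few lines of plain Python. This is only an illustration of the principle, not Blender's actual shading code, and the colours are made up: the ramp factor grows as a surface turns away from the camera, so the sides of an object get a subtle, reflection-like lift.

```python
def ramp_by_normal(normal, view_dir, dark, light):
    """Sketch of a diffuse ramp with input set to Normal: surfaces
    facing the camera keep the dark end of the ramp, while surfaces
    at grazing angles are pushed towards the bright end.
    Both vectors are assumed to be normalized."""
    facing = abs(sum(n * v for n, v in zip(normal, view_dir)))
    fac = 1.0 - facing  # 0 when facing the camera, 1 at grazing angles
    return tuple(d + (l - d) * fac for d, l in zip(dark, light))

# a face pointing straight at the camera keeps the base colour,
# a face seen edge-on is lit up
head_on = ramp_by_normal((0, 0, 1), (0, 0, 1), (0.2, 0.2, 0.2), (0.9, 0.9, 0.9))
grazing = ramp_by_normal((1, 0, 0), (0, 0, 1), (0.2, 0.2, 0.2), (0.9, 0.9, 0.9))
```

Because the factor depends only on the angle between the surface and the camera, the brightening follows the silhouette as the camera moves, which is what makes it a cheap stand-in for blurred reflections.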
First I made two materials – one for the skin and the other for the juicy zit. I switched to Texture Paint Mode and made a simple black and white mask that I loaded as an Image texture. I called it “nodes_mix” and used it in the Material Node editor as the Factor of a Mix node. I attached both materials to the empty sockets so that the first appeared on the black parts of my “nodes_mix” texture and the second only on the white ones.
A simple material node setup for a pimple
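The maths behind that Mix node is just a linear blend between the two material outputs, with the mask value as the factor. A minimal sketch, with invented example colours rather than the actual shader values:

```python
def mix_node(color1, color2, fac):
    """Mix node: Factor 0.0 returns the first input,
    Factor 1.0 the second, values in between blend linearly."""
    return tuple(a * (1.0 - fac) + b * fac for a, b in zip(color1, color2))

skin = (0.80, 0.55, 0.45)  # hypothetical skin colour
zit = (0.95, 0.90, 0.40)   # hypothetical zit colour

# a black pixel of the "nodes_mix" mask selects the skin material,
# a white pixel selects the zit material
on_skin = mix_node(skin, zit, 0.0)
on_zit = mix_node(skin, zit, 1.0)
```

Grey values in the mask give a soft transition between the two materials, which is why painting the mask in Texture Paint Mode works so well: the brush falloff becomes the blend region.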
The whole scene is lit by ten lamps, but only two – a Sun and a Hemi – are present on all the layers. The rest illuminate only chosen objects, such as the tip of the pimple or the cutlery. Instead of true Global Illumination, which Blender does not provide yet, I used raytraced Ambient Occlusion set to Add.
Selected lamps are the only ones that were used on every layer of the scene
Before I hit the Render button, I created four Render Layers across two different scenes (one with high and the other with low render settings). I decided to render separately the whole scene, the hair in the foreground, the dust in the air and a plane with a smoke texture. I made sure that the Combined, Z, Specular, AO and Object Index passes would be delivered for further post production.
Four Render Layers used in the project
Instead of polishing my raw render in Photoshop, I loaded all the Render Layers into Blender’s compositing node editor. Although the node system itself may seem complicated at first glance, it really deserves a chance. Understanding the purpose of each type of node gives us infinite control over the look of our image. The fun I had with my post production resulted in the crazy setup I will now try to explain briefly.
The whole post production was done within the node system
The first few nodes of my setup are responsible for darkening chosen parts of the image. Because the skin’s subsurface scattering shader absorbed most of the shadows and Ambient Occlusion, I decided to boost their visibility using the AO and Object Index passes. I mixed the rendered image with the AO pass using a Multiply node, with an Object Index pass as the Factor. This way I could restrict the operation to specific areas, such as the eyes or the lunchbox and thermos. In order to control the mixing amount, I added an RGB Curves node that darkened each Object Index pass and thus lowered the Factor strength at the same time. For even better results I changed the colour of the AO on the skin surface to a reddish one with the help of a Color Balance node.
Controlling Ambient Occlusion influence with Object Index passes
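Per pixel, that part of the node tree boils down to a masked multiply. The sketch below uses invented pass values purely for illustration; the factor stands in for the Object Index mask after it has been darkened with RGB Curves:

```python
def masked_multiply(pixel, ao, fac):
    """Multiply node: blend the original pixel with pixel * AO.
    fac is the Object Index mask (0..1) after an optional RGB Curves
    darkening, so fac=0 leaves the pixel untouched and fac=1 applies
    the full AO darkening."""
    return tuple(c * (1.0 - fac) + (c * ao) * fac for c in pixel)

pixel = (0.8, 0.6, 0.4)  # hypothetical rendered colour
outside_mask = masked_multiply(pixel, 0.5, 0.0)  # object not selected
inside_mask = masked_multiply(pixel, 0.5, 1.0)   # full AO multiply
```

Darkening the index pass with curves therefore acts as a single slider for the whole effect: the darker the mask, the closer the result stays to the untouched render.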
Depth of field was created using the Defocus node. I set its size and range with Map Value and ColorRamp nodes that used the Z pass for all their calculations. This method lets us mark all the areas that need to remain sharp as black, while the rest gets blurred according to the brightness of the image. The Z-Scale value of the Defocus node controls the strength of the whole effect; I decreased the Threshold to zero to avoid any artifacts at the edges of the models.
Creating depth of field using Defocus Node
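The Z-pass preprocessing can be sketched like this. The offset and size values here are placeholders, not the settings used in the project; the point is only how a raw depth value becomes a black-stays-sharp blur mask:

```python
def map_value(z, offset, size):
    """Map Value node: shift and scale the Z pass, clamped to 0..1.
    Pixels that end up black (0.0) remain sharp in the Defocus node."""
    return min(max((z + offset) * size, 0.0), 1.0)

def blur_amount(z, offset, size, z_scale):
    """Defocus: the blur radius grows with the mapped Z value,
    scaled by the node's Z-Scale setting."""
    return map_value(z, offset, size) * z_scale

# hypothetical focus at depth 2.0: offset=-2.0 zeroes it out
in_focus = blur_amount(2.0, -2.0, 0.5, 10.0)   # sharp
background = blur_amount(6.0, -2.0, 0.5, 10.0)  # fully blurred
```

With a Threshold of zero, even tiny non-black mask values receive a little blur, which is exactly what avoids the hard halo artifacts at model edges that the article mentions.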
To improve the final look of my image I reached for the Gamma, RGB Curves, Color Balance and Hue Correct nodes. The last one was particularly useful, as it let me control the Hue, Saturation and Value of different tones of my image. The last effect I added was a subtle vignette based on a Blend texture with Spherical falloff. I set its colours using a ramp, named the texture “winieta” and loaded it into my node editor. Because I wanted to confine it to the edges of my image, I enlarged it using the texture’s Scale sliders. I fitted it to the resolution of my render layers with a Scale node and softened it with a Blur node. In the end I combined my image with “winieta” using a Multiply node with a very low Factor value.
And again – final image
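The vignette pass comes down to a radial gradient multiplied into the image with a small factor. Below is a rough stand-in for the “winieta” texture and the final Multiply node; the gradient shape and the factor of 0.1 are assumptions for illustration, not the project's actual settings:

```python
import math

def winieta(x, y, width, height):
    """Spherical Blend texture: 1.0 in the centre of the frame,
    falling off to 0.0 towards the corners."""
    cx, cy = width / 2.0, height / 2.0
    dist = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    return max(0.0, 1.0 - dist)

def apply_vignette(pixel, vig, fac=0.1):
    """Multiply node with a very low Factor: corners are only
    gently darkened instead of being crushed to black."""
    return tuple(c * (1.0 - fac) + c * vig * fac for c in pixel)

centre = winieta(50, 50, 100, 100)  # 1.0, no darkening
corner = winieta(0, 0, 100, 100)    # 0.0, maximum darkening
darkened_corner = apply_vignette((0.5, 0.5, 0.5), corner)
```

At the centre the gradient is white, so multiplying changes nothing; in the corners the low factor means the pixel loses only a tenth of its brightness, which keeps the effect subtle.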
I hope you found my article interesting. In case of any problems, feel free to write to me at email@example.com. Good luck with your own projects, explore the 3D world and happy blending!