Term 4

This term I will be focusing on animating my model.

Firstly, I will re-do the displacement map to see if I can improve the areas of the character I am not happy with.

The first step will be to create a human rig for the model. Following YouTube tutorials, I will do this using Maya's skeleton and joint tools.

Next I will need to bind the skin to the joints and then paint the skin weights.

If possible I would like to rig the face; it would not need to be too complex, and I am contemplating using blendshapes only.

Animating:

Once the body is rigged and working, I am planning to animate a walk cycle on my character.

I have done walk cycles before in Maya, so I will re-watch tutorials on this.

I will also want to animate the clothes in Marvelous Designer, as well as ensure the hair moves correctly in Xgen. I will watch tutorials on both.

The Environment:

For the environment I have decided to go with a natural landscape, involving greenery, flowers and so on. I am planning to take some pictures for inspiration, and to follow the work of two artists that I like.

Inspiration / Moodboard:

To Do:

  • Re-do displacement map (focus around eyes, mouth and knees)
  • Create correct hierarchy of joints for skeletal rig of model
  • Connect skin and joints – paint in the skin weights
  • Use Blendshapes to animate face slightly (Blink, mouth, cheeks etc.)
  • Animate walk cycle (how to animate clothes and hair?)
  • Start creating the environment (possibly using procedural effects?) using reference images of natural landscapes

Firstly I went back to my Zbrush model and re-sculpted the details, following better reference material.

I also worked on closing the holes in the mesh, for example inside the mouth, to make my UVs cleaner and more accurate. I'm not planning to animate the mouth opening, so this won't cause issues later.


At this point I imported my high-poly model into my Maya scene, as I want to see how the details render, if the proportions are accurate and if it will fit seamlessly with my existing hair and eyes.

I went back and forth a lot between my Zbrush model and Maya, trying to get a realistic sculpt and testing out renders.

As I want to animate this model, I needed to use the low-poly version with a displacement and normal map to show the details in the Arnold render. Again, this involved a lot of back and forth with Maya and Zbrush.

Getting the right displacement settings in Maya was very difficult; I struggled to understand it fully and relied on various YouTube tutorials to figure out what would work best with my character. I found that displacement settings are very specific to your own work – I couldn't just directly follow one tutorial.

However, eventually I found a great tutorial that involved using aiMixer and aiLevels in the Hypershade with my displacement shader.
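
For reference, the hookup underneath all of that grading is fairly simple; below is a minimal Maya Python sketch of the displacement chain I kept rebuilding, assuming a low-poly mesh called bodyMesh_low and a 32-bit EXR exported from Zbrush (the aiMixer/aiLevels adjustments from the tutorial would sit between the file node and the displacement shader, and aren't shown here).

```python
import maya.cmds as cmds

# Assumed names: bodyMesh_low and a ZBrush displacement EXR on disk (requires MtoA loaded).
disp_file = cmds.shadingNode('file', asTexture=True, name='body_disp_file')
cmds.setAttr(disp_file + '.fileTextureName', '/path/to/body_displacement.exr', type='string')

disp_shader = cmds.shadingNode('displacementShader', asShader=True, name='body_disp')
cmds.connectAttr(disp_file + '.outColorR', disp_shader + '.displacement', force=True)
cmds.setAttr(disp_shader + '.scale', 1.0)   # the value I kept going back and forth on

# Plug the displacement into the mesh's shading group
shape = cmds.listRelatives('bodyMesh_low', shapes=True)[0]
sg = cmds.listConnections(shape, type='shadingEngine')[0]
cmds.connectAttr(disp_shader + '.displacement', sg + '.displacementShader', force=True)

# Arnold subdivision settings on the shape so the displacement has geometry to push
cmds.setAttr(shape + '.aiSubdivType', 1)        # catclark
cmds.setAttr(shape + '.aiSubdivIterations', 3)
cmds.setAttr(shape + '.aiDispHeight', 1.0)
cmds.setAttr(shape + '.aiDispZeroValue', 0.0)   # depends on how the map was exported
```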

I followed a tutorial on creating human skin in Substance Painter, building on what I learned last term about placing the underlying skin colours and inconsistencies.

I exported the base colour map to my Maya scene, and played around with the displacement and bump depth.

I’m happy with some of the detail on the face, but the skin needs more work.

I have an issue with the face shape. At first I redid the original model in Zbrush, using the sculpt and move tools to create the look I wanted. Then I re-exported the displacement and normal maps and input them all into my Maya scene.

This did not work well.

Instead I decided to manipulate the face shape in Maya.

I used a mix of editing the vertices with a soft brush and the sculpt tools to create a shape I liked. The chin and jawline needed to be smaller and much more defined in my opinion.

I also re-did the eyebrows and eyelashes in XGen; using the same base as my previous model meant the process was a lot faster. I used the vertices to move the base to fit the shape of this model, then refreshed the Xgen splines I had already created and moved them lower on the face. It's too heavy at the moment, so I'll work on the density next.

My hard drive became corrupted. This meant that I could no longer access any of my files.


I managed to recover the majority of them using recovery software I found online.


I then needed to sculpt more details on the rest of the body. Firstly, using my own hand as reference, I added detail to my character's hands.

Once done, I imported it into Maya to see how it renders. The base colour isn't accurate yet; I plan to add more variation and subsurface detail later.


At this point I am happy with the head and hair.


Next I needed to start rigging my model. I watched a YouTube tutorial on this; we had also learnt the basics of creating joints and hierarchy in our first term.

I started by using the Create Joint tool and placed the joints in the correct positions and order for a human skeleton, switching between viewports to ensure the positions sat within the body.

Following the tutorial I was watching, I entered the joint position values into the Script Editor to ensure they were accurate for a human model. Once done, I selected the skeleton and my mesh and used Bind Skin.

I used the connection editor to create a driven key between the different joints, and used NURBS curves to create handles for my rig, connecting them via parent constraints. Throughout this process I ensured that each object was labelled correctly in the outliner, as naming is imperative to keeping the rig organised.
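
As a rough illustration of those steps (placing joints at known positions, binding the mesh, and parent-constraining a NURBS control onto a joint), here is a minimal Maya Python sketch; the joint names, positions and mesh name are placeholders rather than my actual values.

```python
import maya.cmds as cmds

# Placeholder positions for a small part of the hierarchy (spine -> shoulder -> elbow -> wrist)
cmds.select(clear=True)
spine    = cmds.joint(name='spine_JNT',      position=(0, 14, 0))
shoulder = cmds.joint(name='L_shoulder_JNT', position=(1.5, 15, 0))
elbow    = cmds.joint(name='L_elbow_JNT',    position=(4, 15, 0))
wrist    = cmds.joint(name='L_wrist_JNT',    position=(6.5, 15, 0))

# Bind the mesh to the skeleton (the command equivalent of Skin > Bind Skin)
cmds.skinCluster(spine, 'bodyMesh_low', maximumInfluences=4)

# A NURBS circle as a control handle, snapped to the joint and then driving it
ctrl = cmds.circle(name='L_shoulder_CTRL', normal=(1, 0, 0), radius=2)[0]
cmds.delete(cmds.parentConstraint(shoulder, ctrl))   # snap the control to the joint position
cmds.makeIdentity(ctrl, apply=True, translate=True, rotate=True, scale=True)
cmds.parentConstraint(ctrl, shoulder, maintainOffset=True)
```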

At this point the rig was working, so I needed to paint the skin weights according to the influence of each joint. This was quite difficult at first; once I spoke with my tutor, he advised me on how to use the skin-weight brushes optimally. He suggested using only the Add tool, not Replace.

He also showed me how to move skinned joints, as we realised the clavicle and shoulder positions weren’t correct.

Once I had finished painting the skin weights I tested out poses for my model, and how they render.

Furthermore, my tutor and I decided to use blendshapes to manipulate the finer details of my mesh, for example the shape of the armpits and shoulders. We also put a set driven key on the blendshape, so that when her arms are raised the armpit blendshape is activated.
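
Below is a minimal sketch of that set-driven-key setup in Maya Python, assuming a corrective target mesh called armpit_fix_GEO, a base mesh bodyMesh_low, and a shoulder joint L_shoulder_JNT whose Z rotation raises the arm; the trigger values are placeholders.

```python
import maya.cmds as cmds

# Corrective blendshape: armpit_fix_GEO is a duplicate of the body with the armpit re-sculpted
bs = cmds.blendShape('armpit_fix_GEO', 'bodyMesh_low', name='corrective_BS')[0]

# Drive the blendshape weight from the shoulder rotation:
# arm down (rotateZ = 0) -> weight 0, arm raised (rotateZ = 80) -> weight 1
cmds.setDrivenKeyframe(bs + '.armpit_fix_GEO',
                       currentDriver='L_shoulder_JNT.rotateZ',
                       driverValue=0, value=0)
cmds.setDrivenKeyframe(bs + '.armpit_fix_GEO',
                       currentDriver='L_shoulder_JNT.rotateZ',
                       driverValue=80, value=1)
```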


At this point I needed to move on to my environment. At first I was advised to look into downloading assets for my scene; I looked into this, but what I envisioned was too expensive, so I decided to model it myself, as was the initial plan.

The first thing I did was create a plane in Maya and roughly manipulate its shape into what I imagined for my scene, using the move and scale tools on the faces and edges of the mesh.

Once happy with the size and overall shape I imported the model to Zbrush to sculpt in the detail. Primarily using the Standard and DamStandard brushes I sculpted mountains according to various reference photos for inspiration.

At this point I imported the .obj to Maya to see how it renders with the water plane in place. I just added an AiStandardSurface with the water preset at this point.

I did a few renders to see how the skin and model were looking; I also wanted to test out the dress from last term on my model.


I wanted to redo parts of the Xgen hair, in particular creating a parting down the middle. Again I watched a YouTube tutorial; it involved creating a mask on the duplicate head mesh (which the Xgen guides are placed on) and painting each side either red or yellow, so the guides know where to create the hair.

Again at this point I wanted to test how the model moves and can be positioned, and how this actually looks in the render.

Once happy, I went back to my Zbrush model of the environment, and used the paint tools to add colour as needed, which I could then export as a colourmap.

I also decided to use a displacement map for my mountains, so that the polygon count of my scene would not get too high. I exported the mesh at its lowest subdivision level and added a displacement map and aiNoise to create the detail and contours.
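
One way to wire that combination is to add the baked map and the aiNoise together before the displacement shader; this is a hedged sketch rather than my exact graph, and the node names (mountain_disp_file, mountains_lowShape) are assumptions.

```python
import maya.cmds as cmds

noise = cmds.shadingNode('aiNoise', asTexture=True, name='mountain_aiNoise')
cmds.setAttr(noise + '.octaves', 5)

# Add the ZBrush displacement map and the procedural noise together
add = cmds.shadingNode('plusMinusAverage', asUtility=True, name='mountain_disp_add')
cmds.connectAttr('mountain_disp_file.outColorR', add + '.input1D[0]', force=True)
cmds.connectAttr(noise + '.outColorR',           add + '.input1D[1]', force=True)

disp = cmds.shadingNode('displacementShader', asShader=True, name='mountain_disp')
cmds.connectAttr(add + '.output1D', disp + '.displacement', force=True)

sg = cmds.listConnections('mountains_lowShape', type='shadingEngine')[0]
cmds.connectAttr(disp + '.displacement', sg + '.displacementShader', force=True)
```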


I found tutorials I would need for the later part of this project.


Now I needed to work on animating my model. After a lot of back and forth with the position of the skin joints and painting skin weights I started to pose and animate my model.

At first I blocked out the basic shapes and forms, using key frames.

Initially the idea was to create an animation of the character flicking hair out of her face. Very quickly I realised this would need more work and finessing than I had time for.

Instead I decided to have her put both hands on her hips, which seemed more achievable in the time frame. Next I started working on creating and animating cameras for my final shot.

I used the graph editor throughout this process, ensuring the hands and arms were positioned and moving correctly and smoothly. I wanted the animation to look like slow motion.

When animating the fingers, I needed to re-do the skin weights on the joint influences. I also realised there was a joint missing that needed to be added just before the fingertip.

At this point the animation was roughly in the right place. The idea is that the character is acting ‘idle’, so the cameras can pan over the clothes and face.


Now I needed to create the clothes. Last term I made a dress in Marvelous Designer, so I vaguely remembered how to use the software. Once again following an online tutorial, I created a 'fitting suit' for my imported character, which lets the software understand important points on the figure's body so that the clothes can be placed correctly.

At this point I came across an issue.

When working with MD I soon realised that scale is very important, as it works at human size. My Maya scene however was not at an accurate scale. It was almost 3x smaller than it should be. By this point I also couldn’t re-scale my whole scene, so I decided to re-scale my model when I imported into MD. Creating the fitting suit ensured the software could understand the proportions in relation to the clothes even after re-scaling.

I started by creating the pattern for the jeans. I haven’t made jeans from scratch before, but I’ve seen the pattern so I knew roughly the shapes needed.

After making the jeans fit the way I wanted, I applied a denim fabric to them so the cloth behaves with the correct physical properties.

I had already made a corset last term, so I used the same pattern, but because of the scaling issue I had to re-do a lot of it. In particular the back lacing had to be re-done; I re-watched the tutorial I used last time to remember exactly how to do it.

Once made, I organised the patterns so that they would be imported as clean UVs. I imported the model into Maya and applied the denim texture I had used on the dress.


At this point I also started working on the animation of the clothes. The animation of her body was done, with only the fingers left, so I exported the animated body as an Alembic file into my MD project, reset the clothes on her body and played the animation.

Marvelous Designer automatically simulates the clothes according to the body animation.

Once happy with it, I exported the clothes as an Alembic cache to be put back into my Maya scene. As stated earlier, the clothes were 3x too big, so once back in Maya I just set the scale to 0.333 and they fit my character.
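
For reference, the Alembic round trip looks roughly like this in Maya Python; the paths, node names and frame range are placeholders, and the clothes cache coming back from MD gets the 1/3 scale to undo the earlier scale-up.

```python
import maya.cmds as cmds

cmds.loadPlugin('AbcExport', quiet=True)
cmds.loadPlugin('AbcImport', quiet=True)

# Export the animated body for Marvelous Designer (world space, with UVs)
cmds.AbcExport(j='-frameRange 1 1000 -uvWrite -worldSpace '
                 '-root |character_GRP|bodyMesh_low '
                 '-file /path/to/body_anim.abc')

# After simulating in MD, bring the clothes cache back and undo the 3x scale-up
cmds.AbcImport('/path/to/clothes_sim.abc', mode='import')
cmds.setAttr('clothes_sim_GRP.scale', 0.333, 0.333, 0.333, type='double3')
```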


I also started exploring how I would create the greenery in the scene.

I initially used Maya's Paint Effects; however, they don't show up in the Arnold render view unless converted to polygons, which then need an aiStandardSurface applied. I realised this technique would not work, as it made the scene far too heavy.

At this point though I like the overall composition, again creating renders to experiment with the look and camera angles.


So the next option was to try to use MASH to distribute the greenery. Following a video on YouTube I learnt how to correctly set up a MASH system.

Firstly I needed to extract meshes from the original mountain, specifically in the area I want the grass. Then I would use Maya’s paint effects to create a small patch of grass on them.

Next I converted these to polygons and added a ramp shader.

This grass then needed to be exported as an Arnold StandIn (.ass). With the StandIn re-imported and selected, I created a MASH network and with it a MASH instancer. For the distribution type I chose mesh and input the plane I had extracted for the grass earlier. Playing around with the density and randomness, I covered the surface with greenery.
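
As a rough sketch of that setup in Maya Python: the aiStandIn part is standard, but the MASH API call and the distribute-node attribute names are written from memory and may differ between Maya versions, and all of the object names are assumptions.

```python
import maya.cmds as cmds
import MASH.api as mapi

# Re-import the exported grass patch as an Arnold StandIn (.ass)
standin_shape = cmds.createNode('aiStandIn', name='grassPatch_standinShape')
cmds.setAttr(standin_shape + '.dso', '/path/to/grass_patch.ass', type='string')
standin_xform = cmds.listRelatives(standin_shape, parent=True)[0]

# Build a MASH network with the StandIn selected
cmds.select(standin_xform)
network = mapi.Network()
network.createNetwork(name='grass_MASH')

# Switch the distribute node to mesh distribution over the extracted ground patch,
# then dial in density and randomness (the enum index for 'Mesh' is an assumption)
cmds.setAttr(network.distribute + '.arrangement', 4)
cmds.connectAttr('grassArea_geoShape.worldMesh[0]',
                 network.distribute + '.inputMesh', force=True)
cmds.setAttr(network.distribute + '.pointCount', 5000)
```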

I did the same thing for the trees.

Then I wanted to explore creating flowers, again following the same steps I used for the grass. The flowers took much more time to achieve the look I was after, as I needed to apply different colours and textures to each individual part of the mesh.

After experimenting with what greenery was available in the Maya content browser, I found my scene was again getting quite heavy. I lowered the density of everything. At this point however, I am liking the overall render.

I’m starting to render different angles to figure out what frames work for my final animation.

At this point I wanted to re-do the water and animate it if possible. I followed a video where I added a noise node to the water plane and played with the settings and depth to create the look of waves. Then I keyed the time attribute of the noise node so that there would be movement imitating water.

I also added slight transmission and a blue specular colour.
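
A minimal sketch of that water setup in Maya Python, assuming an aiStandardSurface called water_MAT on the water plane; the actual values I used aren't recorded here, so these are placeholders.

```python
import maya.cmds as cmds

# Animated noise texture feeding a bump on the water shader
noise = cmds.shadingNode('noise', asTexture=True, name='water_noise')
cmds.setAttr(noise + '.frequency', 8)
cmds.setAttr(noise + '.depthMax', 3)

# Key the noise 'time' attribute so the pattern drifts like waves over the sequence
cmds.setKeyframe(noise, attribute='time', time=1,    value=0)
cmds.setKeyframe(noise, attribute='time', time=1000, value=5)

bump = cmds.shadingNode('bump2d', asUtility=True, name='water_bump')
cmds.connectAttr(noise + '.outAlpha', bump + '.bumpValue', force=True)
cmds.connectAttr(bump + '.outNormal', 'water_MAT.normalCamera', force=True)

# Slight transmission and a blue specular tint on the aiStandardSurface
cmds.setAttr('water_MAT.transmission', 0.4)
cmds.setAttr('water_MAT.specularColor', 0.35, 0.55, 0.75, type='double3')
```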


At this point I was starting to finalise my camera animations.


Next I followed a tutorial on how to connect the eyelashes to the eyelids so that I could proceed with making the blendshapes for the blink. I needed to create a set of joints along the edge where the eyelashes meet the eyelids and use a parent constraint to connect them.

Then I started animating a blink. I also wanted to create a smile, so following references online I started working on this. We learnt how to use blendshapes for facial animation in term 1, so I referred back to those videos.
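
The blink itself is just the blendshape weight keyed up and down over a handful of frames; a minimal sketch, assuming a blendshape node face_BS with targets called blink_L and blink_R (the frame numbers are placeholders).

```python
import maya.cmds as cmds

# Eyes open -> closed -> open over roughly six frames
for frame, weight in [(100, 0.0), (103, 1.0), (106, 0.0)]:
    cmds.setKeyframe('face_BS.blink_L', time=frame, value=weight)
    cmds.setKeyframe('face_BS.blink_R', time=frame, value=weight)
```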

Once happy with the blendshapes, I started working on the detail of my original body animation. Again, following references I found online I tried to get accurate movement, using the graph editor and my rig controls.

Then I started working on the overall details missing on my character. I started by extracting the UVs of my clothes mesh so that I could texture each part accurately. I used the denim texture from last term but played around with its scale in Photoshop, tiling the image so the size of the denim pattern fit the UVs.

I also used an image of the Dickies logo to put on the pocket.

Then I decided my character needed nails, so I created a very simple nail shape from a polygon primitive cube.

Placing them in the correct position and scale for each nail, I then added a ramp colour. I parented each nail to the fingertip joint in my rig so they would follow the animation.

The nails could definitely do with more work, but at this point I knew what would be in frame of my final animation, so this level of detail would suffice.


Now I needed to create the final part of the texture on my model's clothes. It is based on an outfit that I have already made.

I started by roughly drawing out the positioning of the dragon in Substance Painter; I did this last term so I knew the method would work. In Substance Painter, drawing on the 3D object maps to the correct UV position.

I imported the new texture with the line drawing into my Maya scene to see if it matched up between the corset and jeans. This required a bit of back and forth between the two programs.

Now that I knew the positioning, I used Procreate on my iPad to sketch in the colour and detail on the UVs.


Again, I started to finalise my animated cameras, using the graph editor to ensure the movement was smooth.


Now I wanted to attempt to animate the Xgen hair, this was quite ambitious as I don’t know if the scene can take it.

Again I watched a few YouTube videos. Firstly I needed to convert the Xgen guides to NURBS curves, attach them to the hair (using the AnimWire modifier) and put it in colour view. These then needed to be animated using modifiers on the 'hairSystem' node. I started with the Dynamic Properties section, changing the drag and mass as well as the stretch and compression resistance, all following a tutorial.

Next I changed the settings on the ‘Nucleus’ node, changing the solver attributes and adding wind speed and noise.
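
The settings from those two paragraphs boil down to a handful of attributes on the hair system and nucleus nodes; this is a hedged sketch with placeholder values, assuming Maya's default node names.

```python
import maya.cmds as cmds

# Dynamic properties on the hair system driving the XGen guides
cmds.setAttr('hairSystemShape1.drag', 0.1)
cmds.setAttr('hairSystemShape1.mass', 0.5)
cmds.setAttr('hairSystemShape1.stretchResistance', 200)
cmds.setAttr('hairSystemShape1.compressionResistance', 100)

# Solver attributes and wind on the nucleus
cmds.setAttr('nucleus1.subSteps', 4)
cmds.setAttr('nucleus1.windSpeed', 2.0)
cmds.setAttr('nucleus1.windNoise', 0.5)
```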

Once animated I needed to select all the NURBS curves and create an Alembic cache of the animation data. This would make it easier to play back the animation. In the AnimWire modifier I turned off live mode and referenced in the Alembic.

For some reason, this didn't actually make the playback of the hair any faster or more efficient. I did not have time to figure out why, but I could see that there was an animation on it.


At this point I was ready to render (I also had to be, because of time). I had created the animated cameras and my sequence was 1000 frames long; after testing one frame at the correct render settings it took roughly 8 minutes, which works out to over 130 hours for the full sequence on a single machine. This would not render in time, so with help from the technicians we tried to use the render farm, but there was an issue with the file paths.

The solution was to use five external hard drives and render the sequence out in batches of 200 frames.

https://www.youtube.com/watch?v=Ay-dA6NZE6s&t=2s

There was an issue with the animated hair and water. In my final render sequence I realised the animation I had applied to the hair was not playing. I didn't have time to re-render for the showcase; however, I managed to get it working for my animated poster. I also added free water and birdsong sounds to my animation, fading them in and out in alignment with the visuals.

The water I will re-animate at a later date. I also had to re-do a lot of the MASH greenery last minute, so with time constraints the flowers had to be cut.


Reflecting on this term, I think I was very ambitious with what I wanted to achieve. I am happy with my output overall, but there are still a lot of elements of the final animation that I'd have liked to fix. Perhaps I should have focused more on the animation and finessing its details rather than also building the environment, or vice versa.

Regardless, I feel I have learnt a lot about how to create an animated character from scratch, which was my initial goal for this project.

Another thing I learnt was to leave enough time for technical problems with rendering before a deadline. Ideally I’d have liked to render my sequence a few times so that I could fix any issues.

Term 3

Nuke

This term we have primarily been learning about greenscreen: how to correctly shoot footage on it and how to edit it in Nuke.

We also spent a few classes going over the basics regarding cameras, tools and safety measures on set.

I was unable to attend class this week, so I did not have access to the footage of Carlotta; I therefore used what was already available to us in the Nuke file and re-did the shot.

Firstly, I filled in the correct camera settings in the CameraTracker node, then did a rough roto around the man and applied it as a mask. Then I tracked the points.

Then I needed to create a camera from the scene, which now has the correct movement.

I denoised the plate and added a keylight. I was having an issue with applying the denoise, but I managed to get it to work.

To remove the tracking markers I did another roto of the background, roughly tracked them, and then merged (stencil) the roto over the new footage.

Next we started working on the backdrop. I used the footage from our tutor; he explained how you can 'break out layers' if you are importing from Photoshop, to get the layout above.

This meant that the objects in shot could be separated; we applied a premultiplication if the alphas looked strange. Then he showed us how to create a card and start building the scene in 3D space, and because the layers are separated we can place various cards at different positions.

Then everything was compiled together with a scene node and the scanline render.

Lastly, the foreground footage needs to be lit to match the background; our tutor showed us the way he did this, using the hue and colour correct nodes.

He also introduced us to a lightwrap.

Term 3

Personal Project

My idea for the personal project is to create a 3D CGI character from scratch. I want to learn the appropriate method to create a successful character that adheres to industry standards. This means I need to model the character efficiently so that it can be skinned and rigged for animation.

https://discover.therookies.co/2021/05/06/real-time-character-production-workflow-for-games/

https://discover.therookies.co/2020/06/10/complete-workflow-for-creating-a-stylized-3d-female-action-character/

Workflow: what are the steps involved in creating a 3D character?

• Create concept art / inspiration

• Zbrush blocking and Dynamesh

• Maya re-topology and projection

• Clothing (Zbrush / Marvelous)

• Hair

• Texturing (Mari / Substance)

• Detail, UV mapping and painting

• Final pose and finish

Inspiration / Concept / Mood Board:

The inspiration for my character is derived from my small clothing brand. The target demographic is primarily young and female, which drives what my model will look like.

The colour scheme follows a bright, pastel and edgy aesthetic, with nature and animals being a predominant theme. After compiling the mood board I have decided I want to go for realism with my model, rather than a low-poly stylized character.

In terms of clothing, she will be wearing a digital replica of my designs, which means I will need to model and paint these in 3D.

Reference Images:

I am using images of a friend to inspire my model; this will help me create accurate features and achieve a realistic look.

I am working on sketching out concepts for the girl, finding a mix between my reference photos and the inspiration mood board.

Clothing: what the clothes will look like and how I will model them

Method 1:
Zbrush; using the extract tool to model the appropriate clothes
Method 2:
Marvelous Designer; creating the correct patterns for clothing that can hang off the body of the modelled character.

Hair: what it will look like and how I will model it
Method 1:
Zbrush; using the specialist hair brush to model each strand
Method 2:
Maya XGen; creating a more realistic texture of hair in Maya

Initial 3D Modelling using Zbrush:

I will create the model on the correct subdivision level, so that the polygon count does not get too high, whilst enabling me to model the finer details. This will make it easier when I animate my character.


I started by getting the general idea of proportions and features from reference images of some friends. I sculpted this out roughly, building up the detail slowly using brushes such as standard and dam-standard.

After getting the approximate likeness I could model the features as I wanted, since my character is not meant to be a replica of a specific person.

I am still sculpting the model: I have finished the overall look of the face and body, and now I need to refine all the details, including subtle muscles and skin pores.

I will use a specialised Zbrush tool to imprint the skin texture.



I also used free skin brushes to create subtle texture on the character.

There is a problem around the eyes; I will speak to my tutor about how I can resolve it. I also need advice on the overall sculpt and proportions of the character.


Next I created the UVs in Zbrush.

Following advice online, I used Control Paint in the UV Master tab; using both Attract and Protect I was able to paint where the seams should go for unwrapping. I checked the UVs were accurate, then exported to Maya.


https://www.youtube.com/watch?v=iWEDWFMjA9E

Next I needed to re-topologise the character. Using the Quad Draw tool in Maya, I followed a tutorial that explained the most efficient geometry for an animatable human model.

Now that the topology is finished, I imported the low-geometry mesh into Zbrush.


With a low-poly base model and a detailed high-poly model, I need to project the details of one onto the other.

Next I wanted to texture the eyes; I am going for realism, so I found an appropriate tutorial.

Once the eyes were textured and the displacement and normal maps were imported into Maya, I rendered out some images to see how the model was looking.

So far I am happy with the outcome, however there are some features and contours that I don’t think are quite what I was looking for. Ideally I would be able to go back to Zbrush and re-model these parts. Due to time constraints however I will continue with this model.

Next I started on the hair. I followed a few tutorials on YouTube, firstly starting with the eyebrows. This was relatively complex: I needed to learn how to set up a 'collection' and 'description' for the hair, then create guides at the right length and direction to look realistic. The tutorial showed me how to edit the modifiers, such as curl, clump and noise, to achieve a more realistic render.

I also worked on the upper and lower lashes, using the same technique as before. I rendered out a few images; the hair adds much-needed realism to the model. I still believe I can edit these further, for example the eyebrows need to be longer.


I then started on the main head of hair. I am going for an easy style as this is my first time using XGen. Again using the guides, I placed them in the right formation to look like natural hair.

Overall the model is slowly coming together; however, the hair needs a lot more work at the moment. The hairline is quite wrong, and I haven't added modifiers yet.

I have decided to change the hairstyle, as this one doesn't fit the aesthetic of my character mood board.


Next I needed to texture the skin. I followed a good YouTube tutorial that showed me how to create and paint the layers in Substance Painter, firstly creating the blood layer and using different brushes to paint the variation of colour found in skin undertones.

I then painted the skin colour over the top, again varying the weight of the brush to create discolouration.

I then imported these textures into the Maya scene.

I still need to work on the hair; I will come back to it after I finish the clothes.


I started experimenting in Marvelous Designer with the 'pants' preset pack. This gave me the basic shape of the trouser pattern. I want to widen the leg and tighten the waist, then create a waistband.


I was unsure what clothing to put my character in; originally I wanted to do a simple trouser-and-corset combination. However, I have decided to replicate a dress I made, as I already know its pattern.


The issue is that the skirt and sleeves are knitted, so I will need to figure out how to create this realistic texture.

I started by drawing out each segment of the corset pattern, then mirrored this to create the whole top. I also needed to create the lining, then sew all the pieces together.

The skirt was pretty straightforward: I drew out the shape of the pattern, created a waistband and sewed it all together. Then I made the sleeves, following the correct shape.

I imported the dress as an OBJ into Maya; at this stage I just wanted to see the overall look of this style on my character.

Next I needed to create the lacing at the back of the corset. I did this following an online tutorial, using the ellipse tool and cut & sew to create holes for the grommets.

Then I created the laces and sewed them to the correct parts. I also added topstitching where needed, particularly where the boning would normally go.

Using the button tool, I created grommets and applied a metal texture.



Next I needed to texture the dress; as it was made in Marvelous Designer, the 2D pattern essentially became the UVs. I just rearranged them in Maya so that I could separate the textures.

Then I found a good reference pattern for the knit online and edited it in Photoshop, firstly tiling the texture and then brightening it to the correct colour.

I edited the photo to create a displacement map – making it black and white, then increasing the contrast.

I then added my black and white texture as the displacement map. Using the displacement shader I increased the scale for an accurate render. Then I increased the exposure of the original knit texture.

Next I wanted to draw on a rough stencil of the painted design. I decided painting directly onto the 3D corset top was the easiest option, as it would translate onto the UVs in the correct position. I did this in Substance Painter.

Copying the reference images I roughly drew out where the cranes / flowers go. Then I exported the 2D map and imported it into my Maya scene.

The plan is that, now I know the rough positioning of the painting, I can use my iPad to digitally paint in more detail and it will be in the correct place.


I attempted the hair again; this is the most difficult aspect of the project so far and I have had many failed attempts.

I started again, using a lot more Xgen guides than before. I also decided to go for a simple straight hairstyle; due to time constraints I felt this was the best style to go with.

I then followed the tutorial regarding the modifiers such as noise, curl, clumping and cut.

I still think the hair needs more work.


Next I am working on the textures. I started with a denim texture, tiling and colour-correcting it in Photoshop, then overlaying the painting.

Once imported into Maya I created a bump map.

Then I wanted to add fur to the knitted elements of the dress. I watched a quick YouTube video and created the fur using the 'interactive groom' tools, then played around with the density and scale of the fur. I wanted it short and not too dense, with the correct off-white colour.

I painted the colour for the corset using Procreate on my iPad.

Final Renders:

This has been a good personal project to attempt this term; I feel I have learnt the correct method of creating a character. However, I initially wanted to rig and animate the avatar, and due to time constraints this was not possible, so I am planning to continue this into next term.

I am happy with the overall look of the model; however, I don't think the level of detail on the skin is right. I know it is an issue with the displacement map, so I may try to fix this later. Also, I did not finish the sculpt as accurately as I wanted, particularly around the legs and feet, and the proportions don't seem exactly right. On the whole I like it, but it looks more like a game character rather than the realistic one I initially intended.

I also think the hair could do with more work.

In terms of clothes, I like the overall look and detail. It was very useful learning a new software and practicing how to texture in Maya.

Term 3

Final Major Project

Studying the Effect of Para-Social Interaction in Custom Avatar-Based Games and the Implications for the Metaverse

The final title for my thesis discusses the ability to customise an avatar within virtual worlds, more specifically looking at how para-social interaction will be influenced as a result.


1. What are you researching, how are you researching it, and why is it important to research this subject?

At the moment I am researching 3D character design and anatomy, particularly related to fantasy creatures and how body parts with no real-world equivalent are simulated. How is this anatomy created and understood? How can we as artists replicate it through 3D rigging?

This is important in the advancement of character design, the understanding of real-life references to develop simulated ones, and how to correctly analyse and combine them to achieve realism for made-up characters.

2. Provide at least 5 keywords

Fantasy, Character, Archetypes, Anatomy,

3. Provide at least four sources, debates or texts in the subject area, and a short explanation of their relevance to your project proposal

https://core.ac.uk/download/pdf/187119935.pdf

https://core.ac.uk/download/pdf/147238513.pdf

https://www.researchgate.net/publication/336306246_Physical_Rigging_Procedures_Based_on_Character_Type_and_Design_in_3D_Animation


Character Design: How to develop an invented creature? Simulated characters in the Metaverse, the future of this design?

Anatomy of Fantasy Creatures; How is it derived and developed? How are character archetypes constructed? Is there a universal standard to follow?

3D vs 2D Character designing, differences and/or similarities? How is rigging involved in fantasy character design?

Collaborative Unit Term 2

Group Project

Starting this project I initially wanted a smaller group, as I knew it would be easier to manage the workload and timings. I decided to collaborate with Carlotta as we both have similar interests within VFX. We got in contact with an MA Virtual Reality student as we were hoping to incorporate and learn more about VR this term. We found a student, Rita, who was happy to work with us. Our initial discussions consisted of agreeing on our aesthetic and theme; both Carlotta and I originally wanted to create an advert, as it ensures a clear purpose and narrative for the project. However, we are aware that compromise will be needed.

I wrote up a short description of our initial concepts for our production and Carlotta compiled images for our mood board.

After speaking with Rita, she was on board with the aesthetic and ideas we were going for. She had the idea of incorporating the concept of wellness and mental health into our production, so the project developed to become a virtual space that a person can inhabit for the purpose of relieving stress and escapism.

Rita informed us that her tutor thought another VR student would be needed, so after meeting we decided to combine our group with another. We are now working in a group of six: four VFX students and two from VR.

The concept is remaining largely the same, creating a VR experience within the overarching theme of Escapism.

The plan is to create three different worlds within virtual reality: an energising and colourful one, a calming and relaxing world, and an adventure experience. We will create these worlds in pairs; Carlotta and I will work on the calming experience, adhering to our mood-board aesthetic.


After discussions with various members of the group, Carlotta and I decided on a rough layout and idea for our room. Speaking with the VR students, we were able to decide on three interactive elements for our experience: breathing exercises with a flower, lighting candles and interacting with a gong.

We needed to ensure we were modelling and texturing our items in the correct way to be transferred into Unity.

I was also tasked with animating the face of a character within the VR experience: a penguin. I used blendshapes to create the correct expressions and movements for the script that Lauren had written.

Lauren wanted me to animate a hug on our penguin character, because he will be interacting with the user in VR. I had not rigged or skin-weighted the model, so I just did a very basic blendshape animation, using the vertices and faces to move the arms.

https://www.youtube.com/watch?v=GhqzQsd4o30

Another interactive element of the VR experience is the penguin screaming. We want the user to be able to scream along with it, so I needed to animate this as well.

The issue we faced was that when I exported the animation for Lauren to use in Unity, the timing had to be perfect to fit the audio, as she could not edit it further.

This meant that I had to align the talking, pauses, and scream with the voice over. I primarily used the graph editor to do this.

Animating the penguin in Maya was not too complicated; the real issue arose when exporting it to Unity. The model would constantly flicker between hard and smooth shading, and we noticed the scream animation did not translate well.

https://youtu.be/-w4TGGBF3HI

We needed to ask for advice as to why this was happening. However, due to time constraints it could not be fixed before the deadline.


I modelled and textured the gong for our scene. I am unsure if the colour scheme is right at the moment; I will be able to change it as needed depending on the overall aesthetic of our room.

This is another interactive element of the experience. The handle needed to be made and exported separately so the user can pick it up in VR.


I started working on modelling the candles for another interaction. The plan is to get the user to light them when in the ‘calm’ room.

I created colour maps for the candles, as I wanted a gradient on each of them.

We had an issue with the textures transferring from Maya to Unity. Lauren and I sat together to figure out how to correctly transfer the models, following a YouTube tutorial. Importing the items into Unity was a success; however, we found that some of the textures did not transfer, for example the frosted glass – I believe this could be because it is based on an Arnold preset. We couldn't find a solution online, so the VR students will ask their tutor for advice.

Another issue we encountered was that the models used smooth shading in Maya, yet when opened in Unity they were blocky. Again, we couldn't figure out how to solve this, so Lauren will speak with her tutor.


I also needed to make a Corinthian column for the scene. I started the leaf detail in Zbrush, then imported it into Maya and used Duplicate Special to create the repetition around the column.


Next I needed to join Carlotta in modelling furniture for the room.

I wanted to model an interesting sofa and found a good reference that I believe would fit the aesthetic. I initially started the model in Maya, attempting a few methods to decide what the best technique would be.

I decided the easiest and quickest way would be to sculpt it by eye in Zbrush; with symmetry activated I was able to model the sofa relatively well.

I also modelled a vase in Zbrush: I found an interesting reference photo, then sculpted it using the DamStandard and Smooth brushes.

I started the rug in Maya using a plane, then imported it into Zbrush to add texture using GroomTurbulance.

After importing the furniture into a scene together, I started fixing the UVs on the sofa in order to texture it efficiently later. I created the UVs in Zbrush, then cut and sewed the seams in Maya as needed.

I used Arnold presets to texture the furniture, to get an idea of how the light would work on the models. I also wanted to experiment initially with the colour scheme, but I will use my own textures to adhere to the palette from our mood board.

As transferring textures from Maya to Unity was an overarching problem we faced, Carlotta, Rita and I decided that the scene would be textured in Unity in order to fit the deadline.

Our room is completely modelled and the UVs are all organised, so we still hope to texture the room after submission.

We went to the final presentation of the VR students, in which we got feedback on the work from their tutors. This gave us a few days to work on the aesthetic of some of the rooms, and any elements we needed to change last minute.

https://www.youtube.com/watch?v=LAj-5ItvnDQ

Overview:

Overall, this group project has been interesting. I enjoyed working with the girls from VR and feel I have learnt new things about VR software and how best to import and export files for it. Communication was good throughout the process; Lauren took the role of lead and was very easy to contact, which meant creating good models and animations was possible.

Despite this we also had many issues during the process. Firstly, one of the group members left mid-project, which meant we needed to pick up the extra work. I had to animate the penguin as a consequence, and upon reflection I believe we were very ambitious about what we could achieve within the time frame.

We also faced issues with texturing models in Maya and exporting them to Unity, which we were not able to overcome before the deadline. Therefore the VR experience does not completely match the aesthetic we were initially after. I have learnt that compromise is necessary when working in a group, as well as clear communication.

Collaborative Unit Term 2

NUKE

WEEK 2: Motion Vectors

This week we continued learning how to efficiently clean up a plate. We were shown various methods for doing this.

I needed to first remove the tracker marks on the face. I started by rotoscoping each dot, using a circle and ensuring it tracked efficiently.

Next I used a merge (stencil) with the roto and the footage, then a blur node at the right level to minimise the dots. This got rid of them at first glance, with a premult after it. I then merged (over) the roto and the plate.

Then I needed to re-blur the dots, and use a shuffle and grain to soften it. I also used an edge blur on the alpha to achieve the same effect.
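
One way to wire that patch up in Nuke's Python API is shown below; it is a sketch of the idea rather than my exact script, the Read node name is an assumption, and the regrain step at the end is left out.

```python
import nuke

plate = nuke.toNode('Read1')                       # the denoised plate

roto = nuke.nodes.Roto(inputs=[plate])             # tracked circle around one marker, in the alpha
blur = nuke.nodes.Blur(inputs=[plate], size=20)    # blurred copy of the plate to smear the marker away

# Put the roto alpha onto the blurred patch, premult, then merge it over the original
copy = nuke.nodes.Copy(inputs=[blur, roto])
copy['from0'].setValue('rgba.alpha')
copy['to0'].setValue('rgba.alpha')
prem  = nuke.nodes.Premult(inputs=[copy])
patch = nuke.nodes.Merge2(inputs=[plate, prem], operation='over')
# ...followed by re-graining / edge blur so the patch sits back into the footage
```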


WEEK 3 & 4: 3D tracking

We discussed how to distort and undistort the plate we are using in Nuke; this is important for our collaborative project with the Crypt scene. We were also introduced to 3D tracking and building in Nuke, using the ScanlineRender node specifically.

We used an example shot to practice 3D tracking. We were advised not to track anything with reflections, such as windows or water, so we would need to mask them out of the shot. We used the CameraTracker node, inputting the relevant camera information.

For this semester's project I used the CameraTracker node, inputting the details of the camera that was used and the number of tracks (features) we wanted. I tracked and then solved the tracker; we were told an accurate result usually has an error between 0.5 and 1.

I deleted the unsolved tracks, then recalculated, refining the accuracy of the solve.

I set an origin and ground plane with the points; this ensured the tracks were organised and correctly placed. Then I created cards for the walls and ground, and merged them over the scene.

Once completed I exported the trackers and camera to Maya.

In class we then discussed the use of PointCloud and ModelBuilder on our example plate.

In week 4 we looked at projections, first establishing the difference between using a textured card and projecting onto the shot. We were taught the difference between cameras when using a projection, and the method of using a 'Project3D' node with a patch.

The use of a framehold node was also explained, firstly when used on a camera, then when used on a scanline render.

WEEK 5 & 6:

We continued looking at projections and the problems that may arise such as stretching, doubling and resolution issues. We also went through the various different projections that you can do.

This week I also continued working on my Crypt shot, I tracked the front wall in Nuke, using one of the projection methods we were taught.

I rotoscoped the opening in the wall on frame 0, as that frame had the whole area in shot; I then used a framehold on it and projected it onto a card within the scene using Project3D. When I did the roto I needed to apply an invert, so that the correct part of the shot became the alpha. I also added an edge blur to soften the edges, for a more realistic roto.

Week 8 & 9: Green Screen

We started learning how to work with green screens in Nuke; first, however, we were shown how you can manipulate an image using the colour controls. Specifically, we were shown how to use the Keyer and Colorspace nodes.

For the homework, I edited the background using the luminance key in alpha, then blurred and channel-merged it over the plate, following a method we were taught in class. I graded this to create a pinkish glow, because the foreground is red-tinged and I wanted them to match.

I blurred the entire background slightly. Then I used a keylight to get an alpha for our front image, denoised it and then added an edgeblur.

Once complete, I merged them over each other.

WEEK 10:

To create the final shot, I used my previous roto and merged it over my render of the stationary steam engine. I had already created realistic lighting in Maya and set up all the correct AOVs.

I followed our tutor's script to correctly grade and colour-correct my sequence. I needed to separate the AOV channels; I mainly used diffuse_direct, diffuse_indirect and specular_indirect, grading and colouring them individually to get accurate lighting. Most importantly, I wanted the black points of my render and the shot to match.
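
The AOV split-and-rebuild from the script looks roughly like this in Nuke Python; a sketch under the assumption that the render is Read2 and the AOVs come in as layers on that stream.

```python
import nuke

render = nuke.toNode('Read2')
graded_passes = []

for aov in ['diffuse_direct', 'diffuse_indirect', 'specular_indirect']:
    shuffle = nuke.nodes.Shuffle(inputs=[render])
    shuffle['in'].setValue(aov)                                 # pull one AOV layer into rgba
    graded_passes.append(nuke.nodes.Grade(inputs=[shuffle]))    # grade each pass individually

# Add the graded passes back together to rebuild the beauty
beauty = graded_passes[0]
for g in graded_passes[1:]:
    beauty = nuke.nodes.Merge2(inputs=[beauty, g], operation='plus')
```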

When happy with the grading, I applied a write node, and rendered out the overall sequence.

Then I merged the roto over my graded footage to create the final shot.

I managed to remove the markers on the floor, using four tracker nodes and rotoing them. I then blurred the roto until the markers were gone, premultiplied, and rendered out the final footage. I am having trouble with Nuke as it is not following the tracks correctly; I have to adjust them on each individual frame, which is taking too long.

An issue I have been having is that the track wobbles a lot; the scene is fine in Maya, however when I render it out and import it into Nuke the track shakes again.

https://youtu.be/q9ME-d5G6BE

I have tried many different ways to fix this but keep hitting the same issue, and I don't have time to re-do the camera track from scratch. Ideally I would do that and start the process again; I will try this after the submission. Overall, however, I am not happy with this, which is unfortunate as I have completed the model with animation and textures and graded it into the scene.

I am attempting to fix this problem in Nuke. I started by tracking four points on the machine, then set the Tracker node to 'remove jitter'.

It didn't really work. The shot is still very unstable, and I really can't understand why, when the track works perfectly fine in Maya. I will need to speak to a tutor about this issue.

I have found out what the issue was: the frame rate of the footage and the machine animation were not the same. I will render out a new sequence to input into my Nuke script, which should be stable.


Collaborative Unit Term 2

MAYA

WEEK 1 & 2

We were informed that this term we would be building a stationary steam engine in Maya, to be composited into a scene. We will be working on modelling, lighting and texturing, and alongside this looking in more depth at rigging and hierarchy.

I started by compiling some reference images for my engine, using elements from different machines to inspire my own.

In this week's class we created a simple wheel and piston system and were taught how to correctly rig it. Firstly, we very quickly built a basic piston with a piston sleeve and a wheel attached, using polygon shapes and manipulating them with various tools we had previously learnt.

We added locators at two ends of the piston, and using the Aim Constraint tool we were able to ensure the piston would always move in the right direction (towards the locator).

We were also briefly taught how to use MASH to create the illusion of a rotating belt on our machines.

The animation is not totally accurate as the machine is not moving in the same direction as the belt. This exercise was primarily to get familiar with the tools to rig and animate our own models, so this isn’t a major problem.

I also started this term by continuing some work on my previous Maya scene. I had the idea of creating an advert for the bread; this is more of a personal project now – I just want to see if I can create a successful and complete narrative. I started drawing a storyboard for my idea. I'll come back to this when I have more time.

WEEK 3

This week we continued to create our own steam engines. I drew a basic design based on elements of the machines I liked from reference photos. I chose these parts because I believe they will create a realistic and complex machine. Furthermore, the main aim of this task is to fully understand and create a successfully rigged and animated stationary steam engine, so I chose a design that involves a lot of moving parts.

I started modelling the wheel; it took some time to figure out exactly how I wanted it to look. I started with a cylinder polygon primitive, adding geometry to create a sufficient number of faces to manipulate and deleting the unnecessary ones to form the shape of the wheel, extruding and using edge loops as necessary to create the detail. I also used a pipe polygon primitive to create the outer rim of my wheel.

This was the first attempt; I didn't like the thickness of the wheel's spokes, so I decided to recreate it using the same method.

I started on the steam tank, manipulating the faces of a cylinder to create the desired shape, extruding and using edge loops when needed. I added details such as bolts and panels to my machine as necessary.

To create the latch for the steam tank I originally constructed the right shape using a polygon cylinder, then used boolean difference to create the punctures. This was not an efficient method as it ruined the geometry.

Therefore, I decided to try a different method. I used a polygon cylinder to create the upper part of the latch, deleting the faces to create the hole. I then mirrored this and extruded them into each other. This method kept the geometry relatively intact, so I continued with it – I want to ask my tutor about this in the next class however.

I manipulated the geometry as needed to replicate my reference image.

WEEK 4

We continued modelling our stationary engines this week; our tutor helped me clean up the geometry on the latch. He also showed me an efficient way of creating the organic shape between the two latch bolts. Using the extrude tool and manipulating the vertices, I was able to achieve the look I was after.

Next I wanted to clean up the UVs on what I had already modelled, ensuring they were accurate by using camera-based projection and unfolding in the UV editor.

I also wanted to create a ‘hammered’ effect on the steam tank. I achieved this by adding divisions then using the sculpt tools to organically create the right aesthetic. I wanted the bolts to look hammered in also.

I continued modelling the details of the machine, going back and forth with the reference images.


I needed to sketch out the details of my engine in order to model accurately and efficiently. It helps me to visualize what I am designing.

WEEK 5

I continued modelling my stationary steam engine.

I used the lattice tool to bend the back-plate onto the steam tank, which my tutor had shown me as the most efficient method.


I continued modelling the front mechanism. I wanted to attempt animating part of the model, to ensure that the mechanisms I had modeled were efficient.

I used a YouTube video to see how the movement works. I also re-watched our tutorial on constraints and locators and, using the same principles, attempted to apply them to my own machine.

First I established the pivot points of the movement, then placed locators at each of these points. It was important to ensure the central pivot and the locators were snapped together; I did this using the wireframe view and checking from two perspectives.

Then I used both the Aim and Point constraints accordingly, and put each element in the correct hierarchy to achieve believable movement.

This whole process took a bit of time; it was difficult to fully understand which points needed movement, what needed to be constrained, and furthermore what type of constraint each part needed.
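
In script form, the locator-and-constraint idea from the last few paragraphs looks roughly like this in Maya Python; the object names, positions and constraint choices are placeholders for illustration rather than my actual rig.

```python
import maya.cmds as cmds

# Locators at the two pivot points of the mechanism's travel
loc_wheel  = cmds.spaceLocator(name='wheelPivot_LOC', position=(0, 5, 0))[0]
loc_piston = cmds.spaceLocator(name='pistonEnd_LOC',  position=(8, 5, 0))[0]

# The connecting rod always aims at the locator on the wheel...
cmds.aimConstraint(loc_wheel, 'connectingRod_GEO',
                   aimVector=(1, 0, 0), upVector=(0, 1, 0),
                   worldUpType='scene', maintainOffset=True)

# ...while the piston itself is pinned to its locator so it slides without rotating
cmds.pointConstraint(loc_piston, 'piston_GEO', maintainOffset=True)
```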

I have successfully created the animation; however, there are a few issues remaining. Some of the locators jump during the sequence; I will speak to my tutor about how to fix these problems in our next class.


I now want to build the piston at the front; I roughly drew out how it would look first.

I had a lot of issues when animating the front piston, because the aim and point constraints were not working properly on my model. After discussing it with my tutor, I cleaned up the geometry so it was easier to manipulate, then we tried the constraints again.

Upon reflection, we decided that the issue was with the model itself; I believe the proportions of my handles made it difficult for the parts to follow each other when animated.

With my tutor's help we managed to get the piston working well enough for the task; however, we needed to manually offset certain parts of the piston and keyframe them to achieve this.

I continued modelling the details of my machine using the same techniques as before.

The animation and machine are finished; I just need to slightly animate the belt and add the textures.


I imported my tracked scene from Nuke into Maya, then imported my machine as a reference. I aligned them up and added the image sequence.

Due to time constraints I decided to texture the model in Substance Painter. I watched a few tutorials and looked at reference photos to decide on my colour scheme and aesthetic. I knew I wanted to use copper as my primary metal.

I ensured the UVs were correct for Substance Painter and added the right textures, then added a paint layer and drew on the wear and tear I wanted, as well as discolouration and stains.

I wanted to use a similar colour scheme to the scene, so I used the image plane as a reference point.

Once done, I imported the texture maps to my Maya model – at this point the specular is too high in my opinion, so I will play with the settings to get an accurate finish.

I updated my reference in the Maya scene so my model would show the textures. I started working on the lighting, using two area lights in the corners of the room; one was a cylinder, to create a softer look. The other needed to create sharper shadow edges, so I used a disk light and changed the spread value.

I also changed the colour of the light source to be slightly more green. I still need to play with the exposure and intensity of the lights, as it looks slightly too dark right now. My tutor advised me that the black points of my model and the scene need to match.

I am also working on re-texturing the front piston. I am planning to do this in Maya; my tutor showed me how to create a good worn metal texture using the Hypershade.

I used an image of hammered copper to add to my colour maps for the piston, painting on the discolouration and texture.

Refining the textures of each part of the machine was taking a long time; it is something I could work on continuously, as I feel you can always improve it. When I was happy enough with the look, I created and selected the AOVs we needed, rendered out the sequence and imported it into Nuke, ready to be graded and colour-corrected to fit the scene.

I have found an issue with my work. The camera track is stable in Maya, yet whenever I render out the sequence and import it into Nuke the track becomes very shaky. I am unsure why this keeps happening; I rendered my Maya sequence out twice, even trying a more stable track, yet the problem persisted. I have no idea how to fix this, and due to time constraints I will not be able to before submission. I plan to do so afterwards.

https://youtu.be/q9ME-d5G6BE

You can see just how unstable the machine is in the shot. I have really struggled to rectify this, so ideally I need help from one of my tutors.

I have found out what the issue was: the frame rate of the footage and the machine animation were not the same. I will render out a new sequence to input into my Nuke script, which should be stable.
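
The fix itself is a one-liner; a sketch assuming the plate was shot at 25 fps (the actual rate of our footage isn't something I noted down here).

```python
import maya.cmds as cmds

print(cmds.currentUnit(query=True, time=True))   # check the scene's current frame rate
cmds.currentUnit(time='pal')                      # 'pal' = 25 fps; use 'film' for 24 fps
```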


Design for Animation, Narrative Structures and Film Language Term 1

Audio/Visual Presentation

Design for Animation, Narrative Structures and Film Language Term 1

Critical Report Assignment

I started this assignment by deciding what topic interested me the most; I found the concept of future technologies intriguing, in addition to the uncanny valley effect.

I started my thesis by writing notes, to understand specifically what my title would become.

I created a visual/audio presentation using PowerPoint to elaborate on some of the technologies discussed in my report, in relation to the uncanny valley in VR and AR.

Term 1 Xmas Assignments

Week 11

Having researched the different job roles in the VFX industry, write a short summary on which job role or roles interest you the most and why.

The job roles that interest me the most involve working in 3D and being able to create aesthetic imagery. More specifically, I am interested in the role of a 3D modeller, as I enjoy using software such as Maya and Zbrush.

After researching the job roles, both the 3D animator and texture artist roles intrigue me, again for the same reasons as modelling: I like the idea of creating virtual yet realistic imagery. Consequently, the CG generalist role is something I would consider, in order to hone my skills in a variety of professions relating to 3D assets. More specifically, it is character design that interests me at the moment, rather than environment work.

I like the idea of bringing 2D images to life in 3D software. So far I have created two sculptures. The first was just to experiment with Zbrush, so I modelled the face without a reference image, sculpting each feature individually until it all came together. The second sculpture needed to be based on a statue in London, which meant working from reference photos; this was a good challenge in attempting to recreate an asset from a 'concept' 2D image. It was tasks like these that piqued my interest in character modelling, and after further research I believe it is a role suited to me.

3D model of a bust, created in Zbrush.
Reference image
3D model based on reference image of Golden Square Statue

When researching the job roles in the industry, the Creature TD role was fascinating. I currently do not have the skills to undertake it, but the idea of creating specialist surfaces such as fur, feathers and skin is one I am interested in. I also have an interest in rigging at the moment, which is something this role undertakes too. We have only been taught very basic rigging and skinning, so I am keen to continue learning.

Overall, there are many jobs in the VFX industry that interest me, so it is difficult to decide on a preference. Underlying every role, however, are creative and artistic elements, regardless of the specific job. I am excited to work in an environment where I can be innovative.