
How do Animation Studios texture?

Francois_K interpolator
I've been trying to find something about how Disney or Pixar and the like texture all their environments and characters, but I can't really find anything.
Even when I search for environment breakdowns, all I get is animation breakdowns.

I was curious if anyone had any insight into how they do it. I have a small inkling that it's maybe only shaders, but I'm not 100% on that. (Or whether they UV unwrap as well.)

Replies

  • jfeez polycounter lvl 8
    Check this out. It was developed by Disney a while ago:
    http://youtu.be/NT_FYRP7kqI
  • Kurt Russell Fan Club
  • Francois_K interpolator
    Ah thanks guys!

    Apparently Disney also wrote a paper on that, so I might have a read through that as well ^^
  • BlvdNights polycounter lvl 8
  • Francois_K interpolator
    BlvdNights wrote: »
    The wrong way

    How do you mean?
  • jgreasley
    Disney use PTEX painted using their own in-house texturing system called Paint3D.

    Pixar use Mari with UDIM and PTEX layouts.

    Dreamworks and Sony Imageworks use Mari with UDIM layouts (as do Weta, ILM, Framestore, DNeg, MPC, Cinesite, Blizzard Cinematics, Blur).

    Their UDIM layouts can run into many hundreds of patches per character or asset, normally using a 2k or 4k texture per patch.
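For reference, the UDIM numbering itself is a simple convention: tile 1001 is UV square (0,0), columns run 0-9 along U, and each row up in V adds 10 to the tile number. A small sketch (the filename pattern is just the common convention, not any one studio's pipeline):

```python
def udim_tile(u: int, v: int) -> int:
    """UDIM tile number for integer UV-space tile indices (u in 0..9)."""
    if not 0 <= u <= 9:
        raise ValueError("UDIM columns run 0-9; patches wrap into the next V row")
    return 1001 + u + 10 * v

def udim_to_uv(tile: int) -> tuple:
    """Inverse mapping: tile number back to (u, v) tile indices."""
    offset = tile - 1001
    return (offset % 10, offset // 10)

# A texture for the patch in UV square (3, 2) would typically be named
# something like body_color.1024.tif
assert udim_tile(0, 0) == 1001
assert udim_tile(3, 2) == 1024
assert udim_to_uv(1024) == (3, 2)
```

With hundreds of patches per asset, having the tile number encode the UV location is what lets renderers and paint tools find the right file automatically.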
  • marks greentooth
    jgreasley wrote: »
    Disney use PTEX painted using their own in-house texturing system called Paint3D.

    Pixar use Mari with UDIM and PTEX layouts.

    Dreamworks and Sony Imageworks use Mari with UDIM layouts (as do Weta, ILM, Framestore, DNeg, MPC, Cinesite, Blizzard Cinematics, Blur).

    Their UDIM layouts can run into many hundreds of patches per character or asset, normally using a 2k or 4k texture per patch.

  • jgreasley
    Heh! Thanks.

    I could probably answer any questions people have about high end VFX texturing.
  • Francois_K interpolator
    So do the Character/Environment/Prop artists have to know how to texture with PTEX and UDIM layouts and the various other texturing programs, or do they have specific people there to do that?
    When I look at the credits, I don't think I've seen the title Texture Artist in Frozen/Tangled etc.
  • JordanN interpolator
    jgreasley wrote: »
    Heh! Thanks.

    I could probably answer any questions people have about high end VFX texturing.
    Ooooh, this is gonna be fun.

    1. Is texturing for stuff like cartoons popular in VFX? What does the workflow look like?
    2. What is it like texturing something photoreal? Are there any special rules, do you let the technology do most of the work, and what about an artist's own input?
    3. Since high-end VFX is years ahead of video games, what kind of technology can we expect to fall into game artists' hands over the next couple of years?
  • jgreasley
    Francois_K wrote: »
    So do the Character/Environment/Prop artists have to know how to texture with PTEX and UDIM layouts and the various other texturing programs, or do they have specific people there to do that?
    When I look at the credits, I don't think I've seen the title Texture Artist in Frozen/Tangled etc.

    With PTEX there is no layout process, as each face gets its own texture. It's nice not to have to UV anything.

    It depends on the company as to who does the UDIM / UV layout. Some companies have their texture artists make their own, and in others it's a modelling task.

    Tools like Headus UV-Layout are commonly used to speed up the process. Pretty much essential when your character has 650 patches.

    Most of the time modelling and texturing are different jobs; it's unusual for the modeller to do the texturing too.

    Look development is the process of linking textures to shaders and balancing those shaders to achieve a final "look". In some companies one person paints the textures "Texture Painter" and another does the Look Dev "Look Dev TD". Other companies will join those roles into a single job (Sometimes called Surfacing or just Look Dev).

    Disney joins the two tasks and credits the people painting textures as "Look Development Artists".

    Dreamworks does the same and credits people as "Surfacers".

    Weta credits people as "Texture Painters".
  • DireWolf
    Is PTEX used in environments as well? Do they need to worry about using minimal amount of textures, tiling, or modular set assembly?
  • jgreasley
    JordanN wrote: »
    Ooooh, this is gonna be fun.

    1. Is texturing for stuff like cartoons popular in VFX? What does the workflow look like?
    2. What is it like texturing something photoreal? Are there any special rules, do you let the technology do most of the work, and what about an artist's own input?
    3. Since high-end VFX is years ahead of video games, what kind of technology can we expect to fall into game artists' hands over the next couple of years?

    1. I'm not sure I fully understand the question but I'll write some stuff anyway :) .

    Companies like Pixar and Dreamworks will paint textures to achieve a cartoony look. It used to be much more common to use procedural shaders that are evaluated at render time rather than textures. These saved on memory and disk storage, but at the cost of added complexity (for things like anti-aliasing) and flexibility. Although procedurals are still used, hand-painting is much more common than it was.

    The normal process is that a concept artist will prepare a large number of look studies, including things like colouring, patterns and surface detail. They may also provide a large number of reference photos of skin textures that they want the texture artist to match. This forms a "look pack" in Pixar terminology that is then handed to the texture artist.

    The texture artist will then match the look by painting diffuse, spec, bump etc. channels. This will be a combination of hand-painted work and heavily modified photo reference.

    2. Photoreal texturing takes two forms. Prop / environment matching and entirely synthetic textures.

    This might get a bit technical. Sorry.

    When prop matching, the texture artist will ideally be provided with a large amount of professionally shot reference material, i.e. they took the prop into a studio and shot lots of different images from different directions under controlled lighting conditions. If the reference photographer is doing a good job, they will include a Macbeth colour chart in the shoot.

    [Image: a Macbeth colour chart]

    A Macbeth chart is a piece of card printed with different squares of known colour values. Having this chart means that you can balance the photos, i.e. adjust for any colour changes introduced by the camera, lens, sensor or lighting. If you know what colour a pixel "should be" you can apply colour transformations to ensure that it is.
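As an illustration of that balancing step, one common approach is to fit a 3x3 colour correction matrix by least squares, mapping the measured chart patch colours to their known reference values. The patch values below are made up for the example; real Macbeth reference data would come from the chart manufacturer:

```python
import numpy as np

# Hypothetical measured vs. reference RGBs for a few chart patches,
# in linear light. Values are illustrative only, not real Macbeth data.
measured = np.array([[0.22, 0.12, 0.09],
                     [0.46, 0.32, 0.27],
                     [0.18, 0.20, 0.33],
                     [0.90, 0.88, 0.86]])
reference = np.array([[0.18, 0.10, 0.07],
                      [0.40, 0.30, 0.25],
                      [0.15, 0.19, 0.30],
                      [0.95, 0.95, 0.95]])

# Solve measured @ M ~= reference for a 3x3 correction matrix M.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def balance(image: np.ndarray) -> np.ndarray:
    """Apply the fitted correction to an (H, W, 3) linear-light image."""
    return image @ M
```

Once `M` is fitted from the chart patches, the same matrix is applied to every reference photo from that shoot, which is what makes textures painted from different shots sit together consistently.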

    Ideally the photo shoot will also use cross-polarised filters to remove specular highlights. This is a technical way of shooting an image with two filters, one on the lens and the other on the lights. By using these filters you can recover only the diffuse component of the colour, which is what you want if you're going to texture with it. You don't want any specular response in your textures as this will be added later by the shaders at render-time.

    [Images: the same subject shot without polarisation (specular highlights visible) and cross-polarised (highlights removed).]
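To a first approximation (assuming aligned captures in linear light), the cross-polarised shot is the diffuse map, and subtracting it from the unpolarised/parallel shot recovers a specular-only reference. A toy numpy sketch with single made-up pixel values:

```python
import numpy as np

# Toy aligned captures in linear light (values invented for illustration):
# the parallel shot contains diffuse + specular; the cross-polarised shot
# has the specular highlight filtered out.
parallel = np.array([[[0.50, 0.40, 0.35]]])
cross    = np.array([[[0.30, 0.25, 0.22]]])

diffuse  = cross                                  # paint textures from this
specular = np.clip(parallel - cross, 0.0, None)   # spec reference for look dev
```

The clip guards against small negative values from sensor noise or imperfect alignment in real captures.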

    Once you've got your balanced, diffuse-only reference, you would then use a package like Mari to 3D-project that reference onto your mesh of the same prop. If everything works out, the model is accurate and you can align your reference shots exactly. Then it's a matter of creating plausible spec, roughness, gloss etc. maps to recreate the surface response of the given prop.

    When you're working without a physical prop, but with an entirely virtual object, you'd normally source reference images that match the materials you're looking to recreate, e.g. high-res metal, steel and concrete images. You'd then use Mari again, with lots of warping, stretching and other adjustments to "fit" the reference to the surface.

    Doing all of this in 3D means that you don't need to worry about seams. :)

    3. 3D painting will become much more common. Dealing with seams is a pain in the ass and there is no reason to put up with it. Most modern CG movies are almost entirely textured in 3D in Mari, with PS being used only when it's absolutely needed.

    Painting in the context of a live shader will become more important. As physically based shading becomes more common and shaders become more complex, it will become more and more difficult to efficiently create a look using a normal 2D workflow. When you're painting esoteric maps like subsurface scattering or different gloss response controls, it's very difficult to judge what effect a mask will actually have when you run it through the shader. Programs like Mari allow you to run your own shader while painting, so it's a much more WYSIWYG experience.

    Beyond that, bigger textures, higher bit-depths and a greater understanding of colour management techniques and colour space handling.
  • jgreasley
    DireWolf wrote: »
    Is PTEX used in environments as well? Do they need to worry about using minimal amount of textures, tiling, or modular set assembly?

    Outside of Disney, PTEX is most commonly used for environments. When dealing with complex, especially scanned, geometries it's a godsend not to have to UV them. They're sometimes a mess.

    Minimising texture sizes is nowhere near as important as it is in games. Disks are cheap, memory is cheap, network is cheap; people are expensive. If working quicker means you are heavy-handed and don't really optimise, it's not really *that* much of a problem. It's better to over-paint and never need to add more detail than to have to go back and spot-update things.

    A large environment like Rivendell from the Hobbit might have upwards of 30,000 8k textures. It isn't just environments that go a bit crazy....

    I was talking to the lead artist on a very well-known creature from a recent movie, and she told me the creature had 650 UDIM patches and 30 channels (diffuse, spec, bump etc.), each at 8k x 8k, which works out to 19,500 8k textures, or the equivalent of over 300,000 2k textures.
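The arithmetic behind those numbers is easy to check; an 8k map holds as many pixels as a 4x4 grid of 2k maps:

```python
patches, channels = 650, 30
textures_8k = patches * channels         # one 8k map per patch, per channel
two_k_per_8k = (8192 // 2048) ** 2       # an 8k map = a 4x4 grid of 2k maps
equiv_2k = textures_8k * two_k_per_8k    # total in 2k-texture equivalents

print(textures_8k, equiv_2k)             # 19500 and 312000
```

Which is where the "19,500 8k textures" figure comes from, and roughly the 300,000 2k equivalents mentioned above.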

    Most of the time everything is uniquely textured. It's quicker to create and less complex to manage. The texture setup is "put these textures onto that object using this shader" not "tile this x times and this y times, with this blending mode etc. etc."
  • elementrix polycounter lvl 16
    Thanks so much for explaining all this stuff, greasley. Those numbers boggle my mind. For something like that creature with 19,500 8k textures, are they actually able to display the whole model within Mari with all the textures applied? I use Mari myself a lot nowadays, but I just can't imagine working with so many textures...
  • JustMeSR polycounter lvl 4
    Yay! Someone who can answer my long-standing question.

    Do they bake stuff too? Or do they directly use the model with the highest polycount?

    (Also a personal question if I may. Where do you know all these things from? Work, websites, school?)
  • EarthQuake
    Cool thread.

    jgreasley: That sounds like a crazy amount of data. Do you stream it in as needed or just rely on crazy spec'd-out systems... or rely on network rendering or something?
  • .Wiki polycounter lvl 8
    EarthQuake wrote: »
    Cool thread.

    jgreasley: That sounds like a crazy amount of data. Do you stream it in as needed or just rely on crazy spec'd-out systems... or rely on network rendering or something?
    We render with Arnold. The renderer makes use of so-called "tiled" textures; these are basically mip-mapped files created from our EXR textures.
    These tiled textures allow the renderer to load only the required mip levels of each UDIM texture.

    Here's a quote from the Solid Angle documentation:
    High-resolution unmipped texture maps are very inefficient to render, because the highest resolution level must be loaded into memory regardless of the distance rather than a lower resolution level. For that reason, you may want to use this flag to enforce that all your texture maps are already mip-mapped in advance (perhaps by using a preprocessing tool like maketx). When this flag is enabled, any attempt at loading an unmipped file will produce an error and abort the renderer.

    Another point is that we don't store the real models in our scenes, so they stay really light. We use referenced stand-in files: basically precompiled render-code files created for each static asset. For characters we create cache files which are loaded onto the render geometry, so the render scene has no rigs, skins, controls or real meshes.
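The mip-level saving is easy to see with a little arithmetic. A plain-Python sketch (no renderer API implied) of the resolutions in a full mip pyramid for a square power-of-two texture:

```python
def mip_chain(size: int) -> list:
    """Resolutions in a full mip pyramid for a square power-of-two texture."""
    levels = []
    while size >= 1:
        levels.append(size)
        size //= 2
    return levels

# An 8k texture has 14 mip levels: 8192, 4096, 2048, ..., 2, 1.
# A distant asset can be shaded from a low level without the renderer
# ever paging in the full 8k base level.
print(mip_chain(8192))
```

This is why tools like maketx are run as a pre-process: the renderer can then seek directly to the level it needs inside one file.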
  • jgreasley
    elementrix wrote: »
    Thanks so much for explaining all this stuff greasley. Those numbers boggle my mind, for something like that creature with 19,500 8k textures, are they actually able to display the whole model within mari with all the textures applied? I use Mari myself a lot nowadays, but I just can't imagine working with so many textures...

    Yep, Mari will render all of those textures in realtime. You need a bunch of fast storage / SSDs / FusionIO cards to work efficiently with that much data, but as long as your disks are quick enough, Mari can handle it.
  • jgreasley
    JustMeSR wrote: »
    Yay! Someone who can answer my long-standing question.

    Do they bake stuff too? Or do they directly use the model with the highest polycount?

    (Also a personal question if I may. Where do you know all these things from? Work, websites, school?)

    In general, geometry is represented as subdivision surfaces which then have sculpted detail applied on top.

    The base mesh will be modelled in Maya etc., and the detail sculpted in Mudbox/ZBrush or painted as maps in Mari.

    Ideally the model will be represented as a relatively low-res subd cage mesh and a collection of one or more displacement maps. The renderer will subdivide and displace the model at render time. This means you don't have multi-million-poly models flying around; you have hundreds of thousands of polys and displacement maps.
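The render-time subdivision maths is simple: each Catmull-Clark level quadruples the quad count, which is why a fairly light cage can still render as millions of faces. A quick sketch (the 200k-quad cage is an invented example, not a quoted figure):

```python
def faces_after_subdiv(base_faces: int, levels: int) -> int:
    """Each Catmull-Clark subdivision level quadruples the (quad) face count."""
    return base_faces * 4 ** levels

# A hypothetical 200,000-quad cage subdivided 3 times at render time:
# 200,000 * 4^3 = 12,800,000 faces, none of which ever exist in the
# working scene file.
print(faces_after_subdiv(200_000, 3))
```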

    Some facilities will split displacement into different frequency bands (coarse detail, mid detail, fine detail) which allows different features to be adjusted individually. An example might be having skin pores and details on their own maps, so you can balance them in close-ups etc.
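A toy sketch of that frequency-band splitting (made-up 1-D values; real maps are 2-D displacement textures): each band is stored separately and recombined with per-shot weights.

```python
import numpy as np

# Invented 1-D displacement bands: broad shapes, mid wrinkles, fine pores.
coarse = np.array([0.00, 0.50, 1.00, 0.50, 0.00])
mid    = np.array([0.00, 0.10, 0.00, -0.10, 0.00])
fine   = np.array([0.01, -0.01, 0.01, -0.01, 0.01])

def displace(w_coarse=1.0, w_mid=1.0, w_fine=1.0) -> np.ndarray:
    """Weighted sum of the bands; weights can be rebalanced per shot."""
    return w_coarse * coarse + w_mid * mid + w_fine * fine

# Boost the pore-level detail for a close-up without touching broad forms:
close_up = displace(w_fine=2.0)
```

Because the bands add linearly, adjusting one weight never disturbs the detail stored in the other maps.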

    As to knowing this stuff, I've worked in VFX for the last 13 years and I wrote Mari. :) You pick up some stuff as you go along.
  • jgreasley
    EarthQuake wrote: »
    Cool thread.

    jgreasley: That sounds like a crazy amount of data. Do you stream it in as needed or just rely on crazy spec'd-out systems... or rely on network rendering or something?

    Yep, exactly what .Wiki said. All modern production renderers have tiled caching schemes: they page in just the tiles they need, as they need them. You need to pre-process the data into tiled, mipmapped versions, but doing that makes it possible to render these huge texture sets in realtime.

    A close analogy would be id Tech 5, which was used in Rage. If you watch some of the videos about the MegaTexture technology, this is pretty close to what RenderMan et al. do to manage textures.
  • .Wiki polycounter lvl 8
    jgreasley wrote: »
    You need to pre-process the data into tiled, mipmapped versions, but doing that makes it possible to render these huge texture sets in realtime.
    These minutes of converting save us hours of rendering time :)
  • EarthQuake
    jgreasley wrote: »
    Yep, exactly what .Wiki said. All modern production renderers have tiled caching schemes: they page in just the tiles they need, as they need them. You need to pre-process the data into tiled, mipmapped versions, but doing that makes it possible to render these huge texture sets in realtime.

    A close analogy would be id Tech 5, which was used in Rage. If you watch some of the videos about the MegaTexture technology, this is pretty close to what RenderMan et al. do to manage textures.

    Great, that makes a lot of sense, thanks! (Also Wiki, thanks!)
  • DireWolf
    jgreasley wrote: »
    Yep, Mari will render all of those textures in realtime. You need a bunch of fast storage / SSDs / FusionIO cards to work efficiently with that much data, but as long as your disks are quick enough, Mari can handle it.
    Thanks for the great info, jgreasley. I hope I'll one day get to work on such a machine!