Hi all, I just wanted to share a neat little process I hacked together the other day. It's nothing super amazing, kinda novel if anything.
Basically, it's a way to paint on a screengrab from Maya, bake that down using a second UV set as a reference, and done: you've got a workable texture.
Pros:
- painting on a projection in whichever 2D app you want
- saves a bunch of time
Cons:
- Photoshop CS4 can already do this (I think?), so it's kinda moot if you've got that
- lots of editing required after the fact
- kinda long setup procedure (see below)
Process:
- make model (obvious)
- create main UV layout on 1st UV channel
- create empty UV set on second channel
- create new camera (will be used for projections later)
- position camera to get a good view of the model
- *important* set the camera to orthographic when you're done positioning it
- planar project UVs from the camera onto the second UV channel
- take a screenshot
- paint on top of the screenshot in PS, SAI, etc.
- crop the image so there are no pixel borders around the model
- use the Warp Image command to bake down from that image to the first UV set
Bam, projection painting. Kinda hacky and long-winded for what it is, though.
Now, what I want to do is automate it... lots of clicking and roundabout editing... not fun. It would be really useful if you could click once to set the camera to ortho, project to the second channel, screengrab and crop, then click again to import and warp the image. Does anyone have any pointers on where to start? I know I'd probably need to learn MEL. Something like this rough sketch of the first half, maybe:
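(Totally untested; projCam and projUV are placeholder names, and the mesh is assumed to be selected.)

// Rough sketch of the "first click":
string $cam = "projCam";
string $sel[] = `ls -sl -type transform`;

// 1. Lock the camera to orthographic and look through it.
camera -e -orthographic true $cam;
lookThru $cam;

// 2. Make the second UV set current, then planar project from the view.
//    (I *think* -md p projects from the active camera; check the docs.)
polyUVSet -currentUVSet -uvSet "projUV" $sel[0];
polyProjection -type Planar -md p ($sel[0] + ".f[*]");

// 3. Screengrab the viewport to paint over.
playblast -frame 1 -format image -compression "png"
          -completeFilename "projPaint.png" -viewer false -percent 100;

// The crop and the final Warp Image bake would still be manual here;
// I don't know of a direct MEL command for Warp Image.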
Anyway, comments? Is this useful, dumb, etc.? Would it be possible to automate?
Replies
I mean, if you've already made the effort of creating a good UV layout, you wouldn't have much trouble painting directly on the original UV set in a 2D app, unless it consists of many small UV shells (many seams).
With this method you'd have to be careful not to paint on surfaces facing away from the UVSet2 projection; if you do, you'd later get nasty texture stretching when warping to fit a proportionally good UV layout (UVSet1).
What it could be useful for is projecting decal-ish details across UV seams, but there's an easier way to do that: just use a Projection node hooked up to a File node and then Convert to File Texture (Maya Software). Roughly this kind of setup, I'd guess:
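// Untested sketch; "decal.png" and myMesh are placeholder names.
string $file  = `shadingNode -asTexture file`;
setAttr ($file + ".fileTextureName") -type "string" "decal.png";

string $proj  = `shadingNode -asUtility projection`;
string $place = `shadingNode -asUtility place3dTexture`;
connectAttr ($file + ".outColor") ($proj + ".image");
connectAttr ($place + ".worldInverseMatrix[0]") ($proj + ".placementMatrix");

// Hook $proj into your material, position the place3dTexture over the
// decal area, then bake everything down to the mesh's UVs:
convertSolidTx -rx 1024 -ry 1024 -fileImageName "baked.tga" $proj myMesh;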
I might be missing some useful case here, but to sum it up, I don't see where this would be useful.
What could be useful instead would be this:
You have a good UVSet1, but it has some nasty unavoidable seams due to multiple UV Shells that you are having trouble working your way around. UVSet1 does not have any texture stretching.
1. Copy the UVs from UVSet1 to a new UVSet2.
2. In UVSet2, sew together the UV shells that are causing you trouble due to the seam(s) between them.
3. Take a UV Snapshot of UVSet2 and paint on the areas that are now sewn together and no longer have those nasty seams.
4. Use Warp Image to get your nice new result fitted in UVSet1.
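In MEL, steps 1-3 would look roughly like this (untested; the mesh is assumed to be selected, and the edge range for the seam is a placeholder you'd pick yourself):

string $sel[] = `ls -sl -type transform`;

// 1. Copy UVSet1 into a new UVSet2 and make it current.
polyUVSet -copy -uvSet "UVSet1" -newUVSet "UVSet2" $sel[0];
polyUVSet -currentUVSet -uvSet "UVSet2" $sel[0];

// 2. Sew the troublesome seam edges together in UVSet2.
polyMapSew ($sel[0] + ".e[10:20]");

// 3. Snapshot UVSet2 to paint over.
uvSnapshot -name "uvset2_snap.png" -xResolution 1024 -yResolution 1024
           -antiAliased true -fileFormat "png" -overwrite true;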
The con with this method is that if you or someone else later wanted to paint over these troubled areas, they'd have to go through this process again. Getting a better UVSet1 would be ideal if it is possible.
Tried that technique some time ago in Max; it works, but it is indeed a bit painful. I'm sure this could be automated, especially in Maya. Well worth digging into (and much, much more powerful than using laggy 3D paint tools inside a 3D app).
I suspect the hardest part to automate would be the cropping. DeepPaint3D crops the image to be exported to Photoshop along the edges of the 3D preview window, so that could be one solution. But maybe cropping along a screen-space bounding square of the object being painted on could work as well.
Tech guys unite!
Yes, you're right. But this method doesn't account for surfaces facing away from the single ortho planar projection. Isn't that a big issue with this method?
Still, it's mostly useful for lower-poly objects. For more modern assets the benefits would be most apparent when it comes to freehand outfit graphic design and such, not so much for actual texturing/painting.
But yeah, it's a fantastic tool when it works smoothly (seems like it never really does, so far...).
There's a samplerInfo node in the Hypershade which has a facingRatio attribute. Connect this to a ramp node and you can effectively color an object depending on how much it faces the camera. See where I'm going with this?
Apply your method and combine it with a B&W render (from the projection camera) showing you which faces point towards/away from the camera. This could be used as a guide, or even a mask, to avoid the texture stretching from surfaces facing away from the camera. The wiring would be something like this (untested, but it's the standard facingRatio trick):
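// Build samplerInfo -> ramp -> surfaceShader.
string $info = `shadingNode -asUtility samplerInfo`;
string $ramp = `shadingNode -asTexture ramp`;
string $shdr = `shadingNode -asShader surfaceShader`;

// facingRatio is 1 where the surface faces the camera and falls toward 0
// at grazing angles; feeding it into the ramp's V coordinate maps it to
// whatever gradient the ramp holds (set it to black-to-white for a mask).
connectAttr ($info + ".facingRatio") ($ramp + ".vCoord");
connectAttr ($ramp + ".outColor") ($shdr + ".outColor");

// Assign the surfaceShader to the mesh and render from the projection
// camera to get the B&W facing mask.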
Gotta try this when I have time.
I tried your method plus the facingRatio idea I wrote about earlier and came up with this.
The cropping was to make sure the image would fit the projection. If you don't set the planar projection to preserve the image ratio, it'll normalize the projection. Cropping along the edges of the screen is just to make sure it'll fit when applied. I'm sure there's a better way (see the flag note below).
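(For reference, I believe the flag for preserving the ratio is keepImageRatio on polyProjection, though treat that as an assumption and check the docs: polyProjection -type Planar -md p -kir myMesh.f[*]; where myMesh is a placeholder name.)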
Something I noticed when playing with the projection plane: if you set its center to the center of the ortho camera's focus, and its extents to the camera's renderable area, it'll create a perfect projection that matches up to the render. Is there a way to do this by default? Maybe something along these lines (untested; polyPlanarProj1 and projCam are placeholder names):
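string $proj = "polyPlanarProj1";
string $cam  = "projCam";

// Center the projection on the camera's position.
float $pos[] = `xform -q -ws -t $cam`;
setAttr ($proj + ".projectionCenter") -type double3 $pos[0] $pos[1] $pos[2];

// Match the projection extents to the ortho view; the height follows
// from the render's aspect ratio.
float $w = `camera -q -orthographicWidth $cam`;
float $aspect = `getAttr "defaultResolution.deviceAspectRatio"`;
setAttr ($proj + ".projectionWidth") $w;
setAttr ($proj + ".projectionHeight") ($w / $aspect);

// Align the projection's rotation with the camera's orientation.
float $rot[] = `xform -q -ws -ro $cam`;
setAttr ($proj + ".rotate") -type double3 $rot[0] $rot[1] $rot[2];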
It shouldn't really matter whether it is cropped perfectly or centered perfectly since the Warp Image will take care of transferring it anyhow? I must be missing something you guys are seeing in all of this.
The bitmaps I used were these:
- Color texture for the new UV set.
- Facing ratio texture for the original UV set, to use as a mask.
- Color texture: the Warp Image result fitted to the original UV set, using the facing ratio texture as a mask.