http://www.inf.ufrgs.br/~oliveira/RTM.html
check it out! pretty insane stuff.
they have a demo that you can download and take a look at a few sample objects. looks so damn good it's not even funny.
now if only we could solve that edge problem........
Replies
Yeah, it's cool, but it's not that new, in fact it was shown off in the very early Unreal Engine 3 videos, which were released quite a while ago.
As for the edge problem, there's always silhouette clipping...
nice find
silhouette mapping/clipping would solve those edge issues
silhouette clipping is the cool tech though. Have yet to see anyone try to implement it in a game. The relief stuff is neat. That Doom footage was mightily impressive.
I wonder if no one is using silhouette clipping in-game because it won't work with deforming meshes, only static ones [just an educated guess]? Or maybe it's yet another strain on the CPU/GPU, and so it's not competitive with a simple increase in the actual tessellation?
Anyhow, in that RTM paper, they show a silhouette method being used already, as part of their algorithms. Pretty frikkin amazing. Although it also seems to have a lot of aliasing... the silhouettes look kind of choppy. Gotta download that demo!
Thanks for the links.
With bump mapping, you're basically offsetting the bump map in relation to the light position, and the greyscale of that pixel is then added to the one below, giving you a shiny surface and faking height to a degree.
With normal maps, the light reacts to the stored per-pixel normal (encoded as A(length) RGB(XYZ)) just as it would to a vertex normal. This gives the illusion of depth because the light is working in a 3D rather than 2D space.
Parallax mapping is like normal mapping v1.1: it offsets the texture coordinates and remaps the perspective in screen space.
Relief mapping is different though; it uses the height data as if it has real depth. The light reacts to the depth of the pixel, not just its normal.
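For anyone who wants the gist in code, here's a toy sketch of the difference between the single parallax offset and the relief ray-march, in Python rather than shader code. The heightfield and the numbers are made up, just to show the shape of the math:

```python
# Toy 1D illustration of parallax offset vs. relief (ray-marched)
# mapping. Not shader code -- just the core math, with a made-up
# heightfield standing in for the height texture.

def height(u):
    """Hypothetical heightfield sample in [0, 1] (0 = deepest)."""
    return 0.5 if int(u * 8) % 2 == 0 else 1.0   # square-wave grooves

def parallax_offset(u, view_tan, scale=0.05):
    """Classic parallax: one height lookup, shift the coordinate once.
    view_tan is tan(view angle) in tangent space."""
    h = height(u)
    return u + (1.0 - h) * scale * view_tan

def relief_march(u, view_tan, scale=0.05, steps=32):
    """Relief mapping: step the view ray through the height volume and
    stop where it first dips below the surface, i.e. treat the map as
    real depth."""
    depth = 0.0
    du = scale * view_tan / steps
    for _ in range(steps):
        if 1.0 - depth <= height(u):   # ray has hit the surface
            return u
        u += du
        depth += 1.0 / steps
    return u

print(parallax_offset(0.1, view_tan=3.0))
print(relief_march(0.1, view_tan=3.0))
```

At a grazing angle the single parallax offset happily jumps across a groove wall, while the march stops at the first intersection, which is exactly why relief holds up at steep angles (and costs so much more per pixel).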
I think the best way to show this is with a screenshot where you can see each version.
See I think that describes the difference in the effect much better. That's purely the effects put onto a Quad.
Shame the demo seems limited to NVidia stuff.
On my 6600GT I can expect Doom 3 to run using
Normal Mapping - 120fps
Parallax Mapping - 90fps (110fps if it's using Virtual Displacement)
Relief Mapping - 38fps
So it would be just as easy to use real geometry, but doing that for levels would mean quite a few static models, not to mention other costs (more VRAM, etc.).
I would say the real benefit comes from using it sparingly, given the texture throughput required.
That said, the GeForce 7 series is specifically designed to speed up relief mapping, so you can use it with almost zero performance hit. Still, it's a good technique.
The four panels and center object on the door appear to float after relief mapping is applied to them (I assume separately). Of course, D3 wasn't made with relief mapping in mind, but it is something artists would probably have to avoid. Shows just how real the illusion looks. Also notice the difference in edge hardness around the door.
[ QUOTE ]
Can this tech be used on characters, or best left on static objects?
[/ QUOTE ]
We used it on our next gen game on the main character (a demon), and to tell you the truth, on television resolution, in a 3rd person game, it just wasn't worth it. So we took it out.
For a first person game, on large environment object, I think it's well worth it.
Performance is painful and image quality suffers somewhat.
The effect only looks good at a distance; get up close at extreme angles and you get results like this: http://www.doom3reference.com/pipes1.jpg
Luckily though, he said it was a really quick modification and could easily be improved greatly.
It certainly goes to show what amazing stuff can be done though. Also, it's only supported by top-spec NVIDIA cards at present due to the number of dependent texture look-ups required; it really isn't worth dabbling with the hack.
[ QUOTE ]
[ QUOTE ]
Can this tech be used on characters, or best left on static objects?
[/ QUOTE ]
We used it on our next gen game on the main character (a demon), and to tell you the truth, on television resolution, in a 3rd person game, it just wasn't worth it. So we took it out.
For a first person game, on large environment object, I think it's well worth it.
[/ QUOTE ]
Very true. In fact, even on a high-resolution computer screen you're not going to see much quality difference over normal mapping on a character.
This is because characters are already reasonably curved and shaped, so you're just adding depth in terms of detail rather than completely faking things.
Relief is definitely far better for larger flat surfaces where the gamer will get close up and want to see that detail.
Parallax is for curved depth architecture, like rocks or columns with spirals. Stuff like that.
I wouldn't waste the processor time on characters though; normal mapping in most cases is more than enough. It is better (and quicker) to have a higher-resolution normal map + AA shader than a relief shader. The end result is better IQ for the gamer.
A well crafted and properly lit normal/parallax map combo gives us a great effect in our engine.
If it's gonna be a huge performance hit on consoles... I don't think I'd want to afford it in my pipe. Parallax calculations are expensive as is.
-R
polarize... welcome to the boards!
Displacement = heightmap, same thing. Generate them the same way you do normal maps, using Render To Texture, Kaldera, ATI's tool, or NVIDIA's Melody.
Trouble is though, in my experience what works well with displacement does not always work well with parallax or other bump techniques. When I generate a heightmap by rendering from a highres object down to a lowres one, the heightmap has facets. This is necessary for displacement to accurately perturb the low-res mesh in order to recreate the highres silhouette (do a search for heightmap in the Max8 help, they elaborate on this issue).
However with parallax or bump, the heightmap is not actually moving any vertices around; instead it is distorting the pixels across the existing surface, and this low-res geometry is using a single smoothing group. So the heightmap facets aren't cancelling out the low-res facets; instead they cause some ugly banding.
It helps me to subdivide and smooth the low-res object before making the heightmap, removing most of the faceting. But going too far can also distort the UVs or the result, so it's a balancing act.
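If it helps, here's one crude post-bake alternative (my own sketch, not anything the baking tools do for you): box-blur the baked heightmap itself to knock the facet steps down, instead of (or as well as) subdividing before extraction. Plain Python on a 2D list, no particular package assumed:

```python
# Crude post-bake facet smoothing: box-blur the baked heightmap so the
# hard facet steps soften into ramps. Sketch on a plain 2D list of
# floats -- a real tool would work on image data, but the idea is the same.

def box_blur(hmap, radius=1):
    """Return a blurred copy of a 2D heightmap (list of rows of floats)."""
    h, w = len(hmap), len(hmap[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:  # clamp at borders
                        total += hmap[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A hard facet step softens into a ramp after one pass:
faceted = [[0.0, 0.0, 1.0, 1.0]] * 4
print(box_blur(faceted)[1])
```

Going too far with the radius obviously destroys the real height detail too, so it's the same balancing act as over-subdividing.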
Don't know if this helps you, but it helps me think when I type something out.
The focus has been on shaders for too long; all this tech is boring and time-consuming on the development end. I also prefer good art direction and texture work to normal mapping, as it still looks wonky like the plastic wrap filter in photoshop to me.
[ QUOTE ]
I also prefer good art direction and texture work to normal mapping, as it still looks wonky like the plastic wrap filter in photoshop to me.
[/ QUOTE ]
I agree. Human characters in particular have gotten fugly with all this new tech. I find great texture work easier on the eyes. Except for games like HL2, since their lighting isn't as sharp, and the animations and voice work really bring out the personalities. This rush to parallax mapping seems too sudden, as it's an enormous hit on graphics resources, and the games using it will probably be the same ol' FPS tech demos appealing to the hardware obsessed.
Thanks for your answer again. The faceting is in fact a big problem for me. I was afraid I couldn't use displacement maps as heightmaps. Since the normal-to-heightmap tools don't work accurately enough, I had to paint all those maps by hand, which just costs too much time and of course isn't much fun.
But subdividing a really very low-res mesh for rendering a displacement map wouldn't work in my case.
But on the other hand I'm not sure the pipeline for our project is final, because who wants relief / parallax / whatever mapping on very low-res geometry?
Edit: Okay thanks, smoothing the low-res works at least better than painting the maps by hand. But I hope we'll see some changes in generating those maps in the future. It all just gets too time-consuming.
[ QUOTE ]
the focus has been shaders for too long, all this tech is boring and time consuming on the development end.
[/ QUOTE ]
It isn't the tech, it's the users. Look at Gears of War; those Epic guys make that shit look natural and gorgeous.
First off, this is only the 2nd project cycle we're seeing of next-gen materials, and only 3-5 developers had it for the first string anyway. Give us developers some time to learn.
As well, there is no other alternative for technology progression: high-res normal maps come from high-res geometry. If you were to make that geometry the primary mesh you'd still need a buttload of new UVs, and that is time-consuming as well.
Besides... I don't make a high-res mesh for every normal map; as a matter of fact I use Darktree and paint heightmaps for 90% of my details, so no, I don't agree with that tech progression at all.
Given that, you're still misled and presumptuous. Material passes make all the difference. Film CG gets its gorgeous look from materials.
Use the tech for a project (or 3) before you broad-brush it as a poor progression of technology, bro.
-R
[ QUOTE ]
...
But subdividing a really very low-res mesh for rendering a displacement map wouldn't work in my case.
But on the other hand I'm not sure the pipeline for our project is final, because who wants relief / parallax / whatever mapping on very low-res geometry
...
[/ QUOTE ]
Look at the vids in John's link, specifically this one. And the dissertation used an even simpler mesh than that. Anyhow, PaK's point about performance might still be an issue; I'm just saying the low-res mesh shouldn't be the deal-breaker.
Not sure if I was clear... I subdivide the low-res only for the extraction, removing it once the map's been generated. That might save you a lot of hand-painting time, especially since it's really tough to paint heights accurately/smoothly.
-R
The results are awesome compared to what I had before.
@doc_rob
What can I check out in P&P? I can't find the thread.
Thanks for your tips!
PS: Sorry for the Maya terms.
http://boards.polycount.net/showflat.php?Cat=0&Number=91531&an=0&page=0
Glad it's working out.
PaK, good point. We've been simply using the same heightmap asset as the source for both the normalmap and the parallax map. First the engine converts it to a normalmap (for the detailed micro bumpage) and then samples the original down to use for the parallax map (for the larger relief). Saves a bit on disk assets (though same vid memory), allows more control over both ends (artist can alter normalmap strength, and parallaxmap sampling method), but it does take a load-time hit. YMMV.
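To make that conversion concrete, here's roughly what the load-time step looks like, sketched in Python on plain 2D lists (central differences for the normals, 2x2 averaging for the lower-frequency parallax copy). The real engine code is obviously not this; it's just the shape of it:

```python
# Sketch of deriving both maps from one heightmap asset: a tangent-space
# normal map via central differences, and a 2x2-averaged downsample of
# the same heightmap for the parallax term. Plain lists, no engine assumed.
import math

def height_to_normals(hmap, strength=2.0):
    """Tangent-space unit normals from a 2D heightmap (edge-clamped).
    strength scales the slope, i.e. the artist's normal-map intensity."""
    h, w = len(hmap), len(hmap[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders.
            dx = (hmap[y][min(x + 1, w - 1)] - hmap[y][max(x - 1, 0)]) * strength
            dy = (hmap[min(y + 1, h - 1)][x] - hmap[max(y - 1, 0)][x]) * strength
            inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx * inv, -dy * inv, inv))  # unit normal (x, y, z)
        normals.append(row)
    return normals

def downsample(hmap):
    """Average 2x2 blocks -- the lower-frequency copy for parallax."""
    return [[(hmap[y][x] + hmap[y][x + 1] +
              hmap[y + 1][x] + hmap[y + 1][x + 1]) / 4.0
             for x in range(0, len(hmap[0]) - 1, 2)]
            for y in range(0, len(hmap) - 1, 2)]
```

Doing it at load time is what buys the control mentioned above: the artist can tweak `strength` (normal-map intensity) and the sampling method without re-authoring the source asset, at the cost of the load-time hit.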
I think someone mentioned this article in another thread, a good overview of the parallax effect.
accelenation's Parallax Mapping article
This tool/method seemed logical to the engineers, but I wanted to make layers upon layers of details in Photoshop with my normal maps, and I found the swimmy effect was happening too often the moment I started adding any nubbly little details.
I asked my gfx engineers for more control, so now I plug my 8-bit parallax map into the alpha of the normal map, which saves a bit more than a separate grayscale texture would. It's not hard for them to implement this functionality. Even though a 32-bit file is big, it's better than a separate parallax map.
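For anyone curious, the packing itself is trivial; something like this, where the normal's RGB and the 8-bit height share one RGBA texel (made-up function names, obviously not our engine code):

```python
# Sketch of the packing described above: the 8-bit parallax/height value
# rides in the alpha of the normal map, one RGBA texel per pixel, so no
# second texture fetch or extra grayscale asset is needed.

def pack_normal_height(normal_rgb, height):
    """Combine an (r, g, b) normal texel with an 8-bit height in alpha."""
    r, g, b = normal_rgb
    return (r, g, b, height)

def unpack(texel):
    """Engine/shader side: split the texel back into normal and height."""
    r, g, b, a = texel
    return (r, g, b), a

# A flat normal (128, 128, 255) carrying a mid-low height of 64:
texel = pack_normal_height((128, 128, 255), 64)
print(unpack(texel))
```

The trade-off is the one mentioned above: a 32-bit texture instead of a 24-bit one, but still cheaper than a whole separate 8-bit parallax map with its own sampler.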
-R