I'm keen to try building out a small VR (Oculus) environment, but I'm struggling to find any sources that explain the differences between building 3D art for standard screens and for VR displays.
Also, is there any clear indication yet of which works better for VR: Unity or Unreal 4?
If anyone could share any knowledge or resources it would be much appreciated!
Replies
Efficiency matters: care about your FPS and the efficiency of your environments, and don't let your environment be the cause of FPS problems. This directly impacts the end user's experience, and you could actually contribute to them feeling sick or dizzy because you wanted to spend some performance on something fancy (see the frame-budget sketch after these notes).
Normal maps don't work as well: as VR gets better and better, the 'fake' depth and lighting of normal maps start to break down (source: Valve). This may not matter much now, but in the future we may have to get creative, using tessellation or real geometry more often and not depending on normal maps to describe larger shapes, only using them to soften edges and add smaller details. (Another way to say this: bad normal maps will be a lot more obviously bad, so get better at making good normal maps.)
Consider vertigo and perspective more: really good VR will increasingly make you feel like you are actually in a place. Large spaces and extremely tall objects can start to feel a lot more 'surreal' (and sometimes fake in that sense) than they normally would. You could use that to your advantage, but if you are going for something realistic it could work against you. You could also make your players sick or dizzy by putting them somewhere very high up, moving them around a lot, or moving things around them a lot. Just be conscious of these kinds of things.
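To put rough numbers on the frame-budget point, here's a minimal, engine-agnostic C++ sketch of the arithmetic. The 75 Hz refresh is the DK2 panel rate; the per-pass millisecond costs are made-up placeholders, not measurements from any real project.

```cpp
#include <cstdio>

// Rough frame-budget arithmetic for a 75 Hz HMD (the DK2 panel's refresh rate).
// All per-pass costs below are made-up placeholders, purely for illustration.
int main() {
    const double refresh_hz = 75.0;
    const double budget_ms  = 1000.0 / refresh_hz;  // ~13.3 ms per frame

    // Hypothetical per-frame costs in milliseconds; the scene is drawn once per eye.
    const double scene_ms_per_eye = 4.5;
    const double distortion_ms    = 2.0;  // lens-distortion / post pass
    const double game_logic_ms    = 1.5;

    const double total_ms = scene_ms_per_eye * 2.0 + distortion_ms + game_logic_ms;

    std::printf("Budget: %.1f ms, estimated frame: %.1f ms\n", budget_ms, total_ms);
    if (total_ms > budget_ms)
        std::printf("Over budget: dropped frames mean judder, and judder makes people sick.\n");
    else
        std::printf("Headroom left: %.1f ms\n", budget_ms - total_ms);
    return 0;
}
```

The point is just that everything gets paid for twice (once per eye) out of a budget that is already smaller than a 60 Hz monitor's, so "something fancy" costs more than it does on a flat screen.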
These are my impressions, though I'm no expert. I have, however, talked with Valve about it and experienced their VR demo. My impressions write-up is here: http://cmonteroart.blogspot.com/2014/01/i-was-lucky-enough-to-not-only-attend.html
Have you tried the DK2? Did you have the same feeling with it?
The biggest challenges are mipmapping and aliasing. The screen is just inches from your eyes, and each eye's camera aliases and mips slightly differently, which causes a lot of discomfort. You also lose a lot of resolution in the shader that pre-distorts each eye's image for the lens.
(Anti-aliasing is the smoothing of polygonal edges. Mipmapping is the smoothing of textures as their angle and distance from the render camera increase.)
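For a rough feel of the numbers involved, here's a small self-contained C++ sketch: the standard mip-chain length calculation, plus supersampling the per-eye render target to claw back some of the sharpness lost in the distortion pass. The 1.4x scale factor is an illustrative assumption, not something prescribed in this thread.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Number of levels in a full mip chain for a texture of the given size.
int MipCount(int width, int height) {
    int largest = std::max(width, height);
    return static_cast<int>(std::floor(std::log2(static_cast<double>(largest)))) + 1;
}

int main() {
    // Each eye renders to its own target, which the distortion shader then warps
    // for the lens. Rendering larger than the panel and letting the warp
    // downsample is one common way to win back apparent sharpness; the 1.4x
    // scale below is an illustrative assumption, not a measured requirement.
    const int   panel_w_per_eye = 960;   // DK2: 1920x1080 split across two eyes
    const int   panel_h_per_eye = 1080;
    const float rt_scale        = 1.4f;

    const int rt_w = static_cast<int>(panel_w_per_eye * rt_scale);
    const int rt_h = static_cast<int>(panel_h_per_eye * rt_scale);

    std::printf("Per-eye render target: %dx%d (panel is %dx%d per eye)\n",
                rt_w, rt_h, panel_w_per_eye, panel_h_per_eye);
    std::printf("Mip levels in a 2048x2048 texture: %d\n", MipCount(2048, 2048));
    return 0;
}
```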
These are the rules we came up with so far.
I'm sure I'm forgetting lots of stuff right now. If I remember anything pertinent I'll update.
But after what you wrote about noisy textures, specularity, and geometry, the idea actually seems perfect to me now.
There are other best practices: no camera movement, no forced movement, and making UI elements physically floating objects in space.
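As a rough sketch of how you might size a floating UI panel, here's a small self-contained C++ example that computes how wide a world-space panel should be to cover a chosen angular size at a chosen distance. The 2 m distance and 30-degree span are hypothetical numbers, not values from this thread.

```cpp
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Width (in world units, e.g. metres) of a flat panel placed 'distance' away
// so that it covers 'angular_size_deg' degrees of the player's view.
double PanelWidthForAngle(double distance, double angular_size_deg) {
    const double half_angle_rad = (angular_size_deg * 0.5) * kPi / 180.0;
    return 2.0 * distance * std::tan(half_angle_rad);
}

int main() {
    // Hypothetical numbers: a menu floating 2 m in front of the player,
    // spanning about 30 degrees of the view horizontally.
    const double distance_m = 2.0;
    const double angle_deg  = 30.0;

    std::printf("A %.0f-degree panel at %.1f m should be about %.2f m wide.\n",
                angle_deg, distance_m, PanelWidthForAngle(distance_m, angle_deg));
    return 0;
}
```

From there it's just a matter of anchoring that quad to the world rather than to the camera, so it behaves like a physical object instead of a screen overlay.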