I've searched around on the internet but I'm not finding much about making stereoscopic games.
Can anyone tell me how I would go about making my work stereoscopic?
What I was thinking is that there would be some way in UDK (for example) to output to stereo.
As I'm not finding much, I'm guessing the tech is too new to be supported.
So I assume that to output to stereo I'd have to involve a programmer, and they'd have to write something to render two views and combine them in real time.
Any thoughts or guidance would be appreciated,
Alex
Replies
http://area.autodesk.com/blogs/stevenr/stereoscopy_in_maya2011
But I think the latest version of UDK has some kind of stereo support, though not for rendering. Or was that the nVidia cards? Anyway, I'll keep this in my head and think of a way to do it. It can be done.
The basic premise is that you need two cameras, spaced a little apart, that share a common focus point.
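To make that premise concrete, here's a minimal sketch (plain Python, not UDK code; the units and the 6.5 cm / 300-unit numbers are made up for illustration) of a "toe-in" rig: offset each camera half the eye separation to either side of the base camera, then rotate each one inward so both view axes cross at the common focus point:

```python
import math

def stereo_rig(interaxial, convergence_dist):
    """Place two cameras interaxial/2 to each side of a base camera,
    each toed in so their view axes cross at the convergence point."""
    half = interaxial / 2.0
    # angle each camera must rotate inward to aim at the focus point
    toe_in = math.degrees(math.atan2(half, convergence_dist))
    left = {"offset": -half, "yaw": +toe_in}    # left cam turns right
    right = {"offset": +half, "yaw": -toe_in}   # right cam turns left
    return left, right

left, right = stereo_rig(interaxial=6.5, convergence_dist=300.0)
print(left, right)
```

The farther away the focus point, the smaller the toe-in angle, which is why distant scenes look nearly flat in stereo.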
There are several ways of doing it:
B: Get some glasses with a shutter over each eye that opens and closes faster than the eye can notice, 70 Hz or so, then synchronize the footage with that. To make this work, you need a receiver that tells the glasses when to start getting jiggy.
C: Get some glasses that filter different light wavelengths or polarizations, and play your footage on a monitor that can display them. That's those new 3D TVs and monitors that keep popping up.
Where the programmer comes in is that you need to constantly cast rays so the cameras focus on whatever the crosshair is on. To make it pop more, you space the cameras further apart, but do it too much and the player will get an urge to puke whenever they look at something that isn't on the crosshair.
Can't tell you exactly how to do it, as I haven't got experience with UDK, nor with programming.
I read that UDK was going to incorporate S3D, but I guess we can't get our hands on it yet.
Well, I don't know much about the tech. Not the old 80s-style red and green glasses (I've done animations with those before, and they actually looked OK), but whatever the current thinking is for games. I've read that the TV input is HDMI 1.4, which supports S3D, but how I make my levels S3D in UDK or any other kit is a mystery!
I'm not even sure if I need a special graphics card (as I understand it, that's only for converting 2D to S3D) or a 3D monitor. There just doesn't seem to be any info out there!
Issues:
Particle systems and anything that's time-based (moving textures and such) will not sync between the two renders. It's a performance thing; they will always be slightly off.
Solution?:
A post-process effect to render left and right, but from what I've read it will always be from one camera and won't actually shift the view for left and right. You might as well render one view and fake it; there's software that can do that.
Unless you can shift the camera left and right in the node... which I think you can. But do this per frame and composite the results on screen. Sounds crazy now that I read it out loud.
Quickly going through UnrealScript, there's only one camera per pawn. Could be wrong.
What I outlined above with the camera rig will work. Just make sure you have a sync frame/moment in UDK so that everything matches when you start compositing.
Lamont, how were you going to tackle combining the two views, assuming you were going the anaglyph route? I read somewhere, I think in a Sony slide, that you could use a depth map to get stereo. Need to find that...
http://www.google.com/url?sa=t&source=web&cd=8&ved=0CD0QFjAH&url=http%3A%2F%2Fwww.technology.scee.net%2Ffiles%2Fpresentations%2FStereoscopic_3D%2FPS3_Making_Stereoscopic_3D_Games.pdf&ei=8VNITIfCG8GblgfFquWGCw&usg=AFQjCNE5D9TQYejHPHfcjW3pPNvU5K5D_A&sig2=ny_2EsZeMgwcCTTFfGElhA
Solution?
You know what? When I was making a camera for a top-down action game, I messed up and the camera was going back and forth so fast it was in stereo. I was getting two views and a single point of interest based on the crosshairs. And this was all in Kismet. You can put a hidden object as the camera target, and that becomes your point of interest.
After Effects or NukeX, or really any compositing application.
If you can render out one depth map from UDK and one still image, email it to me and I'll make a stereo version for you in any format. I'm working with a friend on stereoscopic software, so it would be a nice test.
I wonder how Batman: Arkham Asylum does it. That's the only console game I know of that did 3D.
Count me interested in developing something like this; it's just that I don't have any 3D glasses right now. Maybe I'll have a look tomorrow at the shops here in Sydney.
If you can't find glasses, go to a museum or a "plane-arium".
I always thought you needed a 120 Hz monitor for shutter glasses.
From what I've heard, with shutter glasses there's not much you can do from an art point of view. If your engine supports it, it'll work; it's purely something implemented in code.
That's my plan of attack.
I spent the last semester at university being told how important stereoscopy is and what awesomeness it brings. The first thing you want to make sure of (unless you're in R&D or a tech artist) is: forget about it. Stereoscopy is nothing but another post effect, and it will not enhance the quality of your work! I feel it's important to point this out, as four of my five professors made this mistake (imho) and talked about tech instead of content.
Back to the topic!
Stereoscopy is (as has been described) the rendering of two horizontally parallel cameras and their respective projection onto each eye. The important thing is that in most cases you only have one output device (unless you want to go cross-eyed all the time), and your eyes can't separate the image on their own (obviously).
So first you need an output device or a filter for the stereoscopic image: either red and green glasses (which won't allow for much more color than grayscale) or a specific monitor with glasses (shutter, polarization, or hue shift).
Which image you generate depends on your output format.
The two easiest approaches to fake/create are these:
Shutter glasses:
Create a camera rig (easiest way: two cameras attached to the left and right of another camera used for previs), read the shutter glasses controller's state, and activate the corresponding camera of your rig (no signal means the base camera). Alternatively, you can do it on the cheap by just creating a keyframe on every frame to switch the cameras (this won't synchronize, though, and might not work too well).
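The cheap keyframe-switching variant boils down to picking a camera by frame parity, falling back to the glasses' own signal when you have one. A tiny sketch (plain Python; the `shutter_state` "left"/"right" signal is a hypothetical controller readout, not a real API):

```python
def active_camera(frame_index, shutter_state=None):
    """Pick which eye's camera renders this frame. If the shutter
    glasses report their state, follow it; otherwise just alternate
    by frame parity, which will drift without a sync signal."""
    if shutter_state is not None:
        return shutter_state
    return "left" if frame_index % 2 == 0 else "right"
```

This is exactly why the keyframe trick "will not synchronize": without the controller's signal, nothing guarantees the "left" frame is on screen while the left shutter is open.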
Red and Green:
Exactly the same as above: create two cameras on a parent constraint, have them switch permanently at the highest possible speed (by script or by keyframes), and overlay them with a post-process effect putting grayscale on the red channel for one eye and grayscale on the green channel for the other. It's not easy on the eyes and the colors need calibration, but it's very easy to implement.
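The overlay step is just channel-picking. A minimal sketch in plain Python (red/cyan shown here since it keeps more color; for strict red/green glasses you'd grayscale each eye first and drop blue; images as nested lists of RGB tuples purely for illustration):

```python
def anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph composite: take the red channel from the
    left-eye image, green and blue from the right-eye image.
    Both images are equal-size nested lists of (r, g, b) tuples."""
    out = []
    for row_l, row_r in zip(left_rgb, right_rgb):
        out.append([(l[0], r[1], r[2]) for l, r in zip(row_l, row_r)])
    return out
```

In an engine this would be a one-line channel mask in the post-process material rather than a per-pixel loop.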
Easier to render, but hard on the math if you want an absolutely correct image: read out the z-buffer of your image (render a separate pass, or look up your engine's render path). Now create two new images, where you fill in each pixel of the rendered picture offset by + or - (left or right eye) the grayscale value of the z-buffer pixel at the same original position, times the amount of shift you want (which increases or decreases the stereoscopic effect).
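That z-buffer trick is essentially depth-image-based rendering. A toy sketch in plain Python (single-channel "images" as nested lists, depth normalized 0..1; names and the gap handling are my own simplification, and a real version has to inpaint the holes the shift leaves behind):

```python
def shift_view(image, depth, eye, strength):
    """Synthesize one eye's view by shifting each pixel horizontally
    by its normalized depth (0..1) times `strength`; `eye` is +1 or -1
    for right/left. Pixels shifted off-screen are dropped, and gaps
    keep the background value 0 (a real version would fill them)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx = x + int(round(eye * strength * depth[y][x]))
            if 0 <= nx < w:
                out[y][nx] = image[y][x]
    return out
```

The holes are the "hard on the math" part: the second view contains background that the first camera never saw, which is why faked depth-map stereo never quite matches a true two-camera render.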
http://www.barcinski-jeanjean.com/
making of:
http://blog.barcinski-jeanjean.com/2008/10/14/making-of-part-ii-stereo-photography/
http://blog.barcinski-jeanjean.com/2008/10/17/making-of-part-iii-anaglyph/
http://blog.barcinski-jeanjean.com/2008/10/31/making-of-part-iv-video/
Can't do it since I don't have the hardware.
Renderhjs, that's a cool site. Can't check the other links at work, so I'll have to look at home!
Cheers guys.
Found this:
http://http.developer.nvidia.com/GPUGems/gpugems_ch41.html
Got some work done, and my mind was going insane, but apparently I didn't check screen align >.<
Haven't messed with putting in a depth map instead of numbers... Anyone know of a better way to offset a texture rather than using screen space to offset it? I don't even think the way I did it is remotely correct, but there is some sort of 3D effect, so I must be on the right track.
Goddamit fail edit.
3d map? Which one is this?
Performance is horrible.
A. Generate the second image using a depth map and a displace filter (I'd have to look into whether I can grab the algorithm Photoshop uses).
B. Constrain two cameras to the player and switch via Kismet, hoping that if they switch fast enough I can overlay the images. The problem is that theoretically I wouldn't be able to isolate the red from the green/blue, unless I can somehow store the previous frame and use that.
EDIT: Totally forgot e-freak's comment on this. Hmm more experimentation...
Not sure if this makes sense; I'll try to make a pictogram for it.