Just wondering if anyone has had experience with Nvidia's Tesla hardware. I know that at work, lightmap rendering is a big slowdown in our pipeline. We don't have a full-on server setup like Bungie, who have hundreds of servers burning maps. We have a dozen or so dedicated machines, and then people turn their desktop PCs over to the farm after work hours.
Tesla supposedly uses the GPU's cores to run tasks that would normally be done on the CPU, at much higher speeds. I was wondering if anyone has used the hardware before, and just how much faster it is for things like rendering. If it's a massive increase for calculating things like AO, shadows and such, then it might be something worth looking into at our studio.
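To make it concrete, here's the kind of embarrassingly parallel per-texel work I'm imagining could be offloaded. Just a sketch with made-up sphere blockers and my own names; a real baker would trace the level's triangles through an acceleration structure instead:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical blocker primitive for the sketch only.
struct Sphere { float3 c; float r; };

// True if the ray (origin o, unit direction d) hits the sphere.
__device__ bool hitSphere(Sphere s, float3 o, float3 d)
{
    float3 L = make_float3(s.c.x - o.x, s.c.y - o.y, s.c.z - o.z);
    float tca = L.x * d.x + L.y * d.y + L.z * d.z;   // distance to closest approach
    if (tca < 0.f) return false;                     // sphere is behind the ray
    float d2 = L.x * L.x + L.y * L.y + L.z * L.z - tca * tca;
    return d2 < s.r * s.r;                           // within the radius -> hit
}

// One thread per lightmap texel: shoot raysPerTexel hemisphere rays and
// record the fraction that escape without hitting a blocker.
__global__ void bakeAO(float* ao, int numTexels,
                       const float3* origins,       // texel world positions
                       const float3* dirs,          // precomputed sample directions
                       int raysPerTexel,
                       const Sphere* blockers, int numBlockers)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numTexels) return;

    int occluded = 0;
    for (int r = 0; r < raysPerTexel; ++r)
        for (int b = 0; b < numBlockers; ++b)
            if (hitSphere(blockers[b], origins[i], dirs[r])) { ++occluded; break; }

    ao[i] = 1.f - (float)occluded / (float)raysPerTexel;
}

int main()
{
    const int texels = 1, rays = 2, nb = 1;
    float3 hOrig[texels] = { make_float3(0, 0, 0) };
    float3 hDirs[rays]   = { make_float3(0, 1, 0), make_float3(1, 0, 0) };
    Sphere hBlk[nb]      = { { make_float3(0, 2, 0), 0.5f } };

    float3 *dOrig, *dDirs; Sphere* dBlk; float* dAO;
    cudaMalloc(&dOrig, sizeof hOrig); cudaMalloc(&dDirs, sizeof hDirs);
    cudaMalloc(&dBlk, sizeof hBlk);   cudaMalloc(&dAO, texels * sizeof(float));
    cudaMemcpy(dOrig, hOrig, sizeof hOrig, cudaMemcpyHostToDevice);
    cudaMemcpy(dDirs, hDirs, sizeof hDirs, cudaMemcpyHostToDevice);
    cudaMemcpy(dBlk, hBlk, sizeof hBlk, cudaMemcpyHostToDevice);

    bakeAO<<<1, 32>>>(dAO, texels, dOrig, dDirs, rays, dBlk, nb);
    float hAO; cudaMemcpy(&hAO, dAO, sizeof hAO, cudaMemcpyDeviceToHost);
    printf("AO = %.2f\n", hAO);  // 0.50: one of the two rays is blocked
    return 0;
}
```

Every texel is independent of every other one, which is exactly the shape of problem these cards are supposed to chew through.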
Replies
When it comes to lightmapping, I haven't heard of one either.
And, yep, there aren't many GPGPU lightmappers out there. I plan to expand xn4 to bake the lighting too, but a lot of things have to happen before that :poly124:
Was reading an article on Tom's Hardware about a medical firm, though it's a bit old [2008]. They were using 16 Core 2 Duo CPUs to render images and it was taking 45+ minutes per frame. They swapped over to just four Tesla X870 cards and the render time dropped to under 16 minutes - pretty impressive. They said even a regular graphics card (Nvidia 8800 GTX) with CUDA was 8x faster than a quad core @ 2.66 GHz.
I'd love to see more renderers take advantage of this.
As for GPU-based lightmappers, you might want to talk to copypastepixel; check out his post and blog.
EDIT:
Here's a few render farms:
http://aws.amazon.com/ec2/
http://www.garagefarm.net/
http://renderfriend.com/
http://www.rebusfarm.com/
I have no personal experience with them though.
EDIT2:
Here's a thread about rendering with Amazon EC2; hmm, it looks like it's not that simple. There may also be similar offerings from Microsoft and Google.
There is even preconfigured hardware for that use case:
http://www.nvidia.com/object/realityserver_tesla.html
That's probably the easiest way to get a distributed GPU renderer working, but other products may exist as well.
"classic" gpu radiosity is also an option, ie rendering the scene with a 180° fov camera from every rad. patch position. If you want to go that route, you'd get yourself a whole bunch of geforces and code the system yourself.
But the main problem here is the light baking; I don't think Iray supports baking to textures. Someone please prove me wrong.
Then there is always the option of writing such a tool with OptiX yourself; the API is really easy to work with. Distributing it across multiple systems is probably the tricky part, along with how to deal with a scene that's too big for memory, but given that it's game art, even an entire level probably never is that huge in resources. jogshy really is the expert with this stuff, though.
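To give a feel for the API, here's roughly what a minimal bake pass looks like as OptiX device programs (the old CUDA-C style). Keep in mind the buffer names and the single-light shadow bake are made up for illustration:

```cuda
#include <optix.h>
#include <optix_world.h>

using namespace optix;

struct ShadowPRD { int occluded; };

rtDeclareVariable(rtObject, top_object, , );          // scene root, set from the host
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtDeclareVariable(ShadowPRD, prd, rtPayload, );
rtDeclareVariable(float3, light_dir, , );             // direction toward the light

rtBuffer<float3, 2> position_buffer;                  // texel world positions
rtBuffer<float3, 2> normal_buffer;                    // texel normals
rtBuffer<float, 2>  lightmap;                         // output shadow term

// Ray generation: one launch index per lightmap texel.
RT_PROGRAM void bake_texel()
{
    float3 origin = position_buffer[launch_index];
    float3 normal = normal_buffer[launch_index];

    // Shadow ray toward the light; small offset along the normal avoids self-hits.
    Ray ray = make_Ray(origin + 0.001f * normal, light_dir, 0, 0.f, 1.e16f);
    ShadowPRD payload;
    payload.occluded = 0;
    rtTrace(top_object, ray, payload);

    lightmap[launch_index] = payload.occluded ? 0.f : 1.f;
}

// Any-hit program for the shadow ray type: any intersection blocks the light.
RT_PROGRAM void any_hit_shadow()
{
    prd.occluded = 1;
    rtTerminateRay();
}
```

On the host side you compile those to PTX, create a context with one ray type and one entry point, attach the geometry, and launch over the lightmap resolution.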
Nice post on the GPU-based lightmap creation process.
Thought I felt my ears burning
cheers,
copypastepixel
blog: copypastepixel.blogspot.com