Hi all,
I am currently studying video game art at university. One piece of research I am doing looks at the future of asset creation for games.
We all know about current asset creation pipelines and techniques such as modelling, sculpting, and texturing (diffuse, normal, AO, etc.), but what does the future hold? Unlimited Detail technology? Using schematics or photos which the computer can turn into a 3D model? Or something more texture-based?
Any input or interesting links would be most welcomed.
Thanks for your input.
YJ
Replies
I'm assuming this is a college-level paper, so where I would start is doing some research on what new technology is getting more and more popular in the industry.
Since I haven't done this (at all), I can just give you some wild guesses.
Tessellation and heightmaps are an area I think is really interesting; however, not a lot of games seem to use it, and it's more of a gimmick than the standard of game making.
I think it should be the standard, though: it doesn't diverge that much from the current art pipeline, and it could bring the detail up significantly.
However, it seems there are a lot of technical complications when implementing it, and I haven't really seen any of it on the consoles yet. Hence the "gimmick" label.
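For what it's worth, the idea behind tessellation plus heightmaps can be sketched in a few lines: subdivide a patch, then push each new vertex out by a sampled height. This is an illustrative toy (the `subdivide`, `displace`, and `bump` names are made up for the example), not how a GPU tessellator is actually driven.

```python
# Toy sketch of tessellation + heightmap displacement. All names invented
# for illustration; a real pipeline does this in hull/domain shaders.

def subdivide(n):
    """Return (u, v) parametric coords for an n x n tessellated patch."""
    step = 1.0 / (n - 1)
    return [(i * step, j * step) for i in range(n) for j in range(n)]

def displace(uv, heightmap, scale=1.0):
    """Push each vertex along its normal (here: +Z on a flat patch)
    by the sampled height. heightmap maps (u, v) -> 0..1."""
    return [(u, v, heightmap(u, v) * scale) for (u, v) in uv]

# Toy heightmap: a single bump in the middle of the patch.
bump = lambda u, v: max(0.0, 1.0 - 4.0 * ((u - 0.5) ** 2 + (v - 0.5) ** 2))

verts = displace(subdivide(9), bump, scale=0.2)
```

The point of the post above is that this fits the existing pipeline: the low-poly mesh and the heightmap are assets artists already make; only the runtime subdivision is new.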
FYI: Unlimited Detail is a scam.
EDIT: Also, I recall seeing an hour-long tech talk by DICE about the tech in Battlefield 3. It might give you some hints on how their lighting works.
As for sculpting, I think dynamic tessellation is the direction things are going. If you look up some of farsthary's work on 3DCoat, he's done a lot in this area. Between voxels and dynamic tessellation, I think independence from topology will be a major thing (it can already be seen in practice today, but it will become much more refined).
I think BRDFs will change how we make our materials, and because of that I think there will be less of an emphasis on the diffuse map (than there is today) compared to getting the look of the material itself right. With better radiosity, baking everything into the diffuse will be less and less important.
These things are all speculation and a lot will depend on how well they can be streamlined, how reliable the techniques will be, and how well they can be implemented given time and resources.
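To make the BRDF point concrete, here is a toy comparison of a Lambert diffuse term against a normalized Blinn-Phong specular lobe (standard textbook formulas, not any particular engine's shading model): the normalization factor is what lets highlight brightness follow glossiness from the math, instead of being painted into the maps.

```python
import math

def lambert(albedo, n_dot_l):
    """Diffuse term: constant BRDF (albedo / pi) times the cosine."""
    return (albedo / math.pi) * max(0.0, n_dot_l)

def blinn_phong_spec(n_dot_h, n_dot_l, gloss):
    """Normalized Blinn-Phong lobe. The (gloss + 2) / (2*pi) factor keeps
    the lobe roughly energy-conserving, so the highlight automatically
    gets brighter as it gets tighter -- the material parameter does the
    work, not a hand-painted texture."""
    norm = (gloss + 2.0) / (2.0 * math.pi)
    return norm * (max(0.0, n_dot_h) ** gloss) * max(0.0, n_dot_l)

# Same geometry, two glossiness values: the sharper lobe peaks higher.
low = blinn_phong_spec(1.0, 1.0, 16.0)
high = blinn_phong_spec(1.0, 1.0, 256.0)
```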
As visceral said, tessellation is right now just a gimmick, but it will be the big thing with the upcoming next-gen consoles in a few years (I guess 2 or 3, which might be wrong!).
Voxel-based tech/content will be the future, even if still a far-away one.
Sculpting will be bigger in the future than it is now, as will 3D scanning. Just look at tools like ZBrush and how hard they are pushing voxel-based content creation (DynaMesh); this will only grow, and voxel-based engines will likewise get more numerous and better, not fewer.
And on the texture/material/physics front, we've got real-time raytracing, which is lurking around the corner all the time. When rasterisation hits a dead end one day, maybe that's how it's going to be done.
No guarantee on any of this!
cheers
The only thing I could see in the best-case scenario is a better set of tools speeding up the processes. These are either studio-dependent or engine-dependent. For instance, it would be fantastic if UDK had better normal map synchronization, and offered some sort of plugin linking a Max scene (with a low and a high ready to be baked) to an exporter, turning the scene into a usable Unreal package in one click. It wouldn't change the asset being built itself at all; it would simply make its export more accurate and faster...
Visceral - I was thinking in the next 10 years or so.
http://www.zbrushcentral.com/showthread.php?164835-FiberMesh-Preview
From an art side, I hope that the pipeline will become more streamlined. All the cool features we have nowadays (normal maps, displacement mapping, etc.) require a lot of time spent on things that, in the end, won't even be in the game. I hope that eventually we can nix the whole baking process and just export un-subdivided high-poly meshes into a game engine, tessellate them in-game, and apply materials rather than trying to recreate them using textures.
In some cases we're pretty close to this already (thinking of Forza and GT), but it isn't a standard yet.
/edit: I also think that art might not necessarily be the focus of future game development. I think AI might take a couple of steps forward, seeing how there haven't been any groundbreaking developments lately, at least none that I've heard of..
I'm probably way off, though..
Ahhh, I finally get what you are getting at, Pior! You'll have to excuse me for being dense!
Yes! Artists have the tools to do amazing work! I agree. We are at that threshold: with tessellation, what is possible in ZBrush is for the most part what was imagined, realized! (except perhaps for character artists, who still lack a dynamic method for defining hair and clothing volumes)
Sort of makes you wonder about the days when character artists had to suffer with NURBS tools...
Imagine this movie never existed and I presented these images to Pixar looking for a job as a character modeler... I imagine the best I could hope for is crate duty, or more likely something involving carpal tunnel. Yet this work was done, of course, by an actual character modeler at Pixar. So what really is the true reason for such a horrible character model?
The NURBS tools were not technically advanced enough to allow the artist to realize quality, and/or poly/subd methods should have been used instead.
Or "Back Then" no one knew how to make good looking sculptures but now we all know better?
Or the industry didn't know any better, and at the time this guy's ninja skills seemed like the bee's knees? (Talented character artists who could handle caricature and the Pixar magicians had not been introduced to each other yet.)
I suspect the last answer is most likely the right one. ( with the first situation impacting quality of life having to suffer with dem nurbs )
It is not as if talent did not exist before Toy Story 3, and I find that talented artists rarely find it impossible to get nearly their best work out of a different medium (for a sculptor, volume and silhouette are universal, whether defined by Sculpey, wood, or polygons).
Heck, even many 2D and 3D artists are just fine switching dimensions as well!
On the other side of the coin...
There is still a lot of butt-ugly work even in the current gen. I am wondering, if technology allows rendering fidelity to exceed today's target render standards, whether another wave of jobs could be lost to new talent inspired by cinematic levels of rendering (catching recruits who would have otherwise gone to the Ranch?).
Imagine armies of thousands of characters running over a hill and charging at you. Right now that would destroy the framerate, but with better computers and perhaps better tools this will be more possible. Technology-wise this would require crowd simulators, but we already have those for movies like LOTR, etc., so it may be nothing new tool-wise.
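The core of such a crowd simulator is just a per-agent update loop: steer toward a goal, push apart from neighbors. A deliberately naive sketch (the function names and the O(n²) neighbor check are purely illustrative; real crowd tools layer spatial hashing, flow fields, and animation blending on top):

```python
# Toy crowd step: agents charge a goal while keeping a minimum
# separation. Purely illustrative; film-grade crowd systems do far more.

def step(agents, goal, speed=1.0, min_dist=1.0):
    """agents: list of (x, y). Returns new positions after one tick."""
    out = []
    for i, (x, y) in enumerate(agents):
        # Steer toward the goal at a fixed speed.
        dx, dy = goal[0] - x, goal[1] - y
        d = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid divide-by-zero
        nx, ny = x + speed * dx / d, y + speed * dy / d
        # Push away from any neighbor that is too close.
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            sx, sy = nx - ox, ny - oy
            sd = (sx * sx + sy * sy) ** 0.5
            if 0.0 < sd < min_dist:
                nx += sx / sd * (min_dist - sd)
                ny += sy / sd * (min_dist - sd)
        out.append((nx, ny))
    return out

# One lone agent just advances toward the goal.
moved = step([(0.0, 0.0)], goal=(10.0, 0.0))
# Two agents standing too close get pushed apart (speed 0 isolates it).
crowded = step([(0.0, 0.0), (0.3, 0.0)], goal=(10.0, 0.0), speed=0.0)
```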
Example: music from the '70s and '80s compared to now. Games like Doom, Wolfenstein, Medal of Honor and many more from the '90s to early 2000s had something in them worth playing for.
These days gaming is more like Ford mass production: just make games one after another so that kids waste money and the business keeps running.
It would be nice to see an original game story and whole new game mechanics, not just eye candy.
15 years from now we may have an all-in-one in-house 3D projection platform where you are surrounded by 360° visuals and move your body, Kinect-style, to interact with an environment rendered in super cool photoreal detail. I name it the "Z Box"!
Imagine how cool it would be to actually walk the corridors of LV-426, ducking and lying on the ground for cover instead of just using your Pulse Rifle when aliens jump on you. lol
Augmented reality is already here anyway, plus 3D projection (the stuff used to turn buildings into projection screens), plus body motion capture (Kinect).
The idea is copyrighted
Anyway, I don't know if you've seen the "GameSphere" or whatever it's called, but my friend and I were having one of our regular brainstorming conversations. Sometimes it's about science, society, music, etc., but this time it was about what would be a SICK new gaming technology. We came up with an idea for a sphere, housed on a ring of rollers connected to electrical contacts, which would allow the player standing inside to have FULL freedom of movement, working similarly to how a ball mouse translates the rolling ball into movement. You'd put on a VR headset and wear special gloves and ankle bands, which communicated with an array of sensors outside the sphere to allow FULL interaction with the game world. We were talking about it for SO long, just adding idea after idea. Then about two months later, a couple of yanks with a lot of dough must have tuned into our brains with their scanner devices and stolen the idea right from under our scalps!
We were gonna become millionaires with our invention! If only we had an American budget to throw around! Nah, we're just glad that SOME bright mofos are making OUR ideas come to life. The gaming industry needs a SERIOUS kick up the arse, and it needs to think twice about what's more important: the actual game experience, or the profit... EA, anybody?
Hmm. You do realize that CoD multiplayer is an excellent game, right? The fact that you don't like it doesn't mean that everybody else playing it is a dumb redneck.
Also, there's no such thing as "the whole industry following it". Skyrim and GTA are perfect examples: for a game to be popular and successful, it just has to be... good.
Yup, that's called laser tag.
Anyways, back on topic.
Create a new Max file with some modular environment base meshes, press Ctrl+S.
Alt-Tab to ZBrush and see a list of all the objects you just created in Max, with absolutely no navigation or telling it where to look.
Sculpt them, and save the ZBrush file.
Switch back to Max and see the new high-res meshes automatically added to the scene (and updated as you change them in ZBrush).
Duplicate the base meshes and create low-poly meshes using normal modeling tools, or retopology.
Select the low-poly model and press a bake button; output texture file names and paths are automatically generated. Tweak the bake cage if needed and click bake again. Press Ctrl+S.
Create diffuse and spec textures, and modify the normal map, in Photoshop (hopefully with a working 3D paint feature sometime in the future).
Switch to the game engine and see your new low-poly models, textures and materials without any manual exporting or importing.
Add the modular pieces to the level and add id Tech 5-style stamps and vertex/texture painting; this wouldn't even require a virtual texturing implementation, just basic texture streaming.
Most of this linking could be done through plugins for each program.
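The simplest version of that glue is each plugin exporting to a shared folder, with every other tool polling it for changes and hot-reloading whatever moved. A minimal sketch, assuming a shared folder of .obj exports and an app-specific `reload_hook` (the folder layout and the hook are invented for illustration; a real plugin would use each application's own API and callbacks instead of polling):

```python
import os

# Sketch of the "no manual export/import" glue described above: each tool
# writes meshes into a shared folder, and every other tool polls it and
# hot-reloads whatever changed.

def scan(folder):
    """Map each exported mesh file to its last-modified time."""
    return {
        name: os.path.getmtime(os.path.join(folder, name))
        for name in os.listdir(folder)
        if name.endswith(".obj")
    }

def changed_files(before, after):
    """Files that are new, or whose timestamp moved since the last scan."""
    return sorted(
        name for name, mtime in after.items()
        if before.get(name) != mtime
    )

def watch_once(folder, state, reload_hook):
    """One polling tick: detect changes and fire the (app-specific) hook."""
    now = scan(folder)
    for name in changed_files(state, now):
        reload_hook(name)  # e.g. re-import into the Max scene or engine
    return now
```

A plugin would call `watch_once` on a timer, carrying `state` between ticks; the same three functions work on the Max side, the ZBrush side, and the engine side, which is why a per-program plugin approach is plausible.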
More engine-side features could include geometry manipulation, like smart booleans where you can damage an object by subtracting chunks or splitting it, with configurable surface material settings that would add edging detail geometry and automatically blend damaged texture data into the textures.
^This is already happening, although from what I've seen the algorithms could be a little smarter (like only increasing the triangle count where the silhouette is affected; EarthQuake first mentioned this a while back).
I turned this feature on in Batman: Arkham City and Batman's cowl was no longer boxy. When you get up close to something, it self-shadows because the triangles are actually there (whereas before, the bump and detail were only faked with normal mapping).
Most realistically, though, rendering a ball on a plane is not a game, so using ray tracing just for bits and pieces is probably what will be done next gen (for specular reflections, perhaps).
Game Developer just had an awesome article on Ptex, which comes with some restrictions but needs no traditional UVs. And apparently DirectX 11 has enough pieces of the puzzle in place that you could have a 4,096-triangle mesh (I'm guessing, from what they say) in game (now?).
Then there is the "Infinite Detail" stuff.
As for what pior and others say, that graphics fidelity can't go beyond its current state: it could, but the caveat will be making the extra content easy enough to create to justify the extra fidelity.
However, big magic technical shifts are not likely to happen, because the nature of realtime CG graphics is one of slow iteration and improvement on existing tech. Also, from an artistic standpoint, we have already reached the point where source assets are of extremely high quality (see the Gears of War 3 thread by Kevin).
So to me, the future of asset creation lies in fixing the remaining bottlenecks, to make the existing processes faster. And even if a (hypothetical) virtualized voxel game engine came around, it would do nothing to help asset creation speed: you still have to paint your diffuse, spec, gloss, and so on. The Arkham City example you quote is exactly the same: it is an engine improvement providing higher visual quality, but it says nothing about asset creation technique (which I believe is the subject of this thread). It sure is very cool though.
It's actually interesting to think of it from different points of view. To an inexperienced person (who might not really understand what a good-quality game asset requires), the Batman cowl smoothing might mean: "Oh this is awesome! No need for normal maps anymore, we can just smooth everything and it will look cool!" However, the more experienced industry vets (programmers and artists alike) will understand that this is just an extra quality boost; it will never replace quality assets and/or remove the need to UV meshes and paint textures. This is why the voxel guys showing off their engine have zero credibility here, and this is why Carmack is so humble about his tech. No piece of tech can replace well-crafted art...
Blows me away that he does not even know how beautiful and immersive NV Surround/Eyefinity makes RAGE. Well, mebbe he iz werkin' on dem goodies for the PC version?
Thanks for posting
Seems many of his views for the future are the same as:
http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf
Crysis 2's ray-marched reflections are cool. I just wish they would actually reflect things that are off-screen; it's a little jarring to have a true reflection fade into a cube map when you slightly move your camera.
BTW, what exactly is the difference between ray marching and ray tracing? Is marching only done in screen space (I think that's the right term)?
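Roughly: ray tracing solves the ray/surface intersection analytically, while ray marching steps along the ray, checking a distance function (or, in the screen-space case like Crysis 2, the depth buffer) at each step, which is why it can only hit what is on screen. A toy Python sketch of both against the same sphere (illustrative only; real implementations run on the GPU):

```python
import math

def trace_sphere(origin, direction, center, radius):
    """Ray TRACING: analytic ray/sphere hit distance, or None on a miss.
    direction must be unit length."""
    ox = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(ox, direction))
    c = sum(o * o for o in ox) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t >= 0.0 else None

def march_sphere(origin, direction, center, radius, steps=64, eps=1e-3):
    """Ray MARCHING: walk forward along the ray by the distance field,
    stopping when we get within eps of the surface."""
    t = 0.0
    for _ in range(steps):
        p = [o + t * d for o, d in zip(origin, direction)]
        dist = math.dist(p, center) - radius
        if dist < eps:
            return t
        t += dist
    return None  # never converged: counts as a miss

hit_t = trace_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
march_t = march_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
```

Both find (approximately) the same hit; the difference is that marching only needs to *sample* the scene at points, which is exactly what a depth buffer allows, while tracing needs an analytic description of the surfaces.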