It really makes me sad how people equate higher polygon counts and bigger textures with 'better graphics', even when the art is nowhere near as elegant or proficient. Like when people say PSP games have better graphics than DS games, hands down (all of them, not case by case as it should be). Ugh.
After the inevitable apocalypse we are going to experience quite a setback. Once the fires are out and all the zombies are dead, we will probably find ourselves playing Cube.
The Blade Runner game from Westwood Studios used voxels for its characters, although they were super low res and ran at a pretty low framerate, so the quality was fairly poor.
I think we're either going to go with some form of NURBS/subdivision/displacement, or voxels, or a combination of the two with raytraced rendering. Plus more robust, physically or scientifically accurate lighting and physics systems.
Lots more post effects; they are cheap and they hide the art. My bet is a resurgence of the lens flare, but like that video above, it won't just be on the sun: every specular highlight will be an accurate dynamic lens flare, with rainbows and everything.
The reason I suggested per-poly collision as a way of doing clothing in the future is that rubbery normal-mapped clothing looks like complete trash in motion. Having a fold stay in one stationary place completely breaks the believability of cloth and skin. Letting the physics do it might not be the greatest piece of artistry, but it certainly LOOKS much more natural.
I think a lot more can be done with vertex animation on top of bones, with animated blend textures and animated UVs, before we give it all up to a simulation. We already have all of these things in one form or another; we just don't have them hooked up to blend at the right times on one character.
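A minimal sketch of that layering idea, assuming linear blend skinning plus a single sculpted morph target driven by an animated blend weight (all names here are illustrative, not any engine's actual API):

```python
import numpy as np

def deform_vertex(bind_pos, bone_mats, bone_weights, morph_delta, blend):
    """Skin a vertex with plain linear blend skinning, then layer a
    sculpted morph target on top, scaled by an animated 0..1 weight."""
    h = np.append(bind_pos, 1.0)                      # homogeneous position
    skinned = sum(w * (M @ h) for w, M in zip(bone_weights, bone_mats))[:3]
    return skinned + blend * np.asarray(morph_delta)  # vert anim over bones

# one vertex, one bone rotated 90 degrees about Z, morph pushing out in X
bone = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)
print(deform_vertex([1.0, 0.0, 0.0], [bone], [1.0], [0.2, 0.0, 0.0], blend=0.5))
```

The point is just that the blend weight is an animatable channel like any other, so it can be keyed to fire at the right moments on one character.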
I like the idea of actual geometry forming wrinkles. But maybe some kind of active subdivision in key spots, driven by the animation, could be put in place, so you only get wrinkle geometry in the places it's needed, at the times it's needed. Maybe use a map or vertex colors to define the areas that are allowed to subdivide for wrinkles.
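A tiny sketch of how that gating could work, assuming a painted 0..1 wrinkle mask and an animation-driven "skin compression" value per region (made-up names, just to show the idea):

```python
def wrinkle_subdiv_level(mask, compression, max_level=3):
    """Subdivision level for one face region.

    mask:        0..1 painted value (vertex color or texture) marking where
                 wrinkle geometry is allowed at all
    compression: 0..1 animation-driven value for this frame, e.g. how much
                 the brow is being squeezed by the current pose
    """
    if mask <= 0.0:
        return 0                       # never spend geometry outside the mask
    return round(mask * compression * max_level)

print(wrinkle_subdiv_level(mask=0.9, compression=1.0))  # angry brow -> 3
print(wrinkle_subdiv_level(mask=0.9, compression=0.0))  # neutral pose -> 0
```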
God knows how many times we've had to watch Solid Snake's or Marcus Fenix's 'angry eyes' brow wrinkles take them from VERY VERY angry, to VERY angry, to sad-angry, to angrily laughing.
That has a lot to do with the base pose the character was modeled in, not the capability of the animation. That gripe can be solved with today's tools. If he's angry 70% of the time, cool, tell the animator to make him angry 70% of the time, but make sure he or she has the proper base to start from so the other 30% can actually be done. It's more of a leftover "we must model an expression or there won't be one" mindset.
Bottom line: I'd rather see all that processing go to areas that are further behind the curve than wrinkles. I'd like to see:
- A working squash and stretch system (a minimal volume-preserving sketch follows this list).
- Vertex animation brought back for specific things like facial animation.
- A real-time muscle system before we go nuts with clothing. Most characters are either naked or covered in armor, so clothing is pretty optional anyway.
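For the squash and stretch item above, the classic volume-preserving version is simple enough to sketch; the axis choice and the uniform-volume assumption are mine, not anything a particular engine does:

```python
import numpy as np

def squash_stretch(s):
    """Scale by s along the bone axis (Y here) and by 1/sqrt(s) on the
    other two axes, so the determinant stays 1 and volume is preserved:
    s * (1/sqrt(s))**2 == 1."""
    t = 1.0 / np.sqrt(s)
    return np.diag([t, s, t])

M = squash_stretch(2.0)                 # stretch to 2x length
print(np.linalg.det(M))                 # ~1.0 -> volume preserved
print(M @ np.array([1.0, 1.0, 1.0]))    # [0.707, 2.0, 0.707]
```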
Quality comes from the artist! Let's start with that.
The biggest problem I currently see with asset creation is that "artists" need to know a shitload of abstract 3D knowledge and other abstract 2D tricks just to get something down that ends up as an asset in a game. On top of that, this program-specific knowledge changes fast, so everyone has to keep track of the "new & fancy" stuff all the time.
On the other hand, the "artist" should also be working on artistic skills: building up imagination, working on expressing ideas...
What I currently see coming out of this situation in the industry is that most work is boring craftsmanship hulls, because it's such a thin line for anyone trying to manage both sides. Modifications on top of modifications, and other silly ideas for the trash bin, created by clicking the next extrude button.
Everyone grew up experiencing physical ways to express something, whether it was playing with sand, singing in the church choir, or licking a frozen metal bar. Each expressed a different outcome to the audience and pinched another wrinkle of experience into your brain.
All the current computer stuff is so much based on neglecting this physical experience of your hands and body that it's pretty much just a mind game to create something with it. I have never touched any 3D model I created, nor did I feel much while creating them. A small thanks to Wacom for giving me some glimpses of brush feeling...
Where can I feel or see the impact of weight, mass, volume? I don't feel a deep urge to have the next multi-ring-separate-collapse shortcut on my keyboard or mouse.
I want to be able to "do" stuff a little more in 3D programs: get a little more feedback in my fingers, and ways to form something closer to what I learned in the tools workshop in grade school, without the feeling of hunting for the button in menu 56/invert hull. I have the impression it's still too rooted in the first lines of code from wired nerds years back. Adding things on top of each other, expanding and collapsing, no one touching the ground...
So physical feedback, and some roots built away from the old nerdy inner core, would be my call.
The line is thin for creation, with all the changing technology and awesome fancy shit on top, and the immense fantasy spent flipping words around for the new version 12.334. For assets I would sometimes suggest taking the risk of a step back, making it nice and deep, and not dealing too much with Mr. Awesome Tech.
I'm in the ring for virtual reality. Imagine being able to shrink yourself down and fly around a high-res model, sculpting in the details you see fit, or having the program lay down basic projections of what you see in your head. I'm talking walking around inside a character's wrinkles and changing them up with your freakin' mind or hands! Instead of going over and clicking the color you think you want, you just think about it, or think of the colors of roses or metal or anything, and you've got it, oozing out of your fingertips, and you can feel it: the wetness of the paint, the smell. It would be a much more personal experience to really interact with your art. I feel like what I make on a computer almost isn't real somehow, and I'm often frustrated with how a program just won't let me "do this or that" or "just friggin' mush it."
Really think about how awesome it would be: creating landscapes and architecture, making yourself huge and molding mountains with your hands. Or drawing out a house with your hands and then walking around inside. Don't like the kitchen? Move that shit around with your zero-point-energy-esque fingers!
Not to mention the feeling of really being in the game: getting punched in the face multiple times, and damn it stings, your nose feels like it's broken, and then you leave the game and you're fine.
Crap, now I realize I need to freeze myself or get my brain in a robot body.
I think that for in-game stuff, displacement-mapped subDs will probably be what we're doing next. It seems like the logical progression from bump to normals to parallax/relief to full-on displacement, and it seems to be fully accelerated on the new cards and next-gen systems from ATI and Nvidia. But there's probably also going to be more voxel-based stuff when Intel's Larrabee hits the market, just because it's so programmable and more devs will be experimenting with different ways to render things in software. And yeah, more polys and bigger textures too, of course, with shaders getting pretty much film quality. Good SSS, please.
On the actual modeling side, 3D-Coat makes me think that voxel-based high-poly modeling is almost certainly the future. There really isn't any reason for artists to have to think about edge loops and correct topology, though I'm still not sure those aren't good things to know. People who model with clay and wax don't know any of that stuff, and it doesn't seem to have made the slightest difference in the quality of sculpts produced over the last 5000 years. So yeah, I'd wager that in three years almost every high-poly model (I'm a character artist, so I could be totally off for hard-surface stuff, but I don't think so) will be made from voxels, barring some major change in the way polygons work. We'll probably sculpt voxels, paint voxels, and at the end bake some 80% pre-generated, 20% hand-worked retopo'd subD cage into a near-flawless in-game asset.
Of course these apps will all be buggy and never live up to the potential of the press release, but on a good day we will at least dare to think that the next version will actually fix the current one's problems.
I think quality is absolutely about good artists and good programmers (who actually implement the fancy tech) working together. It's a video game, so it's 100% partnership. Almost everything cool I see comes from those two groups actually working together and solving problems together.
Last thing: low-poly modeling, aside from a few very cool, artistically inspired instances, can go to hell in a handbag.
I dunno, nekked. No matter how cool this all sounds, I can't get myself to believe in a complete rework of the way game engines operate. I am all for hoping for a much more generalized use of voxels for high-poly creation, but hoping that realtime (= OPTIMIZED) 3D will be like that anytime soon? I don't think that's really feasible. Maybe for terrain and levels, but for characters, for instance, I don't think it would degrade very elegantly.
Often fancy tech looks good on paper and in formulas, but not in practice. For instance, take edge creasing (something we already have, in multiple flavours, in today's 3D apps). If you ask a programmer, he will tell you 'yeah for sure, we can do everything that way, if it's sharp it's sharp! Use creases!' But in practice, as modelers, we all know it only goes so far, and in the end you need double edges that subdivide well, and so on.
I'd rather not bet on games starting to use some uber-fancy tech, since it has always been promised to us in the past and never really happened (basically at every DirectX release, haha). The authoring tools, however, I hope will improve, and faster than they do these days. Technically there is not much difference between the earliest bumpy, shiny raytraced checker-ball renders from the 90s and, say, Doom 3 (apart from Doom 3 being optimized and running in realtime). What makes a Doom 3 screenshot look better is the toolset that came along in between (solid subD and sculpting apps) and the artistry... Just my 2c.
I foresee UI usability and appearance getting much better and more intuitive. Think about how much easier different devices have been getting to use lately; look at the iPhone/iPod Touch. I would never have imagined something with a UI like that even five years ago. Once more artists and graphic designers get involved with UI design, instead of programmers trying to set it all up, things will get much, much better.
I just wish someone would come out with an animation package with an interface like Silo...
No more will we be sculpting wrinkles in ZBrush. We will instead be modelling every single wrinkle, so they fold and crease naturally with the animations.
More likely we will get sculpting with retopo on the fly, IMO. I seriously doubt sculpting will become less prevalent.
I think having to cut up my bloody mesh all the time to make hard edges will disappear soon-ish. You can control edge hardness in some packages, but it's not well developed yet, IMO.
For the near future (like within the next 5 years, I'd say), I predict less artist involvement in the optimization end of asset creation. Whether it's through voxels or some other means, I think high res modeling and all the texturing will be entirely sculpting/3d painting based, without any need to even think about the polygons or the UVs. It'll all be baked down and optimized automatically.
The way I expect that to work is that we'll define the important loops on the high-res model in the sculpting app, and the optimization works just like the quadric edge collapse decimation in MeshLab, except that it always maintains the defined loops.
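A minimal sketch of that constrained decimation, using Garland-Heckbert error quadrics (the metric that style of edge collapse is built on) plus a hypothetical set of protected loop vertices. A real implementation also needs optimal vertex placement, topology checks, and re-ranking after every collapse; this just shows the ranking step with the loop constraint:

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental error quadric K = p p^T of a triangle's plane
    (Garland & Heckbert, 'Surface Simplification Using Quadric Error Metrics')."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    p = np.append(n, -np.dot(n, p0))     # plane (a, b, c, d), ax+by+cz+d = 0
    return np.outer(p, p)

def collapse_cost(Q, pos):
    h = np.append(pos, 1.0)              # homogeneous position
    return h @ Q @ h                     # sum of squared distances to planes

# toy patch: 3x3 vertex grid, slightly domed, split into triangles
verts = np.array([[x, y, 0.1 * x * y] for y in range(3) for x in range(3)], float)
tris = [(0, 1, 4), (0, 4, 3), (1, 2, 5), (1, 5, 4),
        (3, 4, 7), (3, 7, 6), (4, 5, 8), (4, 8, 7)]
protected = {1, 4, 7}                    # artist-defined loop: must survive

# accumulate one quadric per vertex from its incident triangles
Q = [np.zeros((4, 4)) for _ in verts]
for a, b, c in tris:
    K = plane_quadric(verts[a], verts[b], verts[c])
    for i in (a, b, c):
        Q[i] += K

# rank candidate collapses, skipping any edge that touches the protected loop
edges = {tuple(sorted(e)) for a, b, c in tris for e in ((a, b), (b, c), (c, a))}
ranked = sorted((collapse_cost(Q[u] + Q[v], (verts[u] + verts[v]) / 2), u, v)
                for u, v in edges if u not in protected and v not in protected)
print("cheapest legal collapse:", ranked[0])
```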
The loops defined in the sculpting app would be linked to an armature for animation, with falloffs or 3D-painted weight maps to tweak the deformation. The high-res model is animated instead of the game-res one (using the same rig that will be used in-game), with schmancy jiggle and elasticity maps or what have you, and it's all baked down into the final animation sets along with the textures and everything.
It would make the entire pipeline almost completely nontechnical and platform-independent: the same artwork can be decimated down to a 10k-poly model or a 2k-poly model, and it has no bearing on the artwork, because you make it once and it's batch-processed down to the required specs in the form of streaming megatextures on low-poly models.
Stylistically, I suspect there will be a major surge in surreal stuff along the lines of Little Big Planet, Oddworld, and Psychonauts.
Yeah dude, I have to agree with pretty much everything you've said in this thread (except that silliness about low-poly stuff; go make some DS games, you losers).
Most of what has been mentioned here, as it relates to realtime rendering, I think is the future of TECH DEMOS, but it will be a long time before we actually see any of it used in a real production environment. Voxels are great, but their advantages seem to be much more on the asset creation side, not realtime rendering (unless you need some special actively deforming/transforming character or something of that nature). Real displacement is cool and all, but in the reality of an actual game, will you notice the difference between a 20,000-poly character with a good normal map and a 500,000-poly raw mesh? They both need a similar texture to work, and one will use a lot more resources to render. When we're talking about using this stuff in a real game, I think it's very cute to imagine filling the screen with 500,000-5,000,000-poly subdivided characters, but the reality is that the tech will need to advance quite a bit more before that is feasible. Really ask yourself: is it going to change the game experience at all if you're looking at the same character with 20,000 tris plus normals, or 5,000,000 polys? In motion you probably won't even be able to tell the difference.
What this all comes down to is: yeah, we're going to see a lot of new features and tech demos that will sell a lot of graphics cards, but other than that it will be more of the same. I think the real breakthroughs are going to come in highly customizable realtime radiosity lighting, better animation support, better particles, etc., more so than super-new modeling methods.
None of it will happen until it's actually: 1. A much better workflow for creating art with these new methods. People always say subdivision stuff will be the future in games; they've been saying it for years. But the truth is that the workflow would be much harder to actually use. What do we do now? We create a high-poly model, using all sorts of cheats and hacks to save tons of production time, don't even think about optimizing it, and project it onto a low-poly mesh; we don't even UV our high-poly models. Now you're telling me I'm going to need to create clean, optimized, ready-to-animate sub-d meshes for everything in a game, and UV and texture them? For what, a slightly smoother silhouette on our models? Fat chance.
2. Cheap enough that you can render not just a single character, but multiple characters, rigged and animated, plus the environment, effects, and all the various AI/gameplay systems together. People talk about how good an LOD system displaced sub-d meshes will give you, but I think you lose a good deal of quality there, because your "texture res" steps down quite a bit too: the geometry needs enough detail to support the displacement maps. With assets also taking a hell of a lot longer to develop, I just don't see most studios going this route for a very long time.
Actually, I think displacement-mapped sub-ds are made with exactly the same workflow we use now, despite the fancy name. Basically you do everything the same: the same quick tricks on the high-poly, still no UVs needed on it, and you still make your low-poly in-game model with UVs and good edge loops for deformation and efficient use of polys. The difference is that instead of casting a normal map (though that could be done too, and probably would be for fine details), you cast a displacement map, and the actual casting may be slightly different since it's going to be applied to a smoothed model.
You still build almost exactly the same low-poly model for the in-game asset, but in-game it will subdivide the model some number of times and apply the displacement map in realtime. If it's far away, don't subdivide it at all; walk right up to it, and maybe subdivide it four times and use the full-res displacement map. Displacement maps are used because it isn't feasible, memory- and performance-wise, to get that kind of quality with raw polys; it's really a very efficient way to have very nice-looking assets. Just compare the size of a 4-million-poly head to a 2000-poly head plus maybe a 512 disp map and a 1024 normal map: they will probably render almost exactly the same, but one is far, far smaller and performs far, far faster.
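Rough numbers for that size comparison, as a sketch only; the byte counts are assumptions (32 bytes per vertex for position+normal+UV, 16-bit displacement, 1 byte/texel DXT5-style normal map), not measurements from any engine:

```python
def mesh_bytes(tri_count, bytes_per_vert=32, index_bytes=4):
    # ~2 triangles per vertex on a typical closed mesh, plus 3 indices per tri
    vert_count = tri_count // 2
    return vert_count * bytes_per_vert + tri_count * 3 * index_bytes

raw_head = mesh_bytes(4_000_000)          # raw 4M-poly head
cage     = mesh_bytes(2_000)              # 2000-poly in-game cage
disp     = 512 * 512 * 2                  # 512^2 height map, 16-bit
normals  = 1024 * 1024 * 1                # 1024^2 normal map, ~1 byte/texel

MB = 1024 * 1024
print(f"raw head:   {raw_head / MB:.0f} MB")                 # ~107 MB
print(f"cage + maps: {(cage + disp + normals) / MB:.2f} MB")  # ~1.6 MB
```

Under those assumptions the cage-plus-maps version is roughly 70x smaller, which is the whole argument for displacement as a storage format.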
That's basically my understanding from some papers by Hugues Hoppe, a research guy at MS. I'd post a link, but none of his research pages are loading right now; you can find the Google cache if you look for it. He seems to really know his stuff. I wouldn't be surprised at all if that's how most characters and assets are made for games on the next Xbox.
Well, certain tricks like floating geometry wouldn't work as well with displacement, so you'd either have to manually fix that in PS or spend more time making a more "correct" high-poly model. And really, this is only suitable for characters, IMO; you need other methods for things like props and weapons, which are generally going to be a much bigger part of your game than characters anyway. That's why I bring up sub-d stuff there: people have been saying for a long time that instead of baking mechanical things down, we'll just use a sub-d cage. I think that is wishful thinking at best.
And yeah, 4 million polys raw isn't going to work well, of course, but a mesh subdivided to 4 million polys using a displacement map isn't quite free either. I think you're still talking a huge performance difference between that and low-poly plus normal map. I just don't think the performance hit will make it worthwhile for quite some time yet. Eventually this is where things will go; I just don't see it in the next few years. But I'm no programmer, and have no idea what sort of performance this stuff really has.
What pior and EQ said makes sense. All this new tech will improve things, just like it did before, but there will be no revolution here. Sculpting is pretty much perfect for creative modelling; how the data is stored (voxels or mesh) is secondary to the modelling process itself and to the tools "shaping" the stuff.
IMO the low-poly biz will get a boost again thanks to the iPhone, the Wii, and similar systems. We will see more small devices with hardware similar to netbooks/smartphones replacing classic desktop PCs, opening the market further while being dead cheap, easy to use, and low on power consumption. You don't want that giant computer sucking hundreds of watts and generating the associated heat and noise, and the technology is now there to make that change happen. We will see more downloadable stuff... That kind of format is probably more "cash-in" friendly than AAA titles from costly, year-long productions aimed at hardcore gamers.
Besides, low-poly today will still look better than it did in the past, thanks to all the post-processing, better tools, more physics, and whatever else.
As EQ said, the new power will be invested in the things it has always been invested in: shading, lighting, shadows, natural phenomena... a bit more of everything. Pure detail-wise, I think things are pretty damn good already (look at RE5 and the like). The uncanny valley is hit by the lack of proper animation for realistic rendering anyway.
Another important goal is to reduce the quality loss assets go through. I.e., stuff is often created at higher quality than can be put in the game (higher-res textures, high-poly models...), but it's still paid for. Things like voxels, displacement mapping, and real-time subdivision are really about getting a better compression ratio back while still being suited to real-time and the hardware that's around. And that is nice for download sizes too.
Subdiv is cool not because we can finally have 4-million-poly assets on screen (that goal we could achieve already), but because it costs less memory and lets us store many models of that quality. Same with "megatexture" or "megavoxel" or whatever. Whether they pay off is still to be seen.
Think of the HD codecs around these days: when I look at a 5 MB movie file and compare its quality with what 5 MB looked like 10 years ago...
What we will get in the long run is better "open" architectures that allow programmers to use the hardware for their needs more efficiently. The problems, and the ways of solving them, will stay the same. Back to "software rendering", just now powered by many-core/many-thread hardware.
Researchers have had "multi-core" via clusters for years, and we've had "modelling" since forever in CAD systems. Academics have always invested in all kinds of techniques and algorithms for maxing out the latest hardware somehow ("realtime" there means hitting maybe 15 fps for a single effect on the latest hardware, hehe). There's just so much of it that no one knows what will be useful once mainstream hardware can do it, or what that hardware will look like. Bump mapping is from '78, normal map baking from '96, and for raytracing, according to Wikipedia, in '86 they had a system doing "network distributed interactive raytracing"...
Still, these are exciting times; as always, tech moves on and we move along with it... But remember, the Wii did not win through processing power. And to keep to the HD codec metaphor, the crappy movie file of 10 years ago "worked for you" back then just as well. As EQ put it, you are not going to notice all the crazy extra details, like one more dent in the silhouette, anyway, not if the game is actually good and grabs your attention with other stuff.
But this: http://vimeo.com/4240520 YUS!
A lot of job ads requiring the ability to speak and read Mandarin.
I think the vertagons are those dudes that help you in Half Life 2.
And I think polycies are procedures that management puts in place to ensure bad quality, and mandate that you work lots and lots of overtime.
/bad pun
Wow, we got owned.
http://www.youtube.com/watch?v=4aGDCE6Nrz0
http://www.cs.tau.ac.il/~galran/papers/iWires/
An interesting take on "feature extraction" and feature-based modelling. Think of it as bringing modelling to the masses, in a SketchUp-like approach.