So, I was listening to the Games Industry Mentor Podcast, and they were making predictions about what will be important in the future with regard to environment art. I was wondering what Polycount thinks will be more emphasized in future game art in general, as well as in specific fields such as environments, characters, simulation, physics, sound, animation, etc.?
Let's limit the scope to the lifespan of the next console generation.
For example, will hard-surface sculpting in ZBrush become an important skill to have in your portfolio, or will there be better sub-d modeling tools? What about different rendering techniques like deferred radiosity, baked GI, raytracing, or voxels? And what about future consoles and their impact on constraints like memory, framerate (will more games target 60fps?), and simulation (physics, AI)?
Replies
Make good art and become efficient (with whatever tools suit you); the rest doesn't matter. Tools come and go. The better you become at your craft, and the more skills you build up, the better prepared you are for the future.
What you mention (different rendering technologies, AI, ...) will not affect the base content-creation work. The true game changer, the shift to actually making hi-poly art, is already behind us.
Whether light is baked or real-time doesn't change the fundamentals of lighting a scene. Whether your texture is 2k x 2k or 128 x 128 doesn't determine the aesthetics of the overall look. Games like Chrono Trigger or Rage are visually appealing despite totally different technologies.
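The point about fundamentals can be made concrete: a diffuse surface responds to light the same way whether the result is baked into a lightmap offline or evaluated every frame. A minimal Python sketch of the classic Lambert diffuse term (function names are invented for illustration, not any engine's API):

```python
def lambert(normal, light_dir):
    """Diffuse response: clamped dot product of surface normal and
    light direction. The math is identical whether it runs offline
    during a lightmap bake or per-pixel in a real-time shader."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# Light hitting a surface head-on gives full brightness;
# light from behind contributes nothing.
front = lambert((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))
back = lambert((0.0, 1.0, 0.0), (0.0, -1.0, 0.0))
```

What changes between baked and real-time pipelines is when and how often this evaluation happens, not what the artist needs to understand about how surfaces catch light.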
How you pull things off has, I think, never mattered that much, as long as you do good work fast.
edit: I should say that on the high end we should expect more film-like shading effects. That means observing nature, seeing how materials react to light, and noticing what makes things distinct. Building up this awareness will be important. In the end, as an artist you will be given a shader plus tools (no matter how they're implemented), along with some tweakables and ways to influence that shader, and your job is to bring out individual, distinct looks.
But the same was true for painters for centuries.
Poly budgets will increase. Texture resolution will increase.
Neither of those will really matter that much with asset creation as a whole - we're already at a stage where we can handle those well in games.
Where that will have more impact is how many of those large textures and high poly objects we can display at once and how they get used in games.
Examples? The vehicles in Gran Turismo are high-res, while the ambient "filler" vehicles you see in the streets of Battlefield or Call of Duty are not: they're boxy, because they're just filler objects.
I'd expect that in the next generation we'll see much more detail in those filler objects. I'm not saying that the vehicle parked in the multistory carpark of an FPS game will have the same level of detail as the cars in Gran Turismo 7, but it'll certainly be closer to the ones in GT4.
This will also lead to another shift in content creation: companies won't spend time creating those higher-res assets that are still filler items in-house. Instead, there will be more outsourcing of those objects, and more art houses that specialise in creating them.
My predictions for the next gen are (ignoring the obvious increases in asset budgets):
Displacement maps will replace normal maps
More attention paid to the incidental assets
Extra processing power will be spent on physics and AI. The dynamic worlds will become more interactive.
I think they will be used side by side: one for larger forms, and one for surface detail.
We still have shoddy normal maps; only now artists must worry about shoddy tessellation too.
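To make the side-by-side distinction concrete: a normal map only bends shading, while displacement moves actual vertices, so silhouettes change. A tiny Python sketch of the displacement step, not any engine's API (all names here are invented for the example):

```python
def displace(vertex, normal, height, scale=1.0):
    """Offset a vertex along its normal by a sampled height value.
    This is the core of displacement mapping: real geometry moves,
    unlike a normal map, which only alters shading."""
    return tuple(v + n * height * scale for v, n in zip(vertex, normal))

# A flat-quad vertex with an upward normal and a sampled height of 0.25
# is physically raised, so the silhouette changes.
moved = displace((1.0, 0.0, 2.0), (0.0, 1.0, 0.0), 0.25)
```

In a tessellation pipeline this offset would run on the many vertices generated by subdivision, with the fine-grained shading detail still supplied by a normal map on top.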
That goes without saying, but I specifically set a time limit of the next-gen consoles' lifespan. What kind of advancement can we look forward to in tools (to make normals easier and less shoddy, for example), and what will newer engines allow us to do?
Rage's megatextures do change your workflow, at least to some degree. Sure, you are still going to be using modular pieces, but last I read, id had hired a whole team of stamping artists who did nothing but break up the inherent tiling of modularity.
The team behind BF3's lighting tech, Enlighten, also implemented it in the UDK, and they had an interesting option: you could use it to pre-vis what your scene would look like with radiosity even if you still intended to bake the lighting.
Perhaps this pre-vis tech could make its way into DCC tools like 3ds Max, so studios that can't go through all the effort of implementing a real-time deferred radiosity renderer could still get the benefit of fast iteration when authoring their lighting?
So, people are always compositing the normal maps they RTT with ones from CrazyBump, but if you are doing displacement, I'd imagine getting an accurate result would be a little harder, no? Or is that scale of detail left strictly to the normal map?
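For reference, compositing two tangent-space normal maps (say, a baked map with a CrazyBump-style detail map) is commonly done with something like "whiteout" blending: sum the XY components, multiply the Z, then renormalize. A rough Python sketch, not tied to any particular tool (function names are made up here):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def whiteout_blend(base, detail):
    """Combine two tangent-space normals: the detail map's tilt is
    added to the base map's, rather than simply averaged away."""
    return normalize((base[0] + detail[0],
                      base[1] + detail[1],
                      base[2] * detail[2]))

# Blending a flat detail normal (0, 0, 1) leaves the base unchanged.
flat = whiteout_blend((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

With displacement in the mix, the equivalent per-texel operation on height maps would just be addition of the height values, which is arguably simpler; the tricky part is deciding which frequencies of detail go into the height map versus the normal map.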
haha! Why is that? Lack of time for proper sub-d RTT? Wouldn't outsourcing handle that?
So, obviously poly limits and memory will increase, but I want to stir up some discussion about what kind of specific workflow changes would come from some of the other interesting specialized tech.
I do expect more games to target 60fps, but they'll still be the minority. And AI will stay about the same, though we might see one or two exceptions if some developers put real priority on it.
Personally, I hope to see some games that go really crazy with physics and environmental destruction next gen. The only one this gen that impressed me on that front was Ignition's canceled game Reich (no relation to Halo):
For workflow changes, as you said, the pre-vis tech is definitely growing (just look at things like iray). Cutting iteration times is a common goal across all kinds of simulation (be it rendering, chemistry, materials, construction...). Many tools nowadays are also "in engine", unlike the old days when you built everything in a CAD-like app. That can still be improved.
Megatexture: just because the technology exists doesn't mean it will become the industry-standard way of doing things. id did real-time lighting with fancy stencil shadows in Doom 3, and that never became the industry standard either.
Raytracing: if you look at Bunkspeed, we are not there yet when it comes to full real-time usage. Maybe (similar to Rage's special use of tech) someone will make an iconic game with that sort of technology, but at grand scale? Dunno.
I think BF3 on PC gives a good glimpse of what's technically expectable from the next consoles as well. You mention global illumination (Enlighten) yourself: what currently has to be mostly baked, I'd expect to run in real time on the next consoles. The consoles hold the PC back, since devs don't invest much money into extras that won't work on console.
Consoles will be based on what's technically doable today, because everything is bound by development costs. Yes, you get more out of the hardware than inside a PC thanks to the low-level interface, but I don't think we'll get another "Cell"-like scenario where exotic new hardware pops up. With mobile devices, things like Steam, and so on established nowadays, I doubt MS and Sony will throw insane amounts of money around this time. I pretty much expect the same as before, just in HD (i.e. more or less what the PC delivers: higher res, more detail, more things going on; think the PhysX edition of Batman).
It's always good to compare games of a series, say the "last" title for each of the consoles they appeared on.
As for tools, look for tasks that call for number crunching to be sped up.
That's my personal opinion.
And while this will sound like marketing (because I work there), if you look at some of the technology already used in applications and games (CUDA-accelerated rendering and PhysX, or the OpenGL features that speed up 3D-Coat...), I'd expect that level of "doable" from the next consoles.
Everything has to be balanced against costs somehow, so adding "more" must get cheaper.
I think you made a good point about megatexture tech not becoming standard, but it's not like id owns virtual texturing; other engines can implement similar special cases, like BF3 does for terrain. And id's stencil shadows didn't affect authoring as much as I'd expect megatexturing to.
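For anyone curious how virtual texturing differs from plain tiling at the lookup level: the huge virtual texture is cut into pages, and an indirection table redirects each texel fetch to whichever pages happen to be resident in a small physical cache. A hypothetical Python sketch (the page size and all names are invented for this example):

```python
PAGE = 128  # texels per page side (made-up size for the example)

def lookup(page_table, u, v):
    """Map a virtual texel coordinate to a physical-cache coordinate."""
    page = (u // PAGE, v // PAGE)
    if page not in page_table:
        return None  # page not resident: request streaming, fall back to a lower mip
    px, py = page_table[page]  # cache slot currently holding this page
    return (px * PAGE + u % PAGE, py * PAGE + v % PAGE)

# Virtual page (0, 0) is resident at cache slot (3, 1); nothing else is.
table = {(0, 0): (3, 1)}
```

This is why it touches authoring more than stencil shadows did: since every surface can reference its own unique pages, artists gain per-surface uniqueness, but the content pipeline has to produce and manage vastly more texture data.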
I personally don't care much about the latest shiny tech being implemented in game engines. My only wish is for the next version of, say, Mudbox or ZBrush to be even more well-rounded, so that artists can really focus on art more than tech. Imagine the time we would save if Mudbox were integrated inside the Max viewport...
I'm so over modeling in program A, then retopo in program B, then UV mapping in program A or C, then baking in program A, B, or D, then importing to engine X... ah fuuuuu, my head hurts. That whole process is clunky, and it's the artists who have to jump through those hoops instead of focusing on what matters: the art.
3D-Coat is probably the closest to having all of this, but it has a jack-of-all-trades, master-of-none vibe.
In a way, they have freed people not only of texture limits but of geometry limits as well: you don't have to stick to strictly modular pieces; everything can be made hi-poly and baked down into unique textures.
Maybe eventually, instead of stamping, you could just build your whole scene out of modular pieces, then take chunks of it at a time into a sculpting app and make the geometry more unique as well. Then just retopo the scene as a whole and RTT. You don't actually get more polys, but you can sculpt everything uniquely.