Awesome thread Warby. Gonna take some time to read thru after work.
Just an FYI: Feet dynamically adjusting to the floor has been around for a very long time. Early examples include Jurassic Park: Trespasser, Heretic II, and Looking Glass's Terra Nova.
I know the waterfall should have separate planes for the textures, and the top corner needs another animated spike texture. I'd do it differently if I did it again.
About this:
You can see in some places bits where it overlaps the other parts of the environment (e.g. tree stumps)
I would not be surprised if this was just additive blend polygons built on the CPU and drawn directly onto the scene once all the solid surfaces have been drawn. The lack of depth testing suggests they were worried about these polygons clipping into ramps and the like, and that they reject the light pools early on the CPU to stop them showing through walls.
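In rough fixed-function OpenGL terms the pass order would be something like the sketch below. The GameCube really uses its own GX API, so take the exact calls as illustrative only; DrawSolidGeometry, visibleToCamera and DrawPoolQuad are made-up stand-ins.

    #include <GL/gl.h>
    #include <vector>

    struct LightPool { float verts[4][3]; };         // one additive quad

    void DrawSolidGeometry();                        // hypothetical: walls, floors, props
    bool visibleToCamera(const LightPool&);          // hypothetical CPU-side reject
    void DrawPoolQuad(const LightPool&);             // hypothetical

    void DrawScene(const std::vector<LightPool>& pools)
    {
        glEnable(GL_DEPTH_TEST);
        DrawSolidGeometry();                         // normal depth-tested pass

        // Additive pass with the depth test off: the pools can never
        // clip into ramps, but each one has to be rejected on the CPU
        // when a wall hides it, or it would show straight through.
        glDisable(GL_DEPTH_TEST);
        glDepthMask(GL_FALSE);                       // don't write depth either
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);                 // additive blend

        for (const LightPool& pool : pools)
            if (visibleToCamera(pool))
                DrawPoolQuad(pool);

        glDisable(GL_BLEND);
        glDepthMask(GL_TRUE);
        glEnable(GL_DEPTH_TEST);
    }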
@james wild Yes, I also noticed how they sometimes bleed over the corner of a wall or a column. I assumed that was an intentional FX, kind of like cel-shading bloom, but your reasoning sounds more likely! The little tree remains in the bottom right clearly show that it doesn't draw "behind stuff". In this image, however, they do get blocked by the torch stands:
Making me believe that there is actually a tech difference between the small and the large ones...
Holy shit! warby you killed this! Absolutely great thread. Bookmarked! Printed! Tattooed on my chest!
You picked a great game to break down. It looks uber simple to people, but there is so much going on behind the scenes. Showing this to beginners is going to really help people get a handle on how games are done.
I'd imagine that the smaller ones, being so numerous, do a search for the nearest static polygon and then only draw if that point has line of sight to the camera. This would not work with the larger sources, as they're big enough that you'd notice the light pop into view as you walked into the room. The larger sources, being mostly static, are probably just animated models with some code to change the lighting of any dynamic object that walks into them.
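If it works that way, the per-pool test might look roughly like this. Pure speculation; snapToNearestStaticPoly and raycastBlocked are hypothetical stand-ins for whatever collision/visibility queries the engine actually has.

    struct Vec3 { float x, y, z; };

    Vec3 snapToNearestStaticPoly(const Vec3& p);            // hypothetical
    bool raycastBlocked(const Vec3& from, const Vec3& to);  // hypothetical

    bool ShouldDrawSmallPool(const Vec3& poolCenter, const Vec3& cameraPos)
    {
        // Anchor the pool to the nearest static surface so it sits flush
        // on whatever floor or ledge it was spawned over.
        Vec3 anchored = snapToNearestStaticPoly(poolCenter);

        // Only draw if a straight segment from the camera reaches it;
        // this is what would stop pools showing through walls.
        return !raycastBlocked(cameraPos, anchored);
    }

A point-sample test like this would behave exactly as observed: it either draws the whole pool or rejects it outright, with no per-pixel occlusion.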
Wouldn't that require a raycast per light source? That sounds mighty expensive. Or can this kind of stuff get extracted from the z-buffer (probably with a frame delay)?
If the basic scene polygons are separated out and optimized well (using a BSP or something, though given the pools-through-walls behaviour above that's unlikely), it might be very, very cheap.
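For scale: one occlusion raycast is just a segment test against candidate triangles, a handful of dot and cross products each. A standard any-hit test (Moller-Trumbore, shown as a generic example, not anything Wind Waker specifically does):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                                  a.z * b.x - a.x * b.z,
                                                  a.x * b.y - a.y * b.x }; }

    // True if the segment origin -> origin + dir*maxT hits the triangle.
    bool SegmentHitsTriangle(Vec3 origin, Vec3 dir, float maxT,
                             Vec3 v0, Vec3 v1, Vec3 v2)
    {
        const float EPS = 1e-6f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p  = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < EPS) return false;          // parallel: no hit
        float inv = 1.0f / det;
        Vec3 s = sub(origin, v0);
        float u = dot(s, p) * inv;                       // barycentric u
        if (u < 0.0f || u > 1.0f) return false;
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * inv;                     // barycentric v
        if (v < 0.0f || u + v > 1.0f) return false;
        float t = dot(e2, q) * inv;                      // distance along dir
        return t > EPS && t < maxT;                      // hit before the target
    }

Against the short list of occluders a spatial structure hands back, a few dozen of these per frame is nothing.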
A friend from Mapcore, "redyager", just let me know that it's apparently possible to import Wind Waker models into Mario Galaxy 2 and use that as a model viewer somehow. Thought you might find these different perspectives interesting too:
This is awesome. Wind Waker is one of my favorite Zelda games. It's really cool to get a glimpse of the techniques they used to render stuff in-game. Thanks for sharing!
Consoles have a huge leg up on the number of draw calls per frame. The numbers change every few years, but console games can get away with tens of thousands, while PC games need to keep it to around 500 per frame to not choke. It's just a side effect of the architectures. x86 tech is a bitch and has lots of backwards-compatibility quirks.
New x86 processors actually have to design around, and even emulate, bugs that were in earlier x86 chips! The architecture kept getting faster, but not better. Also, the BIOS you have when a PC boots up has always just been a series of hacks over the original old-school BIOSes to support newer stuff.
Consoles are designed fresh, and don't have this baggage.
BSP isn't the only way to sort a scene; it's Doom and Quake (1-3) era tech, and Zelda is not a corridor shooter. There is brute forcing, there are octrees, and there are all kinds of other methods. And when a game is capped at 640x480, with most players only ever seeing the inner 512x384, you can get away with overdraw.
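To make the octree point concrete, the idea is just "reject whole boxes of stuff at once". A minimal sketch, with Frustum standing in for whatever culling volume the engine uses:

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct AABB { Vec3 min, max; };

    struct Frustum { bool intersects(const AABB&) const; };  // hypothetical

    struct OctreeNode {
        AABB bounds;
        std::vector<int> objects;        // indices of objects stored here
        OctreeNode* child[8] = {};       // null until subdivided
    };

    // Collect everything in nodes the camera can possibly see; a node
    // that fails the test is skipped along with its entire subtree.
    void CollectVisible(const OctreeNode* n, const Frustum& f,
                        std::vector<int>& out)
    {
        if (!n || !f.intersects(n->bounds)) return;
        out.insert(out.end(), n->objects.begin(), n->objects.end());
        for (const OctreeNode* c : n->child)
            CollectVisible(c, f, out);
    }

Each rejected box throws away its whole subtree, which is what makes brute-force-with-a-hierarchy viable even without a BSP.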
There was never an 8-light limit. The old OpenGL spec said that every implementation needed to make AT LEAST 8 lighting constants available, but any OpenGL implementation could expose as many as it wanted. So...
-Not everyone used OpenGL's lighting to do their lighting. Lots of other tricks.
-No one used 8 lights! Too taxing. 2-3 at most.
-You could render 8 lights, then set another 8 and render again. It was only 8 PER DRAW CALL. Just like you have 1 set of textures and a shader during a draw call. But almost no one did this.
What would happen was, for every dynamic object drawn (anything that wasn't part of the static light-mapped geometry), you would sort all the lights, pick out the closest X, and render with those.
Sort lights, Set Light constants
Set Material (textures, draw mode, etc...)
Draw polygons
This is visible in the picture you have with Link between the 2 torches.
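In old fixed-function OpenGL terms, that per-object loop comes out to roughly the sketch below. Illustrative only: the GameCube's GX API is analogous but not identical, and Object, distSqToObject, SetMaterial and DrawPolygons are hypothetical stand-ins.

    #include <GL/gl.h>
    #include <algorithm>
    #include <vector>

    struct Object;                                        // hypothetical
    struct Light { float pos[4]; float diffuse[4]; };

    float distSqToObject(const Light&, const Object&);    // hypothetical
    void  SetMaterial(const Object&);                     // hypothetical: textures, draw mode
    void  DrawPolygons(const Object&);                    // hypothetical

    void DrawObjectLit(const Object& obj, std::vector<Light> lights, int wanted)
    {
        GLint hwLimit = 0;
        glGetIntegerv(GL_MAX_LIGHTS, &hwLimit);           // spec guarantees >= 8
        int n = std::min((int)lights.size(), std::min(wanted, (int)hwLimit));

        // Sort so the closest lights come first, then keep only n of them.
        std::sort(lights.begin(), lights.end(),
                  [&](const Light& a, const Light& b)
                  { return distSqToObject(a, obj) < distSqToObject(b, obj); });

        for (int i = 0; i < n; ++i) {                     // set light constants
            glEnable(GL_LIGHT0 + i);
            // Note: GL_POSITION is transformed by the current modelview matrix.
            glLightfv(GL_LIGHT0 + i, GL_POSITION, lights[i].pos);
            glLightfv(GL_LIGHT0 + i, GL_DIFFUSE,  lights[i].diffuse);
        }
        for (int i = n; i < hwLimit; ++i)                 // turn the rest off
            glDisable(GL_LIGHT0 + i);

        SetMaterial(obj);
        DrawPolygons(obj);
    }

And per the point above, nothing stops a second pass with the next batch of lights and another draw call; it's just that almost nobody bothered.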
Dynamic IK systems are nothing new. Uncharted is lucky if it's in the first 1000 games to implement it. Any game that uses skeletal animation can do IK stuff easily. Dynamic IK is most likely what is used on the staff weapon with the cloth you have pictured near the top.
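The foot-planting version is basically a two-bone solve per leg. A generic law-of-cosines sketch (nothing Nintendo-specific):

    #include <algorithm>
    #include <cmath>

    // Interior knee angle (radians) for a two-bone leg chain;
    // pi means a fully straight leg. Clamps the reach so a foot
    // target on very high/low ground can never break the leg.
    float SolveKneeAngle(float thighLen, float shinLen, float hipToFootDist)
    {
        float d = std::min(std::max(hipToFootDist, 1e-4f),
                           thighLen + shinLen);

        // Law of cosines: d^2 = a^2 + b^2 - 2ab*cos(knee)
        float cosKnee = (thighLen * thighLen + shinLen * shinLen - d * d)
                      / (2.0f * thighLen * shinLen);
        cosKnee = std::min(std::max(cosKnee, -1.0f), 1.0f);
        return std::acos(cosKnee);
    }

Raycast down from each hip to find the ground height, aim the foot there, solve the knee, and the feet follow stairs almost for free, which is why it shows up as far back as the N64 era.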
Aw yeah, anything Wind Waker is a win in my book. I remember I created that thread asking how to recreate some of the effects; I owe you guys for that, since the work got featured on Kotaku.
This was a good read. In some ways I wish Nintendo would release an official book in the same format you've created to explain it in more detail.
Amazing thread. Wind Waker is not only my favourite Zelda game, but probably also my favourite art in any game. I love seeing a breakdown like this. Top marks!
Very interesting, thanks!
And those textures :poly142:
I took a look at some Super Mario Galaxy textures not so long ago, and I was astonished by the optimization! It's almost unbelievable to think that this game came out 10 years ago and still looks more than fine!
Replies
Feet IK'ing dynamically to the ground has been around for years, man; even the N64 had that.
Keep it up man, it's cool lol
Also, I recall feet IK'ing in the windmill in OoT (I don't recall if it's on stairs/uneven ground too, though).
[embedded video]
I assume this means that they use the same engine...
I've heard that Twilight Princess used the same engine as Wind Waker. Does anyone know if Skyward Sword did as well?
http://www.kotaku.com.au/2012/08/need-a-reminder-of-how-incredible-wind-waker-was-look-no-further/
I don't remember this in the game, maybe it's an emulator bug? Then again it's been a while since I played it.
Actually, even earlier than that. Link in Ocarina of Time also adjusted his feet to height differences.
I'm loving all of it!
(Bring us more! Feed us!)
@nix My money would be on it just being too low-res to see/notice.
@habboi I would buy that in a heartbeat.
http://kotaku.com/5937500/why-the-prettiest-zelda-game-of-all-still-looks-so-amazing?utm_campaign=socialflow_kotaku_facebook&utm_source=kotaku_facebook&utm_medium=socialflow
Haven't played a Zelda game since '64.