"Quake 3 took a hybrid approach with the torso, head and legs being separate skinned meshes."
That's not exactly true. In QIIIA, the elements with smooth deformation (torso and legs) are not skinned meshes (that is, driven by a skeleton at runtime); they are pure vertex animations. They can of course be authored as skinned skeletal assets, but they are not exported as such. So in the end the engine simply handles the relative rotation between head, torso, and legs based on player input. No bones involved - and I believe the head part didn't support animation at all, only reacting to mouselook without deforming.
To the OP: overall there's much more to it than just segmented vs. smooth. Also, FYI, "rigging" actually means the work done at the authoring stage (in Max/Maya/Blender/C4D): building systems that drive bones with special controllers (for instance, a lookat constraint to direct a character's gaze by simply moving a point in space, or limb IK to pose arms and legs). So going back to the QIIIA example: a QIIIA character could be *authored* in Max/Maya using a very complex rig with many bones and advanced constraints (Magdalena!), yet the game would see none of that, since the export for a body part simply consists of a raw matrix of vertex positions changing over time.
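To make that "raw matrix of vertex positions changing over time" concrete, here's a minimal Python sketch of MD3-style per-vertex frame playback - not engine code, and the two frames of vertex data are made up for illustration:

```python
# Per-frame vertex snapshots, no bones: playback just linearly
# interpolates every vertex between two stored frames.

def lerp_frames(frame_a, frame_b, t):
    """Blend two vertex snapshots; t in [0, 1]."""
    return [
        tuple(a + (b - a) * t for a, b in zip(va, vb))
        for va, vb in zip(frame_a, frame_b)
    ]

# Two hypothetical frames of a 3-vertex mesh, (x, y, z) per vertex.
frame0 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frame1 = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]

print(lerp_frames(frame0, frame1, 0.5))
# halfway between the frames: every vertex ends up at z = 0.5
```

The cost is storage - vertex count times frame count - which is part of why the industry later moved to skeletal skinning, but at QIIIA-era vertex counts it was cheap and dead simple to play back.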
Conversely, just because a game has segmented characters doesn't tell you much about what the engine actually does. A visually segmented model could be simple parent/child hierarchies of rigid objects, or it could be geometry weighted to a skeleton using only 0s and 1s.
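To illustrate that last point, here's a toy 2D linear-blend skinning sketch in Python (the bones and vertex are hypothetical, not from any particular engine): with weights of only 0s and 1s, each vertex rigidly follows exactly one bone, so a skeleton-driven mesh still looks segmented.

```python
import math

def skin(vertex, bones, weights):
    """Linear-blend skinning of one 2D vertex.
    bones: list of (rotation_angle, (tx, ty)); weights sum to 1."""
    x = y = 0.0
    for (angle, (tx, ty)), w in zip(bones, weights):
        c, s = math.cos(angle), math.sin(angle)
        vx, vy = vertex
        x += w * (c * vx - s * vy + tx)
        y += w * (s * vx + c * vy + ty)
    return (x, y)

# Hypothetical skeleton: bone 0 at rest, bone 1 rotated 90 degrees
# and translated along X.
bones = [(0.0, (0.0, 0.0)), (math.pi / 2, (2.0, 0.0))]

# Weight 1 on bone 0: the vertex moves rigidly with bone 0 ...
print(skin((1.0, 0.0), bones, [1.0, 0.0]))
# ... and weight 1 on bone 1 snaps it rigidly to bone 1: a hard
# segment boundary, visually identical to plain object parenting.
print(skin((1.0, 0.0), bones, [0.0, 1.0]))
```

With fractional weights (say 0.5/0.5) the same code blends the two transforms and you get smooth deformation; the 0/1 case is why you can't tell segmented visuals from skeletal tech just by looking.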
I would say that if an old game has very crude, typically "robot-like" animations (think early 3D arcade games like Virtua Cop), then the assets were probably parented together without a skeleton, and animated by simply rotating the sub-objects. Of course there's also the possibility that such robot-like animations are simply the result of the game being made by a very small team without the people or tech for actual rigging. And then there's the case of mocap, allowing for something like Virtua Fighter without necessarily requiring *any* advanced rigging for the creation of animations, since they come already done. Which explains why the characters in early VF games move very convincingly ... while their claw-like hands move in a very crude manner - because there's no mocap for the hands.
If anything this explains why 3DSMax was so popular in gamedev for a while: despite being inferior to Maya for fully custom rigs, it came with Character Studio, which offered an unusual but very intuitive way of setting up a humanoid rig, clever animation controls that let you animate without having to interact with typical animation curves, and a unified exchange format for mocap data. So basically the best of both worlds, turbocharging the creation of third-person games and convincing game characters in general.
I believe consoles tend to have a more varied history of technical implementations of 3D characters because fighting games and third-person games were more of a console and arcade thing. A look at the game lineups for 1995/1996 shows that PC games had barely any 3D characters at the time - 3dfx was only founded in 1994, and the Voodoo2 only came out in 1998.
Hey @poopipe, thank you! I got it working. For whatever reason, though, Unreal took issue with the max function, so I plugged it into a min and inverted.
The top row of planes is in negative UV space with flipped UVs, which caused the mismatch with their flow map - fixed with your method of using negative space as a mask! Thank you!
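For anyone reading along, here's a plain-Python sketch of the two tricks described in this exchange - deriving a mask from negative UV space, and rebuilding max() from an inverted min() (for values in [0, 1], max(a, b) = 1 - min(1 - a, 1 - b)). The UV values are made up; in an Unreal material this would be built from Min and OneMinus-style nodes:

```python
def neg_uv_mask(u):
    """1.0 where the coordinate sits in negative UV space, else 0.0."""
    return 1.0 if u < 0.0 else 0.0

def max_via_min(a, b):
    # Invert both inputs, take min, invert the result.
    return 1.0 - min(1.0 - a, 1.0 - b)

print([neg_uv_mask(u) for u in (-0.25, -0.01, 0.0, 0.5)])
# -> [1.0, 1.0, 0.0, 0.0]
print(max_via_min(0.2, 0.7))  # same result as max(0.2, 0.7)
```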
Sorry @Alex_J, I completely forgot to respond. Yes, I could have done this in a second UV channel; my brain just went to masking out the negative space. My current material is already using the second UV channel as a "dye mask", and I don't really want to bring in more channels than I need. Thank you again though - it's a totally viable way of doing it.
It definitely looks impressive at first glance: you have some really great-looking assets, and there is obviously a lot of hard work put into all of it. So you have skills, dedication, and some good-looking portfolio shots, especially for a student. Nice job!
BUT (there is always a but) you have some things that throw up yellow and red flags. I apologize if this comes off as a little rough - don't be disheartened, I really do like your stuff.
PROOFREAD!
Nine spelling mistakes in the Fjord project description - that's a lot of red ink. I have no idea what "regrrdi optimization" is; I googled it, and Google doesn't know either. Written communication skills are critical when interacting with everyone, writing documentation, and communicating in chat. Also, the last line, "All comments are well accepted, I hope you like it!", comes across as a little needy and starved for attention. Managing those types of people can be draining.
Hiring managers tend to focus on two things:
1) Does it look good? 2) Is it well built? You're not really answering that second question very well. Let's see some more topology, UV layouts, tri counts, texture sheets, material graphs in Unreal. All of these things give me confidence that you can build assets correctly, and I'm not seeing them.
Sanctuary Ruins
First impression: the slowly panning beauty shots are great; it's a good-looking scene. Second impression: the description gutted me. This could absolutely be worded in a better way.
"Concept: Jose Vega..." "...It was really fun exploring the assets from Quixel..."
Cool, you credited the person, but the concept isn't yours - bummer. That's OK, we often work from concept art, but which assets are Quixel's and which work is yours? Maybe hide the Quixel assets, or turn off textures so it's clear what work is yours. Was this a level layout exercise? Lighting, composition, camera work? What was the point, if not asset creation?
Maybe say something like "This project focused on using pre-made Quixel assets so we could focus on composition, level design and rendering. I created several assets myself and here are the breakdowns..." and then show breakdowns, topology, UV layouts, material graphs, etc.
Dremel
First impression: this looks good, especially the materials! Second impression: a lot of the reference matches your textures, so again I wonder "what work is yours?" Can you sit down with Substance and generate textures like this without photos? I don't know - your portfolio doesn't tell me. This is a bad place to be.
The tri count and wireframe worry me a little; they seem excessive for a prop. The 5k cord looks like an auto-crunched spline that probably won't LOD well, because each loop is its own geo. A more solid mass of wires with one or two loops sticking up would probably be lower in tri count and LOD better.
Omni Scatter
First impression: it looks good, but it's a tool - it doesn't need to look good, it needs to fill a need. Second impression: I need a bit more info about how you created and used it. Are you building randomly generated biomes in Unreal? Scattering in Houdini and importing the whole thing? How did you create this tool, and how might it benefit others? What skills did you use to build it? Code? C++? Python? A node editor? Does it have a user interface that you created?
Site N-8
First impression: looks good. Second impression: the topology on the cloth tubes looks auto-crunched and is still excessive; I could probably get that exact same look with half or a quarter of the polygons. The chains are probably insanely high-poly too, like the rope bundles?
At this point I stopped; I would pass. It looks good on the surface, but the answers I'm getting from your portfolio about "how was it made" either aren't there or are red flags.
Keep working on new projects, and keep fleshing out your student work with new things. For a student you have a really good start, but you're not quite there yet, and unfortunately the industry is full of very talented people who are all looking for work, and a lot of them have a lot of experience.
Focus less on pretty shots and more on the nuts and bolts of building assets.
Those are my thoughts; feel free to completely ignore them if you disagree. Hopefully this is helpful and nudges you in the right direction. Good luck, and I look forward to seeing more from you in the future.
Working on a new shader setup for my characters. I got a bit weary of fighting with Blender's latest Principled shader, which has some odd features regarding the base colour and seems a bit weird to set up anyway - I like this much better. The neck needs a bit of work, but I'll probably add some kind of costume to cover it.
The LUT texture can be filtered or not - it's just a texture.
With the most basic implementation you're sampling one extra texture, so that's a thing someone might choose to argue over. If you don't have any need for dynamic changes, or haven't planned to mix and match LUTs with the grayscale source textures, then you're really only gaining in terms of memory footprint.
This sort of thing is extremely common in VFX and the shader world, so I imagine it's not going to be a strange idea to the shader programmer.
In terms of what happens on the GPU, I would expect it's very specific to your implementation, so I can't really comment.
I'm a big fan of the technique independent of any efficiency gains - it's a hell of a step up visually from multiplying your basecolor over a grayscale map, and it opens up all sorts of options if you have customisation systems or want to reuse assets.
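For readers who haven't met the technique, here's a minimal Python sketch of the grayscale + LUT idea (gradient mapping): the grayscale value is used as the lookup coordinate into a small color ramp instead of being multiplied by a flat color. The 4-entry ramp is made up; in practice the LUT would be a texture (often 256x1) sampled in the shader, and filtering would blend between entries:

```python
def sample_lut(gray, lut):
    """Nearest-neighbor lookup of a grayscale value (0..1) in a ramp."""
    i = min(int(gray * len(lut)), len(lut) - 1)
    return lut[i]

# Hypothetical ramp: shadows -> midtones -> highlights (RGB tuples).
lut = [(10, 15, 60), (40, 60, 140), (210, 180, 130), (255, 255, 250)]

for g in (0.0, 0.3, 0.6, 0.95):
    print(g, sample_lut(g, lut))
```

Swapping in a different ramp recolors the whole asset without touching the grayscale source, which is where the customisation and asset-reuse options come from.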
@Justo I used a triangle because you want these three edges; you could also use a multiple of three and select the three edges later to bevel and resize them along X/Y. (Oh, I missed this in the following image.)
@hanabirano: Really nice... now I'm thinking about a more modifier-based approach.
Overhead fans and central air units (A/C) can add to the issues. Natural lighting in the room is a good bonus too, if you can figure out a way to deflect the bounce light. There's a very old program called f.lux that I used to use; now I don't use anything - no tricks, I just make sure to get natural light onto my eyes and spend more time looking at long distances than I used to. There is also a defect the eyes go through when they adjust to the screen, which seemingly starts to collapse the eye shell and causes warping; once I learned about this I took it very seriously.
And for those that have had eye surgeries: look up eye collapse syndrome.
Just a heads up.
Eating well is really important for all functions of existence; sadly, it would seem that 95% of people are never even told how to eat. I won't change this thread's topic - I could bring up all sorts of causes of issues and it wouldn't be helpful to the current concern - I just wanted to flag the dysmorphia created by long-term screen use.
These are some vehicles I made for Spectre Divide while working at Mountaintop Studios. One of the great challenges of the art style was that, it being a tactical shooter, we had to make all cover in the game conform to a box shape with straight edges for clear player sight lines - which means even the vehicles needed to be very boxy. These limitations were a fun way to shape a unique art style for our environments. All of the vehicles were textured using trim sheets.