Things are probably getting lost in translation, and that isn't helped by the one-liner first post.
"Is metahuman creator changing the industry?" It is not, because it's not out yet.
"Do you think this software will actually take some 3d character jobs in the future?" This is such a weird question. Just because creating a sculpted realistic human character model from scratch can be fun for some people (in the sense that it can be satisfying, much like how model makers enjoy crafting detailed tiny train models), having such skills doesn't entitle anyone to a job, no matter how good one is at the task. Finding a job is not like obtaining a degree, it doesn't depend on some kind of checklist - it only depends on the needs of a given studio at a given time.
If the MH output is good and usable, then of course some studios will leverage it - just like they can already leverage Daz models, Poser models, and 3d scanned libraries. At the end of the day it's 100% up to people interested in a field to develop a skillset that is relevant to it and in demand.
In practice, the current example models are extremely heavy and annoying to use anyways - but they can already be leveraged in clever ways, like rebaking from them, using them as base meshes, and so on. This stuff is fun to poke at. The irony of it all is that even though Epic is showing it off, the models in their current state are not even compatible with Epic's own standard male and female mannequins/skeletons.
The proactive artists who are aware of these technical details from getting their hands dirty playing with the tech are the ones who are very hireable imho. There is probably someone somewhere already making clever use of the sample models, re-purposing them in one of their projects, touching up the textures, re-targeting Mixamo animations to them, and so on. Any clever AD would hire such a profile in a heartbeat.
Also, if it's split in real life, split it in your geometry. Not only will it make your topology a bit cleaner and your meshes more accurate/nicer to look at, but it'll also be much easier all around to not have to model everything as one solid piece.
Since today is the last day before the next challenge starts, I figure I'll upload the rest of the shots of this character as well as the Sketchfab link: model
I also have some different stuff to share. Remember the sdf modeler from the top of this page? I'm revisiting this project, and I actually have a lot of new stuff to show. It's basically a complete rework, so it has different features - some of the original ones are missing, but I have some new ones. The most recent update is that all sdf objects now blend and sort correctly with each other and with polygonal objects. So you could have a few different sdf textures of objects, and make a layout like this:
These are made out of 3 sdf textures, and the rendering is done using sphere tracing. I added a lot of new shapes and features to the modeler part of it, so now it's easy to boolean stuff out. Here is a clearer example of what happens. The box meshes are laid out in the level, and they use an alpha test material that ray marches the sdf texture to render the object.
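For anyone curious how the sphere tracing part works: the idea is that the distance field value at any point is a safe step size along the ray, and a smooth minimum is what blends separate sdf objects together instead of a hard union. This is just a minimal sketch of the technique, not the actual material code - the analytic spheres and the `smooth_min` blend factor here are stand-ins for the baked sdf textures:

```python
import math

# Hypothetical analytic SDFs standing in for baked sdf textures.
def sd_sphere(p, center, radius):
    return math.dist(p, center) - radius

def smooth_min(a, b, k=0.2):
    # Polynomial smooth minimum: blends two distance fields instead of
    # taking a hard min, which is what makes objects melt together.
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def scene_sdf(p):
    d1 = sd_sphere(p, (0.0, 0.0, 3.0), 1.0)
    d2 = sd_sphere(p, (0.8, 0.0, 3.0), 0.7)
    return smooth_min(d1, d2)

def sphere_trace(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    # March along the ray; the SDF value at the current point is the
    # largest step we can safely take without skipping past a surface.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = scene_sdf(p)
        if d < eps:
            return t  # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None  # miss

hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

In the real thing the same loop would run per-pixel in the material, sampling the volume texture instead of calling analytic functions, and a miss would fail the alpha test.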
It outputs position, normal, and pixel depth offset, so you can use the default Unreal shading model to shade it. Dynamic shadow maps do not work yet, but SSAO, screen-space shadows, and screen-space reflections work.
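The normal output is the nice part of working with distance fields: the gradient of the field at the hit point is the surface normal, so you can get it numerically with central differences rather than storing it anywhere. A minimal sketch of that trick, using a hypothetical analytic unit sphere in place of a sampled sdf texture:

```python
import math

# Hypothetical analytic SDF standing in for a sampled sdf texture.
def scene_sdf(p):
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0  # unit sphere at origin

def sdf_normal(p, eps=1e-4):
    # Central-difference gradient of the distance field, normalized.
    # Six extra SDF samples per hit point, but no stored normals needed.
    x, y, z = p
    dx = scene_sdf((x + eps, y, z)) - scene_sdf((x - eps, y, z))
    dy = scene_sdf((x, y + eps, z)) - scene_sdf((x, y - eps, z))
    dz = scene_sdf((x, y, z + eps)) - scene_sdf((x, y, z - eps))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

n = sdf_normal((0.0, 0.0, 1.0))  # point on the sphere's +z pole
```

The pixel depth offset then comes from the hit distance along the ray, which is what lets the traced surface sort correctly against regular polygonal geometry.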
So the voxel modeler with smooth voxels is back... and it just became more powerful with the blending of sdf objects.