If you're building little knick-knacks to go in snow globes, then fear for your job! If not, chill out.
The quality is not all that great when you see these up close: the lighting is baked in, there's no PBR (it's just albedo), no glass, metal, or material complexity, etc.
It's early days, so things are likely to keep improving. But as it stands, this is still very much a tech demo, with no clear impact on 3D creatives.
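To make the "baked lighting, albedo only" complaint concrete, here's a minimal sketch - illustrative only, not any particular engine's shading code, and the simple diffuse term below merely stands in for a full PBR BRDF:

```python
# Baked lighting vs. a re-lit material, reduced to the bare minimum.
# The Lambert term below is a stand-in for a full PBR BRDF.

def shade_baked(albedo):
    # The generator froze its lighting into the texture: highlights and
    # shadows are part of the albedo, so the result never reacts to light.
    return albedo

def shade_relit(albedo, normal, light_dir, light_color):
    # A real material sample is re-evaluated against the scene's lights
    # every frame, so the asset holds up when the lighting changes.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

# The baked asset looks identical at noon and at midnight; the re-lit
# one does not. Add glass/metal and the gap only widens.
print(shade_baked((0.8, 0.6, 0.5)))
print(shade_relit((0.8, 0.6, 0.5), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)))
```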
I thought that about 2D generative AI as well (Midjourney, Stable Diffusion, etc.) a few years back... and now :-) :-1:
Sure, it's not perfect, but for a LOT of uses it will be sufficient, and it will keep improving...
There have been quite a few reports of cover art illustrators unable to maintain a steady flow of work. But of course the cynics would argue that it's not "losing a job", it's just them not being able to adapt. That's straight-up edgelord dishonesty and not worth arguing over.
Western studios are probably wiser and not going all-in, because they know that a lot is at stake legally. This is actually quite fascinating to follow: https://restofworld.org/2023/ai-image-china-video-game-layoffs/
I once worked on a TV show that had a very small number of cast members who were not easy to replace. It's quite expensive and time-consuming to find them. We were paid decently, but certainly could have been paid much, much more - I paid close attention to the expenses of things going on. Anyway, I tried to get the fewer-than-ten people involved talking, so that we could all make sure we were getting a fair rate. Just because one person isn't a good negotiator, why should they earn less than the others? For example, by just the 3rd or 4th season I was making nearly 100% more than another cast member, even though we both started at the same time and did the same work - simply because I negotiated for it and they didn't. They never would have even known if I hadn't brought it up.
They were the only person who was even willing to discuss it at all. Lol. Why? People just won't help themselves even when they have every advantage.
If we didn't need people in-house, we wouldn't have them now, because outsource providers exist and are capable of producing high-quality work. This won't change when ML tools catch up with humans.
Utilising outsourcing hasn't reduced the number of staff required at any of the studios I've been at - if anything, it's created jobs in terms of managing the increased volume of content.
The real victims in the long term are going to be content farms (OS providers, Quixel) and photographers/illustrators who work on a commission basis at the low-to-mid end.
This one is, imho, the more interesting news: the company behind Daz is going to make character creation a breeze, with text-to-3D characters. Tafi has started the open beta now... https://www.maketafi.com/
https://www.youtube.com/watch?v=6Yam8Fsk-iE
This effectively allows you to describe a preset you want to start from - I wouldn't turn that option down, particularly if I had a lot of Daz characters to make.
Yeah, probably - but are there any actual products outside of the Daz3D bubble using these characters?
I think all this AI B.S. is just to flood the markets with trash, so that indies and self-reliant marketplace people can't generate enough revenue to become competitors in the "bigger" spaces. Then the people who want to cast it in a positive light come in and say it's just for the menial things nobody wants to do, to soften the blow, and everyone lets their guard down again. Rinse and repeat, until they're going "what, fired? for what?"
"Oh, because all this time you were logging every keystroke, basically copying my work habits and generating scripts for how I function in the workplace, to replace me with basically a copy of me that does the work for you for free. Oh well, I did sign the contracts, so... oh well." Like I said, better to be totally offline.
Hope whoever is helping these people gets them all they need.
I watched part of some YouTube video with some ex-Google guy warning about how AI is on the level of the nuclear bomb - tons of warnings, but nothing more than that.
I also saw a few articles to the same effect.
The sense I got from it all felt a bit weird - sort of like some new form of guerrilla marketing, because it all says nothing other than "this is REALLY big". I imagine people who write articles will say anything just to get some clicks, so it's completely untrustworthy.
So far, to me it just looks like better procedural tools. I don't see the intelligence. One of the big talking points is always "look how fast it moved in the last five years!", but that's not a measure of intelligence, is it? It's not as if this were an organism with a nervous system that grows new neurons.
It all feels like some weird marketing push to me - probably angling for government money. That has worked out pretty well for Musk, despite him being an apparent moron.
There is a huge amount of investment-grabbing bullshit around ML, and I believe it's very harmful to making any sort of useful progress.
And yes - it's fundamentally just procedural generation based on statistical analysis. There's no intelligence; you get the illusion of intelligence from the scale it operates at.
To put it in that context:
Perlin noise is a prediction of a value at a point in space.
It looks boring because it's simple.
It's fast because it's simple.
An ML system predicts a value at a point in space too.
ML systems are complicated.
ML systems are fast because they're pre-trained.
Now imagine a world where your procedural generation didn't all sit on Perlin noise, and instead sat on something trained on real-world structures that could also adapt itself to inputs.
Before you get too excited, remember you live in a world where there's no money to be made off training that ML system.
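To make the Perlin-vs-ML analogy above concrete, here's a rough sketch - hand-rolled 1D value noise standing in for Perlin, and an untrained toy MLP standing in for the pre-trained model. Both are literally "a value predicted at a point in space"; the difference is whether the function was designed by hand or fit to data:

```python
import math, random

def value_noise_1d(x, seed=0):
    # Cheap value noise standing in for Perlin: hash the integer lattice,
    # then smoothly interpolate. It looks boring and runs fast because
    # the underlying function is trivially simple.
    def lattice(i):
        random.seed(i * 374761393 + seed)   # crude hash via seeded RNG
        return random.random()
    i, f = math.floor(x), x - math.floor(x)
    t = f * f * (3.0 - 2.0 * f)             # smoothstep blend
    return lattice(i) * (1.0 - t) + lattice(i + 1) * t

def tiny_mlp_1d(x, weights):
    # One hidden layer. Once trained, evaluating it is just as much
    # "predict a value at a point" as the noise call above -- except the
    # function was fit to data rather than designed by hand.
    w1, b1, w2, b2 = weights
    hidden = [math.tanh(w * x + b) for w, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Untrained random weights, purely to show that the call shape is identical:
rng = random.Random(42)
weights = ([rng.uniform(-1, 1) for _ in range(8)],
           [rng.uniform(-1, 1) for _ in range(8)],
           [rng.uniform(-1, 1) for _ in range(8)],
           0.0)
print(value_noise_1d(2.37), tiny_mlp_1d(2.37, weights))
```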
Is there enough data to scrape for 3D models? Especially professional 3D models? If I want to prompt "Kirby wearing Master Chief's armor", I would need a Kirby and a really good Master Chief model. Keep in mind, some of the models on Sketchfab are ripped straight from games, and I don't think they really want to tango with big publishers like EA or 2K. Although I might be wrong, since they didn't care that they trained on established stock photo sites like Shutterstock or Getty.
@jeffxfcVA: My suspicion is that scraping 3D models won't even be needed. Unethical 2D AI generators can already generate fake renders indistinguishable from rendered sculpts. From there, it's only a matter of transferring this 2D information into 3D volumes, which seems orders of magnitude more straightforward than what has already been achieved with 2D.
There have already been a lot of different AI companies that have been called out for lying and for using low-waged workers to produce 3D models. Some company called Common Sense Machines apparently was looking to hire a freelance 3D artist to clean up models pumped out by the AI, so that the results could be fed back in to train the AI to produce models. So naturally, who would've guessed they would start coming for 3D as well... though the company itself seems shady and outright lying (shocking), claiming that their AI can produce clean-topology, game-ready models that can be put into Unity and Unreal Engine.
I saw the job posting and took a look out of curiosity, but it was obviously a hard pass, because I personally disagree with training AI and potentially screwing over 3D artists and myself in the long run for a quick buck. It also looks like they are backed by investors, and they claim on their site to have 3 successful companies under their belt (but they don't say which). Reading into them further, it just looks like they are trying to drum up investor interest by faking good 3D models from the AI, so they can pawn it off to some other investors like most of these shills do.
Sadly, I know someone who took up this job offer to help with cleaning up models. There was an unpaid art test for the position, and though they didn't attempt it because it made no mention of character art being needed, it shows the position is shady and doesn't really communicate what they actually want. Same with the salary range, but I imagine that is garbage as well, because these people don't value art or the creative process of producing 2D/3D.
Here is the unpaid art test:
Not only is this test laughable, but it shows they are trying to mimic clean topology, UVs, and an overall well-put-together model that is usable in a game. Pretty sad, and it depresses me that there are people who will take up these offers and help these things along. I get being down bad on money, but come on - this stuff helps no one.
Game dev has been around for decades, and no one's been able to automate game-ready retopology that matches an artist's hand, so I think they're up against a decent challenge. Also, whoever wrote that art test listing sounds like a wanker.
I agree with you all the way - I was just pointing out how most of this stuff in particular is built on lies and exploitation. Additionally, it wouldn't surprise me if they had AI write that art test...
That's only a thing for characters, and only for now. Nanite already showed that topology can be made moot.
Good topology is still highly effective in many regards, but at certain budgets and on certain projects Nanite is the way to go: no retopo needed, ever.
I have my doubts this will stay exclusive to static meshes forever.
Yeah - if we could get Nanite for characters, and solve the technical aspects of weight-mapping and animating them, sure, that'd be great... if the average player has the hardware to play games with these features.
Well... even if nearly infinite raw detail for characters became a reality... I think character modelers are in for a rude awakening once the excitement of "not having to retopo" wears off. Creating a full character from scratch, with high detail, and *without* retopo and baking is actually far from simple. Retopo doesn't just get in the way - it also makes the task manageable. I would bet that many artists who wish they could skip retopo/baking never actually attempted a full, raw, highly detailed character (which needs to be UVed anyway).
Now, of course retopo is tedious. But at least the resulting asset can be handled/painted in real time.
Overall, improvement in tech and processing power has *never* resulted in faster overall production times. And while some studios will keep chasing higher and higher fidelity, Nintendo will still be laughing their way to the bank, thanks to successful games produced very efficiently under strict hardware constraints...
This is what brain waves - interpreted and digitized by AI into images, and then into 3D - will take care of. Then AI will take care of rigging, animation, and textures. Checkmate, artsy fartsies: art is finally democratized, decentralized, and no longer gatekept.
Retopology does make the process easier, but it largely exists due to processing-power limitations. When AI advances processing power enough, those limitations will be done away with, or reduced so much that you could work with a sculpt directly in-game. I doubt we'll be seeing 100-million-polygon meshes in a game anytime soon, but characters hitting a million or a few million polys? I could see that happening.
I think this would change the workflow to something like we see in ZBrush. Texturing, for example, would involve a lot more vertex painting, because it takes advantage of the high vertex count; and for areas where vertex painting just can't do the trick, texture painting could still be used. You could handle UVs similarly to ZBrush, with a polygroup-like feature where you draw the groups onto the model and then unwrap it. Baking, I'd imagine, would be largely irrelevant, since most people only use it to transfer detail from the sculpt to the low-poly. I'm not a rigging guy and don't quite understand weight painting that much, but I'd imagine that would benefit too, if you could use vertices for weight painting.
IMO character artists will benefit from this - or maybe that's just me looking at this through an optimistic lens, unlike a lot of other people who get doom-and-gloom syndrome over every subject surrounding AI.
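For scale on the vertex-painting idea above, a rough back-of-the-envelope - assumed poly counts, order-of-magnitude only - comparing how much color detail per-vertex paint carries versus a texture:

```python
# How many color samples does polypaint give you vs. a texture?
# Rough numbers only; real meshes, compression and mips complicate this.

def vertex_paint_samples(poly_count):
    # One color per vertex; on a typical closed tri mesh, verts ~= polys / 2.
    return poly_count // 2

def texture_samples(resolution):
    # One color per texel.
    return resolution * resolution

sculpt_polys = 4_000_000                   # "a few million polys", as above
print(vertex_paint_samples(sculpt_polys))  # ~2,000,000 color samples
print(texture_samples(2048))               # one 2K map: ~4,194,304 texels
# A multi-million-poly sculpt carries about half the color detail of a
# single 2K texture, and the samples sit wherever the topology happens
# to be -- part of why polypaint alone has historically struggled up close.
```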
We've got fairly robust legislation governing copyright law; however, big tech still seeks to introduce "the necessary flexibilities" to support the development of AI.
"I think this would change the workflow to something like we see in Zbrush. Texturing for example would involve a lot more vertex painting because it takes advantage of the high vertex count and for areas where vertex painting just can't do the trick, texture painting could still be used."
By all means, try it - and you'll see. It's really not as straightforward as you might think it is.
Vertex painting/polypainting on sculpts has been available for about 15 years now on the asset creation side of things - yet it is barely ever used for game asset creation (even if we take retopo out of the equation). I am not going to go into the reasons why that's the case (as this goes far beyond the scope of this thread), but as said by all means give it a try. For instance you could tackle a full character that way (sculpt + hardsurface modeling + only vertex color painting), then get it converted to a proper game asset (using automatic retopo for instance, to eliminate that step for the sake of the argument).
This has been attempted before of course, and for prototyping it absolutely works ; but when attempting to ramp it up to real assets that need to hold up in front of the camera, it just breaks in all kinds of ways.
Now whether of not generative 3D AI could bridge that gap, who knows. But if it does, and if developed in similarly unethical ways to that of the current generative 2D AI ... the end result would not be an improved workflow, but rather a straight up erasure of 3D modeler as a skill
"I think this would change the workflow to something like we see in Zbrush. Texturing for example would involve a lot more vertex painting because it takes advantage of the high vertex count and for areas where vertex painting just can't do the trick, texture painting could still be used."
By all means, try it - and you'll see. It's really not as straightforward as you might think it is.
Vertex painting/polypainting on sculpts has been available for about 15 years now on the asset creation side of things - yet it is barely ever used for game asset creation (even if we take retopo out of the equation). I am not going to go into the reasons why that's the case (as this goes far beyond the scope of this thread), but as said by all means give it a try. For instance you could tackle a full character that way (sculpt + hardsurface modeling + only vertex color painting), then get it converted to a proper game asset (using automatic retopo for instance, to eliminate that step for the sake of the argument).
This has been attempted before of course, and for prototyping it absolutely works ; but when attempting to ramp it up to real assets that need to hold up in front of the camera, it just breaks in all kinds of ways.
Now whether of not generative 3D AI could bridge that gap, who knows. But if it does, and if developed in similarly unethical ways to that of the current generative 2D AI ... the end result would not be an improved workflow, but rather a straight up erasure of 3D modeler as a skill
Uhh, I think you missed my point entirely. Of course such a method can't work today, because computer processing power is simply too weak. My point is that when processing power improves enough (i.e. you can run a sculpt with a few million polys at 60 FPS), retopology will largely be gone. Our current workflow works the way it does because that's what our technology limits us to. Improved processing power will push those limits and affect everything, not just modeling, which will lead to new workflows.
Now, I'm not a lawyer, but from my understanding the ethics of the source currently do not matter. The copyright office allows people to legally claim AI art as theirs if they modify it enough from whatever the prompt generator spits out. That would mean the copyright office is only taking into account the end result of the work, not the source it was generated from. Essentially, I think the copyright office is saying there is nothing they can do to disqualify something based on how it was generated, and I'd wager that's largely because this kind of thing is hard to prove unless the work is very obviously ripped off of someone else's stuff.
Even with a legal ruling in place that declares AI-generated works officially unethical and illegal, what's going to stop people from continuing to do it anyway? More to the point, how would you prove something was AI-generated? I think the cold truth is that we're going to have to learn to live with generative AI being used unethically, regardless of what legal ruling gets made in our favor.
That's not what I mean. What I was trying to explain is that the tools to create source assets in the way you describe (as painted sculpts) have existed for 15 years. Computers have been plenty powerful enough for it for a long, long while - your computer can handle it already. (I am not talking about the game version here, but specifically the authoring of the source asset.) I've done it myself years ago for the high-res source of game models (later to be baked down, of course), because in that specific case the final context (far camera, and a lot of textured-in information) made it a fitting approach. Back then only ZBrush had the backbone to handle it; now Blender can do it even faster, with materials an order of magnitude more realistic.
But sources for game assets are generally not created that way, even though they *could be*, today. And the reason isn't that game engines are not powerful enough to use such assets directly. The reason is that such polygon-soup assets simply don't hold up up close (+ a few other reasons). That's why I am suggesting that you try and do it yourself, on a full, animated character. You'll soon realize that even if Nanite or similar tech automagically allowed such polygon-soup models to be used in games, they would likely still not be created that way... because it simply doesn't look good beyond renders with very controlled lighting. And furthermore, this throws a wrench into the animation pipeline, which *does* require cleanly built topology. That's not going to change anytime soon.
I understand that this isn't intuitive or straightforward to grasp. I've met very clever programmers 10 years ago who were 100% certain that "normalmaps will soon disappear" because "everything will just use displacement soon". The assumption you are making is somewhat similar, and comes from a misunderstanding of how and why game assets are built the way they are, for both visual and technical reasons.
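As a back-of-the-envelope for why that prediction never came true - assumed numbers, uncompressed, order-of-magnitude only:

```python
# Cost of carrying fine surface detail as a normal map vs. as real
# displaced geometry, at matching detail. Rough arithmetic only.

def normalmap_cost_mb(res):
    # RGB8 normal map: 3 bytes per texel, uncompressed.
    return res * res * 3 / 1e6

def displacement_vert_count(res):
    # Displacing at per-texel detail needs roughly one vertex per texel.
    return res * res

res = 2048
print(f"normal map: ~{normalmap_cost_mb(res):.0f} MB, no extra vertices")
print(f"displacement at matching detail: ~{displacement_vert_count(res) / 1e6:.1f}M verts")
# ~13 MB of texture vs. ~4.2 million vertices to represent the same
# detail -- which is why normalmaps stuck around, even offline.
```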
-----
I'm not going over the rest (the legality/ethics of regurgitative AI generation, and whether or not companies working on generative AI tools will still be able to obfuscate their source datasets in the future), as this has been well covered already in countless places. I am simply saying that if AI-generated 3D content becomes a thing in the near future (even if ethically sourced), modelers will not have the luxury of enjoying it as a "tool to improve their workflow" for very long, as it would directly lead to them not being needed anymore.
In that sense I tend to agree with the main topic of the thread ("and we are done, gentlemen"). If anything, if generative 3D AI ends up growing the way 2D did, the only jobs left for modelers would be... cleanup work/retopo, ironically enough.
Indeed - topology, UVs and such are just too damn effective. Good topology makes UVs (and many other production steps after) easier; good UVs make texturing easier. Even with 3D painting today, a good clean line on a model is just nicer to work with in 2D space. Normalmaps are a good example: even in movies, where you could have the processing power to do it all with displacement, bumpmaps and normalmaps are still widely in use, while revolutionary texturing approaches such as Ptex are not widely used at all.
Besides the raw processing power, it would take a whole lot of different tools and workflows to really wipe away how efficient these tools are.
My guess is, though, that 3D as we know it will not be the norm forever. Once you can input a bunch of images and every pixel can be generated on the fly, you won't need meshes anymore. There are some quite impressive 3D camera moves in scenes made from a handful of images, and this will eventually also happen with animation in these scenes. I guess 3D as we know it might just be another medium, and whatever comes next might be a whole lot different from everything we are used to.
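The "scenes from a handful of images" techniques being alluded to here are the NeRF-style learned fields. Heavily simplified, rendering one looks like the sketch below - `field` is a placeholder for a trained network fit to the input photos, not a real API:

```python
import math

def render_pixel(ray_origin, ray_dir, field, steps=64, step_size=0.1):
    # March a ray through a learned radiance field: at each sample, ask
    # the network for (color, density) and composite front-to-back.
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for i in range(steps):
        point = [o + d * i * step_size for o, d in zip(ray_origin, ray_dir)]
        rgb, density = field(point, ray_dir)    # placeholder trained network
        alpha = 1.0 - math.exp(-density * step_size)
        for c in range(3):
            color[c] += transmittance * alpha * rgb[c]
        transmittance *= 1.0 - alpha
    return color

# Toy stand-in for the trained network: a soft glowing ball at the origin.
def toy_field(point, view_dir):
    d2 = sum(p * p for p in point)
    return (1.0, 0.5, 0.2), 4.0 * math.exp(-d2)

print(render_pixel([0.0, 0.0, -3.0], [0.0, 0.0, 1.0], toy_field))
# Note what's absent: no triangles, no UVs, no topology -- every pixel is
# generated on the fly by querying the learned scene.
```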
One silver lining in all this is that, IMHO, studios will eventually realize that this push for "always more" (with or without AI) is just not sustainable. I think we are already at the limit of what is healthy resource-wise with the stylized/Fortnite look (and I'd argue that even that is going a bit overboard already); and from there, when there is no limit to the level of detail or the amount of content, there is no limit to budget either. I am actually starting to find high-end production value in games (like the latest Horizon game, or anything recent with realistic characters and settings; or even visually dense stylized games) nearly nauseating to look at - probably because I can see the absurd number of hours that went into every single asset.
It's mind-blowing, really, as the alternative is right there in plain sight: Minecraft, Roblox and now BattleBit are insanely successful. IMHO it would be very logical for more studios to fully embrace proactively limited art styles and lean content creation. As a matter of fact, it seems to be starting to happen outside of the indie world already (Boltgun comes to mind), and that really is a breath of fresh air in these times of AI-generated vomit.
[edit]: I just found out that a sequel to Ion Fury (the Duke-like retro FPS) is coming, and it is using an art style very rarely seen nowadays - 2000-2005 gritty hand-painted textures - and it looks incredibly authentic. I often wonder what games would look like today if high-end baking had never happened, and this is a really good example of that.
https://www.youtube.com/watch?v=K37l3-La-Go
I find this quite relevant to the topic of this thread, as it seems like a very healthy alternative to the "let's do AI!" and "let's Nanite everything!" trends.
"My guess is, though, that 3D as we know it will not be the norm forever. Once you can input a bunch of images and every pixel can be generated on the fly, you won't need meshes anymore. [...]"
This is where a lot of my anxieties (concerning AI + art) lie. While we're not there yet, it also isn't some far-fetched fantasy to imagine that this could be a possibility.
And a similar idea applies to topology, UVs and workflow: these systems were all developed primarily from the standpoint of human authorship, right?
If that were the case... then all that stuff could get tossed out the window if it's all authored via GAN.
@pior PS1-era graphics = the new 8-bit.
How wild is it to think that some time in our dystopian future, today's AAA hyperrealism will be a "low-res" aesthetic style.
Well, I genuinely think that fantastic games can be made by cleverly embracing these limitations.
I mean, how fun must it have been to work on Boltgun or that Ion Fury sequel! And I can't help but imagine, by contrast, the dreadful atmosphere in teams where gEnErAtIvE aRtIfiCiAl iNtElLiGeNcE has been embraced. No art knowledge shared between team members, no feeling of accomplishment... just nothing.
I just started a new playthrough of MGS Peace Walker recently, and it really hit me - despite its low-tech simplicity, this game is so incredibly pleasurable to play, and so beautifully crisp and readable. Even the raw, simple puppeteered cutscenes contribute to the suspension of disbelief.
I really wish more visually similar games (bare-bones simple tech, but crisp visuals) were made today. It benefits not only budget but also gameplay, since less feature creep gets in the way.
- - - - -
15+ years ago or so, some Polycount members were having fun doing HQ 3D remakes of Joust character sprites to the standards of that time (2000-2005-ish game art), with the hope of developing a sequel. Improvement in tech flew right by it; but now, in a weird way, I find that this careful crafting of tight, low-tech game art could be more relevant than ever. So perhaps not all hope is lost after all.
I think readability is an important thing that gets lost as graphics keep getting higher-fidelity. There are only so many pixels on the screen; it's not the same as real life. There has to be some empty space.
I'm probably in the minority, but I've found it stressful on the eyes to play some of the modern big AAA shooters lately, for example. Just too much stuff, too much detail. I end up turning the screen resolution down to like 75% and feel like it looks better.
I much prefer simpler graphics, like the game pior posted above. And it's a pleasure to work in a faster, more efficient way too - not having every little detail in the project be a big, huge technical question.
You seem to be approaching this from the standpoint of "but that ain't how it works today, so you're wrong", which is faulty, and it doesn't respond to the point I was making. It doesn't really matter how games are made today. I'm talking about new technology coming out tomorrow and beyond that will change how games are developed, possibly in a major way. What is so hard for you to accept or comprehend about that?
"But sources for game assets are generally not created that way, even though they *could be*, today. [...] And furthermore, this throws a wrench into the animation pipeline, which *does* require cleanly built topology. That's not going to change anytime soon."
"I understand that this isn't intuitive or straightforward to grasp. [...] The assumption you are making is somewhat similar, and comes from a misunderstanding of how and why game assets are built the way they are, for both visual and technical reasons."
Both paragraphs are wrong from top to bottom. Your whole response is built on the false assumption that I don't know what character modeling is or how games work.
"That's not going to change anytime soon."
Many 2D artists had a similar mindset - that technology in the form of AI would not be able to replace what they do - until they were proven wrong when AI showed it can do art at their level. NVIDIA, for example, is showcasing groundbreaking changes in game development as it relates to 3D/graphics technology - perhaps not on the modeling front yet, as far as I can see, but it will get there.
All I'm saying is, for all we know, in a few years we COULD have new groundbreaking technology that completely changes how games are developed. Dunno if that meets your definition of "soon", but don't be surprised if you get proven wrong.
Bit off-topic, but I think the focus on art AI is pointless. Senior artist skills are already way above and beyond what AI is capable of at the moment, due to technical reasons as well as iterative art creation, and I do think we're sort of reaching a plateau when it comes to art fidelity. What we should really be looking at is using AI to assist decision-making; judging by the Diablo 4 subreddit, they could've had ChatGPT come up with better patch decisions, while their art department is literally hard-carrying the company.
Imagine dedicating your entire existence to being an art professional, just to realize that your higher-ups don't GAF and you get a nice fat juicy 2.9 (and still dropping, as of now) on Metacritic. Or imagine making assets for Overwatch 2, only to have it canceled - and since your art is still under NDA, you have nothing to show for the years of hard work. At the higher end, this industry is a clown act and a half due to poor decision-making.
While we're here busting our balls nitpicking every facet of good topology flow, proportions, animation, materials and lighting, the executive class is lazy and incompetent - and we should be replacing them, instead of them replacing us.
"While we're here busting our balls nitpicking every facet of good topology flow, proportions, animation, materials and lighting, the executive class is lazy and incompetent - and we should be replacing them, instead of them replacing us."
It was always like that. They focus on ownership, money flow, and publicity - and they get it. When you focus on little niche things, you get a small niche presence.
It's getting more obvious that authorship and uniqueness are the only things that matter and get results. Producing a bunch of generic, current-day 3D models that fit particular technical specifications gets anyone nowhere. It's not even working for the big studios now.
Replies
"If you're building little knick-knacks to go in snow globes, then fear for your job! If not, chill out."
I LOVE doing those :-))!
I guess I need to specialize in something AI-proof.
Do people really consume Daz3D porn? Bonkers!
This is the largest 3D dataset.
It's very shadily scraped from Sketchfab and anything free they could find, so very sketchy.
https://www.washingtonpost.com/technology/2023/07/16/ai-programs-training-lawsuits-fair-use/
"I think this would change the workflow to something like we see in Zbrush. Texturing for example would involve a lot more vertex painting because it takes advantage of the high vertex count and for areas where vertex painting just can't do the trick, texture painting could still be used."
By all means, try it - and you'll see.
It's really not as straightforward as you might think it is.
Vertex painting/polypainting on sculpts has been available for about 15 years now on the asset creation side of things - yet it is barely ever used for game asset creation (even if we take retopo out of the equation). I am not going to go into the reasons why that's the case (as this goes far beyond the scope of this thread), but as said by all means give it a try. For instance you could tackle a full character that way (sculpt + hardsurface modeling + only vertex color painting), then get it converted to a proper game asset (using automatic retopo for instance, to eliminate that step for the sake of the argument).
This has been attempted before of course, and for prototyping it absolutely works ; but when attempting to ramp it up to real assets that need to hold up in front of the camera, it just breaks in all kinds of ways.
Now whether of not generative 3D AI could bridge that gap, who knows. But if it does, and if developed in similarly unethical ways to that of the current generative 2D AI ... the end result would not be an improved workflow, but rather a straight up erasure of 3D modeler as a skill
Uhh I think you missed my point entirely. Of course such a method can't work today because computer processing power is simply too weak. My point is that when processing power gets improved enough (i.e. you can run a sculpt with a few million polys at 60 FPS), retopology will largely be gone. Our current workflow works the way it does because that's what our technology is limited to. With improved processing power, that will push the limits and affect everything, not just modeling, which will lead to new workflows.
Now I'm not a lawyer, but from my understanding the source of ethics here currently does not matter. The copyright office allows people to legally claim AI art as theirs if they modify it enough from whatever the prompt generator spits out. That would mean the copyright office is only taking into account ethics for the end result of that work, not the source from how it was generated. Essentially I think the copyright office is saying there is nothing that they can do disqualify something based on how it was generated and I'd wager its largely because its hard to prove this kind of thing unless said work is very easily proven to be ripped off of someone elses stuff.
Even with a legal ruling in place that declares AI generated works as officially being unethical and illegal, whats going to stop people from continuing to do it anyways? More so on that front, how you would prove something was AI generated? I think the cold truth is we're going to have to learn to live with generative AI being used unethically regardless of what legal ruling gets made in our favor.
But, sources for game assets are generally not created that way, even though they *could be*, today. And the reason why the creation of sources assets isn't done that way isn't because game engines are not powerful enough to use such assets directly. The reason is that such polygon soup assets simply don't hold up up close (+ a few other reasons). That's why I am suggesting that you try and do it yourself, on a full, animated character. You'll soon realize that even if nanite or similar tech automagically allowed such polygon-soup models to be used in games, they would likely still not be created that way ... because it simply doesn't look good beyond renders with very controlled lighting. And furthermore this throws a wrench in the animation pipeline which *does* require cleanly built topology. That's not going to change anytime soon.
I understand that this isn't intuitive or straightforward to understand. I've actually met very clever programmers 10 years ago who where 100% certain that "normalmaps will soon disappear" because "everything will just use displacement soon". The assumption you are making is somewhat similar, and comes from a misunderstanding on how and why game assets are built the way they are, for both visual and technical reasons.
-----
Not going over the rest (legal/ethics of regurgitative AI generation, and whether or not companies working on generative AI tools will still be able to obfuscate their source datasets in the future), as this has been well covered already in countless places. I am simply saying that if AI-generated 3D content becomes a thing in the near future (even if ethically sourced), modelers will not have the luxury to enjoy it as a "tool to improve their workflow" for very long, as it would directly lead to them being not needed anymore.
In that sense I tend to agree with the main topic of the thread ("and we are done gentlemen"). If anything, if generative 3D AI ends up growing similarly to 2D, the only jobs left for modelers would be ... clean up work/retopo, ironically enough.
It's mindblowing really, as the alternative is right there in plain sight : Minecraft, Roblox and now Battlebit are insanely successful. IMHO It would be very logical for more studios to fully embrace proactively limited art styles and lean content creation. As a matter of fact it seems like it is already starting to happen outside of the indie world (BoltGun comes to mind) and that really is a breath of fresh air in these times of AI-generated vomit.
[edit] : I just found out that a sequel to Iron Fury (the Duke-like retro FPS) is coming, and it is using a very rarely used artstyle nowadays : 2000/2005 gritty handpainted textures, and it looks incredibly authentic. I often wonder what games would look like today if high end baking never happened, and this is a really good example of that.
https://www.youtube.com/watch?v=K37l3-La-Go
I find that this is quite relevant to the topic of this thread, as this to me seems like a very healthy alternative to the "let's do AI !" and "let's Nanite everything !" trends.
This is where a lot my anxieties (concerning ai+art) are. While we're not there yet, it also isn't some far-fetched fantasy to imagine that that could be a possibility.
And to a similar idea regarding topology, UVs and workflow:
These systems were all developed primarily from the standpoint of human authorship, right?
How wild is it to think that some time in our dystopian future, today's AAA hyperrealism will be a "low-res" aesthetic style
I mean, how fun must it have been to work on BoltGun or that Iron Fury sequel ! And I can't help but imagine the dreadful atmosphere in teams where gEnErAtIvE aRtIfiCiAl iNtElLiGeNcE has been embraced, by contrast. No possible art knowledge sharing between team members, no feeling of accomplishment ... just nothing.
I just started a new playthrough of MGS Peace Walker recently and it really hit me - despite its lowtech simplicity, this game is so incredibly pleasurable to play and so beautifully crisp and readable. Even the raw and simple pupeteered cutscenes contribute to the suspension of disbelief.
I really wish for more visually similar games (barebones simple tech, but crisp visuals) to be made today. If anything it only benefits budget but also gameplay, since less feature creep gets in the way.
- - - - -
15+ years ago or so, some Polycount members were having fun doing HQ 3D remakes of Joust character sprites by the standard of that time (2000-2005 ish game art) with the hope of developing a sequel. Improvement in tech flew by, but not in a weird way I find that this careful crafting of tight, low-tech game art could be more relevant than ever. So perhaps not all hope is lost after all
You seem to be approaching this from the standpoint of 'but that ain't how it works today so your wrong', which is faulty nor is it respond to the point I was making. It doesn't really matter how games are made today. I'm talking new technology coming out tomorrow and beyond that will change how games are developed, possibly in a major way. What is so hard for you to accept or comprehend about that?
I understand that this isn't intuitive or straightforward to understand. I've actually met very clever programmers 10 years ago who where 100% certain that "normalmaps will soon disappear" because "everything will just use displacement soon". The assumption you are making is somewhat similar, and comes from a misunderstanding on how and why game assets are built the way they are, for both visual and technical reasons.
Both paragraphs are wrong from top to bottom. Your whole response is built on the false assumption that you think I don't know what character modeling is or how games work.
Many 2D artists also had a similar mindset about technology in the form of AI not being able to replace what they do, until they got proven wrong when the AI proved it can do art to their level. NVIDIA for example is showcasing groundbreaking changes in game development as it relates to 3D/graphics technology, perhaps not on the modeling front yet as far as I can see but it will get there.
All I'm saying is, for all we know in a few years we COULD have new groundbreaking technology that completely changes how games are developed. Dunno if that meets your approval of "soon", but don't be surprised if you get proven wrong.
Imagine dedicating your entire existence to being an art professional, just to realize that your higher ups don't GAF and you get a nice fat juicy 2.9 (and still dropping, as of now) on Metacritic. Or imagine making assets for Overwatch 2, only to have it be canceled but since your art is still under NDA you have nothing to show for the years of hard work. This industry at the higher end is a clown act and a half due to poor decision making.
While we're here busting our balls and nitpicking every facet of good topology flow, proportions, animation, materials or lighting, the executive class is lazy and incompetent, and we should be replacing them instead of them, us.
It's getting more obvious that authorship and uniqueness are the only things that are important and get results. Producing a bunch of generic current day 3d models that fit particular technical specifications is getting anyone nowhere. It's not even working for big studious now.