I'll be honest, I messed up in a huge way by completely forgetting that this is meant to be a dungeon entrance, but I still had a really great time! I tried out trim sheets, decals, foliage, and Substance for the first time, and learned a whole bunch.
OK, so I'm seeing that the overall sentiment is that we want to discourage/forbid AI-generated imagery here on Polycount. I personally agree with that; it fits my own preference to avoid promoting it. I think this makes sense: it doesn't belong here. People can share that stuff elsewhere if they want, on DeviantArt or Instagram or wherever.
Democratization of tools (or skill) would mean giving underserved demographics, who otherwise couldn't afford it, access to those tools and that knowledge, along with the economic security to practice that skill.
This is not what ML is doing. That's just a bullshit argument and techbros know it. But at this point most things they say are bullshit (AGI right around the corner, anyone? 😂), so that's not surprising.
What they're doing is "socializing" the labour of a class that's already vulnerable to economic downturns: artists. You study, practice for years, and build a portfolio, and they scrape it to avail themselves of all the labour that went into it, by force and without compensation. It's a parasitic approach that goes beyond the question of copyright infringement: as an artist, you're the one creating the data being used (otherwise they could just use blank pages, right?), the very foundation of their business, yet you get no choice in the matter and no compensation for the trouble. You're working for them for free. For staunch capitalists, they sure love some corporate welfare.
The end result is further concentration of capital in the hands of obscenely rich corporations and their boards, at the expense of workers. But that isn't as good a soundbite as "democratization of tools", and it's easy to ride the wave of AI applications that actually try to help workers (e.g. better denoisers) instead of admitting that gen AI is nothing but a way to further exploit, then replace, them.
Even when a decision is made, it doesn't need to be final. It should be perfectly fine to revise the rules if they're found lacking: for example, being more permissive in some aspects for now, then adopting a stricter posture if too much slop makes its way into the forums.
I feel a good approach is to orient those decisions around three key aspects, and these are my thoughts on them:
- Transparency: Full disclosure whenever ML tools are used, including how they were used.
- Consent: Tools should only be used on others' work if they consent to it, and the best way to secure that consent is to ask.
That includes work by people in this forum, but also work you're modifying to use as a concept, because, wth, the artist doesn't stop mattering just because they're not present to see what's being done to their work. It's a shitty move to do this out of their sight so they can't defend their work.
And because this is a corner case for now, it makes no sense to force people to preemptively disclose whether they're okay with it or not, or worse, to make it opt-out. At this point, imposing that obligation shifts the effort of securing consent away from the few people who need it and onto the entire community. If the situation changes and using such tools on other people's work becomes common enough, then something like a field on the profile would work.
- Usage context: What's being done with it, and the provenance of the training data, matters. Using it as a true tool, aiding in technical tasks like retopo or something, is very different from using it to replace a person in the pipeline. Using it to aid in writing your critique because you struggle with English as a second language and want to improve the wording is different from requesting a critique from it.
Basically, is it being used as an aid or as a content generator? Using it as an aid isn't necessarily the best course of action, but I feel it's still okay in most cases. Using it to generate content is lazy and runs counter to the point of this place, which is about learning to create and getting better at it. If someone can't be assed to do that, then what are they doing here? So generated works are definitely out, generated textures are somewhere between icky and out (if the texture work was the point of the piece or post and you outsourced it to a machine, then why?), and so on.
Call it ideological rather than rational, but... I would always rather have a human's feedback than an AI's. I mean... I don't know about anyone else, but when I create, it's humanity I'm chasing: trying to make things that say something to humans, resonate with humans, feel human. I don't see how a brainless machine could possibly be a better judge than a human of what's going to work for a human.
Can it articulate something like "your character's design appears at odds with the way you've posed them: they look like they should be cheerful and nice, but you've made them pose like a thug", or "this environment just feels sterile because everything is similarly slick and smooth, even in areas that should be rougher", or "while the clutter in this scene does serve to make it look lived in, it's drawing the eye away from the intended focus"?
It's not really content-aware, so I don't see how it could give actually useful feedback; it would be like me critiquing food without eating it. And pior's definitely right: crude annotations are perfectly adequate. They're quick, efficient, and get the point across. Using AI this way feels to me, like most uses of this tech, like a "solution" desperately searching for a "problem". Not to mention the energy waste.
And that aside, I just... really hate AI creeping into everything this way. There seems to be this push by AI proponents for us to just outsource our critical thinking to it. It's gross.