Even when a decision is made, it doesn't need to be final. It should be perfectly fine to revise rules if they're found lacking, for example, starting out more permissive in some aspects and then adopting a stricter posture if too much slop makes its way into the forums.
I feel a good approach is to orient those decisions around three key aspects, and these are my thoughts on them:
- Transparency: Full disclosure whenever ML tools are used and how they were used.
- Consent: Tools should only be used on others' work if they consent to it, and the best way to secure that consent is to ask.
That includes work by people in this forum, but also modifying work you're using as a concept, because wth, the artists don't stop mattering just because they're not around to see what is being done to their work. It's a shitty move to do this out of their sight, where they can't defend their work.
And because this is a corner case for now, it makes no sense to force people to preemptively disclose whether they're okay with it or not, or worse, make it opt-out. At this point, imposing that obligation shifts the effort of securing consent away from the few people who need it and onto the entire community. If the situation changes and using such tools on other people's work becomes common enough, then something like a field on the profile would work.
- Usage context: What's being done with it and the provenance of the training data both matter. Using it as a true tool, aiding in technical tasks like retopo or something, is very different from using it to replace a person in the pipeline. Using it to aid in writing your critique because you struggle with English as a second language and want to improve the wording is different from requesting a critique from it.
Basically, is it being used as an aid or as a content generator? Using it as an aid isn't necessarily the best course of action, but I feel it's still okay in most cases. Using it to generate content is lazy and runs counter to the point of this place, which is learning to create and getting better at it. If someone can't be assed to do that, then what are they doing here? So generated works are definitely out, generated textures are somewhere between icky and out (if the texture work was the point of the piece or post and you outsourced it to a machine, then why?), and so on.
Call it ideological rather than rational, but... I would always rather have a human's feedback than an AI's. I mean... I don't know about anyone else, but when I create, it's humanity I'm chasing. Trying to make things that say something to humans, resonate with humans, feel human. I don't see how a brainless machine could possibly be a better judge of what's going to work for a human than a human can.
Can it articulate something like "Your character's design appears at odds with the way you've posed them; they look like they should be cheerful and nice, but you've made them pose like a thug", or "this environment just feels sterile because everything is similarly slick and smooth even in areas that should be rougher", or "while the clutter in this scene does serve to make it look lived in, it's drawing the eye away from the intended focus", or such?
It's not really content-aware, so I don't see how it could give actually useful feedback. It would be like me critiquing food without eating it. And pior's definitely right: crude annotations are perfectly adequate. They're quick, efficient, and get the point across. Using AI this way feels to me, like most uses of this tech, like a "solution" desperately searching for a "problem". Not to mention the energy waste.
And that aside, I just... really hate the creeping of AI into everything this way. There seems to be this push by AI proponents that we should just outsource our critical thinking to it. It's gross.
Personally I prefer to do small, non-silhouette-affecting stuff with textures. The amount of work involved in making changes is significantly lower and it's generally more flexible.