To be fair, there's a lot of self-advertising going on at Polycount. It's kind of inherent in any community of professionals and hobbyists. So I personally don't have any issue with people promoting their projects/portfolios/software.
Me neither, as long as it's done transparently. The way it was done here was not as hamfisted as those "Hey guys, check out this cool website I found" posts that crop up now and then, but it's not that far removed from it either, in my opinion. (Always either a way too big screenshot showing the full interface with the logo beyond the image comparison, or a direct link, but no disclosure that this is his tool.) Actually, that was the thing that stood out to me initially, and I opted to ignore the can of worms that is AI at first.
The data sources for initial training are as ethical as it's realistic for any company to establish at this point.
Sorry to be cynical, but that unfortunately isn't saying much. But thanks a lot for your clarification regarding the processing. That sounds good to me, at least in theory, and does alleviate some of my concerns.
So just to clarify: Are you against disclosing when you use AI to generate an overpaint or whatever we want to call it? Do you find that to be detrimental or would it keep you from giving feedback?
Polycount exists to offer that kind of feedback. I'm here to get helpful information across. If we start valuing how impressive someone appears doing it over how useful their insight is, then I don’t believe we’ve earned the right to say that we care about artists.
First of all, then why not simply disclose when AI was used, and secondly, I don't think one was pitted against the other, at least I didn't. I just pointed out the obvious fact that if someone invests a lot of time to help us, we tend to value that, even if it doesn't change the feedback itself.
@Eric Chadwick Well, let's say I've had this very topic on my mind for a little while now. Especially in the days after a group of elected officials from a nearby town granted an art prize to a scammer spitting out (very, very obvious) AI art by the truckload, with no one seeming to be aware of it, or even to have any suspicions about it. Peak cultural stagnation.
Also, kudos to d1ver for giving us the details we need to gather our own views on the subject. This is the kind of detail we need, up front, about how AI is used for creating feedback in an artist-driven community.
Some artists will love incorporating AI tooling in their work, while others will not, and that's OK in my mind. You do you. In my opinion, it's really difficult to use the current crop of AI tools because the data provenance is so fraught. But the power of the toolset is pretty clear, if we can resolve this disparity, and as long as we recognize the limitations rather than be taken in by the ease of instant-pretty-pictures.
I hear this concern a lot, and I think it comes from a real place — fear that creativity is being replaced by automation, that we’re trading depth for shortcuts. But in my experience, the opposite is happening.
Most artists never get enough reps on the true fundamentals — light, color, composition — because implementation is so painfully slow. It takes hours, days, sometimes weeks just to see if a visual idea even works. The ratio of learning to labor is wildly unbalanced. In practice, most creatives don’t actually get to exercise their eye. Just their tools.
And if you scratch the surface of most art communities — Polycount included — you’ll find the ratio of troubleshooting and technical talk far outweighs conversations about shapes, light, color, or composition.
I want to see artists making 100 creative decisions a day instead of 5. I want to see them talk about and develop their taste all day, expand their visual libraries, and spend their time thinking about the projects that inspire them, the stories they want to tell — not the latest UV tricks or Blender plugins.
Not just because that’s how you build the real creative muscle — but because that’s how we create real job security for everyone who ever wanted to be creative in the ever-evolving tech landscape.
And honestly, I think we should give artists more credit. I don’t believe that if they have an easier time than we did, they’ll suddenly stop giving a shit or become mindless robots. I think they’ll show us old farts up — and make much more of their careers than we ever dreamed of. I can't wait.
Noren said: So just to clarify: Are you against disclosing when you use AI to generate an overpaint or whatever we want to call it? Do you find that to be detrimental or would it keep you from giving feedback?
It wouldn't prevent me from giving feedback. And I think that if I'm uploading artwork (not feedback) and asking for recognition and adoration, I absolutely have to disclose how it was produced - no argument there.
However, I guess the relevant bit of context here is that I grew up in a dictatorship and have literally used video game art to fight back against infringements of civil rights. And got a criminal indictment from the government for inciting sedition, and a life of exile, as a result. (fun story, more here: https://gdcvault.com/play/1027788/How-to-Make-Art-that)
So while your request does not prevent me from offering feedback, you have absolutely not established any credible grounds for policing my feedback or how I choose to provide it. We all take responsibility for creating safe spaces free from censorship, and the precedent and principle are too important for me to comply. At least until you can offer a better argument 😉
I think we all agree that feedback can be worth gold. This isn't a matter of allowing or disallowing it. One of the points people have raised is that running their work through a gen tool in the process of giving that feedback, without their consent or even a disclosure, is what bothers them, not the act of leaving feedback itself.
No amount of arguing changes the fact that when someone says they don't like something being done to them and the other party disregards it, it shows contempt. It may not be on purpose, and the person doing it may even be "right" - as right as a point of view can be -, but being right or wrong is not the issue here.
It's doing something to people that they expressly don't want. When they say no, it's not an invitation for debate. Whether it's for their own good or whatever is irrelevant; we all know how many things are done to people "for their own good". You can either respect someone's decision or not, there's no middle ground.
Comparing gen bashing to actual paintovers is moot, because it's quite safe to assume that in the current climate the majority of artists would not want their work run through a gen tool, while in the right context they're okay with actual paintovers. To not ask beforehand while knowing they'll likely say no is to deny people this choice. Again, even if unintentional, when you make choices for people it means their concerns and choices are treated as unworthy of consideration.
If the feedback is the goal, muddling it with what can be perceived as hostility detracts from it. How you deliver a critique can be as important as its content; it's no accident most art communities have some sort of guide on how to do it. So if helping is truly the point, please factor this into your approach to critiques.
About the "paintover" wording, I also find it misleading. It's a well-established term that implies drawing or alterations done by hand. Something like generative bashing, gen filter, gen editing or whatever would be more descriptive and transparent, solving two issues at once.
@d1ver: Ok you lost me, there. I haven't watched the video (I still might, but not soonish, as it is an hour), but I can't imagine how that compares or applies other than that you want to say that you have a strong reaction to anything you perceive as censorship. Maybe you can humor me and give a short summary of what I should take home from the video.
I don't think there is any censorship going on, here. If you think this qualifies as policing, then so be it, but I'm simply asking you to do something (or at least make a clear statement regarding it, which you now have done, sort of) that is entirely reasonable to expect and a common courtesy, at least from where I am standing. You acknowledge that people are deeply concerned about this topic, but you don't seem to be willing to accommodate them, something which would have defused the whole situation significantly and early on with no perceivable cost to you.
Regarding creating job security:
Well, obviously things will change, and that's not any one person's fault, but I don't think this will create job security for modelers, concept artists etc.; rather, it will level the playing field between artists and non-artists. While I like the thought of a tool that relies on the user bringing the composition skills etc., I'm kind of skeptical that it couldn't be employed by someone without them, or that someone wouldn't create a similar tool without those limitations. One could say a level playing field and a lower barrier to entry is a good thing, but I don't think it's reconcilable with job security.
Or if we take a tool like Promethean: How long until you not only use models created by artists, but can request a completely new model to be generated? If you don't do it, someone else will. And while I don't think it will happen tomorrow, I'm fairly sure it will happen.
So the creatives in this scenario are the level and game designers, and even they might be replaced one day. And while there still might be specialists for art, they'd likely be a lot fewer than there are today. Art directors without teams, so to speak. I can see the argument that this will lead to many fresh ideas, maybe a return to the early golden era of computer games, empowering small teams to achieve big things (to use some of that marketing language). Ultimately, creating a film or a computer game might be like writing a book. But how many authors can live off their work? There is a limited number of consumers with a limited amount of time out there, and the industry will likely shrink significantly, perhaps not to the levels of back then, but I'd be very surprised by any different outcome.
Edit: At the very least, some jobs we have today will see a lot less demand. I understand the sentiment of some of the more technical aspects of our work getting in the way of creativity, but for some, that's their job, and they like it*. Not everyone wants to tell a story, and creativity has many forms.
Edit 2: I think @Celosia put it very well.
Edit 3 (yeah, I'll stop now): *Something I suspect you are familiar with, looking at your former and current job.
It's also relinquishing your creative direction and input to algorithms. It's saying, "I, d1ver, former tech director of Naughty Dog, stand behind this generated image." You're putting your credibility behind something you ultimately can't control (and that is not AGI and does not understand context). It's why we game developers don't add generative AI to games: because then you get players getting Darth Vader to start cussing.
"I want to see artists making 100 creative decisions a day instead of 5. I want to see them talk about and develop their taste all day, expand their visual libraries, and spend their time thinking about the projects that inspire them, the stories they want to tell — not the latest UV tricks or Blender plugins."
There is definitely truth in how current-day game art and CG comes with *massive* inertia for iterations - and as you rightfully put it, this absolutely impacts how many creative decisions one gets to make per day, which in turn slows down progress and ultimately, creativity. This was actually the main motivator for me to pivot (or rather, re-pivot) towards chara design/concept art as a side personal thing and then as a main job (while keeping modeling for UGC as a source of passive income), because I witnessed first hand the singularity point at which modeling a character started to take more time than designing it; and a few years later the difference grew by more than an order of magnitude, which I find completely absurd. In this context, flexing one's creative muscles as a modeler is nearly impossible, and everyone ends up doing the exact same thing and all games look the same.
But attempting to solve this through a "10x" toolset of AI enhancing, just for the sake of being able to churn out something that will likely end up looking like yet another replication of the Fortnite style or [insert name of currently popular Chinese-developed fantasy gacha], is IMHO a downward spiral and ultimately not the most creative endeavour either (or at least I personally don't think it is). I think a way more satisfying process is to be found within the limitations of retro specs and highly stylized visuals in general, since hard limitations force creativity. And I'd personally much rather travel an open-world universe crafted in Jet Set Radio style with 32x32 textures than whatever the latest AI-enhanced workflow could (and undoubtedly will) churn out. But of course I understand that most of the industry as well as the public isn't interested in that.
Re. feedback in online spaces and PC: I believe it ties in with the above. A decade (or two?) ago, constructively critiquing a model was pretty much as interesting as critiquing an illustration. Not so much anymore. So, in a backwards way, providing feedback on things like UVs and baking is one small remaining area in which human-to-human interaction can be found online, with people being genuinely appreciative when they manage to solve their issues thanks to the help of others. I think that's cool. As a matter of fact, your comment just made me realize why I still enjoy doing it and find satisfaction in it after all these years. Now, discussing tech and addons ad nauseam is different, and something I personally don't see much point in, indeed.
As for actual art sharing and feedback: while I obviously miss the days of the incredibly rich online interactions that could be found in underground forums of the past, there are still ways to connect today. One simple way is to proactively share one's work with contacts by direct message or in group chats, as opposed to posting publicly online; as well as interacting with like-minded art enthusiasts IRL. The online side being dead isn't such a huge loss at the end of the day, as this is just one of many ways to connect.
Ok this is getting messy. Let me try and get through the noise:
I think everyone here is entitled to not having their data fed into any data collecting tools. Non-negotiable baseline.
I think everyone here is entitled to having their questions answered if they are worried that that may have happened.
Socially I'm usually strongly opposed to fear being the driving factor for decision making. But it is very important to me that artists feel safe. And at the end of the day saying "this paintover was generated with blah blah" (as long as the other 2 points are maintained) is not a problem. So I do take @Noren 's point.
Outside of that, the way polycount has dealt with the feedback we didn't like over all these years was to ignore it. The precedent that someone can say "I don't like the way you provide your feedback, so you are not allowed to post it/you need to ask my permission first", I believe would not be in the interest or the spirit of this place. Or at least in the form that I have experienced it over the last 20 years. But everything changes. Other than hopefully our shared commitment to empowering artists.
@Eric Chadwick let us know what it's going to be so we can put this matter to rest please.
Oh, by the way, I should also probably mention the following, as it is a bit of a personal journey: a few years ago (way before the emergence of slop machines like ChatGPT and Midjourney), developments in the field of machine learning made me daydream about possible ways of, for instance, sketching a rough shape and getting it shaded automatically by leveraging ML training, or getting a 3D primitive oriented according to a gesture, or various ways of overcoming the (still to this day) clunkiness of Wacom tablets, as all these things sounded like natural evolutions of our digital tools.
Fast forward to today: now that these challenges have been solved and AI software is widely available to all, I personally find such things utterly uninteresting, because the "solving" that AI does goes so far that it completely replaces any skill. So now I would gladly go back to manually placing primitives in 3D space in a 3D software if I needed to; or painstakingly constructing things by hand using old-ass perspective drawing techniques, or taking the time to build a mannequin to take photos of... not only because it's fun, but also because it does make one improve! Quite ironic at the end of the day.
"As for (actual) art feedback : while I (obviously) miss the days of the
incredibly rich online interactions that could be found in underground
forums of the past"
Quite apart from CG I might add that traditional places still exist, though not necessarily underground. Then again I'd suppose resources the messy paint/charcoal/mineral dust crowd seem too congregate toward for like minded discourse.
A few years ago posted this acrylic on canvas under-painting, over on Wet Canvas (Animal and Wildlife Subjects) and as I remember received a bit of informative critique which was nice to receive from ones peers.
Edit:
I'll try to dig up my login details and link that thread.
That's my feeling on the use of AI in posts here. Personally, I dislike most AI-generated content. I find it mushy, inconsistent, and often tacky. But I do see the benefit of incorporating it into thoughtfully curated content; it can help speed up the process of generating new content, but this requires a heavy hand in editing.
As a solo indie dev I did a fair amount of research into generative AI, as the technology would absolutely level the playing field against triple-A companies. A lot of early concepts for Project Nova were made using early builds of Stable Diffusion, which I've since removed from my sketchbook.
I came to the realization of "Why am I using generated art for reference when I could just download someone else's work to use for reference?" Like, why the heck am I using this for ref, when I can use this: https://www.artstation.com/artwork/NGd6X5
Great art already exists. Tools for asset creation already exist. I don't need AI in my pipeline for pre-production or asset creation. It doesn't speed things up or help at all; if generative AI is unable to produce clean, highly optimized models with PBR texture sets, a UV layout with 95%+ efficiency (see the rough sketch at the end of this post for what I mean by that), and a non-destructive, highly iterative, scalable workflow, it is worthless to me.
And, if such technology exists, I'll simply wait for the free, open source version that can run locally. Generative AI is not here to help produce art. It is here for the creators of genAI to make money through subscriptions to their ill-gotten services. It is here to reduce overhead costs for companies like Sony Interactive Entertainment and Microsoft.
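Just so we're on the same page about what I mean by "95%+ UV efficiency": below is a rough, plain-Python, back-of-the-napkin sketch of the idea - basically, how much of the 0-1 UV tile your shells actually cover. The triangle list, the no-overlap assumption and the function names are just mine for illustration; this isn't any particular tool's built-in metric.

```python
# Back-of-the-napkin UV "efficiency": the fraction of the 0-1 UV tile covered
# by the shells. Assumes shells don't overlap; uv_triangles is a hypothetical
# list of per-face UV triangles in 0-1 UV space.

def tri_area(a, b, c):
    # Area of a 2D triangle via the shoelace formula.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def uv_efficiency(uv_triangles):
    # uv_triangles: iterable of ((u, v), (u, v), (u, v)) tuples.
    covered = sum(tri_area(a, b, c) for a, b, c in uv_triangles)
    return covered  # the full UV tile has area 1.0, so this is already a fraction

# Example: one quad shell (two triangles) filling a quarter of the tile.
shell = [((0.0, 0.0), (0.5, 0.0), (0.5, 0.5)),
         ((0.0, 0.0), (0.5, 0.5), (0.0, 0.5))]
print(f"UV space used: {uv_efficiency(shell):.0%}")  # -> 25%
```

In a real check you'd pull those UV triangles out of your DCC of choice, but the point stands: if a generator can't hit that kind of packing on its own, it isn't saving me any time.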
Well, I’m not the decider here, the community is. We can provide rules of thumb, but you all decide whether you trust those guidelines, or chafe at the restraints.
Personally, I’m ok with people providing feedback however they like, as long as it is respectful.
However, as far as I can see AI image generators have not yet proved they respect artists.
The majority of these systems are trained on huge datasets of copyrighted content, without having acquired any consent.
AFAIK only Adobe has claimed to do this, and yet their dataset was scraped without what I would consider to be truly informed consent. So, they don’t meet the standard.
The irony is that these AI systems would not exist today without the vast amount of training data they started from. Which was in turn never meant to be used for anything other than for purely research purposes.
It seems to me that d1ver’s own AI model, however well-intentioned, is trained using copyrighted materials (movie stills, etc.).
At some point it may come to pass that this kind of scraping falls within fair use, but it’s not quite clear at this point in time that these systems have met that bar.
The creative potential these tools promise to provide is perhaps too beguiling to easily ignore the implications of their (likely, but not proven) unethical provenance.
However I don’t think it’s ultimately my decision whether to allow or disallow the use of AI generators here, at least not quite yet. It’s still very fluid.
But at the very least, I think some caution is warranted. I’d like to hear more opinions, how would you all like this to be handled?
Would it be enough to add a credit line indicating when AI has been used? Maybe also which tool was used, so people could be informed whether it used ethical scraping? Or should the use of AI be prohibited outright? I don't think we're there yet. (lol at all my qualifiers!)
Well, I need to get to work, but the last thing I'll say for a qualifier is likely consent; so if one does wish to use generative AI to give feedback, one should ask beforehand. Using genAI to give feedback, then promoting one's own services for said genAI is really not cool.
Side note, but I don't like how you're using your background of growing up impoverished to leverage emotional appeal as well as your social rank of having worked at Naughty Dog, and using that to police online behaviour @d1ver .
You have no idea what other people had to go through - I don't go around telling people how I worked on Path of Exile 1 & 2 (Ooooh, wow, amazing), how my mother had to live in constant fear and seclusion in case police forcefully aborted me due to China's one child policy, and how suicide felt like a great idea for 1/3 of my entire life. Oh yeah, I also grew up impoverished. Nice childhood we've had; nostalgic, truly.
Not to downplay our traumatic pasts or accolades, but none of it is relevant to giving meaningful feedback.
@zetheros I'm not offering we police anything. I'm very much for the free flow of education and information. But I am really sorry about your background. That sounds really tough. I applaud the resilience it took you to be here. I'm very glad that you are.
I realize the irony of policing the policing, but this is you, innit? @MBauer17 may have been mistaken for thinking you scraped her data - but I would say reasonably so, considering what we've seen over the years, and the vibes the art community has towards genAI. So, telling her that the community deserves better, and to 'please be better', is an interesting choice.
I'm also sorry for your background, glad you're here. Now, it's back to work for me before I get fired for wasting time on social media, and have to apply for a position sculpting goat genitalia for a very passionate, aspiring developer https://polycount.com/discussion/237275/looking-for-aspiring-people-for-a-passion-project#latest
may have been mistaken for thinking you scraped her data - but I would say reasonably so,
May I confirm your stance here. Are you suggesting we shouldn't call out people when they lie about members of this community committing actual crimes that never took place?
I think it's partly because humanity does as humanity do; society as a whole will always take something good and new (decentralized currency/crypto, generative AI) and use these tools selfishly, instead of bettering society as a whole. MBauer17's knee-jerk reaction is expected, considering what we've seen. I would have apologized, explained how the genAI process of PrometheanAI works differently, asked them to crush that subscribe button on my YouTube channel, hit that notification bell icon and skibidi toilet, instead of doubling down and calling her out.
Well, actually I wouldn't use generative AI for feedback in the first place, because I live on the internet and know how most artists have a pretty visceral reaction to it.
I'm sorry you believe your upsetness justifies committing violence on innocent people. I'm going to ask you to be better too. A bunch of tech nerds fucked over a lot of artists because they didn't give a shit about them. Being cruel to the one that actually does care is not somehow going to make this better. But you are welcome to keep going. I'm still going to maintain that this industry deserves much better.
Ok this is getting messy. Let me try and get through the noise:
I think everyone here is entitled to not having their data fed into any data collecting tools. Non-negotiable baseline.
I think everyone here is entitled to having their questions answered if they are worried that that may have happened.
Socially I'm usually strongly opposed to fear being the driving factor for decision making. But it is very important to me that artists feel safe. And at the end of the day saying "this paintover was generated with blah blah" (as long as the other 2 points are maintained) is not a problem. So I do take @Noren 's point.
Thanks for that! Your "The Desert" thread is missing a bunch of pictures, btw. (although the linked article is complete). I think you'll find that this kind of stuff is still very much appreciated, despite its age (or because of it) and it warrants preservation.
Now for the "policing" part:
I get that it would be cumbersome to ask each time if people would be happy to get feedback involving generative AI, especially since it might get a negative reaction frequently (but then again, maybe not). In the recent cases, that visualization of the feedback was mostly made with a bit of delay anyway, though. The other way round, members could indicate, e.g. in their signature, whether they want (or rather don't want) to receive this kind of feedback. However, this might be overkill depending on how much we expect this to even happen. Personally, I can't really see a lot of people giving feedback this way, at least in the near future, but I've been wrong before.
Let's say it becomes more common, and ignoring the issue of provenance for now, then d1ver is in the somewhat unique position that he does have the know-how and infrastructure to believably assert the images weren't uploaded to a third party for processing. That won't necessarily be the case for others, though.
Now I might be a bit naive, but I don't expect people to (intentionally) act against the explicit will of others, at least in this community, although that might just be me weaseling out of defining any hard rules.
Edit: How is that for qualifiers?
Almost funny... humankind has already learnt that "doing it the hard way" is what sticks the most (we do not even need psychology for this.. just ask your parents or your grandparents..): https://www.psychologytoday.com/us/blog/curiosity-code/202504/why-struggling-the-right-way-helps-you-learn
..but there are always people who believe "there must be a shortcut". So if one uses AI oneself to be "advised", then this might not be the cleverest idea. This might be different if a pro/veteran/adored artist uses this to show someone some (quickly made, because her/his time is precious) different ideas.
The problem for the beginner is (as always): how to decide which advice is good.. But then: using some AI, feeding it some prompt, waiting for the "result", maybe refining it, downloading the image and posting it may take longer than, for example, some advice like (just for demonstrational purposes): "Your fore- and background are a bit empty; consider adding some trees/mountains and/or some flowers or bushes to give this some depth/context." ..and maybe doing a very quick and sketchy overpaint ("scribbling" over a part of the original image)??? IDK..
Of course these are only my one and a half cents..
( Maybe this is also one of those "generation" problems?? I never understood this completely.. and I'm over 50.. ¯\_(ツ)_/¯ ..and also this doesn't have to do with `oldschool`.. there should be names for some former concepts, to be able to look them up and learn from them instead of degrading them as old, obsolete, inferior.. the "new" things were built on this, sometimes using some "tricks" of older techniques which are sometimes better understandable if one sees the former context. But now I'm getting off-topic.. okay.. I'm.. not that young anymore )
I’d like to hear more opinions, how would you all like this to be handled?
I'd really just like some heads up on what tools have been used to generate feedback. If you ran something through ChatGPT with the prompt "what will make this better?" and posted the results, just say so. If you ran it through an image algorithm, just say so. I do appreciate @d1ver's feedback whether or not the images were run through an algorithm. As I mentioned earlier, AI in feedback has its benefits -- I just feel that it needs a professional to interpret the results.
For example, my friend ran her art piece through ChatGPT. Here is the original scene render.
ChatGPT gave her this paintover :
If you ask me, at first glance this is terrible feedback. It took an Overwatch-esque futuristic scene in Hamburg and made it look like a cheap attempt at a stereotypical Mexican cartel movie backdrop.
HOWEVER:
This is where a professional with the experience and a solid art background can tease out some good points it has underneath the slop: Add more water around the fountain. Create some more variation in roughness, such as a worn out street versus a shiny metal building. Add some visual interest to the right side of the scene by putting some more cafe tables and chairs along the street.
As I said earlier, AI can be good, with an emphasis on "can be." Just give us a heads up that you ran it through AI, that's all I ask.
@zetheros I went down that goat testicle rabbit hole. I regret every second of it.
If you ask me, at first glance this is terrible feedback. It took an Overwatch-esque futuristic scene in Hamburg and made it look like a cheap attempt at a stereotypical Mexican cartel movie backdrop.
HOWEVER:
This is where a professional with the experience and a solid art background can tease out some good points it has underneath the slop: Add more water around the fountain. Create some more variation in roughness, such as a worn out street versus a shiny metal building. Add some visual interest to the right side of the scene by putting some more cafe tables and chairs along the street.
Kinda like a Dr. Strange looking at 14,000,605 possible futures in order to figure out how to polish a scene, or like flipping an artwork horizontally to get a fresh look.
Yeah we’ve seen people straight up using ChatGPT/Gemini/CoPilot to post feedback, and it’s pretty gross.
Though I think the feedback d1ver is posting is solid, well thought out and beneficial.
So then I’m thinking we could ask people to include source for their feedback, whenever some sort of automation is being used. Particularly when using AI, but this could include other software-enhanced augmentation I think.
Something like: “I ran your image thru ___ ai, and my thoughts are __”. Or something like “I used Photoshop generative-fill to ___ and my thoughts are ___”. That kind of thing.
I’d like to hear more opinions, how would you all like this to be handled?
I don't see how this would aid the help process. Seems way faster to type a short comment, rather than downloading an image, having it scanned by AI, processed and then re-posting the result, and finally providing a response. d1ver hasn't been clear about what process he intends to use.
I think Polycount is a brilliant analog guide. Perhaps there should be an article in the wiki about how to use AI, with links, plus an outline of the current legal stance on the process (may be used, sold, but not copyrighted). I thank d1ver for his (her) (their) offer of help, but I don't see the benefit in the example posted.
Or ... one could simply consider that there is almost never any need for constructive criticism and feedback to be "fancy-looking" anyways, as crude red lines to outline shapes, arrows representing light sources and rough cross-hatching to indicate shadows have always been enough to communicate edits, and also have the benefit of leaving room for interpretation - as opposed to throwing humiliating AI slop (like that ChatGPT "piss-edit" of the scifi street corner piece above) at the face of someone trying to learn. Heck, I have fond memories of bouncing feedback back and forth with an AD over screenshare using mouse scribbles, not even a tablet. And if more accurate indications of values and colors need to be done, they can be done as a small thumbnail, a summary of sorts.
Here's a Dragon Ball storyboard followed by a correction made by Akira Toriyama himself. This is for a million-dollar franchise, way more important than someone's attempt at a portfolio piece in Unreal. This isn't even the original storyboard and correction, just a recreation by the artist, and it kind of looks like a child's drawing. And it doesn't matter, everything's there.
Full-on paintovers are actually in somewhat poor taste if not explicitly solicited anyways.
At the end of the day, if someone needs an AI to crunch numbers for them in order to provide feedback ... that just means that they simply have no feedback to provide to begin with.
Call it ideological rather than rational, but....I would always rather have a human's feedback than an AI. I mean.....I don't know about anyone else, but when I create, it's humanity I'm chasing. Trying to make things that say something to humans, resonate with humans, feels human. I don't see how a brainless machine could possibly be a better judge for what's going to work for a human than a human can.
Can it articulate something like "Your character's design appears at odds with the way you've posed them, they look like they should be cheerful and nice but you've made them pose like a thug", or "this environment just feels sterile because everything is similarly slick and smooth even in areas that should be rougher", or "while the clutter in this scene does serve to make it look lived in, it's drawing the eye away from the intended focus" or such?
It's not really content aware, so I don't see how it could give actually useful feedback. It would be like me giving food critique without eating it. And pior's definitely right. Crude annotations are perfectly adequate. They're quick, efficient and get the point across. Using AI this way feels to me - like most uses of this tech - like a "solution" desperately searching for a "problem". Not to mention the energy waste.
And that aside, I just....Really hate the creeping of AI into everything this way. There seems to be this push by AI proponents that we should just outsource our critical thinking to it. It's gross.