Hi there! This came up recently in a couple of places here on the forums, so it makes sense for us to set a clear policy on the use of AI tools like ChatGPT, Gemini, and the like.
Let us know what you think about this. Your thoughts matter, since in the end the contents of these forums are purely built by all of you, each and every day. You are Polycount!
Don't Use AI for Replies
Polycount doesn't allow the use of AI tools to help you generate answers.
We've seen it again and again: these tools snip and combine sources, often producing inaccurate results because they lack the necessary context. They then present their results as facts, and most users accept them without seeking (clearly necessary!) verification.
If you must use these tools, use them smartly. Don't accept their output at face value. Verify sources if they're provided; if no sources are linked, it's best to simply treat the answer as pure bullshit. Seek to verify the output by reproducing it manually yourself.
If you see AI-generated text in replies, please use the Flag: Report function to let us know. We want the forums to be a source of usable information, not a bunch of AI slop.
Replies
Some form of AI/LLM is in many tools now, sometimes just as a search engine, but often as a next-step recommendation tool.
It seems to me that when people seek info in an area where they aren't a subject-matter expert, the AI result is crap, but they don't have the understanding to discern that it is indeed crap.
Sorry, I didn't understand this, what is this referring to exactly?
But I can see a use case for LLM rewrites: Non-native English speakers.
Being able to read and write in English doesn't automatically translate to being concise and accessible. I say that as a non-native speaker. Depending on your mother tongue, your sentences can get very contrived, and LLMs could be used as a tool to extract what you meant to say from what would otherwise be very winding slabs of text.
That's a usage much closer to a spellcheck than what's being discussed in the rule, because the bot isn't creating the contents of the text. Personally I'm not keen on it due to the lack of spontaneity, but I still find it acceptable, because then it's actually being used as a tool with the end goal of conveying your thoughts, not as a way to offload thinking to a deceptive impression of a brain.
Regarding AI-generated assets, I'm on the full-disclosure team, and also on the "it should not be the final asset" team. Using tools that happen to include ML is a gray area, but doing something like generating slop and passing it off as concept art, which is the end goal for a concept artist, is very different.
Using a robot to translate your words isn't wrong.
People have jumped straight to making shitty, fully AI-generated models that are ugly and completely unusable for anything, yet there's still nothing for retopology.