Thanks for sharing that. I've been putting it to good use while prototyping a new game. It saves a lot of time and lets me test out ideas more quickly while getting a little closer to the end-product feel.
Well, considering that the example prompts consist of things like this …
… it seems pretty clear to me that this is using the same kind of artist-fucking, image-stealing, compensation-dodging tech that powers all other AI image generators.
Furthermore, here's proof that it understands what "Ghibli" means, even though the studio explicitly opted out of image generators (which is of course impossible at the deepest level, meaning that MJ for instance only blocks "Ghibli" as a prompt, but retains the training made off the back of imagery from the studio):
There's nothing really to gain by embracing AI in the way the tech bros are marketing it right now.
Ultimately, whatever you accomplish easily gets drowned out by the glut of prompt typers. And at every step along the way, a well-trained artist will still coax better work out of it and be able to process the output in a studio... until it finally doesn't matter.
Then at that point no one has a job, and either civilization ends, or we're free to start doing things for the enjoyment of the activity, which also means being able to do it yourself.
Nvidia seems very serious about making 3D environment creation child's play... I'm worried; I just paid €6000 for a 3D art and graphics engine course. I'm afraid I wasted my money because of these advancements that let anyone do the same thing I do just by pressing a button while sitting on the toilet.
That's not really true though. AI bros who don't have visual skills will just pump out the same old stuff. Which may look great to the average person on an Instagram feed. But once you need specific results that work within the constraints of an actual game production context, their "skills" will never translate.
When we want to hire creative talent, we look for people with a strong understanding of art fundamentals, and solid problem-solving skills within the constraints of 3d game graphics. Neural networks are not going to fill this kind of role.
Generative image tech existed even before Stable Diffusion came along in 2021, and it didn't change the human factor involved in art. Everyone is only butthurt because ChatGPT-style prompting can be used with it, and anyone can ask it to make anything and present the results as their own after writing a few prompts. My take on this entire thing is that even though it exists, our job will remain the same; the only difference is we can use AI tech to our benefit to cut time and cost, and do a lot more with it.
We should stick to our guns and continue to do what we have been doing: study the fundamentals of art, make art, and keep making art.
I think this whole thing is overblown when it comes to the legal aspects.
From what I understand, AI learns by studying existing art to look for meaningful patterns, then creates art using those patterns. People learn art in a similar fashion, i.e. breaking down a painting made by another artist to study it, then applying those principles to their own paintings. I'm not seeing how the AI is any different from a person here. If the argument is that it's unethical for the AI to train itself off artists' work because it didn't get permission to do so, then the same would have to apply to people, because how many of us ask for permission to study another artist's work? None.
Even the argument about style... it just doesn't hold any water. Artists openly encourage people to study from other artists to find their own style/voice, and on top of that you see many artists asking "how did you paint like that?? I want a style like yours". We don't treat that as unethical, so I don't see why it would be for AI either.
As far as jobs go, I have no idea how that's going to work. A lot of the world is capitalistic, from what I understand, so a lot of companies will utilize AI in ways that put people out of a job. That doesn't mean it's the end of society, but rather the beginning of a new one that's hopefully better than what we currently have, because frankly what we have now (i.e. massive inflation, lots of countries in debt, supply shortages) isn't sustainable.
I keep hearing this argument and I don't agree. Sure, there are some similarities in how humans and AI learn but what makes humans different is that we can invent entirely new concepts out of nothing. There was a time in history when there was no art at all and then after many many years someone arranged rocks in a pretty cool way. Or while making a flint knife the person making it thought it would be interesting if it had some engravings. 10000 years later we have Renaissance paintings and sculptures. A few hundred years later you have movies, video games and so on.
If you give AI nothing but pictures of rocks, ALL it will be able to do is imitate those kinds of pictures. That's it. Humans, at least for now, are more than just mere imitators. Sure, the likelihood that someone will invent a new concept is incredibly low, but the chance is there. And that chance over a very long time is what generates new ideas.
As for the legal aspects, I work for a huge outsourcing conglomerate, and the higher-ups said that clients are afraid of using AI because of potential legal problems. That's when I realized we are unlikely to use these tools any time soon, at least until the big lawsuits are resolved, because if the way data is used is no longer a free-for-all, our clients will have to change their workflows mid-production. So far, the only company doing this ethically is Adobe.
"I'm not seeing how the AI is any different than a person here"
It doesn't matter. A human observing a picture to get inspired is something that is accepted in the social contract, and it doesn't break any law. By contrast, an entity using a picture they don't own the rights to and making it part of a piece of software (in any shape or form: either by using the picture directly, or by deriving training data from it) does break existing copyright law. The way the picture is processed doesn't matter one bit, even if said processing were exactly the same as what a human does (it isn't).
Furthermore, it also fails a fair-use test, as the resulting output competes directly with the work of the very people whose images were taken.
Now one could perhaps try to make the case that since the tech is new, there is a legal void around this topic, and training from copyrighted work can only be made illegal in the future, not retroactively. But on top of not making sense (see above), that argument doesn't hold either way, because the legal precedent *has* actually been set years ago: some countries explicitly allowed ML on copyrighted work as long as it is done for research purposes. This doesn't mean that the resulting models can be used commercially, or that their output magically launders the copyright of the original holders.
About Adobe: they claim to do it ethically, but IMHO it's still not clear-cut. Their training model is said to be derived from their own stock photos, but if even a single picture in Adobe Stock happens to not fully belong to the uploader, they're in the wrong. Furthermore, their new generative fill seems to have a blocker on prompts involving the names of celebrities; but if their training data was indeed ethically sourced, then surely their model wouldn't know what Brad Pitt looks like, and there would be no need for a prompt blocker. So at the end of the day, the only way forward (IMHO) is to make it mandatory for anyone putting out a generative ML model to make the training data human-readable for inspection.
Interestingly enough, even if things were done 100% ethically we could still end up in a situation with generative AI being a threat to many working artists out there. But that's not where we are at the moment IMHO.
I don't think humans do anything more than AI, really, if you want to talk semantics. Monkey see, monkey do; there is no magic. Some monkeys see a little more, some do a little more, but AI has seen more than all of them and can do virtually infinite amounts. There are things a human can do, like anticipate the future and solve multi-step problems, but that's more than just being an artist pushing pencils; that's problem solving. So saying "humans can do other things" will, I think, wind up being equal to "well, you can't just be an artist, you also have to do a ton of other things". So then the human employees have to be artists, but also have ten years of experience and a master's degree in whatever, for a junior position at $15/hour.
But that entire argument seems like deflection, and a counter-productive argument for humans to make. Humans' concern is the robot stealing their livelihood. So if one robot can do the work of 10,000 humans in an instant, the robot is the bad guy. It's simple. I think people ought to just ignore arguments like that, because they detract from the point. The only thing that really matters is the human and economic impact: people who have jobs now lose them, people seeking jobs in the future have more trouble, those who already have too much pie get more, and from that you get all the problems that come with gross wealth inequality.
In other words we might say that in fact humans are just silly little monkeys and they need silly little monkey things to do otherwise they are going to go ape-shit and then everybody loses.
I don't really see outrage over the theft of the data, because employees are almost universally already subject to wage theft, and artists especially are shooting each other in the back by working for low wages for the sake of competition. If I were an artist, and I wanted to keep being an artist, and I wanted others to be artists after I die, I'd set my focus on forming a union: keep out the buddy-fuckers who want to work for cheaper, and have some sort of power to shut things down if the boss gets any bright ideas with AI. I doubt laws can have much effect, and they're something the average person has zero control over or even understanding of; but supporting their local union will have a direct effect on their job, and is something they have some control over. But I don't think that will ever happen; so many of the people trying to get in are just kids, and the only thing they are thinking about is trying to impress what they view as authority figures. They can't think very far beyond that.
CyberdemoN_1542 said: I keep hearing this argument and I don't agree. Sure, there are some similarities in how humans and AI learn but what makes humans different is that we can invent entirely new concepts out of nothing. There was a time in history when there was no art at all and then after many many years someone arranged rocks in a pretty cool way. Or while making a flint knife the person making it thought it would be interesting if it had some engravings. 10000 years later we have Renaissance paintings and sculptures. A few hundred years later you have movies, video games and so on.
If you give AI nothing but pictures of rocks, ALL it will be able to do is imitate those kinds of pictures. That's it. Humans, at least for now, are more than just mere imitators. Sure, the likelihood that someone will invent a new concept is incredibly low, but the chance is there. And that chance over a very long time is what generates new ideas.
As for the legal aspects, I work for a huge outsourcing conglomerate and the higher up people said that clients are afraid of using AI because of potential legal problems. That's when I realized that we are unlikely to use these tools very soon until these big lawsuits are resolved because if the way data is used is no longer a free for all, our clients will have to change their workflows mid-production. So far, the only company doing this ethically is Adobe.
Oh, I agree that humans have an inherent ability to be creative and generate new concepts where none previously existed, just from life experiences. That's something we'll always have over the AI, but my point here was that when it comes to generating what we consider to be original art, what the AI is generating, and how it is generating it, meets those criteria.
The legal issues you point out I agree on. However, I don't know how long that barrier will hold, because the ethics argument goes both ways. People who support AI art will certainly hit back by pointing out examples where people have copied off each other or ended up with similar-looking results unintentionally, yet no legal action was taken, and the actions that were taken ended up failing. Yes, Adobe is taking the safe and arguably best route here by using its own content for generative AI. This is what most companies should be doing, and arguably game studios should be able to legally anyhow, because they already have access to their own art libraries, given that artists sign away their rights to whatever they create for the company.
"I'm not seeing how the AI is any different than a person here"
It doesn't matter. A human observing a picture to get inspired is something that is accepted in the social contract, and it doesn't break any law. By contrast, an entity using a picture they don't own the rights to and making it part of a piece of software (in any shape or form: either by using the picture directly, or by deriving training data from it) does break existing copyright law. The way the picture is processed doesn't matter one bit, even if said processing were exactly the same as what a human does (it isn't).
Furthermore, it also fails a fair-use test, as the resulting output competes directly with the work of the very people whose images were taken.
I see what you're saying, but that argument has no legs to stand on.
A ton of artists use existing works for practice (or training, if you will). That's encouraged by the community, by schools, by self-learners, etc. No one simply learns how to do art, or how to apply the various principles, without ever having seen it before. I think arguing that the AI breaks copyright law because it draws from existing works would open a can of worms that throws many artists' work into question, because many have created derivative works. You don't think the vast majority of superheroes just so happen to wear tights and capes, or have similar personalities and supernatural powers, do you? Imagine how many superhero creators would be opened up to lawsuits over this, and that's just the beginning.
Going a step further, take a look at a comic convention. Go up to the tables and see how many artists are using popular characters to sell pictures, t-shirts, etc. You don't think they got permission from all those companies to sell art using their IP, do you? That's breaking copyright law for all to see, yet the artists sell it anyway. Marvel, for example, very obviously knows that artists at these conventions are making a buck off selling drawings of their characters without permission, but they do nothing about it because they know it benefits their brand by giving them more exposure, which means bigger profits.
In short... this is why I'm more against legal action against the AI than for it. I think it will come back to bite artists far more than the AI, because we're violating each other's copyrights more than the AI is.
pior said: Now one could perhaps try to make the case that since the tech is new, there is a legal void around this topic, and training from copyrighted work can only be made illegal in the future, not retroactively. But on top of not making sense (see above), that argument doesn't hold either way, because the legal precedent *has* actually been set years ago: some countries explicitly allowed ML on copyrighted work as long as it is done for research purposes. This doesn't mean that the resulting models can be used commercially, or that their output magically launders the copyright of the original holders.
AI users will counter that by pointing out how many humans train using copyrighted art, and have been for centuries. Of course humans don't train themselves entirely on others' art, but it still makes up a core part of their ability to generate art.
pior said: About Adobe: they claim to do it ethically, but IMHO it's still not clear-cut. Their training model is said to be derived from their own stock photos, but if even a single picture in Adobe Stock happens to not fully belong to the uploader, they're in the wrong. Furthermore, their new generative fill seems to have a blocker on prompts involving the names of celebrities; but if their training data was indeed ethically sourced, then surely their model wouldn't know what Brad Pitt looks like, and there would be no need for a prompt blocker. So at the end of the day, the only way forward (IMHO) is to make it mandatory for anyone putting out a generative ML model to make the training data human-readable for inspection.
True, but that's chasing a needle in a haystack, and legally it wouldn't hold much water. Of course some copyrighted photos will slip through the cracks, as no system is perfect. I doubt a court is going to put Adobe in hot water over this unless a large portion of their database contains copyrighted works, more than whatever the decided threshold becomes.
I think there can be various ways forward depending on the company. Game studios have a big advantage here, because they already have big libraries of art that their artists have created over the years and that is legally theirs. Granted, using artists as grain to feed the mills is morally wrong, or at minimum questionable, I think, but it wouldn't be illegal.
pior said: Interestingly enough, even if things were done 100% ethically we could still end up in a situation with generative AI being a threat to many working artists out there. But that's not where we are at the moment IMHO.
AI is going to be a threat to a lot of careers, but I think art is amongst the safer ones truth be told DESPITE the hugely rude wakeup call we got. Ultimately no matter how good the AI gets, that's never gonna take away our ability to create art, people will value human art more so as it gets rarer and with AI presenting many technological breakthroughs, I think artists who want to be independent will be more capable of generating their own wealth without need of working for a company or client.
I hope you're absolutely right with these words. I'm feeling a lot of anxiety these days. Like many other artists, I've put all my eggs in the basket of 3D art for audiovisual productions of any kind. If this way of making a living is compromised, I don't know if I'll be able to adapt to another field.
A universal basic income for all of humanity would be the best remedy in this situation. We would continue to strive to stay ahead of machines and remain irreplaceable. However, in the possible scenario where companies no longer require artists to work, it wouldn't be seen in such an apocalyptic light as it is now.
I'd not worry about it, really. 3D looks to be one of the safer fields, and probably one that can benefit from AI, given it's growing in various areas (i.e. metaverse, VR, 3D printing). I've seen some people outright panic over the AI, and it's like... we don't even know how the AI is going to affect jobs yet, so there's no point in panicking. It doesn't hurt to have a backup plan, of course, but it's better to keep moving forward.
I'd only worry if I saw employers routinely and willingly running companies or creative departments with zero human labor involved. That is when we should truly be worried, but until then, nah.
I think it would too, and we will likely get to UBI at some point, mostly because people won't be able to afford the ever-skyrocketing cost of living. Something will have to be done, and the only solutions I can see are to either go to UBI or become a moneyless society, however that would work. There are downsides, of course, but I think the upsides far outweigh them.
Hey everyone! I've had a lot of thoughts about AI lately, and hopefully I can chip in a bit here to help people transition. I'm not advocating for AI or saying that it's good or bad... but we have to accept that the genie is out of the bottle and it's not going back in. I've been working in games/vfx for over 12 years now, and the whole time I've been watching this automation/tech thing proceed. I think it's unlikely that we see a bunch of jobs vanish in the short term, and in the long term I think we'll see a shift in the day-to-day of artists. However, I don't think we're going to see the massive dropoff some people are worried about.
Even if the current models utilize stolen sources, it won't be long until someone figures out a way to take a few photos with their own iPhone, feed them to the AI, and get it to self-iterate based on those sources. (I'm pretty sure that is what happened with AlphaGo and even AlphaFold: given a limited dataset, it was able to generate its own training data, and then proceeded to solve insane problems and beat the world's best Go player.)
So where does that leave us? Well, I think all eyes are on this problem, and given the huge recent win for us artists (AI-generated imagery cannot be copyrighted), we are going to be in a good spot for a while as we transition into a new paradigm.
One last thought to ponder is this... currently, AI is only as good as a lazy artist who copies the mean average of what they see on DeviantArt and ArtStation. When you want to do something more complex... guess what! You still need the fundamentals.
(Most of these kinds of videos are not very useful or in-depth, but this one illustrates the point well enough, I think.) It's actually just as difficult to get anything specific out of AI as it is to learn how to use Photoshop or Maya or any other advanced piece of software. You're still going to need to learn anatomy, and be able to tell the AI that the arm looks broken, or that the muscle groupings need to shift or flex more in certain poses. You're still going to need to learn perspective, to pick up on when the lens distortion changes from 35mm to 27mm halfway through the image, because AI doesn't have a working model of perspective; it only knows pixels.
What I'm seeing is an increase in complexity that I believe will continue to expand. This isn't the end, but rather the beginning. We're going to be living in a time period where it takes a fraction of the time to complete any given task compared to 10 years ago... but we'll still be up to our eyeballs in work/assignments.
That's my two cents anyway. Hope it helps weigh the fear/bias towards a more nuanced look at AI from artists. For anyone feeling worry/anxiety about the industry... learn the fundamentals and wait for the platforms/tools to shift. It's going to be okay. Your future is probably fine (given we don't end up in World War 3).
I was just about to post the same thing. This is a pretty big win for human artists here in the USA. If we can get more courts worldwide to agree, then maybe sites like ArtStation will take the issue more seriously and stop polluting their feeds. There are literally newbies posting 3 or 4 high-quality renders at a time, and none of them are reflective of actual artistic talent, but rather of an ability to manipulate a much more technical system that samples and benefits from previously created (and protectable) human artwork that its creators probably spent ages making.
All this means is that work claimed to be straight out of ChatGPT, Midjourney, or Stable Diffusion has no copyright protection. Those 1,000-image AI-generated reference packs have no copyright protection. If a book claimed to use only AI-generated images, you could use the images and text in isolation, but the arrangement is still copyright-protected because a human put it together, so you couldn't just reprint the book yourself.
Anyway, I greatly appreciate @artquest's perspective on this particular topic. In hindsight, it seems to make a lot of sense. As humans, it's easy to be swayed by irrational fear. It's always beneficial to analyze these kinds of changes once the storm has passed and we can examine what happened calmly.
"That's my two cents anyway. Hope it helps weigh the fear/bias towards a more nuanced look at AI from artists. For anyone feeling worry/anxiety about the industry... learn the fundamentals and wait for the platforms/tools to shift. It's going to be okay. Your future is probably fine"
I don't think that's the case at all, unfortunately, especially now that 2D image generation has been the canary in the coal mine.
These "AI art tools" were developed by infringing on the copyright (and overall rights to images) of thousands of artists/photographers/social media users; are being used for profit; are destroying the spaces where the users/grifters/AI bros are accepted (bye-bye ArtStation); and overall they are polluting a field that never asked for it. It's a solution to something that was never a problem. I genuinely don't think that things are going to be fine when the roots are so rotten and the consequences are already affecting the mental health of many.
As predicted, AI-driven sculpt generation from a prompt is already here. It won't take much for it to be able to generate voxel-type models indistinguishable from high-res sculpts made by talented artists/craftsmen. There's no way this will not negatively affect the field, first by polluting the amateur side of it, and then quickly spreading to the studios obsessed with "content" as opposed to narrative and craftsmanship.
Even the crude example below is already an issue. That's a whole step of the process (blockout with polypaint) removed altogether. A shame, really, as it means that the only things left for the human artist after that are... remeshing, retopo, baking, and UVs. That is to say, all the things that 3D artists wished AI could take care of.
In the grand scheme of things, the problem I have with all of this is the way it will make people less willing to share their art, because it will be instantly fed into ML models by morons who never had any interest in the field until 5 minutes ago. It's the beginning of an incredibly toxic era, really, and I don't think it will be easy to mentally block out, at all.
The problem that needed to be solved is that capitalists require labor to make more profits, but labor is whiny and wants things like dignity, paychecks, and water. Which of course is a major insult to an antisocial-personality-disorder motherfucker who firmly believes that all things on earth belong to them alone.
People are working to feed fat psychopaths who hate their guts and want to get rid of them as soon as they can. It doesn't matter how nice your immediate team is, if you work a place earning millions or billions, you are feeding the fat bastard. Probably even if you work on a small team you are as well, because the nature of capitalism is that it forces normal social people to behave anti-socially if they want to "get ahead".
Eventually, people will get exactly what they deserve, whether they knew better or not. There is a price to pay for apathy. At some point, when there are ten wolves and a billion sheep walking into their mouths endlessly, you have to begin to believe maybe the sheep just deserve it, especially when education is so freely available.
All the things people think matter don't: being honest, being hardworking, being dependable and reliable... capitalism selects for none of these. These are virtues from an era long past. Whoever is most selfish, most greedy, and has generational advantage is favored. People will continue to lose, because everybody is playing an anti-human game, and there is pretty much no modern mode of living that teaches people how to see what is happening in front of their own nose. It is going to get stupider and much harder. I just hope I can finish my projects before I have to fight off people coming for my water.
As terrifying as all of this is, I don't think our clients will use this tech in the near future (2-3 years). It's too risky... for now. If legislation comes out requiring data to be opt-in and completely transparent, then our clients will have to rethink their workflows and redo a lot of work, because even if they feed in their own art to create more, the generator was trained on ill-gotten data. It's useless trying to ban this outright; how our data is used is where the REAL fight is.
Also, as far as I know, works entirely generated by AI can't be copyrighted in the US unless a lot of manual input is added on top.
So, while the future looks very bleak from my end, I think we still have a couple or so years to grow and accumulate money.
It's not just about the money. This job is the best thing that ever happened to me, and it helped me grow so much as a person. Now, instead of making people better, it will make soulless machines better. People who want to end all work are weak, and they're idiots. Work is how you gain leverage in society. No work, no power; and with no power, the people who control these systems can do whatever they want with you, under the guise of progress, of course (whatever that word means anyway). So much for the argument that this will free us to do the things that "really matter".
What Plato said about writing sounds eerily similar... We've been through this before and we just don't learn.
http://oldsite.english.ucsb.edu/faculty/ayliu/unlocked/plato/plato-myth-of-theuth.pdf
As a little side comment: companies/studios are ALREADY using AI art. Most of the AI content I have seen so far uses generated art as a base, with someone else brought in to "touch up / clean up / alter sections, etc."
Also, all the companies/HR/marketing will say at the moment that they won't use it due to its ramifications, but it's just a gentlemen's agreement. People are already using it in varying hush-hush quantities.
@artquest: "it's actually just as difficult to get anything specific out of AI as it is to learn how to use photoshop or maya or any other advanced piece of software."
How on earth did you reach this conclusion?
That's concerning if true. I am an outsourcer and none of our clients requested we use AI yet.
you can see it in their mood boards. in some cases we had to sign contracts not to use ai, others send us references clearly made with ai.
Whatever their attempts or solutions, they clearly care about money, so go after their profits; that is how this situation gets corrected quickly. If some "gov" is attempting to "hear" "the people", then voice that. If they are using material from living artists, those living artists should be compensated automatically for every "generated" output, no matter how small the percentage; with a lot of people using their thieving scripts, that percentage adds up. Every artist whose work is being used to generate a style, or anything similar to or equal to it, should be compensated. Hell, I'd say for anyone who mimics a style, every artist who has ever made a similar style should get a cut, which means the hosts of these scripts get next to nothing, and I am 1000% fine with that. The way I see it, they are attempting to screw the creatives with this, and I see the solution as returning the same in kind: screw their profits. Any digital creativity belongs to us. Change my mind.
you can see it in their mood boards. in some cases we had to sign contracts not to use ai, others send us references clearly made with ai.
Using mood boards is quite different from requesting AI textures or even models.
"if they are using materials from living artists, those living artists should be compensated automatically with every "generated" material no matter how small of a % if a lot of people using their thieving scripts then that % adds up"
Hell no. Accepting such a deal would mean accepting the theft in the first place. That's exactly what these companies want.
100%, I am so glad you can see that. I am against it all the way, but people are actually saying the genie's out of the bottle and we have to "conform" because it's out there... as if the "world" deciding it's fine means we go ahead because it's better than having nothing, because apparently it's okay to steal. I'd rather this had never existed, and who knows how long it's been around; we are only hearing about it now. To me, the challenges being promoted to show off progress are probably just feeding these scripts. ArtStation challenges come to mind, since they are apparently 100% okay with it and are trying to make it a thing; who knows who those artists are, they could well be ArtStation bots posting AI art to get us to agree to it for all we know. Epic was the previous owner of a.s.s., right? So in my eyes they sold us out; they must have seen the outcome of this, or asked for it, or even started it, so they could sell it off for their 0.10c's.
Using mood boards is quite different from requesting AI textures or even models.
One step at a time: mood boards used to be painted, sometimes photo-sourced. Now the painting part might be gone.
Just because you are not a concept artist doesn't mean this won't affect you, sooner or later.
There are companies and products known for cheap labour whose current artists already get tasked with cleaning up after AI creations.
What does it matter whether it's cleaning up textures, models, or "just mood board" images?
I must say I am genuinely fascinated by the fact that some don't seem to see the issue with "just using it for inspiration" (or mood boards).
I mean, HELLO? Not only is it a slippery slope, as mentioned above, it is also *literally pointless*. The point of a mood board is not just to gather pretty, shiny pictures to show to the higher-ups. It's mostly to establish the DNA of a project and create a shared set of references between artists. AIvomit completely negates that, since it removes any possibility of tracing the art shown in such mood boards back to the original source of the work and the body of work of the inspiring artists behind it.
Prompting "cool asian chick with big titties in front of a cyberpunk city" just ... ends there. Whereas actually looking up the source of cool anime sci-fi art leads to discovering badass artists like Masamune Shirow (and many others), their rich body of work, and so on. If not ... what's the point?
I'm in general agreement @pior, but I think there does remain some practical value to it, though I have only found it to be a tiny bit useful in some weird cases.
For instance, if I need to get some ideas across to somebody quickly: since I am not a skilled 2D artist, I can generate an AI image that gets close to what I have in mind in a few seconds, compared to a lousy sketch that might take me ten minutes or longer. The quality of the image is not the point, but because it is fully rendered and detailed, as long as it is close to what I have in mind it seems a lot more useful to somebody who needs a concept.
No, this is of course no excuse for the theft, and in general I haven't found AI (Midjourney) to be particularly helpful for the work I do. If I had a strong need for a lot of concept art, I would have to hire an artist, because you need to be able to iterate on concepts and AI cannot do that. I also imagine that part of the value of a concept artist is that you can bounce ideas off them and get project-specific feedback, which of course AI cannot do either.
So far, from a purely practical standpoint, I'd view AI as a way to speed up a concept artist, not replace them. That's just for work as a solo / tiny-team developer. I imagine the enormous studios with armies of artists are probably laying a lot of people off now, but I say shame on you for working there in the first place. Everybody knows these places are just selling gambling apps disguised as games anyway.
I can generate some AI image which gets close to what I have in mind in a few seconds.
Then you're simply falling victim to the illusion of the mind's eye, similarly to how AI prompters are certain that the images they summon represent "tHeIr ImAgInAtIoN".
And regardless: the second someone uses an image generator that relies on stolen imagery scraped without consent, that person is sending a big FU to the original artists who never consented to it, regardless of how the AIvomit is being used, commercially or not. It doesn't matter one bit whether the images are the most useful thing ever or formless monstrosities; it's still screwing over the original artists who never agreed to it, and it is directly or indirectly supporting the makers of these AI-vomiting machines.
Doing it anyway because it has "practical value", while knowing full well that the people who made the original images are not okay with it, is in my view just as morally reprehensible as claiming someone else's images as one's own. It's a matter of principle.
Taking a step back, I think one of the reasons people tend to (often knowingly) forget how many people this screws over is that the result of a prompt is just a single picture (or a handful of pictures). Yet if the source database were visible at all times, I bet more people would realize how many faces they are stomping on.
Here's a simple ping starting from a random colorful 2D picture, revealing stylistically adjacent images from LAION. Every single image represents one artist being silently screwed over, someone who never agreed to participate in this.
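For anyone curious how such a "ping" works mechanically: tools that surface "stylistically adjacent" LAION images typically reduce every image to an embedding vector and return nearest neighbors by cosine similarity in that space. Here's a rough Python sketch of that lookup; the `embeddings` matrix and `image_XXXX` names are invented stand-ins for illustration, whereas a real query would run against embeddings (e.g. CLIP-style) of the actual index.

```python
import numpy as np

def cosine_similarity(query_vec, matrix):
    # Cosine similarity between one query vector and each row of a matrix.
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

def nearest_images(query_vec, embeddings, names, k=3):
    # Return the k dataset images whose embeddings are closest to the query.
    sims = cosine_similarity(query_vec, embeddings)
    order = np.argsort(sims)[::-1][:k]  # highest similarity first
    return [(names[i], float(sims[i])) for i in order]

# Toy stand-in for a real embedding index (rows = images, cols = features).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))
names = [f"image_{i:04d}" for i in range(1000)]

# A "query" that is a slightly noisy copy of image_0042; the lookup
# should surface that image as its own closest stylistic neighbor.
query = embeddings[42] + 0.05 * rng.normal(size=64)
print(nearest_images(query, embeddings, names))
```

The point of the sketch is the argument above: every hit in that neighbor list maps back to a concrete source image, and by extension to a person, even though the generator's output shows none of that.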
Yes, I agree, but did you care when grocery baggers were put out of work by automation? Or steel workers? Or farmers? Or when cows were enslaved to live the most miserable existence imaginable, only to be food for fat ingrates who by all natural law should have no right to exist?
No, you only care when it hurts you. So you can imagine how easy it will be for all the people who are not hurt by it to not care at all. That's just the selfish, overpopulated world we live in. It is depressing.
I mean, it's a big problem for a lot of people, but it is not a new problem at all. And if people weren't in the habit of being selfish and short-sighted to begin with, they'd be better equipped to foresee problems that keep happening over and over throughout history.
But I bet if I had come here 15 years ago and hammered on about the need for unions, people would just have laughed at me or said I'm too negative or some shit.
Yeah, and that's still irrelevant whataboutism. It is in especially bad taste not only as a general statement, but even more so as a direct attempt at a "gotcha".
Evidently, the US Copyright Office has issued rules pertaining to copyright on AI-generated images.
I wasn't aware until I stumbled on a YT video published by Neil Blevins.
To anyone that may be curious:
Is there any AI program yet that lets you upload, say, a level design block-out screenshot and get some concept art / paint-over done?
I've yet to find any personal use for AI art but that would be really helpful for me.
so much indeed
here's something interesting too:
https://poly.cam/material-generator
Hmm I wonder if it has the same problems as Withpoly.
Thanks for sharing that. I've been putting it to pretty good use while prototyping a new game. It saves a lot of time and is allowing me to test out ideas more quickly while getting a little closer to an end-product feel.
Well, considering that the example prompts consist of things like this …
… it seems pretty clear to me that this is using the same kind of artist-fucking, image-stealing, compensation-dodging tech that powers all other AI image generators.
Make of that what you will.
Furthermore, proof that it understands what "Ghibli" means, even though the studio explicitly opted out of image generators (which is of course impossible to honor at the deepest level, meaning that MJ, for instance, only blocks "Ghibli" as a prompt but retains the training made off the back of the studio's imagery):
Yuck.
There's nothing to really gain by embracing AI in the way the tech bros are marketing right now.
Ultimately, whatever you accomplish gets easily drowned out by the glut of prompt typers. And at every step along the way, a well-trained artist will still coax better work out of it and be able to process the output in a studio... until it finally doesn't matter.
Then at that point no one has a job, and either civilization ends, or we're free to start doing things for the enjoyment of the activity, which also means being able to do it yourself.
https://blogs.nvidia.com/blog/2023/05/02/graphics-research-advances-generative-ai-next-frontier/
When we want to hire creative talent, we look for people with a strong understanding of art fundamentals, and solid problem-solving skills within the constraints of 3d game graphics. Neural networks are not going to fill this kind of role.
We should stick to our guns and continue doing what we have been doing: study the fundamentals of art, make art, and keep making art.
https://80.lv/articles/nvidia-presents-new-ai-model-that-turns-2d-videos-into-3d-structures
https://www.youtube.com/watch?v=L6rJA0z2Kag&ab_channel=TickerSymbol%3AYOU
From what I'm understanding, AI learns by studying existing art to look for meaningful patterns, then creates art using those patterns. People learn art in a similar fashion, i.e. breaking down a painting made by another artist to study it, then applying those principles to their own paintings. I'm not seeing how the AI is any different from a person here. If the argument is that it's unethical for the AI to train itself on artists' work because it didn't get permission to do so, then the same would have to apply to people, because how many of us ask for permission to study another artist's work? None.
Even the argument about style just doesn't hold any water. Artists openly encourage people to study other artists to find their own style/voice, and on top of that you see many artists asking "how did you paint like that?? I want a style like yours." We don't treat that as unethical, so I don't see why it would be for AI either.
As far as jobs go, I have no idea how that's going to work. A lot of the world is capitalistic, from what I understand, so a lot of companies will utilize AI in ways that put people out of a job. That doesn't mean it's the end of society, but rather the beginning of a new one that's hopefully better than what we currently have, because frankly what we have now (i.e. massive inflation, lots of countries in debt, supply shortages) isn't sustainable.
If you give AI nothing but pictures of rocks, ALL it will be able to do is imitate those kinds of pictures. That's it. Humans, at least for now, are more than just mere imitators. Sure, the likelihood that someone will invent a new concept is incredibly low, but the chance is there. And that chance over a very long time is what generates new ideas.
As for the legal aspects, I work for a huge outsourcing conglomerate and the higher up people said that clients are afraid of using AI because of potential legal problems. That's when I realized that we are unlikely to use these tools very soon until these big lawsuits are resolved because if the way data is used is no longer a free for all, our clients will have to change their workflows mid-production. So far, the only company doing this ethically is Adobe.
It doesn't matter. A human observing a picture to get inspired by is something that is accepted in the social contract, and it doesn't break any law. On the contrary, an entity using a picture that they don't own the rights of and making it a part of a piece of software (in any shape or form : either by using the picture directly, or by deriving training data from it) does break existing copyright laws. The way the picture is processed doesn't matter one bit, even if said processing was exactly the same as what a human does (it isn't).
Furthermore, it also fails the fair use test, as the resulting output competes directly with the work of the very people whose images were taken.
Now one could perhaps try to make the case that since the tech is new, there is a legal void around this topic, and the training from copyrighted work can only be made illegal in the future, not retroactively. But on top of not making sense (see above), that argument doesn't hold either way, because the legal precedent *has* actually been set years ago: some countries explicitly allowed ML training on copyrighted work as long as it is done for research purposes. This doesn't mean that the resulting models can be used commercially, or that their output magically launders the copyright of the original holders.
About Adobe : they are claiming to do it ethically, but IMHO it's still not clear cut. Their training model is said to be derived from their own stock photos, but if only a single picture in Adobe Stock happens to not fully belong to the uploader, they're in the wrong. Furthermore their new generative fill seems to have a blocker for prompts involving the name of celebrities - but if their training data was indeed ethically sourced, then sure enough their model wouldn't know what Brad Pitt looks like and there would be no need for a prompt blocker. So at the end of the day the only way forward (IMHO) is to make it mandatory for anyone putting out a generative ML model to make the training data human-readable for inspection.
Interestingly enough, even if things were done 100% ethically we could still end up in a situation with generative AI being a threat to many working artists out there. But that's not where we are at the moment IMHO.
I keep hearing this argument and I don't agree. Sure, there are some similarities in how humans and AI learn but what makes humans different is that we can invent entirely new concepts out of nothing. There was a time in history when there was no art at all and then after many many years someone arranged rocks in a pretty cool way. Or while making a flint knife the person making it thought it would be interesting if it had some engravings. 10000 years later we have Renaissance paintings and sculptures. A few hundred years later you have movies, video games and so on.
Oh, I agree that humans have an inherent ability to be creative and generate new concepts where none previously existed, just from life experiences. That's something we'll always have over the AI, but my point here was that when it comes to generating what we consider original art, what the AI generates and how it generates it meets that criterion.
I agree on the legal issues you point out. However, I don't know how long the barrier will hold, because the ethics argument goes both ways. People who support AI art will certainly hit back by pointing out examples where people have copied each other or unintentionally ended up with similar-looking results, yet no legal action was taken, and the actions that were taken ended up failing. Yes, Adobe is taking the safe and arguably best route here by using its own content for generative AI. This is what most companies should be doing, and game studios arguably could do it legally anyhow, because they already have access to their own art libraries, given that artists sign away the rights to whatever they create for the company.
I see what you're saying, but that argument has no legs to stand on.
A ton of artists use existing works for practice (or training, if you will). That's encouraged by the community, by schools, by self-learners, etc. No one simply learns how to do art or apply its various principles without ever having seen it before. I think arguing that the AI breaks copyright law because it draws from existing works would open a can of worms that throws many artists' work into question, because many have created derivative works. You don't think the vast majority of superheroes just happen to wear tights and capes and have similar personalities or supernatural powers, do you? Imagine how many superhero creators would be opened up to lawsuits over this, and that's just the beginning.
Going a step further, take a look at a comic convention. Go up to the tables and see how many artists are using popular characters to sell pictures, t-shirts, etc. You don't think they got permission from all those companies to sell art using their IP, do you? That's breaking copyright law for all to see, yet the artists sell it anyway. Marvel, for example, obviously knows that artists at these conventions are making a buck selling drawings of their characters without permission, but does nothing about it, because the exposure benefits their brand, which means bigger profits.
In short, this is why I'm more against legal action against the AI than for it. I think it will come back to bite artists far more than the AI, because we violate each other's copyrights more than the AI does.
As for the research-purpose precedent: AI users will counter it by pointing out how many humans train on copyrighted art, and have been for centuries. Of course humans don't train themselves entirely on others' art, but it still forms a core part of their ability to generate art.
On Adobe's data: true, but that's going after a needle in a haystack, and it legally wouldn't hold much water. Of course some copyrighted photos will slip through the cracks, as no system is perfect. I doubt a court is going to put Adobe in hot water over this unless a large portion of their database contains copyrighted works, more than whatever the decided threshold becomes.
I think there can be various ways forward depending on the company. Game studios have a big advantage here, because they already have big libraries of art that their artists have created over the years and that is legally theirs. Granted, using artists as grain to feed the mills is morally wrong, or at minimum questionable, I think, but it wouldn't be illegal.
AI is going to be a threat to a lot of careers, but truth be told, I think art is among the safer ones, DESPITE the hugely rude wake-up call we got. Ultimately, no matter how good the AI gets, it's never going to take away our ability to create art. People will value human art more as it gets rarer, and with AI bringing many technological breakthroughs, I think artists who want to be independent will be more capable of generating their own wealth without needing to work for a company or client.
I hope you're absolutely right with these words. I'm feeling a lot of anxiety these days. Like many other artists, I've put all my eggs in the basket of 3D art for audiovisual productions of any kind. If this way of making a living is compromised, I don't know if I'll be able to adapt to another field.
A universal basic income for all of humanity would be the best remedy in this situation. We would continue striving to stay ahead of machines and remain irreplaceable. However, in the possible scenario where companies no longer require artists to work, it wouldn't be seen in such an apocalyptic light as it is now.
I'd only worry if I saw employers routinely and perfectly willingly running companies or creative departments with zero human labor involved. That is when we should truly be worried, but until then, nah.
I think it would too and we will likely get to UBI at some point, mostly because people won't be able to afford the ever skyrocketing cost of living. Something will have to get done and the only solution to fixing it that I can see is to either go to UBI or become a moneyless society, however that would work. There are downsides of course, but I think the upsides far outweigh them.
Even if the current models utilize stolen sources, it won't be long until someone figures out a way to take a few photos with their own iPhone, feed them to the AI, and get it to self-iterate from those sources. (I'm pretty sure that is roughly what happened with AlphaGo and even AlphaFold: given a limited dataset, it was able to generate its own training data and then proceed to solve insane problems and beat the world's best Go player.)
So where does that leave us? Well, I think all eyes are on this problem, and given the huge recent win for us artists (AI-generated imagery cannot be copyrighted), we are going to be in a good spot for a while as we transition into a new paradigm.
One last thought to ponder: currently, AI is only about as good as a lazy artist who copies the mean average of what they see on DeviantArt and ArtStation. When you want to do something more complex... guess what! You still need the fundamentals.
Check out some of the more recent "how to" AI videos like this one...
Next level AI art Control | My workflow - YouTube
Vizcom - YouTube
(Most of these kinds of videos are not very useful or in-depth, but this one illustrates the point well enough, I think.)
it's actually just as difficult to get anything specific out of AI as it is to learn how to use photoshop or maya or any other advanced piece of software.
You're still going to need to learn anatomy to be able to tell the AI that the arm looks broken or that the muscle groupings need to shift or flex more in certain poses. You're still going to need to learn perspective to pick up on when the lens distortion changes from 35mm to 27mm halfway through the image, because AI doesn't have a working model of perspective; it only knows pixels.
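To put a number on why that 35mm-to-27mm drift is detectable at all: field of view follows directly from focal length via the pinhole-camera relation, so the two lenses imply noticeably different perspective geometry. A quick sketch (assuming a full-frame sensor 36mm wide, which is just the conventional reference, not anything stated above):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    # Pinhole-camera relation: FOV = 2 * atan(sensor_width / (2 * f)).
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (27, 35):
    print(f"{f}mm lens -> {horizontal_fov_deg(f):.1f} deg horizontal FOV")
```

That's roughly 67° versus 54° of horizontal view, a shift a trained eye reads off the vanishing-point geometry, while a pixel-space model can only reproduce it by statistical accident.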
What I'm seeing is an increase in complexity that I believe will continue to expand. This isn't the end, but rather the beginning. We're going to be living in a time period where it takes a fraction of the time to complete any given task compared to 10 years ago... but we'll still be up to our eyeballs in work/assignments.
That's my two cents anyway. I hope it helps weigh the fear/bias towards a more nuanced look at AI from artists. For anyone feeling worry/anxiety about the industry: learn the fundamentals and wait for the platforms/tools to shift. It's going to be okay. Your future is probably fine (given we don't end up in World War 3).
A federal judge on Friday upheld a finding from the U.S. Copyright Office that a piece of art created by AI is not open to protection.
https://80.lv/articles/this-ai-model-can-turn-text-prompts-into-3d-models/
I don't think that's the case at all, unfortunately, especially now that 2D image generation has been the canary in the coal mine.
These "AI art tools" were developed by infringing on the copyright (and overall rights to images) of thousands of artists/photographers/social media users; they are being used for profit; they are destroying the spaces where the user/grifter/AI-bro crowd is accepted (bye-bye ArtStation); and overall they are polluting a field that never asked for it. It's a solution to something that was never a problem. I genuinely don't think things are going to be fine when the roots are so rotten and the consequences are already affecting the mental health of many.
As predicted, AI-driven sculpt generation from a prompt is already here. It won't take much for it to be able to generate voxel-type models indistinguishable from high-res sculpts made by talented artists/craftsmen. There's no way this won't negatively affect the field, first by polluting the amateur side of it and then quickly spreading to the studios obsessed with "content" as opposed to narrative and craftsmanship.
Even the crude example below is already an issue. That's a whole step of the process (blockout with polypaint) removed altogether. A shame really, as it means the only things left for the human artist after that are ... remeshing, retopo, baking, and UVs. That is to say, all the things 3D artists wished AI could take care of.
In the grand scheme of things, the problem I have with all of this is how it will make people less willing to share their art, because it will be instantly fed into ML models by morons who had no interest in the field until 5 minutes ago. It's the beginning of an incredibly toxic era, really, and I don't think it will be easy to mentally block, at all.
Also, as far as I know, works entirely generated by AI can't be copyrighted in the US unless a lot of manual input is added on top.
So, while the future looks very bleak from my end, I think we still have a couple of years or so to grow and accumulate money.
It's not just about the money. This job is the best thing that ever happened to me and helped me grow so much as a person. Now instead of making people better, it will make soulless machines better. People who want to end all work are weak and idiots. Work is how you gain leverage in society. No work, no power; and with no power, the people who control these systems can do whatever they want with you, under the guise of progress of course (whatever that word means anyway). So much for the argument that this will free us to do the things that "really matter".
What Plato said about writing sounds eerily similar... We've been through this before and we just don't learn.
http://oldsite.english.ucsb.edu/faculty/ayliu/unlocked/plato/plato-myth-of-theuth.pdf
Also, all the companies/HR/marketing will say at the moment that they won't use it due to its ramifications, but it's just a gentleman's agreement. People are already using it in varying hush-hush quantities.
How on earth did you reach this conclusion?
You can see it in their mood boards. In some cases we had to sign contracts not to use AI; others send us references clearly made with AI.
Hell no. Accepting such a deal would mean accepting the theft in the first place. That's exactly what these companies want.
100%, I am so glad you can see that. I am against it all the way, but people are actually saying the genie's out of the bottle and we have to "conform" because it's out there... if the "world" decides it's fine then they go ahead; it's better than having nothing, because it's okay to steal apparently. I'd rather this was never around, and who knows how long it's been around; we are only hearing about it now. To me, challenges pimping to show progress is just feeding these scripts, probably... ArtStation challenges come to mind, and since they are 100% okay with it apparently and are trying to make it a thing, who knows who those artists are; they very well could be ArtStation bots posting AI art to get us to agree to it, for all we know. Epic was the previous owner of a.s.s., right? So they sold us out in my eyes; they must have seen the outcome of this, or asked, or even started it so they could sell it too, 0.10c's.
I mean, HELLO? Not only is it a slippery slope as mentioned above, but it is also *literally pointless*. The point of a mood board is not just to gather pretty shiny pictures to show to the higher-ups. It's mostly to establish the DNA of a project and create a shared set of references between artists. AIvomit completely negates that, since it removes all possibility of tracing the art shown in such mood boards back to the original source of the work, and to the body of work of the inspiring artists behind it.
Prompting "cool asian chick with big titties in front of a cyberpunk city" just ... ends there. Whereas actually looking at the source of cool anime sci-fi art leads to discovering badass artists like Masamune Shirow (and many others), their rich body of work, and so on. If not ... what's the point?
Then you're simply falling victim to the illusion of the mind's eye, similarly to how AI prompters are certain that the images they summon represent "tHeIr ImAgInAtIoN".
And regardless - the second someone uses an image generator that relies on stolen imagery scraped without consent, that person is sending a big FU to the original artists who never consented to it, regardless of whether the AIvomit is being used commercially or not. It doesn't matter one bit if the images are the most useful thing ever, or formless monstrosities - it's still screwing over the original artists who never agreed to it, and it is directly or indirectly supporting the makers of these AI-vomiting machines.
Doing it anyway because it has "practical value", while knowing full well that the people who made the original images are not okay with it, is in my view just as morally reprehensible as claiming someone's images as one's own. It's a matter of principle.
Here's a simple search starting from a random colorful 2D picture, revealing stylistically adjacent images from LAION. Every single image represents one artist being silently screwed over, who never agreed to participate in this.
But I bet if I came here 15 years ago and hammered on about the need for unions people would just laugh at me or say I'm too negative or some shit.