The LAION dataset's usage terms are for research and learning. It wasn't supposed to be used to train AI for commercial purposes, since that infringes on individual copyrights.
The thing is, when the lawsuits start this is likely the angle they will take, and if it holds up it will require Stability AI, Midjourney and DALL-E to destroy their trained models, effectively ending their businesses.
No idea how they plan to rectify the damage done by previous models; it will likely lead to a local underground market for AI art. They will definitely have to pay massive damages, and the precedent will spawn even more litigation, which will get even worse when bigger players like Disney join in.
At that point it's likely that they will sell their businesses to pay back creditors and disappear, effectively ending the open-source aspect of it.
The way these companies will justify what they did is to hinge on the idea that the AI is simply learning and being inspired the way people are. In effect they will have to prove that the AI is a person and not a tool, kind of like that episode of Star Trek: Voyager where the Doctor, who is a hologram, had to justify his rights of authorship by proving sentience.
So far the courts and the copyright authority are refusing to consider that an AI owns the copyright to the images it generates, since it isn't a person.
They also refuse to acknowledge the user of an AI service as an owner with exclusive rights, effectively treating the output as public domain.
I am thinking the reason the major players aren't suing AI companies yet is that it's the holidays, and that for the moment they have every right to use AI art created from other people's prompts with no repercussions whatsoever. They don't even have to credit anyone.
Like, I saw some AI-generated versions of Black Panther; Disney could use these, credit no one, pay nothing, and nothing would stop them.
This wouldn't be the case with human created art.
So in that sense it is hurtful for artists to know that their work was used to train the models without their consent, which absolutely is copyright infringement.
It will be an interesting case certainly.
ArtStation ought to ban AI art outright given its sourcing, but they are also likely waiting on a ruling, since a large share of users with no art skills are now providing the site with marketplace and advertising revenue through their content.
Something is getting lost in translation here. No one is talking about "forbidding learning". The idea is simply to stop using anthropomorphic terms like "learning" and to reframe it as "using", in order to get a clearer view of what's going on.
On law: law is not something that is divinely dictated and set in stone forever, otherwise all of humanity would still be stuck in Dark Ages doctrines.
Current laws may not be fit to settle whether or not the data collection required for the existence of these datasets is an infringement of intellectual property. Since all law is subject to interpretation, in some cases it may actually be. But the point to understand is that current law was drafted in a context where AI image generators didn't exist. It actually doesn't matter at all whether the process is similar, exactly the same, or different from what humans do; this is completely irrelevant. What is relevant is that if enough people agree that such use of image data should be outlawed, then it could eventually become so. And a new law on copyright protection isn't required to be triggered by a copyright infringement case settled one way or another.
For instance, tomorrow a given country could legislate that AI image generators need to come with a human-readable version of the dataset being used, and said dataset could be, for instance, subject to government approval before being released to the market, just like is the case for food and drugs. I am not saying that this is the way it should be done, of course; just giving one very simple example/scenario that only takes a minute to think about.
This is all very similar to human cloning: "Human cloning does nothing that humans don't already do, just a bit more efficiently and faster." Yet once it became a reality, laws evolved accordingly to express the will of the people on the matter of human dignity, similarly to how in some countries it is allowed to have one's child gestated by someone else, yet in other countries it isn't.
So when people present to you their reasoning as to whether this kind of art theft is or should be illegal, they are expressing their conviction and the direction they want the "artistic social contract" to go towards: one in which the unauthorized use of imagery in a data set used for image generation would be illegal. The fact that it currently is or isn't is not the end goal of the conversation; it is the starting point.
The LAION dataset's usage terms are for research and learning. It wasn't supposed to be used to train AI for commercial purposes, since that infringes on individual copyrights.
It doesn't. That's the dilemma at the moment. Some of you eagerly want it to, but it simply does not.
The longer it's a free-for-all, the higher the damages would be.
Logically it would be sensible for all AI generators to stop using the datasets or limit their prompt usage, and to allow artists to opt out, so that any litigation focuses on the relevance of older models and the damages they would owe to copyright holders.
Stability has taken that stance for version 3, not that it's off the hook for previous versions.
In a way I'm glad that it's motivating some people to learn Photoshop, if only to correct broken fingers.
But there are those people that are purely looking at it as a business: generating art, branding themselves as AI art creators on social media and selling merchandise.
That works with personalized commissions in a certain style and with fan art. Not sure it would allow for a sustainable business model, since the ones commissioning can just as easily create it themselves.
In that sense it feels like a fad that will die out, but it will impact artists if it is used in-house at larger studios.
The reason it should be regulated is the unauthorized use of source material for profit.
LAION mentions in its FAQ that all it is is a URL repository of the source images, that a "researcher" needs to download the dataset themselves for use, and that the copyright of that dataset would then apply, since LAION isn't an image hosting service.
By wording it this way they have absolutely limited their accountability for how the data is used, and I'm certain the courts will hinge on this fact, logically reasoning that the original copyrights still apply.
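To make the "URL repository" point concrete, here is a minimal, hypothetical sketch of what downloading the dataset actually involves (assuming the pandas, requests and Pillow libraries, a local metadata shard named laion_shard.parquet, and URL/TEXT column names, which vary between LAION releases): LAION ships only links and captions, and whoever builds a training set has to fetch the actual images from their original hosts themselves.

```python
# Hypothetical sketch: LAION-style metadata is just URLs + captions, no pixels.
import io

import pandas as pd
import requests
from PIL import Image

metadata = pd.read_parquet("laion_shard.parquet")   # assumed local shard of the metadata
print(metadata[["URL", "TEXT"]].head())             # links and captions only

def fetch(url: str):
    """Download one image from its original host; return None if the link is dead."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return Image.open(io.BytesIO(response.content)).convert("RGB")
    except Exception:
        return None

# The "researcher" assembles their own local image set; LAION never hosted the files.
images = [img for url in metadata["URL"].head(20) if (img := fetch(url)) is not None]
print(f"fetched {len(images)} of 20 sample URLs")
```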
That's why I reasoned that any permission granted through LAION only covers research. What the AI services are doing isn't that, so they have to consider the individual copyrights of the images used to train their models.
Now, if LAION had provided the AI model itself without defining the copyright, that would have made them accountable first; but legally it's only the service providers that will be on the hook.
Any damages will likely be proportional to their revenue, I'm guessing.
Tiles, you are repeatedly mixing up law (what gets written in the law books and determines which practices are illegal) and court decisions. A new law doesn't require a previous court decision to go one way or another; it's the other way around. And a case in which a court decision then becomes the norm because the law is ambiguous is called a "precedent".
As a matter of fact, this is precisely why some defense lawyers are still interested in cases where they get to defend something immoral: because this can outline a loophole in dire need of being patched later by a new law. If a murderer gets away free thanks to a technicality, it becomes an incentive for new laws to be drafted, proposed and voted in.
The example of the French law about touched up models didn't require any court case either.
They would likely be liable for the unauthorized use of copyrighted images in training their AI models.
The penalty will likely be destruction of their previous AI models, and if they released them openly without considering liability, then they will also face financial penalties.
It's like raiding a publishing house and giving away first editions. True, you wouldn't make any money, but you'd still be financially liable for damages because of the loss of revenue.
Thing is, this isn't stealing from a single publisher; it's plagiarising on a massive scale, so I'm not sure how they would calculate any damages, but the fallout would likely deter any large-scale monetization of AI art by larger companies.
Let's for a moment assume that is the case, and the court follows your argumentation and all your fantasies come true. Stable Diffusion already allows training with your own material. And they have already kicked quite a few name prompts out of their weights. You can no longer type in the name of an artist from ArtStation; it will not work anymore. The term "Artstation" also has no effect anymore in the new weights. Meaning, even if you put this part under (to me arbitrary and not justified) illegality, SD would already be back on the legal side.
And this means the dilemma remains. You can do what you want; AI will become part of our life and remain your competitor. That you want to make the work of the AI makers harder does not change the end result. It may need a bit of time now until the quality is back to where it was; SD 2.0 is weaker than 1.5 with the removed weights. But it's just a matter of time until it has caught up.
Yes, no one knows, and that too is beside the point. For instance, one might believe that a country that goes full throttle into eugenics and human cloning would become populated by world-conquering super-humans within a few years, so why resist it, might as well do the same; but that wouldn't prevent other countries from taking their own stance on human dignity and standing by it.
To people convinced that this kind of tech could harm the fabric of society (back to talking about image-generating AIs here, not superhumans :D ), even one year or one month or one week of gained time is worth it.
Well, for me it does no harm; it is something fantastic. This is the point where we still heavily disagree. For me it is simply a fantastic new tool in the belt. It allows me to create what was impossible before. I am currently creating all my music videos with it for my upcoming album, in weeks instead of years. You might know what time and manpower go into a single music video done the traditional way. I would not want to be without the AI tool anymore.
I do wonder what they would do if a legal precedent hits: what would happen to previous training models, and would work derived from them be included in a repository that future AI models can derive from?
Usually when it's this complicated, it comes down to a settlement and some conditions.
I feel like the conditions would be not to use an artist's work if they opt out; new models would be based on an entirely new dataset of Creative Commons art or art that has been authorized with consent; and maybe some leniency on what happens to all the art that has already been generated, i.e. it can only be public domain, but you could claim some credit and commercial use depending on trademarks and copyright.
For example: I commission AI art of a barbarian standing next to a TV, which I can use on a t-shirt, as can anyone else,
vs.
I commission AI art of Mickey Mouse standing next to a TV, which is going to screw me over if I sell it on a t-shirt; yet if Disney uses the design that came from my prompt, doesn't credit me, profits from it and even claims ownership, I can do nothing.
Basically it comes down to the end-use case.
I wouldn't mind using ethically sourced AI art for ideation, but profiting from unregulated AI art is a copyright minefield.
Is the AI trained so that it can work without an image library because it has some fundamental understanding of what I mean by "chainmail", or does it look up reference pictures every time?
Some AI works look fantastic, like a dozen screenshots of a movie, BUT the character or monster etc. is never the same; every time you see small differences. It looks nice to get 20 different RoboCop designs, but you never see the same RoboCop 20 times in different scenes.
I have seen Fury Road with Muppets. When the AI used this image with the tag "Fury Road", can and will it use it as a reference? Doesn't it blur everything over time?
@Tiles : I would say that you explaining your actual intent with your use of this tech is far more conducive to discussion than any earlier Photoshop analogy or attempts at anthropomorphizing (right word ?) the tech.
So, this tech using/relying on millions of images without consent (regardless of this practice being currently illegal or not) is widely rejected by the very people who created said images, because this practice is in direct violation of the accepted consensus within communities of people who love working on their craft (ie the golden rule of respecting each other's work, and the fragile equilibrium consisting of having faith in other people to not be jerks). But at the same time, this tech allows you to do something that you could previously only ever dream of doing, that is to say having a set of music videos with imagery that fits "what you see in your mind's eye" and matches a high standard of polished rendering, similar to (previously) trending art posted on Artstation.
Isn't this a rather big moral dilemma ? And aren't you worried that this might actually distract your potential audience from the main star of the show, which I assume is the music itself ? I would bet that people who love your music are more interested in what you can create on your own regardless of it being highly visually polished or not. There is a certain beauty in weaknesses and errors, and some great strength to be found in sketches that take only minutes to create. I still want to believe that some people are able to see it.
Well, for me it is simply a tool like Photoshop :)
You still underestimate the effort it takes to create something really good-looking with AI. It is not enough to type in a few words. You still, and even more than in traditional art, need the artistic skills and knowledge to steer the AI where you want it to go; else it will go completely wild. You need a very good idea of what you want to achieve, and you need to know about styles and the right keywords. And setting everything up for offline work is still highly technical. That's why I still scratch my head when somebody says that AI users are not artists. They are. They decide what end result they want to produce. It is quite an effort. And you can clearly see whether an image is a beginner's work or made by somebody who has an idea of how AI works.
What is not covered on the examples page is video-to-video, which allows you to do quite a few funny things with the original content.
If you want to use it offline, I suggest installing the AUTOMATIC1111 web UI Stable Diffusion fork and then using the Deforum offline addon. Makes life much easier. Google will lead you.
This kind of movie is simply impossible to achieve in a traditional way. And it already has a big fanbase. The first music video made with this tech currently has over a million views on YouTube after two months. I hope to get my videos out the door as soon as possible to participate at least a little bit in the hype :)
Which answers your question about the users. They simply don't care how something is achieved. When it's wow content, then they go wow.
It is both the music and the art. Good music needs good graphics to become a success, and vice versa. The Thriller video was what made Michael Jackson's song so famous and big; it was revolutionary at the time. And the kids nowadays cannot be fascinated by a still image in a video with some music bars going up and down. You need graphical content, and the more extraordinary, the better.
So, this tech using/relying on millions of images without consent
This is the point where we go in circles. You don't need consent to look at publicly available images :)
is widely rejected by the very people who created said images
Is this really the case? Is it really nearly everybody? I see the protest at ArtStation too. But I also see that quite a few traditional artists use it happily, like me. The user base of AI is really big already, which stands in contradiction to the protest of "everybody", since it is mainly artists who use AI; somebody without any relation to art has no need for it. ArtStation is also not the only source for LAION. And again, everything is legal. You can be sure that the lawyers of these companies do a good job of making sure of that.
And we are again at the point where you claim a copyright breach where there is none. The images were used for training. Not a single pixel of these images makes it into the end result of an AI image. This is technically impossible.
moral dilemma
Our society has long decided that it is no dilemma. We use robots, we use computers, we use cars instead of horses. We use everything that makes our life easier. And the AI journey has just started. There are already tools in the making to create complete videos from text prompts. I cannot wait to get my hands on it. https://makeavideo.studio/
And again, as an AI user who now knows the limits of AI, I can assure you that there will still be a need for traditional art for a long time. AI will not replace everything. Game content will for a long time still require somebody with the right skills instead of a text prompt. What it possibly speeds up is the concept part. But here too you need somebody who decides whether the concept fits and is even achievable. You still cannot let a janitor do this job.
I do wonder what they would do if a legal precedent hits: what would happen to previous training models, and would work derived from them be included in a repository that future AI models can derive from?
As I said before, Stable Diffusion has already reacted. They will simply adapt. AI ... uh ... will find a way :)
SD has pulled the material in question out of the AI weights. Not because it was illegal, but because they don't want to be the bad guys. They listened to the concerns and understood them. DALL-E and Midjourney still work with the material in question.
Is the AI trained so that it can work without an image library because it has some fundamental understanding of what I mean by "chainmail", or does it look up reference pictures every time?
No, that's technically impossible. There is no image data in the AI. The AI uses a so-called weight file. The weights are the brain and the memory of the AI; that's where it is defined what makes a chair, a table and so on. Such a weight file for SD is between 4 and 7 GB; it needs to fit into the RAM of a graphics card. One training set of LAION is around 10 terabytes of data, and that's just one set. There is simply no compression algorithm that can compress images that small. So the answer is yes, the AI has understood. That's why you need to train it. It is like a little child that has learned what a chair is and what a table is.
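As a rough, hypothetical illustration of the size argument (assuming the diffusers and torch libraries and the public runwayml/stable-diffusion-v1-5 checkpoint), you can load the model and add up the storage used by its learned parameters; the total comes out to a few gigabytes of weights, far too small to contain the training images themselves.

```python
# Hypothetical sketch: measure how much parameter storage a public SD checkpoint uses.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

def size_gb(module: torch.nn.Module) -> float:
    """Total parameter storage of a submodule, in gigabytes."""
    return sum(p.numel() * p.element_size() for p in module.parameters()) / 1e9

for name, module in [("unet", pipe.unet), ("vae", pipe.vae), ("text_encoder", pipe.text_encoder)]:
    print(f"{name}: {size_gb(module):.2f} GB of parameters")

# The sum is on the order of single-digit gigabytes of floating-point weights:
# no per-image archive could fit in there, which is why the model has to generalize
# from training rather than look images up.
```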
Some AI works look fantastic, like a dozen screenshots of a movie, BUT the character or monster etc. is never the same; every time you see small differences. It looks nice to get 20 different RoboCop designs, but you never see the same RoboCop 20 times in different scenes.
That's a limit of AI. It creates a noise image, and from there it tries to get close to the desired result, and it does this every time from scratch. You can use the same seed and keywords to create identical images, but when you use different seeds or different keywords, then you will always get a different result. What you cannot do is things like rotating the content. There is no control over the camera, since there is no camera.
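A small, hypothetical sketch of the seed behaviour described here (assuming the diffusers and torch libraries, the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU): fixing the seed fixes the starting noise, so the same prompt reproduces the same image, while a different seed gives a different result.

```python
# Hypothetical sketch: same seed + same prompt -> same image; new seed -> new image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a knight in chainmail, dramatic lighting"

def generate(seed: int):
    # The generator fixes the initial noise image that the denoising starts from.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(prompt, generator=generator, num_inference_steps=30).images[0]

generate(42).save("seed42_run1.png")   # these two runs come out practically identical
generate(42).save("seed42_run2.png")
generate(43).save("seed43.png")        # different seed: a different knight entirely
```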
I have seen Fury Road with Muppets. When the AI used this image with the tag "Fury Road", can and will it use it as a reference? Doesn't it blur everything over time?
See point two. You have to distinguish between still images and movies here. Most users create stills. Movies are a completely different and specialized chapter with their very own limits and necessary tricks. As I said, I use the Deforum fork to create movies. It works by increasing the seed by one step over time, a hack to create these "falling into the image" animations.
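For what it's worth, here is a very rough, hypothetical approximation of that "falling into the image" trick, not Deforum's actual code (assuming the diffusers img2img pipeline, a CUDA GPU and a starting frame named frame_000.png): each frame is zoom-cropped, fed back through img2img, and the seed is advanced by one, roughly the feedback loop described above.

```python
# Hypothetical sketch of a Deforum-style zoom animation via an img2img feedback loop.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "endless neon corridor, cinematic"
frame = Image.open("frame_000.png").convert("RGB").resize((512, 512))

def zoom(img: Image.Image, factor: float = 0.96) -> Image.Image:
    """Crop the centre of the frame and scale it back up, simulating a camera push-in."""
    w, h = img.size
    cw, ch = int(w * factor), int(h * factor)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch)).resize((w, h))

for i in range(1, 120):  # roughly five seconds of footage at 24 fps
    generator = torch.Generator(device="cuda").manual_seed(1000 + i)  # seed advances each frame
    frame = pipe(prompt, image=zoom(frame), strength=0.45, generator=generator).images[0]
    frame.save(f"frame_{i:03d}.png")  # stitch the saved frames into a video afterwards
```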
Our society has long decided that it is no dilemma. We use robots, we use computers, we use cars instead of horses. We use everything that makes our life easier.
This concerns me. Machine learning and AI have been in the mainstream for the better part of a decade now, and I have yet to see really serious improvements to our material quality of life come from them. Certainly not on the same revolutionary scale as robots, computers or cars. Maybe it's still early days, but every new wave of progress I see with AI still seems to be stuck at being a 'toy'. These recent products are still ultimately toys. I genuinely don't think it's worth the trouble to our society just to let us... speak to our cars and phones? Generate pastiche images of our favourite superheroes?
See I'm not sure that we have decided it's 'no dilemma'. We use robots, yes, but no-one asked the labourers they replaced. We use cars, yes, but no one asked the communities we carved up for the highways and roads cars need. People aren't too happy about some of the social consequences the internet (via computerisation) has had. I don't think these technologies were mistakes in the grand scheme of things, but it's hardly true that society was or is unanimous about their adoption. Indeed, concerning cars many places are reversing some of the planning decisions made to accommodate them.
I'd be more hopeful for AI if I saw commercialisation in the places that matter like medicine. It hasn't seemed to be able to leave the university lab in that respect. The world doesn't need more advanced ways to target advertising or generate media.
I still cannot understand why somebody would call it AI "art" in the first place, other than for cynical manipulation of social narratives.
For sociopaths everything can be a "tool", even another person. No wonder they are so loud right now in defending AI with rather weak arguments, e.g. that it is fun for THEM, the reward is so FAST, oh, it's such a cool toy, etc.
Ethics is something beyond their understanding.
They simply cannot grasp that many CG artists aren't against AI in general, but against being treated unfairly.
Oh, I fully understand that it feels unfair that you might even be helping to train a competitor that might replace you. That was and is not the question. This is what Pior means by the ethical dilemma. But I heavily dislike the neo-Luddite behaviour that tries to kill AI, that treats every AI artist as a clueless moron who steals your job, and that sees AI as pure evil. Completely undifferentiated. As I said, I use it for videos that weren't possible before.
Stable Diffusion tries to go the middle way. They have listened to the concerns, and have reacted. The end result will be the same though. You can't stop the future.
@Phiona This is actually an incredibly relevant remark IMHO. When asked what they do for a living or even just as hobby, many artists would not even call themselves artists, but rather go straight to describe what they love to do, often in a slight self-derogatory/humble way even. For instance someone who draws manga might not say "I am a Manga Artist" but rather say "I love drawing manga". And hopefully this can lead to a fun exchange of shared interests ... if the person asking the question is not a jerk jumping on any occasion to be snarky. And a creature concept artist working on the production of hugely popular movie franchises might not answer the question by "I am an Artist Working on Concept Art For Million Dollar Movies", but would perhaps say "I dig drawing monsters !".
Now of course no one is the arbiter of what is art and what isn't. But the fact that AI bros just love throwing the word around is indeed incredibly telling. I would agree with your statement that this could indeed be qualified as a cynical manipulation of social narratives.
the future is almost certainly the death of the entire human race and nearly all other forms of life, so it kinda needs to be stopped.
No, I am not suggesting that AI art is what kills us. Anybody who wants to argue about what is the unanimous verdict of virtually every scientific community has to demonstrate that they've actually read something about it before I'm going to say much more on that. But it leads to the point I want to make: bigger picture, the non-forward-thinking, non-learning-from-history mindset that blindly assumes technology = progress is the source of all of our problems.
It's just like a basic law of the universe: nothing new is created, you can only rearrange what was already there, so every action has some reaction. You might have an easier time getting the art you need for your music videos now, but knowing how things work, you have to ask, "what is the trade-off?"
A simpleton only looks into that question as far as they can see immediate effects on themselves. Somebody who actually solves problems before the disaster tries to figure out what the long cause-effect chain will result in before they do the thing.
None of us can point to something precise, because the equation is too big. But it seems pretty clear to me, and it seems like most professional artists agree: it's not good at all for people who made a living by making art.
the future is almost certainly the death of the entire human race and nearly all other forms of life, so it kinda needs to be stopped.
True. But I prefer to nevertheless have some fun before that happens :)
it's not good at all for people who made a living by making art.
And "people who make a living from art" explicitly excludes everybody who uses AI to make a living from it, right? Reminds me of the No True Scotsman argument ^^
You will have winners and losers with the new tool. As always.
People who had been winning by merit of their own hard-won skills now become the losers, and people who've done nothing are now allowed to be winners. But they've sacrificed nothing. That's why the word fair keeps coming up.
So, if you were not able to afford to pay an artist to make art before, but now you can get the same thing for free because you used a machine which stole their art and rearranged it... what does that make you? Empowered? Maybe, but it is not dissimilar to the way I could become empowered if I stole a rich old lady's purse and suddenly had a big investment for my next game project, lol. At least in that case, I'd need to have the physical courage to confront her, so there is, like, the tiniest nugget of something that might be respected.
But just pushing buttons on a computer to steal from the collective pool of art and thinking that makes you anything is just silly. And it won't give you a special advantage, because everybody else can do the same thing. So you never escape the need to be special somehow; the only thing that has really changed is that people who were special are now robbed of their livelihood.
People who had been winning by merit of their own hard-won skills now become the losers
Why? What stops these people from using AI too? And to repeat it again, it's not that you can let your cleaning lady do the job now. It's the opposite: you need even more knowledge about art than before. You cannot simply start drawing. I've googled my ass off for the last three months. And game content will not be done by AI for a long time. Your job as a game artist is safe.
Only those who cannot adapt to new situations become losers. It is not written in stone. And hate is definitely the wrong way to adapt to a new situation.
stole their art, rearranged it...
Could you please finally stop making these false claims? No artwork is stolen; not a single copyright is broken by AI. And it is also not simply rearranged art. That's not how it works.
And to repeat it for the dozenth time, Stable Diffusion has already removed the ArtStation weights. The material you wrongly claim was "stolen" is no longer in use there. But this changes exactly nothing about the situation.
But just pushing buttons on a computer to steal from the collective pool of art and thinking that makes you anything is just silly.
Pushing a button is what photographers have done for decades. Pushing a button is exactly what everybody who renders an image does. Pushing a button is what everybody who bakes normals and textures does. Pushing a button has simply been part of the art show for decades. And you "steal" your knowledge from the people who invented this software, from their knowledge and skills. I am pretty sure that even you have learned how to draw, model, texture etc. from other artists, and did not invent the wheel from scratch. So who are you to claim that somebody steals your work by looking at your images and learning how to draw?
What is silly is to make false claims without understanding the issue. Use AI for a while and you will understand its limits pretty quickly, and understand why the majority of artists simply have nothing to fear. The AI witch hunt is irrational. It usually takes no more than half an hour to realize what is possible and what is not.
Well, to be fair, artists hired in game/entertainment production jobs are likely not going to lose their jobs to this IMHO, simply because ADs are clever enough to know that the job of an artist, in-house or freelance, isn't just to create "pretty pictures". Unfortunately, similarly to how Kotaku readers and urinalists have no clue about how games are made, AI bros have no clue about the reality of the job as an illustrator or designer and what it takes to work on a project as a team. And this is not a demeaning comment in any way: *no one* can know the reality of the job if they never got their foot in the door. I guess the best case scenario is for the euphoria to last about as long as the braindead "NFT" and "wEbThReE mEtAvErSe" trends. A year or two perhaps? Or perhaps that's the new normal: people broadcasting their narcissism more than ever before.
So as far as I am concerned the biggest problem is simply the straight-up insult of the scraping process. But insulting someone is not illegal ; and it is actually quite useful that it isn't, because it helps tell apart the jerks from the people one wants to interact with.
At the end of the day the real saddening thing is to realize how much people simply lack basic empathy. And even someone keenly aware of the toxicity of social media (even before this) can't filter it out this time, because of the way it explosively infiltrated a previously safe haven like Artstation.
From there I am starting to feel like online art communities shrinking back down to, say, mid-2000s scale would be a very good thing. I'd love to see people build their own websites again FFS. Contrary to popular belief, artists do not need Artstation to get noticed or get a job. Its value consisted only in its very clean, standardized design.
Unfortunately, similarly to how Kotaku readers and urinalists have no clue about how games are made, AI bros have no clue about the reality of the job as an illustrator or designer and what it takes to work on a project as a team.
You are currently talking with one of these AI bros who completely does not fit that pattern. I am, or rather was, a game developer, before I decided to stop digging for gold and started selling shovels with Bforartists. Just saying ;)
But yeah, I see you already denying again that I am an artist at all since I use AI and such. But the fact is that most of the AI people I know are all highly skilled artists with years of background. AI is an artist's tool.
So at the end of the day the real saddening thing is to realize how much people simply lack empathy.
To repeat, it is not that I lack empathy or do not understand where you folks are coming from. I just see a big opportunity where you see the end of the world.
From there I am starting to feel like online art communities shrinking back down to, say, early 2000s scale would be a very good thing. I'd love to see people build their own websites again FFS.
More bubbles? Facebook has them already. And not for good :/
I think you just want to use the stuff and not think about it, which is fine. But I believe there is zero chance you will convince any artist that AI art isn't ripping them off.
Yes, I know, since most are not willing to accept the truth. Claiming that AI is stealing in order to discredit the competitor is so much simpler.
Well, maybe we have simply reached the end of the discussion. You will never convince me of your point of view, since I know what AI does and is and how it works. And I have seen too many tools come and go over the years. For me, your faction is the one that has stopped thinking. Progress was and is one of the driving forces in art.
Anyways. It was an interesting discussion for sure. Thanks :)
The most interesting example of this being a euphoria is in a group that was linked in another post that focuses on Afrofuturism for the Black community,
so all the art there until the advent of AI usually revolved around showing Black identity and Black people in a positive light, and with the arrival of AI you have a lot of Black users who are proudly proclaiming themselves Black artists disrupting the million-dollar gatekeeping realm of white artists.
Many of these users fall into the category of young millennials/Gen Z, homemakers and senior citizens.
If you try to tell them what the issue is with AI and whether they indeed are artists, they don't understand, fight and block you, and come across as utterly ignorant narcissists.
To them this is the way to riches. So in a sense I feel that this is the group that AI art caters to: primarily casual users who feel elated that they are now artists and stand to make millions selling t-shirts and art prints, and branding themselves through cool avatars on Instagram. A good many of them actually feel that they can sell their AI-generated designs of Black Panther and Iron Man to Disney for millions.
The fact that Disney and DC can simply appropriate their Black superhero designs with no credit or payment, and even sue them for copyright infringement, is something they are totally clueless about.
The strangest comments are from users saying that they appreciate all the Black people art, but resent that it's being made using white people's technology; so it's interesting that despite all the advancement they are still locked in their systemic racism bubble.
We definitely do need to separate the case where a specific image is used as an input from generating from just noise and a prompt. It can be used as a filter if the user wants.
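As a hypothetical sketch of that separation (assuming the diffusers library, the runwayml/stable-diffusion-v1-5 checkpoint, a CUDA GPU and a local file someone_elses_artwork.png): text-to-image starts from pure noise plus a prompt, while image-to-image starts from a specific picture and only partially re-noises it, which is the "filter" use; the strength parameter controls how much of that input survives.

```python
# Hypothetical sketch: generating from noise + prompt vs. from a specific input image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionPipeline
from PIL import Image

model_id = "runwayml/stable-diffusion-v1-5"
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "portrait of a warrior, oil painting"

# 1) Noise + prompt: only the learned weights shape the result.
txt2img(prompt, num_inference_steps=30).images[0].save("from_noise.png")

# 2) A specific image as input: the output is visibly derived from that exact picture.
source = Image.open("someone_elses_artwork.png").convert("RGB").resize((512, 512))
img2img(prompt, image=source, strength=0.2).images[0].save("light_filter.png")  # barely-changed paintover
img2img(prompt, image=source, strength=0.8).images[0].save("heavy_remix.png")   # mostly re-imagined
```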
Lol, the above example of art theft is especially interesting because it doesn't even require the art on the left to be part of the training set: the AI bro basically did the equivalent of a Photoshop paintover and claimed it as his own. This is basically the most childish behavior possible. It's not even cynical, it's just plain stupid :D
That said, it is a great preview of what will happen once AI jerks get their hands on even the most ethically "trained" model.
It's just funny to see how good "AI art" is at revealing the worst facets of people's personalities. Social media "likes" really are a hell of a drug.
This seems like a good point in our sci-fi story for someone to make a watermark that clobbers an AI into producing the same image regardless of its prompt.
A new report from the trenches. One new microtrend amongst AI bros seems to be "Niji Journey", a generator trained more specifically on anime/East Asian images. Here is a tutorial page on how to use it, directly from "AItuts". The wording is clear as day.
The more I read about this, the more I feel sad about ... the users themselves. This really is consumerism/instant gratification taken to the next level.
I ran out of credits on dreamstudio. Is there a free version of this instant gratification machine?
but check this out, I'm a 2d artist now! amazeballs poggers! Also a fantastic environment artist apparently, who knew I had such latent talent just waiting to burst into the public domain!
I guess it stands to reason to ask how many of these users would ever have commissioned an artist to create art for them in the first place.
I really do see this as a fad that will go the way of NFTs: oversaturation followed by rollback and liquidation.
I mean people that are artistic will likely use AI to create something meaningful, but the more casual users will likely end up in their own echo chambers and die out.
For instance, I don't see Afrofuturist AI art solving systemic racism, but it does give many Black users some motivation to keep going in the belief that they are finally going to become multimillionaire artists. So some good may come of this.
I did see a user use Midjourney to depict social phobias, which was an interesting use of the medium.
That said, I did get banned from one AI group for explaining the legalities when one of the users wanted to coin a new term for AI art and I suggested "plagiarisography", given the non-consensual nature of the source material, lol.
Right, why commission an artist? In fact, why not accept commissions in their stead?
When a market is saturated it loses its value and profitability.
People who are artistic actually don't need to use the software to make something meaningful. In fact, creating art is a form of meditation; it's how they become artistic and develop creative skills. Art comes from time. Using software like this prevents them from improving.
The Black community and their artists are doing fine without AI. They have a deep cultural art history and a supportive base who love it. Their community has many talented artists to champion and inspire others from within.
I did notice some AI art users looking to learn Photoshop after using AI (likely to clean up those funky hands), so in a way it's motivating people to learn.
The issue is only when large companies start using it, though I don't see what use case they would have for art generated this way.
So far the use cases I'm seeing are t-shirts, posters, and art for self-published book covers and RPG guides.
Sure, there are the crackpots selling 300+ head variants as sculpting reference, but other than that I don't see what it really accomplishes for the end user in the commercial space. It's just more ideation and reference for artists who can do something with it.
I wanted to stop discussing here since it became a bit too heated for my taste. But this fake news draws me back in.
I am pretty sure that this image was created to provoke this "theft" look by using the image-to-image method, and not by letting the AI create it from scratch. Image-to-image allows you to input a source image, and when you use the lowest strength value, you can even output an identical, unmodified image.
It's like saying: hey, look, I copied and pasted a Super Mario character with Photoshop and painted a few strokes across it, and it still looks like a Super Mario character! Photoshop is stealing artwork!!!
If you knew the AI tools, then you would also know this fact. But you don't even know your enemy. Instead there is one false claim after another. The whole "against AI" argumentation in this thread has so far been based on lies and ignorance, easily disprovable point by point. This simply makes me sad.
Friends, hate AI, but please for the right reasons, and not because of fakes, lies and false claims. Hate, rage and pitchforks should not replace knowledge and common sense.
The offline version of Stable Diffusion is free, and then you can create unlimited images. You just need a good graphics card with, I would say, a minimum of 8 GB of graphics RAM (it should also work with 6). I suggest using the AUTOMATIC1111 fork; it then runs in your browser. It requires some technical skill to set it all up, but it is one of the most comfortable solutions, and the required steps are well described.
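The AUTOMATIC1111 web UI is installed from its own repository and started with its launch scripts, so there is no Python one-liner for it; as an alternative minimal sketch of "free, offline, unlimited images on a roughly 6-8 GB card" (assuming the diffusers library with the weights already downloaded), half precision plus attention slicing is usually enough to fit.

```python
# Hypothetical sketch: local, free image generation on a mid-range GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,       # half precision roughly halves VRAM use
).to("cuda")
pipe.enable_attention_slicing()      # trades a little speed for a smaller memory peak

# Generate as many images as you like, locally, with no credit system involved.
for i in range(4):
    pipe("isometric fantasy tavern, concept art", num_inference_steps=30).images[0].save(f"tavern_{i}.png")
```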
What false claim? If you mean that the description given of the process is not technically detailed or accurate enough for you, then this is completely irrelevant, similarly to how a legislative proposal about human cloning wouldn't describe the whole process the way a scientific paper would.
What matters is that, in one way or another, the process uses images as an input without consent. Put differently: the resulting regulation could very well cover latent diffusion as it is indeed being done by MJ and the like, just as well as any "micro-stitching", even if such "micro-stitching" is indeed not the way the tech being discussed here operates. The fine-grained discussions happen later, when experts are brought in to dive into the details as needed anyway. And in the case of American politics, lobbies get involved, which in and of itself is a whole other problem of course, but this is relevant here because that's what this CF campaign is for.
Similarly, it really doesn't matter at all that the process works "just like human learning, just faster", or that the tool is "just like Photoshop", so this too is completely beside the point. Or rather: these arguments could very well be brought up during the debates, of course, but they will be weighed against the argument that this violates the earlier consensus that artists were operating under by openly sharing their artwork for people's respectful appreciation. And once such a social consensus breaks down (for instance because of disruptive tech), new laws get proposed and debated.
In short, IMHO this actually goes way beyond copyright and deep into ethics. It is always extremely sad when ethics and human decency need to be debated and put into law, but that's where we are now.
- - - - -
And if the result of all this is that AI generators go completely unregulated ... then artists will simply not post their stuff anymore, and art communities interested in self-improvement and the pleasure of putting time into doing things for others will just move underground, use encryption, and so on. Art directors will only hire people they personally know about, because AI Jerks will be everywhere trying to get hired for writing prompts (and will probably claim victimhood if they don't get what they want). And perhaps they'll go on making their own AI-made movies using AI-generated scripts of course, and their own AI-designed games and AI-designed superhero t-shirts. Good for them! But the tone I see breeding in these "communities" is already incredibly telling.
- - - - -
As for the way the money of the campaign is supposed to be used: all you have to do is click through to the page, with the help of a non-AI-driven mouse click.
THIS false claim. You input a copyrighted image (and I do mean input: image-to-image is a feature where you can load an image for further manipulation) and then claim that the AI breaks the copyright. Seriously? What lie comes next?
And the false claim of broken copyright.
And the false claim of broken law.
And the false claim that AI steals everybody's work.
And the false claim that images are stored in the AI.
And the false claim that AI users are not artists. Followed by the false claim that AI users have no artistic background. Followed by the false claim that AI-generated content is not art.
And the false claim that every artist is against AI.
And the false claim that the end is near when AI is not stopped.
Your whole witch hunt against AI is based on lies. You still have not the slightest idea how these tools work and what they are capable of, but you judge them based on your self-invented, wrong assumptions.
What matters is
Law and common sense. This is what matters. Not your religious views. The first is not broken by AI, no matter how hard you try to prove the opposite. And the latter is the part that you are losing in this discussion more and more. What you do here is neo-Luddism, a wild, blind hate rage against the machines. Sorry for putting it so drastically.
I said it above already: hate AI, but hate it for the right reasons, and not by sticking a cardboard nose with a wart on it and calling it a witch.
You're mixing things up again. The AI-fying done on the drawing with the red background is of course a different thing altogether from the topic of data collection/scraping without consent for the fabrication of a latent diffusion model. Many would indeed agree that the "tool" is not responsible for what this AI-Jerk did (AI-fying a drawing originally done by someone else and calling the result their own), and indeed most of the blame is on the art thief here, and not on the software. It just so happens that this case of art theft happened to have been done using an unethically developed piece of tech.
But this is still relevant to the overall discussion IMHO for the following reason: it is an interesting window letting us peek into the minds of AI-Jerks. The making of this picture in and of itself is indeed absolutely not illegal if not sold commercially, and similarly the police isn't going to knock on anyone's door when they practice drawing by doing a master copy. But claiming the end result as an original piece is ... well, the most childish and socially inept thing ever, and this practice becoming widespread thanks to AI software (ethically developed or not) will just cause artists to keep their work to themselves, their family, their trusted friends, and coworkers.
Also, the fact that this is part of the discussion is a good thing, because it allows anyone involved to sharpen their arguments and correct their wording if needed. Indeed, this tech that can "photoify" a drawing isn't illegal in itself, and I would bet that many technically-oriented artists are interested in such tech feats (I certainly am - it's impressive tech). But reflecting on it and how it relates to the bigger topic is absolutely useful.
Lastly, while the two are obviously related, the AS protest isn't to be confused with the pieces of legislation that may or may not come out of all this. By that I mean: *even if* AI scraping of non-public-domain data is written into law as completely legal in some countries, artists will still want to have a place where they can exchange and work together without the background noise caused by AI Jerks. So even if nothing AS does is illegal, it has still already lost the trust of its userbase by remaining silent on the issue. Artists were just hoping that the platform they contributed so much to would side with them, but it didn't. It's completely normal for people to want their peace of mind back.
I don't think that I am mixing things up by pointing at the facts. This "AI jerk" could have also done this with Photoshop and a filter. And yet you use this false example to discredit AI, and furthermore to discredit all AI users as jerks. This is not how you should argue.
I can just repeat: hate AI, but hate it for a real and existing reason, and not by sticking a cardboard nose with a wart on it and calling it a witch.
It will, for example, surely cost some jobs, like every tool that raises productivity. That's what you can hate it for. But it will also create new ones, like every tool that raises productivity.
it still already lost the trust of its userbase
Another false claim. The user base is not just made up of AI haters. Most users simply don't care. It's not the ones who cry loudest who are right.
It's completely normal for people to want their peace of mind back.
Next false claim. You cannot want back what you never lost.
one in which the unauthorized use of imagery in a data set used for image generation would be illegal.
But why should it be illegal? That is still the crux of the matter ^^
It will definitely be interesting to see what interpretations a court will follow in the future. But months have gone by, and still no case :)
The reason it should be regulated is the unauthorized use of source material for profit.
Stable Diffusion is free and open source :)
That all it is, is a url repository of the source. That a "researcher" needs to download the data set for use, and then the copyright of that data set would apply since they aren't an image hosting service.
By wording it this way they have absolutely limited their accountability to usage of the data, and I'm certain the courts will hinge on this fact logically reasoning that the original copyrights would apply.
That's why I reasoned that any copyright through Laion only covers research. What AI services are doing isn't that so they have to consider individual copyrights of the images used to train their AI model.
Now if Laion had provided the AI model without defining copyright that would have made them accountable first, but legally its only the service providers that will be on the hook.
Any damages will likely be proportional to their revenue I'm guessing.
Tiles, you are repeatedly mixing up law (what gets written in the law books, and determines practices that are illegal), and court decisions. A new law doesn't require a previous court decision to go one way or another - it's the other way around. And cases in which a court decision then becomes the norm because the law is ambiguous is called a "precedent".
As a matter of fact, this is precisely why some defense lawyers are still interested in cases where they get to defend something immoral - because this can then outline a loophole in dire need of being patched later by a new law. If a murderer gets away freely thanks to a technicality, it becomes an incentive for new laws to be drafted, proposed and voted in.
The example of the French law about touched up models didn't require any court case either.
They would like be liable for unauthorized use of copyrighted images in training their AI models.
Penalty will likely be a destruction of their previous AI models and if they released it openly without considering liability then they will face financial penalties.
It's like raiding a publishing house and giving away 1st editions. True you wouldn't make any money but you'd still be financially liable for damages because of loss of revenue.
Thing is this isn't stealing from a single publisher, its plagiarising on a massive scale so not sure how they would calculate any damages, but the fallback would likely alienate any large-scale monetization of AI art by larger companies.
Let's for a moment assume that is the case, that the court follows your argumentation and all your fantasies come true. Stable Diffusion already allows training with your own material. And it has already kicked quite a few name prompts out of its weights. You can no longer type in the name of an artist from Artstation; it will not work anymore. The term Artstation has no effect in the new weights either. Which means that even if you put this part under (for me arbitrary and not justified) illegality, SD would already be on the legal side again.
And this means the dilemma remains. You can do what you want, AI will become part of our life and remain your competitor. That you want to make the work of the AI makers harder does not change the end result. It may take a bit of time now until the quality is back to where it was. SD 2.0 is weaker than 1.5 with the removed weights. But it's just a matter of time until it has caught up.
Yes, no one knows, and that too is beside the point. For instance one might believe that a country that goes full throttle into eugenics and human cloning would become populated by world-conquering super-humans within a few years, so why resist it, might as well do the same ; but that wouldn't prevent other countries from taking their own stance on human dignity and standing by it.
To people convinced that this kind of tech could harm the fabric of society (back to talking about image-generating AIs here, not superhumans :D ), even one year or one month or one week of gained time is worth it.
Well, for me it is no harm, but something fantastic. This is the point where we still heavily disagree. For me it is simply a fantastic new tool in the belt. It allows me to create what was impossible before. I create all my music videos with it at the moment for my upcoming new album. In weeks instead of years. You might know what time and manpower goes into a single music video in the traditional way. I would not want to miss the AI tool anymore.
I do wonder what they would do if a legal precedent hits: what would happen to previous training models, and would work derived from them be included in a repository that future AI models can derive from?
Usually when it's this complicated, it comes down to a settlement and some conditions.
I feel like the conditions would be not to use artists' work if they opt out. New models would be based on an entirely new data set within Creative Commons, or art that has been authorized with consent, and maybe some leniency on what happens to all the art that's already been generated, i.e. it can only be public domain, but you could claim some credit and commercial use depending on trademarks and copyright.
For example, I commission AI art to make a barbarian standing next to a TV, which I can use on a t-shirt, as can anyone else
Vs
I commission AI art to make Mickey Mouse standing next to a TV, which is going to screw me over if I sell it on a t-shirt; though if Disney uses the design that came from my prompt, doesn't credit me, profits from it and even claims ownership, I can do nothing.
Basically it comes down to the end-use case.
I wouldn't mind using ethically sourced AI art for ideation, but profiting from unregulated AI art is a copyright minefield.
A few questions?
@pior
300 3D heads, lmao, this is crazy, didn't know we were at this stage already
Edit: Okay, they are not 3D, they are just rendered by AI to look 3D
@Tiles : I would say that you explaining your actual intent with your use of this tech is far more conducive to discussion than any earlier Photoshop analogy or attempts at anthropomorphizing (right word ?) the tech.
So, this tech using/relying on millions of images without consent (regardless of this practice being currently illegal or not) is widely rejected by the very people who created said images, because this practice is in direct violation of the accepted consensus within communities of people who love working on their craft (ie the golden rule of respecting each other's work, and the fragile equilibrium consisting of having faith in other people to not be jerks). But at the same time, this tech allows you to do something that you could previously only ever dream of doing, that is to say having a set of music videos with imagery that fits "what you see in your mind's eye" and matching a high standard of polished rendering, similar to (previously) trending art posted on Artstation.
Isn't this a rather big moral dilemma ? And aren't you worried that this might actually distract your potential audience from the main star of the show, which I assume is the music itself ? I would bet that people who love your music are more interested in what you can create on your own regardless of it being highly visually polished or not. There is a certain beauty in weaknesses and errors, and some great strength to be found in sketches that take only minutes to create. I still want to believe that some people are able to see it.
Well, for me it is simply a tool like Photoshop :)
You still underestimate the effort needed to create something really good looking with AI. It is not enough to type in a few words. You still, and even more than in traditional art, need the artistic skills and knowledge to steer the AI where you want it to be. Else it will go completely wild. You need a very good idea of what you want to achieve, and need to know about styles and the right keywords. And setting everything up for offline work is still highly techy. That's why I still scratch my head when somebody says that AI users are no artists. They are. They decide what end result they want to produce. It is quite an effort. And you can clearly see if an image is a beginner's work or somebody has an idea about how AI works.
Short side trip to my music videos. What I use is an offline version of this fork here: https://github.com/lmmx/deforum-stable-diffusion
Here you can find some examples. And yes, the word Artstation was used in the prompts, which I personally avoid: https://replicate.com/deforum/deforum_stable_diffusion/examples
What is not covered on the examples page is video-to-video, which allows you to do quite a few funny things with the original content.
When you want to use it offline, I suggest installing the automatic1111 web UI stable diffusion fork and then using the deforum offline addon. Makes life much easier. Google will lead you.
This kind of movie is simply impossible to achieve in a traditional way. And it already has a big fanbase. The first music video made with this tech currently has over a million views on Youtube after two months. I hope to get my videos out of the door as soon as possible to participate at least a little bit in the hype :)
Which answers your question about the users. They simply don't care how something is achieved. When it's wow content, then they go wow.
It is both, the music and the art. Good music needs good graphics to become a success. And vice versa. The Thriller video was what made Michael Jackson's song so famous and big. It was revolutionary at its time. And the kids nowadays cannot be fascinated by a still in a video with some music bars going up and down. You need graphical content. And the more extraordinary, the better.
So, this tech using/relying on millions of images without consent
This is the point where we go in circles. You don't need consent to look at publicly available images :)
is widely rejected by the very people who created said images
Is this really the case? Is it really nearly everybody? I see the protest at Artstation too. But I also see that quite a few traditional artists use it happily. Like me. The userbase of AI is really big already. Which stands in contradiction to the protest of "everybody". Since it is mainly artists who use AI. Somebody without any relation to art has no need for it. Artstation is also not the only source for LAION. And again, everything is legal. You can be sure that the lawyers of these companies do a good job of making sure of it.
And we are again at the point where you claim a copyright breach where there is none. The images were used for training. Not a single pixel of these images makes it into the end result of an AI image. This is technically impossible.
moral dilemma
Our society has long decided that it is no dilemma. We use robots, we use computers, we use cars instead of horses. We use everything that makes our life easier. And the AI journey has just started. There are already tools in the making to create complete videos from text prompts. I cannot wait to get my hands on them. https://makeavideo.studio/
And again, as an AI user who now knows the limits of AI, I can assure you that there is still a need for traditional art for a long time. AI will not replace everything. Game content will for a long time still require somebody with the right skills instead of a text prompt. What it possibly speeds up is the concept part. But also here, you need somebody who decides if the concept fits and is even achievable. You still cannot let a janitor do this job.
I do wonder what they would do if a legal precedent hits: what would happen to previous training models, and would work derived from them be included in a repository that future AI models can derive from?
As I said before, Stable Diffusion did already react. They will simply adapt. AI ... uh ... will find a way :)
SD has pulled the material in question out of the AI weights. Not because it was illegal, but because they don't want to be the bad boys. They listened to the concerns and understood them. DALL-E and Midjourney still work with the material in question.
No. That's technically impossible. There is no image data in the AI. AI uses a so-called weight file. A weight is the brain and the memory of the AI. There it is defined what makes a chair, a table and so on. Such a weight for SD is between 4 and 7 GB. It needs to fit into the RAM of a graphics card. One training set of LAION is around 10 terabytes of data. And that's just one set. There is simply no compression algorithm that can compress images that small. So the answer is yes, the AI has understood. That's why you need to train it. It is like a little child that has learned what a chair is and what a table is.
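To put rough numbers on that, using only the figures above: fitting roughly 10 terabytes of source images into a 4-7 GB weight file would imply a compression ratio somewhere around 1,500:1 to 2,500:1, far beyond what any image compression scheme, lossless or lossy, can achieve. So whatever the weights encode, it is not the images themselves.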
In case you are interested, here is the paper. But beware, it is no easy read ^^ : https://ommer-lab.com/research/latent-diffusion-models/
That's a limit of the AI. It creates a noise image, and from there it tries to get close to the desired result. And it does this every time from scratch. You can use the same seed and keywords to create identical images. But when you use different seeds or different keywords, then you will always get a different result. What you cannot do is things like rotating the content. There is no control over the camera, since there is none.
See point two. You might distinguish between still images and movies here. Most users create stills. Movies are a completely different and specialized chapter with their very own limits and needed tricks. As mentioned, I use the deforum fork to create movies. It works by increasing the seed by one step over time - a hack to create these "falling into the image" animations.
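To make those two points concrete, here is a minimal sketch using the Hugging Face diffusers library rather than the actual deforum or Automatic1111 code; the model id, prompt and frame count are placeholder assumptions:

```python
# Sketch: fixed seed -> reproducible image; bumping the seed per frame mimics
# the "increase the seed over time" idea described above (a simplification of
# what the deforum fork actually does).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a ruined castle on a cliff, concept art"  # hypothetical prompt
seed = 1234

# Same seed + same prompt should give the same image on the same setup.
img_a = pipe(prompt, generator=torch.Generator("cuda").manual_seed(seed)).images[0]
img_b = pipe(prompt, generator=torch.Generator("cuda").manual_seed(seed)).images[0]

# Frame-by-frame: increment the seed once per frame, then stitch into a video.
frames = []
for frame_idx in range(24):
    gen = torch.Generator("cuda").manual_seed(seed + frame_idx)
    frames.append(pipe(prompt, generator=gen).images[0])
```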
Our society has long decided that it is no dilemma. We use robots, we use computers, we use cars instead of horses. We use everything that makes our life easier.
This concerns me. Machine learning and AI have been in the mainstream for the better part of a decade now, and I have yet to see really serious improvements to our material quality of life come from them. Certainly not on the same revolutionary scale as robots, computers or cars. Maybe it's still early days, but every new wave of progress I see with AI still seems to be stuck at being a 'toy'. These recent products are still ultimately toys. I genuinely don't think it's worth the trouble to our society just to let us... speak to our cars and phones? Generate pastiche images of our favourite superheroes?
See I'm not sure that we have decided it's 'no dilemma'. We use robots, yes, but no-one asked the labourers they replaced. We use cars, yes, but no one asked the communities we carved up for the highways and roads cars need. People aren't too happy about some of the social consequences the internet (via computerisation) has had. I don't think these technologies were mistakes in the grand scheme of things, but it's hardly true that society was or is unanimous about their adoption. Indeed, concerning cars many places are reversing some of the planning decisions made to accommodate them.
I'd be more hopeful for AI if I saw commercialisation in the places that matter like medicine. It hasn't seemed to be able to leave the university lab in that respect. The world doesn't need more advanced ways to target advertising or generate media.
Well, you always have the choice not to use it. But do you really want to ride a horse instead of driving a car? And yikes, no smartphone anymore? :D
I don't use it, but that's not my point :D
I still cannot understand why somebody would call it AI "art" in the first place, other than for cynical manipulation of social narratives.
For sociopaths everything can be a "tool", even another person. No wonder they are so loud right now in defending AI with rather weak arguments, e.g. that it is fun for THEM, the reward is so FAST, oh, it's such a cool toy, etc.
Ethics is something beyond their understanding.
They simply cannot grasp that many CG artists aren't against AI in general, but against being treated unfairly.
Oh, I fully understand that it feels unfair that you might even be helping to train the competitor that might replace you. This was and is never the question. This is what Pior means with the ethical dilemma. But I heavily dislike the neo-Luddite behaviour that tries to kill AI, treats every AI artist as a clueless moron who steals your job, and sees AI as pure evil. Completely undifferentiated. As mentioned, I use it for videos that weren't possible before.
Stable Diffusion tries to go the middle way. They have listened to the concerns, and have reacted. The end result will be the same though. You can't stop the future.
@Phiona This is actually an incredibly relevant remark IMHO. When asked what they do for a living or even just as hobby, many artists would not even call themselves artists, but rather go straight to describe what they love to do, often in a slight self-derogatory/humble way even. For instance someone who draws manga might not say "I am a Manga Artist" but rather say "I love drawing manga". And hopefully this can lead to a fun exchange of shared interests ... if the person asking the question is not a jerk jumping on any occasion to be snarky. And a creature concept artist working on the production of hugely popular movie franchises might not answer the question by "I am an Artist Working on Concept Art For Million Dollar Movies", but would perhaps say "I dig drawing monsters !".
Now of course no one is the arbiter of what is art and what isn't. But the fact that AI bros just love throwing the word around is indeed incredibly telling. I would agree with your statement that this could indeed be qualified as a cynical manipulation of social narratives.
@Tiles
the future is almost certainly the death of the entire human race and nearly all other forms of life, so it kinda needs to be stopped.
No, I am not suggesting that AI art is what kills us. And if anybody wants to argue against what is the unanimous verdict of virtually every scientific community, you'll have to demonstrate that you've actually read something about it before I say much more about that. But it leads to the point I want to make: bigger picture, the non-forward-thinking, non-learning-from-history mindset that blindly assumes technology = progress is the source of all of our problems.
It's like a basic law of the universe: nothing new is created, you can only rearrange what was already there. So every action has some reaction. You might have an easier time getting the art you need for your music videos now - but knowing how things work, you have to ask, "what is the trade-off?"
A simpleton only looks into that question as far as they can see immediate effects on themselves. Somebody who actually solves problems before the disaster tries to figure out where the long cause-and-effect chain will end up before they do the thing.
None of us can point to something precise, because the equation is too big. But it seems pretty clear to me, and it seems like most professional artists agree - it's not good at all for people who made a living by making art.
the future is almost certainly the death of the entire human race and nearly all other forms of life, so it kinda needs to be stopped.
True. But I prefer to nevertheless have some fun before this happens :)
it's not good at all for people who made a living by making art.
And "people who make a living from art" explicitly excludes everybody who uses AI to make a living, right? Reminds me of the no-true-Scotsman argument ^^
You will have winners and losers with the new tool. As always.
@Tiles
well yeah winners and losers is the thing.
People who had been winning by merit of their own hard-won skills now become the losers, and people who've done nothing are now allowed to be winners. But they've sacrificed nothing. That's why the word fair keeps coming up.
So, if you were not able to afford to pay an artist to make art before, but now you can get the same thing for free because you used a machine which stole their art and rearranged it... what does that make you? Empowered? Maybe, but it is not dissimilar to the way I could become empowered if I stole a rich old lady's purse and suddenly had a big investment for my next game project, lol. At least in that case, I'd need to have the physical courage to confront her, so there is, like, the tiniest nugget of something that might be respected.
But just pushing buttons on a computer to steal from the collective pool of art and thinking that makes you anything is just silly. And it won't give you special advantage because everybody else can do the same thing. So you never escape the need to be special somehow - only thing that has really changed is that people who were special are now robbed of their livelihood.
People who had been winning by merit of their own hard-won skills now become the losers
Why? What stops these people from using AI too? And to repeat it again, it's not that you can let your cleaning woman do the job now. It's the opposite: you need even more knowledge about art than before. You cannot simply start to draw. I've googled my ass off the last three months. And game content will for a long time not be done by AI. Your job as a game artist is safe.
Only those who cannot adapt to new situations become losers. It is not written in stone. And hate is definitely the wrong way to adapt to new situations.
stole their art, rearranged it...
Could you please finally stop making these false claims? No artwork is stolen, not a single copyright is broken by AI. And it is also not simply rearranged art. That's not how it works.
And to repeat it for the dozenth time, Stable Diffusion has already removed the Artstation weights. The material you wrongly claim was "stolen" is no longer in use there. But this changes exactly nothing about the situation.
But just pushing buttons on a computer to steal from the collective pool of art and thinking that makes you anything is just silly.
Pushing a button is what photographers have done for decades. Pushing a button is exactly what everybody does who renders an image. Pushing a button is what everybody does who bakes normals and textures. Pushing a button has simply been part of the art show for decades. And you "steal" your knowledge from the people who invented this software, their knowledge and skills. I am pretty sure that even you learned how to draw, model, texture etc. from other artists, and did not invent the wheel from scratch. So who are you to claim that somebody steals your work by looking at your images and learning how to draw?
What is silly is to make false claims without understanding the issue. Use AI for a while, and you will pretty quickly understand the limits, and understand why the majority of artists have simply nothing to fear. The AI witch-hunt is irrational. It usually takes no more than half an hour to realize what is possible and what is not.
Well, to be fair, artists hired in game/entertainment production jobs are likely not going to lose their jobs to this imho, simply because ADs are clever enough to know that the job of an artist, in house or freelance, isn't just to create "pretty pictures". Unfortunately, similarly to how Kotaku readers and urinalists have no clue about how games are made, AI bros have no clue about the reality of the job as an illustrator or designer and what it takes to work on a project as a team. And this is not a demeaning comment in any way - *no one* can know the reality of the job if they never got their foot in the door. I guess the best case scenario is for the euphoria to last about as long as the braindead "NFT" and "wEbThReE mEtAvErSe" trends. A year or two perhaps ? Or perhaps that's the new normal - people broadcasting their narcissism more than ever before.
So as far as I am concerned the biggest problem is simply the straight up insult of the scraping process. But insulting someone is not illegal ; and it is actually quite useful that it isn't, because it helps telling apart the jerks from the people one wants to interact with.
At the end of the day the real saddening thing is to realize how much people simply lack basic empathy. And even someone keenly aware of the toxicity of social media (even before this) can't filter it out this time, because of the way it explosively infiltrated a previously safe haven like Artstation.
From there I am starting to feel like online art communities shrinking back down to, say, mid 2000s scale would be a very good thing. I'd love to see people build their own websites again FFS. Contrary to popular belief, artists do not need Artstation to get noticed or get a job. Its value consisted only of it being a very clean standardized design.
Unfortunately, similarly to how Kotaku readers and urinalists have no clue about how games are made, AI bros have no clue about the reality of the job as an illustrator or designer and what it takes to work on a project as a team.
You are currently talking with one of these AI bros who completely does not fit this pattern. I am, or rather was, a game developer before I decided to stop digging for gold and started selling shovels with Bforartists. Just saying ;)
But yeah, I see you already denying again that I am an artist at all since I use AI and such. But the fact is that most of the AI people I know are highly skilled artists with years of background. AI is an artist's tool.
So at the end of the day the real saddening thing is to realize how much people simply lack empathy.
To repeat, it is not that I lack empathy or do not understand where you all are coming from. I just see a big chance where you see the end of the world.
From there I am starting to feel like online art communities shrinking back down to, say, early 2000s scale would be a very good thing. I'd love to see people build their own websites again FFS.
More bubbles? Facebook has them already. And not for good :/
@Tiles
that seems like a lot of wordplay to me.
I think you just want to use the stuff and not think about it, which is fine. But I believe there is zero chance you will convince any artist that AI art isn't ripping them off.
Yes, I know, since most are not willing to accept the truth. Claiming that AI is stealing in order to discredit the competitor is so much simpler.
Well, maybe we have simply reached the end of the discussion. You will never convince me of your point of view, since I know what AI does and is and how it works. And I have seen too many tools come and go over the years. For me, your faction is the one that has stopped thinking. Progress was and is one of the driving forces in art.
Anyways. It was an interesting discussion for sure. Thanks :)
The most interesting example of this euphoria is in a group that was linked in another post, which focuses on Afrofuturism for the Black community.
https://www.facebook.com/groups/33445301753/?hoisted_section_header_type=recently_seen&multi_permalinks=10162551203046754
So all the art there, until the advent of AI, usually revolved around showing Black identity and Black people in a positive light, and with the inclusion of AI you have a lot of Black users proudly proclaiming themselves Black Artists disrupting the million-dollar gatekeeping realm of White artists.
Many of these users fall into the category of young millennials/Gen Z, homemakers and senior citizens.
If you try to tell them what the issue is with AI and whether they indeed are artists, they don't understand, they fight and block you, and they come across as utterly ignorant narcissists.
To them this is the way to riches. So in a sense I feel that this is the group that AI art caters to: primarily casual users who feel elated that they are now artists and stand to make millions selling t-shirts and art prints, and branding themselves through cool avatars on Instagram. A good many of them actually feel that they can sell their AI-generated designs of Black Panther and Iron Man to Disney for millions.
The fact that Disney and DC can simply appropriate their Black superhero designs with no credit or payment, and even sue them for copyright infringement, is something they are totally clueless about.
The strangest comments are from users saying that they appreciate all the Black people art, but resent that it's being made using white people's technology, so it's interesting that despite all the advancement they are still locked in their systemic racism bubble.
Hmm, looks like theft to me.
We definitely need to separate the case where a specific image is used as an input vs. generating from just noise and a prompt. It can be used as a filter if the user wants.
Lol, the above example of art theft is especially interesting because it doesn't even require the art on the left to be part of the training set - the AI Bro basically did the equivalent of a Photoshop paintover and claimed it as his own. This is basically the most childish behavior possible. It's not even cynical, it's just plain stupid :D
That said it is a great preview of what will happen once AIJerks get their hands even on the most ethically "trained" model.
It's just funny to see how good "AI art" is at revealing the worst facets of people's personalities. Social media "likes" really are a hell of a drug.
this seems like a good time in our sci-fi story where someone makes a watermark that clobbers an AI to produce the same image regardless of its prompt.
my god, they've already automated furry art. There goes my plan B
Hehe :D
A new report from the trenches. One new microtrend amongst AI Bros seems to be "Niji Journey", a generator trained on anime/east asian images more specifically. Here is a tutorial page on how to use it, directly from "AItuts". The wording is clear as day.
https://aituts.com/how-to-use-niji-journey/
The more I read about this, the more I feel sad about ... the users themselves. This really is consumerism/instant gratification taken to the next level.
I ran out of credits on dreamstudio. Is there a free version of this instant gratification machine?
but check this out, I'm a 2d artist now! amazeballs poggers! Also a fantastic environment artist apparently, who knew I had such latent talent just waiting to burst into the public domain!
It does make you wonder how many of these users would have actually commissioned an artist to create art for them in the first place.
I really do see this as a fad that will go the way of NFTs: oversaturation followed by rollback and liquidation.
I mean people that are artistic will likely use AI to create something meaningful, but the more casual users will likely end up in their own echo chambers and die out.
For instance, I don't see Afrofuturist AI art solving systemic racism, but it does give many Black users some motivation to keep going in the belief that they are finally going to become multimillionaire artists. So some good may come of this.
I'd seen a user use Midjourney to depict social phobias which was an interesting use of the medium,
https://www.facebook.com/groups/officialmidjourney/permalink/472276801730556/
That said, I did get banned from one AI group for explaining the legalities, when one of the users wanted to coin a new term for AI art and I suggested "Plagiarisography", given the non-consensual nature of the source material lol.
Right, why commission an artist? In fact, why not accept commissions in their stead?
When a market is saturated it loses its value and profitability.
People who are artistic actually don't need to use this software to make something meaningful. In fact, creating art is a form of meditation; it's how they become artistic and develop creative skills. Art comes from time. Using software like this prevents them from improving.
The Black community and their artists are doing fine without AI. They have a deep cultural art history and a supportive base who love it. Their community has many talented artists to champion and inspire others from within it.
I did notice some AI art users looking to learn Photoshop after using AI (likely to clean up those funky hands), so in a way it's motivating people to learn.
The issue is only when large companies start using it, though I don't see what use case they would have for art generated this way.
So far the use cases I'm seeing is T-shirts, posters and art for self published book covers and RPG guides.
Sure, there are the crackpots selling 300+ head variants as sculpting reference, but other than that I don't see what it really accomplishes for the end user in the commercial space. It's just more ideation and reference for artists who can do something with it.
Geez :/
I wanted to stop discussing here since it became a bit too heated for my taste. But this fake news draws me back in.
I am pretty sure that this image was created to provoke this "theft" look by using the image-to-image method, and not by letting the AI create it from scratch. Image-to-image allows you to input a source image, and when you use the lowest strength value you can even output an identical, unmodified image.
It's like, hey, look, I copied and pasted a Super Mario character with Photoshop and painted a few strokes across it, and it still looks like a Super Mario character! Photoshop is stealing artwork !!!
Video about image2image: https://www.youtube.com/watch?v=VdDhGjkKD1A
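For anyone curious, here is a minimal sketch of that image-to-image idea using the Hugging Face diffusers library rather than a specific web UI; the model id, file names and strength values are placeholder assumptions, and parameter names can vary slightly between diffusers versions. A low strength keeps the input image almost untouched, while a high strength lets the prompt take over.

```python
# Minimal img2img sketch with diffusers (model id, input file and prompt assumed).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("source_drawing.png").convert("RGB").resize((512, 512))
prompt = "photorealistic portrait, studio lighting"  # hypothetical prompt

# strength controls how much noise is added to the input before denoising:
# near 0.0 -> output stays almost identical to the source image,
# near 1.0 -> the source is mostly ignored and the prompt dominates.
barely_changed = pipe(prompt=prompt, image=init_image, strength=0.1).images[0]
heavily_changed = pipe(prompt=prompt, image=init_image, strength=0.8).images[0]

barely_changed.save("img2img_low_strength.png")
heavily_changed.save("img2img_high_strength.png")
```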
If you knew the AI tools, you would also know this fact. But you don't even know your enemy. Instead there is one false claim after another. The whole "against AI" argumentation in this thread has so far been based on lies and ignorance. So easily disprovable, point by point. This simply makes me sad.
Friends, hate AI, but please for the right reasons. And not because of fakes, lies and false claims. Hate, rage and pitchforks should not replace knowledge and common sense.
The offline version of Stable Diffusion is free, and then you can create unlimited images. You just need a good graphics card with, I would say, a minimum of 8 GB of graphics RAM (it should also work with 6). I suggest using the Automatic1111 fork. It then runs in your browser. It requires some technical skills to set it all up, but it is one of the most comfortable solutions. And the required steps are well described.
https://github.com/AUTOMATIC1111/stable-diffusion-webui
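If the Automatic1111 setup feels too involved, a bare-bones offline script with the diffusers library is another way to get unlimited local generations. This is just a sketch under the assumption of a 6-8 GB card, using half precision and attention slicing to keep VRAM usage down; the model id and prompt are placeholders.

```python
# Bare-bones offline text-to-image sketch with diffusers (assumed model id/prompt).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,      # half precision roughly halves VRAM usage
).to("cuda")
pipe.enable_attention_slicing()     # trades a little speed for a smaller VRAM peak

prompt = "a lighthouse in a storm, dramatic lighting"  # hypothetical prompt
for i in range(4):                  # generate as many as you like, it's all local
    image = pipe(prompt).images[0]
    image.save(f"out_{i:02d}.png")
```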
Have you seen this campaign: https://gofund.me/2df3dc07 ?
I think they've described pretty accurately what's wrong with today's AI-generated images.
They make the same false claims as in this thread.
But just curious, what happens with this money then?
What false claim ? If you mean that the description given of the process is not technically detailed or accurate enough for you, then this is completely irrelevant. Similarly to how a legislation proposal about human cloning wouldn't describe the whole process like a scientific paper would.
What matters is that in one way or another the process uses images as an input without consent. Put differently : the resulting regulation could very well cover latent diffusion as it is indeed being done by MJ and the likes, just as well as any "micro stitching" - even if such "micro stitching" is indeed not the way the tech being discussed here operates. The fine discussions happen later, when experts are brought in to dive into details as needed anyways. And in the case of american politics, lobbies get involved - which in and of itself is a whole other problem of course, but this is relevant here because that's what this CF campaign is for.
Similarly, it really doesn't matter at all that the process works "just like human learning just faster", or that the tool is "just like photoshop", so this too is completely beside the point. Or rather : these arguments could very well be brought up during the debates, of course. But they will be weighed against the argument stating that this violates the earlier consensus that artists were operating under, by openly sharing their artwork for people's respectful appreciation. And once such social consensus breaks down (for instance because of disruptive tech), then new laws get proposed and debated.
In short, IMHO this actually goes way beyond copyright and deep into ethics. It is always extremely sad when ethics and human decency need to be debated and put into law, but that's where we are now.
- - - - -
And if the result of all this is that AI generators go completely unregulated ... then artists will simply not post their stuff anymore, and art communities interested in self-improvement and the pleasure of putting time into doing things for others will just move underground, use encryption, and so on. Art directors will only hire people they personally know about, because AI Jerks will be everywhere trying to get hired for writing prompts (and will probably claim victimhood if they don't get what they want). And perhaps they'll go on making their own AI-made movies using AI-generated scripts of course, and their own AI-designed games and AI-designed superhero tshirts. Good for them ! But the tone I see breeding in these "communities" is already incredibly telling.
- - - - -
As for the way the money of the campaign is supposed to be used : all you have to do is to click on the page with the help of a non-AI-driven mouseclick.
What false claim ?
THIS false claim. You input (and I do mean input - image-to-image is a feature where you can load an image for further manipulation) a copyrighted image and then claim that the AI breaks the copyright. Seriously? What lie comes next?
And the false claim of broken copyright.
And the false claim of broken law.
And the false claim that AI steals everybody's work.
And the false claim that images are stored in the AI.
And the false claim that AI users are no artists. Followed by the false claim that AI users have no artistic background. Followed by the false claim that AI-generated content is not art.
And the false claim that every artist is against AI.
And the false claim that the end is near when AI is not stopped.
Your whole witch-hunt against AI is based on lies. You still have not the slightest idea how these tools work and what they are capable of. But you judge them, based on your self-invented wrong assumptions.
What matters is
Law and common sense. This is what matters. Not your religious views. The first is not broken by AI, no matter how hard you try to prove the opposite. And the latter is the part that you are losing in this discussion more and more. What you do here is neo-Luddism. A wild, blind rage against the machines. Sorry for putting it so drastically.
I said it above already. Hate AI, but hate it for the right reason. And not by sticking a cardboard nose with a wart on it and calling it a witch.
I have to take back my words that game artists' work is not in danger yet. AI-generated 3D models are on their way <3
Well, sort of ^^
https://lumalabs.ai/
You're mixing things up again. The AI-fying done on the drawing with the red background is of course a different thing altogether from the topic of data collection/scraping without consent for the fabrication of a latent diffusion model. Many would indeed agree that the "tool" is not responsible for what this AI-Jerk did (AI-fying a drawing originally done by someone else and calling the result their own), and indeed most of the blame is on the art thief here, and not on the software. It just so happens that this case of art theft happened to have been done using an unethically developed piece of tech.
But this is still relevant to the overall discussion IMHO for the following reason : it is an interesting window letting us peek into the minds of AI-Jerks. The making of this picture in and of itself is indeed absolutely not illegal if not sold commercially, and similarly the police isn't going to knock on anyone's door when they practice drawing by doing a master copy. But claiming the end result as an original piece is ... well, the most childish and socially inept thing ever, and this practice becoming widespread thanks to AI software (ethically developed or not) will just cause artists to keep their work to themselves, their family, their trusted friends, and coworkers.
Also, the fact that this is part of the discussion is a good thing because it allows anyone involved to sharpen their arguments and correct their wordings if needed. Indeed, this tech that can "photoify" a drawing isn't illegal in itself, and I would bet that many technically oriented artists are interested in such tech feats (I certainly am - it's impressive tech). But reflecting on it and how it relates to the bigger topic is absolutely useful.
Lastly, while the two are obviously related, the AS protest isn't to be confused with the pieces of legislation that may or may not come out of all this. By that I mean : *even if* AI scraping of non-public-domain data is written into law as completely legal in some countries, artists will still want to have a place where they can exchange and work together without the background noise caused by AI Jerks. So even if nothing AS does is illegal, it still already lost the trust of its userbase by remaining silent on the issue. Artists were just hoping that the platform they contributed so much to would side with them, but it didn't. It's completely normal for people to want their peace of mind back.
I don't think that I am mixing things up by pointing at the facts. This "AI Jerk" could have also done this with Photoshop and a filter. And yet you use this false example to discredit AI, and furthermore to discredit all AI users as jerks. This is not how you should discuss.
I can just repeat: hate AI, but hate it for a real and existing reason. And not by sticking a cardboard nose with a wart on it and calling it a witch.
It will, for example, for sure cost some jobs, like every tool that raises productivity. That's what you can hate it for. But it will also create new ones. Like every tool that raises productivity.
it still already lost the trust of its userbase
Another false claim. The user base is not just made of AI haters. Most users simply don't care. It's not the ones who cry loudest that are right.
It's completely normal for people to want their peace of mind back.
Next false claim. You cannot want back what you never lost.
We go in circles ...