The problem with it is that they act as if they are NFT bros, rambling about democratizing a genre that was gatekept by very experienced artists who have to draw EVERY. SINGLE. FRAME by hand.
They are no better than the NFT bros: they do not talk about gatekeeping by corporations, but by talented, experienced artists creating those pieces. And then they go on to show how they can do it, using their experience, skillset, and tools to sell their subscriptions.
Admittedly a lot cheaper than classical animators doing it frame by frame, but that was not the point they made in their video.
Mind you, Corridor Digital also trained the generator on stolen frames from a Vampire Hunter D movie, and their response to the backlash was to passive-aggressively 'like' posts defending image generators on Twitter instead of engaging in any sort of dialogue.
And then they made a tongue-in-cheek follow-up where they used generators again, still without responding at all to the backlash.
People who say AI art "decentralizes" art, and that artists have been "gatekeeping" are pretty stupid imo. Art was never centralized in the first place, and if spending years of your life to make a skill marketable is 'gatekeeping' (which it is not), then basically anyone with a marketable skill is gatekeeping.
Not hard when simply recording your strokes and movements on the operating system of your choice is probably the "back door" to stealing how to achieve aspects of this "AI". (Terminally online and all that, right? Never unplug your routers to do work, right? And when working in a studio, do they stay online always?) Not hard to see what is "assisting" this thing.
Except there is no theft. AI is not able to steal; that's technically impossible. You cannot compress 240 terabytes into 8 GB of VRAM, or even download it to an average PC. Not a single pixel of the originals is in the AI weights. And that would be just one training set; that's the size of the LAION-5B dataset. And there is also nothing copyright-relevant in the training sets. Training sets like LAION are link collections. There is a reason why there are just a few copyright lawsuits around, and why none of them has succeeded so far.
What is in the AI is the concept. What makes an apple, how this or that style looks, lighting, the golden ratio, and so on. The craft part. And to learn and to know this part is not forbidden. That's how artists have worked for eons. They look at the material of all the famous artists before them, learn, then reuse the concepts and techniques and styles that they have learned. And don't tell me that you did not have a look at H.R. Giger before making your game props. Of course, you can also create copyright-violating material with AI. Like you can with Photoshop, or even just with a pencil. Shall we really forbid pencils then?
The real problem is the ethical dilemma of training your direct competitor with your work, which most probably will make you obsolete in the long run. AI is well on its way to becoming a super-artist that knows all styles, including yours. And yes, it's a big dilemma. I completely understand where you are coming from. It's not okay from your point of view. That's why you cry theft. And I understand it.
But AI will not go away. You cannot stop it, like you could not stop cars or book printing or wheels or any other important invention in history. All these inventions have cost so many jobs in the past. But at the same time they generated so many more. And I see the same happening with AI at the moment. The world becomes once more a bit richer and more varied. Not poorer.
AI is part of our life now. And permanent change has always been part of working as an artist. The permanent search for optimization, for better and faster results. Sculpting, remeshing, landscape generators, and so on. Art has always adapted to new tech. Just remember, it wasn't that long ago that there were no computers that could do 3D graphics. Or code.
It is 100% theft, because the models are trained on datasets full of stolen, scraped images. The entire business model of these companies is based on the illegal use of millions of scraped images.
"There is a reason why there are just a few copyright lawsuits around."
Because they all do it, and don't want someone else to fuck them over... There are reasons why YouTube lets AIs crawl their users' data: because they also do it and use it themselves.
There are also reasons why LAION isn't meant for commercial use, and yet everyone uses it this way regardless...
It has a lot of money pumped into it because billionaires see it as the next big step toward freeing themselves from the need for labor, so they are in an arms race to develop it fast to try to form a monopoly on it.
If you know anything about history, it goes like this: whoever gets the advantage in a new market first wins.
The amount of firepower your enemy has does not make them the good guys. America can easily bomb Afghanistan and Iraq; are we therefore the good guys? The US government killed nearly all of the Native Americans and put them on reservations; therefore we are the good guys? Hitler nearly subdued the entirety of Europe, and not too many people are saying he is the good guy. And Genghis Khan crushed the entire known world; was he a good guy?
Labor-saving devices do not make the world richer. They come from the plunder of places you just happen to not live in. They make a minority of people richer, and that group is destined to become smaller and smaller because resources are finite.
In this case part of the plunder was from creative people, and of course the raw material has always come from the exploitation of those places you were lucky not to be born in. You know, places where people do all the real, actual work so that spoiled soft babies in Western countries can do stupid shit like make digital art and feel special for it.
Recognizing that your team is losing and deciding to play for the other team is a way a person might survive, but it also makes you the most contemptible type of character there can be.
00:00 Introduction
07:35 "The AI just collects references from the internet the same way artists do."
15:26 "AI is just a new tool."
23:12 "Artists will just need to focus on telling stories."
29:17 "These companies cannot manipulate our access to these systems because of open-source projects."
31:32 "Don't people do the same thing with references as the AIs do?"
34:44 "The AI can never replace the soul of an artist."
36:56 The Dance Diffusion Problem
41:41 Conclusion
No amount of truth seems to sway fans like Tiles, but the arguments and answers posted in this video were impressive to me.
Which truth do you mean? Yours? The yet again completely false assumptions and conclusions in this video, which fit so perfectly with your point of view?
Eric, I understand that you hate AI with a passion. But I am no "fan". I am neither fanatical nor militant. I am just a user and a messenger. And as a developer I stand with both legs in reality. I prefer to rely on facts instead of "truth".
You know what makes me really sad here? Polycount was once a place where artists met and discussed new techniques and ideas. Sometimes controversially; that's human. But afterwards everything was fine again, and we still helped each other. Now it is a place of AI haters. I watched in horror as people were repeatedly driven out of the AI discussion and off the site completely. For me Polycount has lost its relevance. I'll be quiet again now.
While it's true that in this thread everything is still civilized, thank you for that, this is not what I meant. May I point your attention to the "other thread". Have you really not noticed that only people who are against AI have been posting over there for months? It's an echo chamber. It's the same thread where I was called names for pointing one time too often at the facts, and went quiet then.
Not long ago in this other thread somebody was happily removed by the mods for being pro-AI, while people laughed at him. That's not what I call a discussion. And it was also not the first time that somebody was silenced over there. Including myself.
I feel highly uncomfortable here nowadays. One wrong move and I am pretty sure the same happens to me too.
See, that's what I mean with facts versus truth. For Eric and you it's the truth that the datasets are stolen. But the fact is, the courts see it differently so far. And this is what counts.
Might be because it seems like the pro-AI people lack conviction and good arguments. I've spent a total of maybe a few hours reading on the subject and have pretty much no dog in this fight; it shouldn't be difficult to prove me wrong on anything. In fact, half of the common AI-bro arguments I even agree with. It is the same as the way a person works... in principle. But none of that matters. The core issue is theft. And the core purpose of the tool is not Sam Altman doing you a personal favor so you can finally realize your personal project; it is so that his companies can stop hiring people and make more profit. That is it. It is an attack in a class war you seem to want to believe doesn't exist.
I think the person who got banned was a day-one account who did nothing but make weird, inflammatory comments with no substance.
You keep saying it is not a fact, but not why. Is the only reason you think that because some courts have ruled in favor of OpenAI and similar companies?
If laws change, that could make the word "illegal" technically incorrect to use at some point, but theft is theft no matter what the king of the land at the time calls it. Then we can just say "unethical", which means that the majority of reasonable people recognize it is an anti-social, destructive behavior.
[quote]might be because it seems like the pro AI people lack conviction and good arguments.[/quote]
Is that so? ^^ Forgive me, but I will not answer anymore. We go in circles, trapped between truth and facts. But this guy had some valid points before he was removed.
"See, that's what I mean with facts versus truth. For Eric and you it's the truth that the datasets are stolen. But the fact is, the courts see it differently so far. And this is what counts."
Yeah, I mean it's a handy situation: the "for research" database is like "we are not liable for any of this", and all the other players are also like "welp, it's their database, we are not liable for any of this".
Everyone pointing fingers at others, or at least away.
Same for users: if the output is somewhat decent, it's their work and might even need to be protected; if it's a failure, the AI is at fault. Sweet mindsets, all around.
Tiles, I don't hate AI. However, I still haven't found any use for AI in developing a game, and I've looked hard. As a game dev I maintain my stance that it's not very useful outside of storyboarding, one-off pieces (not particularly useful in games), and sometimes generating alpha maps for use in ZBrush. For instance, here are some mutated fingerprints that belong to nobody. At the same time, it's easy enough to find fingerprint alpha packs online.
Dunno why you have to sound so butthurt that people aren't keen on getting their work stolen; Polycount is still very relevant for artists. In any case, I'd rather maintain artists' copyright protection (everyone's copyright protection, for that matter) over having a somewhat gimmicky technology that doesn't really serve anyone.
The idea that AI is doing the same as a person is a stretch, because it is using digital copies of existing art. That is like arguing that it is OK for me to take and sell photos of your work because I combined them with photos of other people's work.
"You cannot compress 240 terabytes into 8 GB of VRAM, or even download it to an average PC. Not a single pixel of the originals is in the AI weights."
But it can't adapt anything if there is nothing to adapt, right? So really it would be the same as me selling you a printing press with currency plates, facilitating the possibility for you to print money. I didn't print it, so I'm not liable. But where are you going to put the press? OK, to make it easier, I let you phone in and order the currency, but the go button is remote and under your finger. So it gets convoluted quickly if you want.
I read the transcript; I'm not seeing what this indicates, or how it refutes the points made in the previous video from the YouTube artist.
The AI takes an input image (in many cases a copyrighted image which cannot be reused for commercial purposes without permission). Then it transmogrifies the image and does various other things. It doesn't really seem to matter what it does exactly, does it? Because the first step always is, and has to be, the input image. That's where the theft occurs, because the image is used for commercial purposes. The commercial thing being sold is not the outputs but the use of the model itself, so it doesn't matter what manner of transmogrification occurs. The model depends on the images, and the model is sold commercially.
The most outrageous thing is that these companies could easily pay for the rights to a lot of the work. Pennies for them. They just didn't want to bother with the hassle because they feel they are in a race against time to beat the other guys. Such is the nature of psychopathy.
The way they'll squirm away from legal trouble is twofold: 1. make it difficult to point to a wrongdoer because of how the companies have been set up, separating the data from the use; 2. corruption (bribes and so on). Both of these are the MOs of bad guys. Criminals. Villains. Bastards.
The way the companies have been structured to avoid responsibility for how the database is used is a pretty clear indication that they set it up that way because they knew it was illegal and unethical. Same reason I'd wear a mask before robbing a bank. It's common sense: before committing a crime, you make sure it's hard to pinpoint whodunnit.
@Tiles already seems to know it is a villainous enterprise, which is why the argument changes from obfuscation like "it works the same way people do" to "it is inevitable, let us worship the new gods." Which is fine; you can be a sycophant all you want, but being one for the richest people on earth doesn't make you a member of a persecuted class. It puts you on team anti-human.
FWIW, here are the very clear guidelines and rules that EGAIR is campaigning for in Europe. I find this manifesto extremely useful when discussing the subject with people who are not aware of the deep issues with the tech (the main ones being massive intellectual property and identity theft).
I just knew it. And here I thought we were still good for a discussion. Stupid me. Trick me once, your fault; trick me twice...
@Alex_J, this is now the second time that you have indirectly offended me by telling me that I am anti-human and asocial and whatnot for using AI. And you wonder why I refuse to continue the discussion? Stay in your anti-AI bubble. I will not stop you. But spare me your insults, my friend. That is the asocial part here. If you cannot discuss in an adult way, then we won't discuss anymore.
I am not convinced that you care much about the truth; rather, you care about getting some validation for your viewpoint. I am not sure that is what makes an adult an adult. Politeness and sugar-coated words are what you use for children, aren't they?
Here is the transcript from the video.
When I started experimenting with AI-generated images,
it was still a small niche, but now they are everywhere!
Generative AI has come a long way since my last video on the topic. So, it’s time for a new video!
In this video, we'll take a deep dive into the inner workings of diffusion models,
the state-of-the-art approach for generating realistic and diverse images.
We will cover their key concepts and techniques,
and provide a concise and intuitive understanding for anyone who is interested in how they work.
How Diffusion Models Work
So, how do diffusion models work?
The gist of it is that during training, we take an image and gradually add noise until
there is nothing but noise. When we want to generate images, we reverse this process,
start with noise, and gradually remove noise until we have a clean image.
In the forward diffusion process, the entropy of the images increases,
as adding noise makes them more and more homogeneous. This is somewhat
similar to diffusion in thermodynamics, which is what they are inspired by and named after.
Adding noise to images is easy; the more interesting part is how we reverse this process.
As you can guess, we use neural networks to do that. Let's take a closer look.
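To make the forward process concrete, here is a toy numpy sketch (purely illustrative; the image size, step count, and fixed per-step noise level are made-up choices, not values from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "image": an 8x8 grid of grayscale values in [0, 1].
image = rng.random((8, 8))

noisy = image.copy()
beta = 0.02  # fixed per-step noise level (a toy schedule)
for _ in range(500):
    # Variance-preserving mixing: shrink the signal slightly and add a
    # matching amount of fresh Gaussian noise at every step.
    noisy = np.sqrt(1.0 - beta) * noisy + np.sqrt(beta) * rng.standard_normal((8, 8))

# After many steps, the correlation with the original image is essentially gone.
corr = np.corrcoef(image.ravel(), noisy.ravel())[0, 1]
print(corr)
```

After enough steps, the result is statistically indistinguishable from pure noise, which is exactly the starting point the reverse process needs.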
Denoising Images with U-Net
Denoising models usually use a fully convolutional network. These networks are
image-in, image-out models that can process images of varying sizes and produce dense,
pixel-wise predictions, rather than a single image label or bounding boxes.
One popular architecture for such image-in, image-out tasks is the U-Net architecture.
As its name suggests, the U-Net architecture resembles the shape
of the letter "U," with a series of downsampling layers followed by a
corresponding series of upsampling layers. U-Net uses skip-connections to bridge the
downsampling and upsampling layers at the same resolution. This structure allows the
network to capture both high-level semantic information and low-level texture details,
making it particularly effective for tasks like image denoising, segmentation, and restoration.
U-Net has many variants and improved versions, incorporating attention blocks, different types
of activation functions, skip connections, and so on, but this is the basic idea.
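To illustrate just the shape bookkeeping of that U shape (no learned convolutions; average pooling and nearest-neighbor resizing stand in for the real layers), a toy skeleton might look like this:

```python
import numpy as np

def downsample(x):
    # Halve the spatial resolution with 2x2 average pooling
    # (a stand-in for the strided convolutions a real U-Net learns).
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    # Double the spatial resolution with nearest-neighbor repetition.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet(x):
    skips = []
    for _ in range(3):            # downsampling path
        skips.append(x)
        x = downsample(x)
    for skip in reversed(skips):  # upsampling path
        x = upsample(x)
        # Skip connection: concatenate the same-resolution encoder activation,
        # letting low-level texture detail bypass the bottleneck.
        x = np.concatenate([x, skip], axis=-1)
    return x

out = toy_unet(np.zeros((32, 32, 4)))
print(out.shape)  # (32, 32, 16): input resolution restored, channels grown by the skips
```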
Noise Prediction and Removal
It's fairly straightforward to train a U-Net-based model to denoise an image that
is only a little bit noisy, but how do you go from complete noise to a fully realized,
coherent image? Yes, diffusion models gradually remove the noise in a series
of steps, instead of trying to remove it all at once, but how?
The naive approach would be to train a model to take a noisy image as input and output an image that is a little less noisy. But that's not how it's done! Instead of training the model to predict denoised images directly, diffusion models learn to predict the noise itself, all of it. So, if the predictions were perfectly accurate, all we would need to do is subtract them from the noisy images and be done in one step. But removing all the noise from an image in one step is hard.
The noisier the image, the more unreliable the predicted noise will be.
So, what diffusion models do is that they scale the predicted
noise before subtracting it from the noisy input during inference.
This is of course the most naive way to implement it. There are more advanced samplers that use
sophisticated solvers to get to the clean images in fewer steps but you get the idea.
You may wonder why we train the models to predict the entirety of the noise all at once,
even though we don’t remove the noise at once during inference. Why not train the
model with noisy inputs and a little less noisy targets?
The problem with noisy targets is that they have a lot more variance than clean ground truth, because there are many more ways an image can be noisy than ways it can be clean.
Sampling in Inference and Training
During inference a sampler denoises a sample one step at a time. But during training, we don’t
really need to do it sequentially like that. In practice, the model inputs a batch of noisy images
with different amounts of noise added to them and it tries to predict the noise that was added.
The amount of noise is parametrized by the time step using a noise scheduler, rather than adding
noise iteratively for that many time steps. This way we can get noisy images at a given step in one
shot, without adding noise sequentially. This makes the training easier and more efficient.
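That one-shot noising can be sketched like this, using the linear beta schedule from the original DDPM paper (the tensor sizes are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T steps, as in the original DDPM paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative fraction of signal retained

def noise_at_step(x0, t, eps):
    # Closed form: jump straight to time step t instead of looping t times.
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.random((8, 8))
eps = rng.standard_normal((8, 8))
x_early = noise_at_step(x0, 10, eps)    # still mostly signal
x_late = noise_at_step(x0, T - 1, eps)  # almost pure noise
print(alphas_bar[10], alphas_bar[T - 1])
```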
Time Step Encoding
The time step is given to the model as an input so that the model doesn't need to guess how much noise there is in the inputs and has an idea of how much noise to remove.
We don’t use the time step as-is though. It goes through an embedding process which
makes it continuous and more neural-network friendly before it is fed into the model.
In its simplest form, we basically pass the raw position indices through a bunch
of sine and cosine functions having different frequencies, and what we get are the embeddings.
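In toy numpy form, such a sinusoidal embedding might look like this (the dimension and the base of 10000 follow the common Transformer convention; this is a sketch, not any particular model's exact code):

```python
import numpy as np

def timestep_embedding(t, dim):
    # Half the dimensions get sines, half get cosines, at frequencies that
    # decay geometrically from 1 down toward 1/10000 (Transformer convention).
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    args = t * freqs
    return np.concatenate([np.sin(args), np.cos(args)])

emb = timestep_embedding(37, 64)
print(emb.shape)  # (64,), every entry in [-1, 1]
```

Nearby time steps produce similar but distinct vectors, which is what makes this representation neural-network friendly.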
Stable Diffusion and Others
Diffusion models are overall a lot more stable than GANs. GANs require a delicate
balance between the generator and discriminator and are highly sensitive to even minor changes,
while in a diffusion step, it's harder to fail that catastrophically.
There are many popular diffusion-based image generators, including DALL-E 2 from OpenAI and Imagen from Google Research, but we'll focus on Stable Diffusion in this video because it's open source.
The denoising process is more or less the same in all these models,
but there are differences in how things are done.
Latent Diffusion
Let's first take a look at the Latent Diffusion approach, which is what Stable Diffusion is based on. One of the shortcomings of the diffusion process I described earlier is that it's slow. Really, really slow compared to GANs and Variational Autoencoders.
Latent Diffusion aims to speed up this process by offloading some of
the high-resolution processing into a variational autoencoder.
The variational autoencoder is essentially an encoder-decoder network that is trained
to encode images in a lower-dimensional space and then decode them back to reconstruct the original,
high-resolution images. We can consider this as a kind of image compression model.
The variational autoencoder is trained separately before training a latent diffusion model. Once
it's trained, it's frozen, and a diffusion model is trained in its lower-dimensional latent space.
During training, instead of adding noise to images, latent diffusion first runs
them through the encoder of the variational autoencoder to move them to the latent space.
Then, it adds noise in this lower-dimensional space and trains a model to reverse this process.
Once the model is trained, again, we start with pure noise,
just like we did with images, and gradually denoise it. At this point,
what we are denoising are not really images but lower-dimensional feature maps.
The diffusion process here turns noise into valid latent vectors, so that we can decode
them into high-resolution images using the decoder part of the pre-trained variational autoencoder.
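The overall plumbing can be sketched like this (block averaging and nearest-neighbor resizing stand in for the learned VAE, which of course they are not; only the shapes and the order of operations are the point here):

```python
import numpy as np

F = 8  # spatial downscaling factor (Stable Diffusion's VAE also compresses 8x per side)

def encode(img):
    # Stand-in "encoder": FxF block averaging. A real VAE encoder is learned.
    h, w, c = img.shape
    return img.reshape(h // F, F, w // F, F, c).mean(axis=(1, 3))

def decode(lat):
    # Stand-in "decoder": nearest-neighbor upsampling back to full resolution.
    return lat.repeat(F, axis=0).repeat(F, axis=1)

image = np.ones((512, 512, 3))
latent = encode(image)           # diffusion is trained and sampled in this space
reconstruction = decode(latent)  # only this decoder step is needed at generation time
print(latent.shape, reconstruction.shape)  # (64, 64, 3) (512, 512, 3)
```

The latent has far fewer values than the full-resolution image, which is where the speedup comes from.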
We don't need the encoder part of the autoencoder to generate images from scratch.
Image to Image, Inpainting, Outpainting
The encoder can still be used in image-to-image tasks, though. To modify an image, for example,
we can pass it through the encoder and run the diffusion process for some number of steps on
this encoded latent vector rather than starting with pure noise. We can inpaint or expand images
this way too. We can mask images, encode them, and run the diffusion process to fill in the gaps.
So far, we've covered how to generate images,
Generating Images with Text Prompts
but how do we get them to generate what we want? How do we go from text prompts to images?
The short answer is that we use a tokenizer and text encoder to turn text into tokens and
then into embedding vectors. Then we condition the diffusion model on those text embeddings.
Stable Diffusion uses OpenAI's CLIP as its text encoder.
CLIP is a text and image encoder that was pre-trained on text-image pairs to learn a
shared embedding space for text and images, where related images and text are close to each other.
In this context, though, this multi-modality is not strictly necessary. It's possible to
replace CLIP with a text-only encoder, which is what Google's Imagen did.
One advantage of using a multimodal encoder like CLIP is that it allows for both text and images as
inputs. That's more or less how DALL-E 2 takes an image as input and generates variations of
it. DALL-E 2 further aligns the text and image embeddings using a prior, but let's not digress.
Unlike DALL-E 2, Stable Diffusion uses CLIP purely as a text encoder. It uses the embeddings from
the layer before the last one, which is not shared with the image encoder at that point.
These embeddings are used to condition the U-Net based diffusion model on the input text prompt.
Classifier-free Guidance and Negative Prompts
If you've tried Stable Diffusion before, you may have noticed a parameter named the guidance scale.
The higher the scale, the stronger the effect the prompt has on the generation,
while a lower scale results in a more subtle influence.
Under the hood, this is how it works: given a random noise vector as input, we run the
diffusion process twice – one conditioned on a prompt and the other run unconditionally.
At every step, we get the noise prediction for those two, take the difference,
multiply it with the guidance scale, and add it back to the original prediction.
This method is called classifier-free guidance and it essentially amplifies
the effect of the prompt on the results, by moving further in the direction of the prompt.
With a small tweak, we can use this technique to have negative prompts as well.
Instead of having an unconditional sample as a baseline, we can have a negative prompt there.
We can push the images further away from the negative prompts by scaling the difference between the predictions generated using the positive and negative prompts. Negative prompts
can be used to remove objects, properties, styles, or qualities from generated images.
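The guidance arithmetic itself is tiny; here is a numpy sketch (the predictions are random stand-ins for what the U-Net would actually output):

```python
import numpy as np

def guided_noise(eps_baseline, eps_cond, scale):
    # Classifier-free guidance: take the difference between the conditional
    # and baseline noise predictions, scale it, and add it back.
    return eps_baseline + scale * (eps_cond - eps_baseline)

rng = np.random.default_rng(0)
eps_uncond = rng.standard_normal((4, 4))  # stand-in for the unconditional U-Net output
eps_prompt = rng.standard_normal((4, 4))  # stand-in for the prompt-conditioned output

# scale = 1 recovers the plain conditional prediction; larger values push
# further in the direction of the prompt. For negative prompts, the baseline
# is the prediction conditioned on the negative prompt instead.
guided = guided_noise(eps_uncond, eps_prompt, 7.5)
print(np.allclose(guided_noise(eps_uncond, eps_prompt, 1.0), eps_prompt))  # True
```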
Conclusion
Alright, that was pretty much it. I hope you found
it helpful and interesting. Thanks for watching, and see you next time.
I said it before and I'll say it again: I prefer facts over "truth" any time. Flat earthers claim to have the truth. Donald Trump claims to have the truth. And you claim to have the truth. With what background, and based on what knowledge and experience? I don't think that you even know how to spell AI. Was that polite and sugar-coated enough for you to understand? Or shall I use shorter sentences and wear a clown mask?
I just pointed at facts. In this case, a video that explains why it is technically simply impossible for AI to copy or steal. To understand at least one of the basics: how image diffusion works. For those who want to learn about the process so that they are able to have an opinion on the matter at all. And this after I was once again dragged back into a discussion that I did not want to continue, since I knew exactly what would follow. But the admin of this page asked me. So who am I not to answer, then?
The rest is history. Thanks for proving my point that a civilized discussion about AI is no longer possible here. I have, by god, better things to do than to deal with toxic people like you.
I read the things people posted here, lol. I listened to parts of the video that Eric shared and skimmed the transcript. I read the transcript of the one you shared. I was able to at least paraphrase what I've read back in my own words, which would be some indication that I understood what I read. You haven't done that much, though. If you can't paraphrase how that video proves your point, then it seems like you felt convinced by it during your conclusion shopping, but you didn't actually do much thinking on your own. Is that true? If you had reached your own conclusion through your own thought, then I think you wouldn't find it too difficult to make your own case using your own words, and you probably also wouldn't give a shit if I suggested you were playing ball for the wrong team, because when you reach a conclusion on your own you won't feel so insecure about it. It's not a big deal if you end up being wrong.
I'm sorry you feel attacked here, that is not my intent. I do think ethics are one of the main issues with the diffusion image generators, so yes, people will call others out on a perceived lack of ethics. And when someone is accused of ethical lapses, they should feel free to refute it, as long as people don't devolve into ad-hominem attacks. Let's please keep a rational head here, all around.
Image diffusion (as currently created by corporations) is stealing in my opinion, regardless of whether they store the original content in their trained algorithms. They must steal our content to train their algorithms.
This is fundamentally different from how a single human artist trains their brain on others' content, which is piece by piece and a completely different organic process. AI generators perform theft on a massive scale, with computational precision, and allow massive numbers of humans to profit from the associations, which in turn is causing huge upsets in markets and livelihoods, both ethically and legally.
From the EGAIR manifesto, posted above by pior:
The quality of a generative AI is defined by the quality of its dataset – for example, in regard to images, the more pictures and illustrations an AI learns on, the more styles the AI is able to replicate and the more things it can do. Therefore, the products sold by AI companies are the result of operations on datasets, which contain all sorts of data, including millions of copyrighted images, private pictures and other sensitive material. These files were collected by indiscriminately scraping the internet without the consent of the owners and people portrayed in them and are currently being used by AI companies for profit.
From the transcript of the video I posted earlier:
AIs do not collect references off the internet the same way that artists do, and they are using them in ways that you as a normal person would not be allowed to. You would not be afforded the legal privileges of a research non-profit when it comes to collecting and utilizing copyrighted works without consent, much less when putting that towards for-profit ends. Little old you would, of course, be swiftly and summarily penalized for any infraction of the sort.
Replies
I'm pretty annoyed with them, myself.
Also hello I'm back
Recent dissection of the ethical concerns around AI mining of content.
https://creativecommons.org/2021/03/04/should-cc-licensed-content-be-used-to-train-ai-it-depends/
As soon as I see fad terms used like that, I assume the person using them is a moron and/or a douche.
As some people here have guessed, AI has improved significantly since we started this discussion.
It's clear: AI is here to stay...
I mean obviously this is a meme, so probably not.
wait what?
So I guess you're ok with theft from creatives, like um, us.
What is in the AI is the concept: what makes an apple, how this or that style looks, lighting, the golden ratio, and so on. The craft part. And learning and knowing this part is not forbidden. That's how artists have worked for eons. They look at the material of all the famous artists before them, learn, then reuse the concepts and techniques and styles that they have learned. And don't tell me that you never looked at H.R. Giger for your game props. Of course, you can also create copyright-infringing material with AI. Like you can with Photoshop, or even just with a pencil. Shall we forbid pencils, then?
The real problem is the ethical dilemma of training your direct competitor with your work, which will most probably make you obsolete in the long run. AI is well on its way to becoming a super-artist that knows all styles, including yours. And yes, it's a big dilemma. I completely understand where you come from. It's not okay from your point of view. That's why you cry theft. And I understand it.
But AI will not go away. You cannot stop it, just as you could not stop cars or book printing or wheels or any other important invention in history. All these inventions have cost so many jobs in the past, but at the same time generated so many more. And I see the same happening with AI at the moment. The world becomes once more a bit richer and more varied. Not poorer.
AI is part of our life now. And permanent change was always part of working as an artist: the permanent search for optimization, for better and faster results. Sculpting, remeshing, landscape generators, and so on. Art has always adapted to new tech. Just remember, it wasn't that long ago that there were no computers that could do 3D graphics. Or code.
If you know anything about history, it goes like this: whoever gets the advantage in a new market first wins.
The amount of firepower your enemy has does not make them the good guys. America can easily bomb Afghanistan and Iraq; are we therefore the good guys? The US government killed nearly all of the Native Americans and put them on reservations; does that make us the good guys? Hitler nearly subdued the entirety of Europe, and not many people are saying he was the good guy. And Genghis Khan crushed the entire known world. Was he a good guy?
Labor saving devices do not make the world richer. They come from plunder of places you just happen to not live in. They make a minority of people richer, and that group is destined to become smaller and smaller because resources are finite.
In this case part of the plunder was from creative people, and of course the raw material has always come from the exploitation of those places you were lucky not to be born in. You know, places where people do all the real, actual work so that spoiled soft babies in western countries can do stupid shit like make digital art and feel like they are special for it.
Recognizing that your team is losing and deciding to play for the other team is a way a person might survive but it also makes you like the most contemptible type of character there can be.
https://www.youtube.com/watch?v=tjSxFAGP9Ss
07:35 “The AI just collects references from the internet the same way artists do.”
15:26 “AI is just a new tool.”
23:12 “Artists will just need to focus on telling stories.”
29:17 “These companies cannot manipulate our access to these systems because of open-source projects.”
31:32 “Don’t people do the same thing with references as the AIs do?”
34:44 “The AI can never replace the soul of an artist.”
36:56 The Dance Diffusion Problem
41:41 Conclusion
No amount of truth will seem to sway fans like Tiles, but the arguments and answers posted in this video were impressive to me.
Eric, I understand that you hate AI with a passion. But I am no "fan". I am neither fanatic nor militant. I am just a user and a messenger. And as a developer I stand with both legs in reality. I prefer to rely on facts instead of "truth".
You know what makes me really sad here? Polycount was once a place where artists met and discussed new techniques and ideas. Sometimes controversially, but that's human. Afterwards everything was fine again, and we still helped each other. Now it is a place of AI haters. I watched in horror as people were repeatedly driven out of the AI discussion and off the site completely. For me, Polycount has lost its relevance. I'll be quiet again now.
Discussion doesn't mean agreement.
Also, I'm pretty sure I've read Eric or another mod here say that there is nothing against machine learning as such, just the illegally obtained datasets.
Not long ago, in that other thread, somebody was happily removed by the mods for being pro-AI, while people laughed at him. That's not what I call a discussion. And it was also not the first time that somebody was silenced over there. Including myself.
I feel highly uncomfortable here nowadays. One wrong move and I am pretty sure the same happens to me too.
@Alex_J said: See, that's what I mean with facts versus truth. For Eric and you it's the truth that the datasets are stolen. But the fact is, the courts see it differently so far. And this is what counts.
But I wanted to be quiet. Which I am now.
In fact, half of the common AI-bro arguments I even agree with. It is the same as the way a person works... in principle. But none of that matters. The core issue is theft. And the core purpose of the tool is not Sam Altman doing you a personal favor so you can finally realize your personal project; it is so that his companies can stop hiring people and make more profit. That is it. It is an attack in a class war you seem to want to believe doesn't exist.
I think the person who got banned was a day-one account who did nothing but make weird inflammatory comments with no substance.
You keep saying it is not a fact, but not why. Is the only reason you think that because some courts have ruled in favor of OpenAI and similar companies?
If laws change, that could make the word "illegal" technically incorrect to use at some point, but theft is theft no matter what the king of the land calls it at the time. Then we can just say "unethical," which means that the majority of reasonable people recognize it is an anti-social, destructive behavior.
Is that so? ^^
Forgive me, but I will not answer anymore. We go in circles, trapped between truth and facts. But this guy had some valid points before he was removed.
Dunno why you have to sound so butthurt that people aren't keen on having their work stolen; Polycount is still very relevant for artists. In any case, I'd rather maintain artists' copyright protection (everyone's copyright protection, for that matter) over having a somewhat gimmicky technology that doesn't really serve anyone.
EDIT, video removed. I will not continue here. Lesson learned.
The AI takes an input image (in many cases, a copyrighted image which cannot be reused for commercial purposes without permission). Then it transmogrifies the image and does various other things. It doesn't really seem to matter what it does exactly, does it? Because the first step always is, and has to be, the input image. That's where the theft occurs, because the image is used for commercial purposes. The commercial thing being sold is not the outputs but the use of the model itself, so it doesn't matter what manner of transmogrification occurs. The model depends on the images, and the model is sold commercially.
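To make that dependency concrete, here is a minimal toy sketch of the diffusion training objective. This is not any real model's code; the `image` array is a stand-in for one training work, and the "model" is a trivial placeholder. The point it illustrates is that both quantities in the training loss are derived directly from the training image: no training images, no objective to minimize, no model.

```python
import numpy as np

rng = np.random.default_rng(0)

# An 8x8 array stands in for one copyrighted training image.
image = rng.random((8, 8))

def add_noise(x, t, rng):
    # Forward diffusion: blend the image with Gaussian noise.
    # t = 0.0 returns the image unchanged; t = 1.0 is pure noise.
    noise = rng.standard_normal(x.shape)
    return np.sqrt(1.0 - t) * x + np.sqrt(t) * noise, noise

# Training step: the model sees the noisy image and must predict the
# noise that was added. Whatever the architecture, both inputs to the
# loss below come straight from the training image.
noisy, true_noise = add_noise(image, t=0.5, rng=rng)
predicted_noise = np.zeros_like(true_noise)  # placeholder "model" output
loss = np.mean((predicted_noise - true_noise) ** 2)
```

Repeating this step over millions of images is what the final weights encode, which is why "the model doesn't store the pictures" doesn't change the fact that it could not exist without them.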
the most outrageous thing is that these companies could easily pay for rights to a lot of the work. pennies for them. They just didn't want to bother with the hassle because they feel they are in a race against time to beat the other guys. Such is the nature of psychopathy.
The way they'll squirm away from legal trouble is two ways:
1. make it difficult to point to a wrongdoer because of how the companies have been set up, separating the data from the use
2. corruption (bribes and so on)
Both of these are MOs for bad guys. Criminals. Villains. Bastards.
The reason the companies have been structured to avoid responsibility for how the database has been used is pretty clear indication that they set it up that way because they knew it was illegal and unethical. Same reason I wear a mask before I rob banks. It's common sense. Before doing a crime you make sure it's hard to pinpoint whodunnit.
@Tiles already seems to know it is a villainous enterprise, which is why the argument changes from obfuscation like "it works the same as people do" to "it is inevitable, let us worship the new gods."
Which is fine, you can be a sycophant all you want, but being one for the richest people on earth doesn't make you a member of a persecuted class; it puts you on team anti-human.
https://www.egair.eu/resources/EGAIR_Manifesto_EN.pdf
@Alex_J, this is now the second time that you have indirectly insulted me by telling me that I am anti-human and asocial and whatnot for using AI. And you wonder why I refuse to continue the discussion? Stay in your anti-AI bubble. I will not stop you. But spare me your insults, my friend. That is the asocial part here. If you cannot discuss in an adult way, then we won't discuss anymore.
Here is the transcript from the video.
I just pointed at facts. In this case, a video that explains why it is technically simply impossible for AI to copy or steal, so that people can understand at least one of the basics: how image diffusion works. For those who want to learn about the process so that they are able to have an opinion on the matter at all. And this after I was once again dragged back into a discussion that I did not want to continue, since I knew exactly what would follow. But the admin of this page asked me. So who am I not to answer, then?
The rest is history. Thanks for proving my point that a civilized discussion about AI is no longer possible here. I have, by god, better things to do than to deal with toxic people like you.
Have a nice one.
I was able to at least paraphrase what I've read back in my own words, which would be some indication that I understood what I read. You haven't done that much, though. If you can't paraphrase how that video proves your point, then it seems like you felt convinced by it during your conclusion shopping, but you didn't actually do much thinking on your own. Is that true?
If you had reached your own conclusion through your own thought, then I think you wouldn't find it too difficult to make your own case using your own words, and you probably also wouldn't give a shit if I suggested you were playing ball for the wrong team, because when you reach a conclusion on your own you won't feel so insecure about it. It's not a big deal if you end up being wrong.
I'm sorry you feel attacked here, that is not my intent. I do think ethics are one of the main issues with the diffusion image generators, so yes, people will call others out on a perceived lack of ethics. And when someone is accused of ethical lapses, they should feel free to refute it, as long as people don't devolve into ad-hominem attacks. Let's please keep a rational head here, all around.
Image diffusion (as currently created by corporations) is stealing in my opinion, regardless of whether they store the original content in their trained algorithms. They must steal our content to train their algorithms.
This is fundamentally different from how a single human artist trains their brains on others' content, which is piece by piece, and a completely different organic process. AI generators perform theft on a massive scale, with computational precision, and allow massive numbers of humans to profit from the associations, which in turn is causing huge upsets in markets and livelihoods, both ethically and legally.
From the EGAIR manifesto, posted above by pior: