This was announced yesterday: a beta for AI-assisted creative tools.
Apparently, ethically sourced from "Adobe Stock images, openly licensed content and public domain content, where copyright has expired." https://firefly.adobe.com/faq
Lots of neat extensions for editing generated content, curious to see where this goes. Would be really neat to train it on my own content eventually.
Thoughts?
Replies
Furthermore, if it was indeed ethical ... Adobe wouldn't have had the need to implement a prompt blocker in the first place. Can't have it both ways.
It's the same "tech naiveté" all over again: enthusiasts taking jargon at face value instead of questioning what's actually under the hood. Artists with no work experience got duped by "NFTs" because they believed that a "sMaRt cOnTrAct" was like an actual contract; and here, AI enthusiasts are swallowing the whole "ethical" claim without any way to actually verify it.
And even if we assume that Adobe only used Stock as their source ... they definitely opted content in by default via a Terms of Use rugpull. For instance: if someone uploaded something to Stock 5 years ago and didn't proactively opt it out last year ... their work would be used for Adobe's training without their knowledge or actual consent. It may be legal for Adobe to perform such a rugpull, but it is absolutely, 100% unethical.
The actual, ethical way to go about it would have been to obtain proactive consent from the author/owner of every single image they used. They did not.
And of course these images can then become part of the next version of the training dataset. Just another layer of art laundering.
There was probably a balance earlier on, when image generation required either skills (artists) and/or resources (photography). But their system is bound to crumble under the load of AI noise now, with most (all?) of it coming from grifters attempting to game the system for a quick buck. It already happened on Artstation and on the Unreal Marketplace (for 2D assets, for now). It's basically the equivalent of social media boosting, applied to IP theft.
I do wonder if this photo is AI generated though ...
Anyways - Adobe is basically painting themselves into an impossible corner here by going all-in on AI while avoiding any form of checks or transparency. This will not end well for them, *especially* since Photoshop is not the be-all and end-all anymore. And it's hard to give them the benefit of the doubt, since all of the above was 100% predictable and was brought up by artists on day one. Use your brain, people!
https://petapixel.com/2023/07/31/adobe-staff-worry-their-ai-could-kill-the-jobs-of-their-own-customers/?fbclid=IwAR1PtcCPRzwoGI8pZeW2MeLy3zuUJY0tlJehJ5oh5cbxm2ktXNUpc4x9UX8
Ah, no longer magic, but very crappy behavior. With any luck it shall bite them in the ass.
those "" around clean did a heavy lifting till now...
Just saw the updates to this thread, a few answers/clarifications.
AI-generated images on Stock in which artists are tagged or named in the prompt go against Stock's ToS and are removed as soon as they get reported.
Stock contains 250 million images, and there are hundreds of thousands of artists out there whose names could potentially be included in a prompt; it's virtually impossible to catch them all, and sometimes bad actors slip through the cracks...
Regarding the article, they quote a leaked Slack thread of which I was a part (they quote one of my messages directly), and cherry-picked clickbaity messages from an otherwise healthy conversation around AI ethics.
I'd be more worried if these conversations didn't exist at all.
I know it may be hard to believe from the outside but everybody involved at Adobe is trying to do the right thing.
"Stock contains 250 millions images, and there are hundreds of thousands of artists out there which name could potentially be included in a prompt, it's virtually impossible to catch them all and sometimes bad actors slip through the cracks..."
But that's the problem right there. It doesn't matter *one bit* how much work highly capable engineers put into the development of the software, or how many clever prompt blockers are implemented to prevent Studio Ghibli ripoffs or Joe Biden porn photocollages, or whether or not the training database is clean to begin with. If the commercial platform at the end of the chain doesn't have human verification for every uploaded picture (just like how regular freelancers work with their clients, really, slowly but surely building mutual trust), then in the age of AI-vomit and art laundering a platform like Stock suddenly loses all its value.
No one is forcing Adobe to host 250 million images, and no one is forcing them to automate their content moderation or do it yolo-style after the fact, based only on reporting.
There was indeed a fragile equilibrium up until now, because there was a historically high barrier to entry for the generation of visual content, and straight-up art theft/reuploads were easy to spot with a bit of reverse image search. Those days are over, and nothing done after the fact (like removing art-laundered AI content when reported) can prevent that.
If anything ... even ignoring the bad actors uploading AI-vomit ripoffs, the mere fact that the platform accepts its own one-click, easy-to-produce output as input (since I assume that Firefly-made pictures are allowed) is ultimately driving its value down.