Automating the drudgery, I actually like that approach.
However, as far as I can tell it's still burdened with unethical/illegal data scraping.
It doesn't have to be.
I have seen (with my own eyes) realtime ML cloth sims trained off in-house data. It's not a great leap to think the same is viable with muscle sims and so on. Because you're simply trying to replace a simulation with something that runs faster, you can easily generate that data by running the original sim lots of times. Additionally, because you're working off actual data, you don't need an LLM to interpret it and can use a much more focused approach.
(This does of course require that the people building the tool are doing it properly and aren't simply repackaging an open-source LLM.)
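To illustrate the loop being described (replace the expensive sim with something faster, trained on data you generate yourself by running that sim a lot), here's a rough Python sketch. The damped-spring "sim" and the scikit-learn regressor are stand-ins I picked for the example, not what any actual cloth or muscle tool uses:

```python
# Sketch: train a fast ML surrogate on data generated by the original simulation.
# The "sim" is a toy damped spring standing in for a real cloth/muscle solver,
# and MLPRegressor is just a convenient stand-in for whatever network you'd really use.
import numpy as np
from sklearn.neural_network import MLPRegressor

DT = 0.02

def sim_step(state, stiffness=40.0, damping=1.5):
    """One step of the 'expensive' reference simulation (toy spring)."""
    pos, vel = state
    acc = -stiffness * pos - damping * vel
    return np.array([pos + DT * vel, vel + DT * acc])

# 1) Generate as much training data as you like by just running the sim a lot.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(5000, 2))
targets = np.array([sim_step(s) for s in states])

# 2) Fit a small network to map state -> next state.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(states, targets)

# 3) Roll the surrogate forward and compare against the real sim.
s_real = s_ml = np.array([1.0, 0.0])
for _ in range(100):
    s_real = sim_step(s_real)
    s_ml = surrogate.predict(s_ml.reshape(1, -1))[0]
print("real:", s_real, "surrogate:", s_ml)
```

The point is just that the training set comes from your own solver, so there's no scraped data anywhere in the pipeline; the learned model only has to be faster than the thing it imitates.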
Interesting to see clarifications from Creative Commons about AI vs. their licenses: https://creativecommons.org/2023/08/18/understanding-cc-licenses-and-generative-ai/
In particular:
... we believe there are strong arguments that, in most cases, using copyrighted works to train generative AI models would be fair use in the United States, and such training can be protected by the text and data mining exception in the EU. ...
Which is kind of sad to see, from a content creator's standpoint. But they also outline a lot of the existing uncertainties in their article.
I must say I fail to understand why "Creative Commons" believes it has any say on the matter of the use (authorized or not) of copyrighted work in AI training. They do provide a useful framework with a bunch of premade license types, and that's great; but they do not define any law whatsoever, and them seeing something as "fair use" doesn't mean that it is...
Oh of course not. But I do see them as an informed source on copyright matters, so it's interesting to see what their opinion is, given this is an ongoing matter and not resolved yet.
I think they are going off the same idea as the law that says you can be recorded when you are outside because you are outside, like how there are cameras everywhere now. So I suspect they are trying to say that since the internet is the equivalent of being outside, and you are blatantly posting your "creative works" effectively outside your home, to a server outside your residence, they sadly see it that way. Now, I am 1000% against this stuff, as this creativity takes years to develop, and the sloppy people who practice nothing other than how to manipulate a situation benefit more than those who take the time to create and develop things of value; those without value benefit off the backs of those who have it.
And afaik they are scraping everything right now because people are pushing for laws to prevent it, which is pointless because it's after the fact, which is how they always do these things to the "average person".
Let’s see how this evolves in the coming months, but surely there are a lot of creatives not feeling very calm right now. I’ve already shared my point of view on AI here a while ago, but seeing this makes me a bit uncomfortable. What do you all think? Is this ultimately going to affect jobs? https://x.com/DiscussingFilm/status/1836412337953161431
So far I see only a few AI things that do something helpful. Simplygon maybe, but not that good actually, especially on hard-edge things. Cascadeur maybe, but I am not an animator so I have no real idea. ChatGPT wrote a few helpful addons for Blender for me. Adobe Sampler AI, not that much really. Textures are still awful and I see no real improvements: half the time it's something blurry, inconsistent and repeating like hell, but maybe OK for something visually unimportant. And there's no way AI could solve problems, make trade-offs and find the workarounds you constantly have to do to make something that looks like low-poly crap not look quite so much that way. Moreover, all those AIs are in a kind of stalled state, with no real improvement for years already: GPT-4 vs GPT-3, no difference at all. Most of its code creation never works until you invent every detail and every step on your own, including the actual math involved, and feed it to the model. So, well, AI can do pictures, many pictures to form an animation, and then what? Games are so much more than that.
The brute-force models are apparently running out of steam - which I suppose is inevitable. This won't stop the march of progress, but I expect we'll see a few years of stagnation during which LLMs will be repurposed to do something more useful than making porn of Taylor Swift, and we'll all learn to love the enhancements they bring to hair sims and boring shit like making LODs.
There's not much point in worrying about it on a societal scale - AGI will either wipe us out overnight or make it so nobody wants for anything
i haven't found much practical benefit for any of the direct art stuff, though haven't messed with it very much
on the coding front i find chatgpt to be useful and used almost daily. so long as I can write pseudo-code, it is able to make working scripts in python and other languages. really handy for simple tools to automate tedious things. essentially it makes it so that you get unreal blueprints for other code languages - so long as you can handle the order of operations and basic logic, it takes care of all the other pedantic pain points for you.
It's been really reliable for that - i've iterated on tools up to 500 lines of code and it just works - or if it doesn't, it's because i didn't have order of operations correct (and its not smart enough to detect problems with that).
OpenAI Training Data to Be Inspected in Authors' Copyright Cases (yahoo.com)
Well, it seems things are about to get interesting. OpenAI has been accused of stealing vast amounts of copyrighted works from books across the internet and then integrating them into ChatGPT. Now that they're being sued over it, they will have to provide access to their training data for review of whether copyrighted works were used to power the technology.
Since they will undoubtedly find copyrighted material in ChatGPT, the big question here is what kind of guardrails will be put into place over AI. I guess we'll find out.
if you ask it what happens in X chapter of X book, it will tell you, and it's accurate.
But I don't see why they couldn't provide a phony dataset for people to inspect. It looks like there are a lot of stipulations surrounding the check too, and who is going to be doing the review?
I'd expect corruption to win. The article makes it sound like whoever has been in charge of the cases against OpenAI is dragging their feet. Probably for reasons.
The authors are alleging ChatGPT is generating summaries and in-depth analyses of the themes in their novels. However detailed the analyses were, they must have been specific enough that there is no possibility ChatGPT could have simply guessed them. My guess is the authors have documented evidence of what ChatGPT was spewing out before they went ahead with suing OpenAI. Now that they have access to the training datasets, they can look through them for questionable material and also re-test ChatGPT on the spot to look for inconsistencies. If ChatGPT starts giving analyses of those books that sound different from what the authors were getting beforehand, then it tells you OpenAI tampered with the training datasets for the review.
That makes OpenAI's issues worse, so they're better off just admitting they used copyrighted material.
I don't know who's all involved in the review, as it seems these details are still being worked out, but we can guess the authors, their lawyers and company representatives will be part of it.
i haven't found much practical benefit for any of the direct art stuff, though haven't messed with it very much
on the coding front i find chatgpt to be useful and used almost daily. so long as I can write pseudo-code, it is able to make working scripts in python and other languages. really handy for simple tools to automate tedious things. essentially it makes it so that you get unreal blueprints for other code languages - so long as you can handle the order of operations and basic logic, it takes care of all the other pedantic pain points for you.
It's been really reliable for that - i've iterated on tools up to 500 lines of code and it just works - or if it doesn't, it's because i didn't have order of operations correct (and its not smart enough to detect problems with that).
While this is true - please understand that it does not write good code and it cannot do C++ properly. (It's bad enough at it that I know it's bad C++, and I'm very bad at C++.)
Yeah, my attempts at using it for Unreal C++ were an exercise in frustration. But still, overall, as a relative beginner with C++, it definitely got me a lot further along than I'd be able to go on my own. Not sure it would be much help to anybody who knows what they are doing already.
The sort of Python tools I have it make are on the level of things like parsing plain text into various other formats, doing simple analysis, things like that. Just one-off tools, not gameplay code where performance or readability or any big concerns like that are involved. In fact I don't even look at the code; usually if there is any issue it's possible to work through it just by being a little more explicit with the "pseudo-code" instructions.
In my case it's like being able to hire a junior programmer for little scripts that might take them a few hours to make. But instead it takes a few seconds and costs $20 a month. It is great and convenient, but I definitely wouldn't pay much more for it than I do now.
My use case is as a mostly solo indie developer, so basically having a shitty but cheap code generator tool is sometimes a nice little convenience and time saver. Not sure how much use it would have in a professional environment, but I guess people using it that way won't ever be able to say much.
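For a concrete (entirely made-up) example of the kind of one-off "parse plain text into another format" script being described, this is about the level of it; you describe the input lines and the output you want, and the model handles the pedantic bits:

```python
# Hypothetical example of a throwaway conversion tool. Input lines look like:
#   crate_small  tris=320  lods=3
# and the output is a JSON list of asset records.
import json
import sys

def parse_line(line):
    """Turn one whitespace-separated line into a dict of name + key=value fields."""
    name, *fields = line.split()
    entry = {"name": name}
    for field in fields:
        key, _, value = field.partition("=")
        entry[key] = int(value) if value.isdigit() else value
    return entry

def convert(in_path, out_path):
    with open(in_path) as f:
        entries = [parse_line(l) for l in f if l.strip() and not l.startswith("#")]
    with open(out_path, "w") as f:
        json.dump(entries, f, indent=2)

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])   # e.g. python convert.py assets.txt assets.json
```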
Finally, there is a court ruling in Germany on the question of whether data mining and the use of publicly available images for training AI is legal. A stock photo provider has sued LAION, the maker of the training datasets that are used, for example, in Stable Diffusion. It is legal, and that holds even if you state an explicit "no" on your site. Please scroll down to the English part; it starts in the middle of the page (the full reasons for the judgment are only available in German, since it is a German court, but Google Translate may help): https://openjur.de/u/2495651.html
This is a groundbreaking decision that I personally have been waiting for for a long time, especially since Germany has very restrictive laws in this regard. However, the ruling is not yet final and can still be appealed, so the coming months will show what this court ruling is worth.
Finally, there is a court ruling in Germany on the question of whether data mining and the use of publicly available images for training AI is legal.
It was only about the question of whether LAION (the AI database makers) is allowed to download the image to do a comparison within its dataset and picture description(s). The reasoning the court in Hamburg gave for allowing it: as long as it is purely for scientific purposes and non-commercial.
Those datasets would also need to be open and accessible worldwide, for everyone.
On the surface this court decision makes some sense, since by definition a DB is simply a sort of indexing, not so different from the widely accepted way a search engine indexes content for retrieval. After all, one wouldn't blame Google for the actions of Chinese video card manufacturers stealing art they find online for their boxes.
So from there I suppose that the appeal will likely hinge on whether or not the people operating LAION were knowingly steering their actions/data collection towards the use of their DB in plagiarism machines like MJ and SD. The fact that they provide an "aesthetic score" is probably not a great look for them, since it makes them appear to be knowingly facilitating copyright-infringing image generators by going much further than merely providing indexed data.
Furthermore, "non-profit" and "scientific research" are not free passes for everything; and data collection of any kind is still subject to existing international legislation (predating AI) on the right to be digitally forgotten.
I find it fascinating how in the span of about two years we are already out of the "AI hype" trend, with AIbros and AI companies running in circles, with an output consisting of infinite Facebook slop, a few art gallery scams, fake anime waifus for lofi youtube channels, and highly airbrushed, derivative kitsch. So much for dEmOcRaTiZeD cReAtIvItY
"Allen’s lawyer, recently claimed that Allen had worked hard on his
digital illustration. “In our case, Jason had an extensive dialogue with
the AI tool, Midjourney, to create his work"
EDIT: if I only wanted to do non-commercial projects and didn't care about what is included and other issues, it might be of interest. As others wrote, academia and non-commercial are not magic words that dissolve copyright and licensing. After a quick look at their website, the full 10 TB version is only for their internal usage. They say you would have to download the data yourself from the original sources.
LAION also warns that they have "discomforting and disturbing content" included. I don't think I would want to willingly download their files, especially when they openly state they used a carefree approach of indiscriminately downloading all kinds of who-knows-what data. Quick search, look at that:
https://fedscoop.com/ai-federal-research-database-laion-csam
A report published in December determined that "having possession of a LAION‐5B dataset populated even in late 2023 implies the possession of thousands of illegal images," and in particular, child sexual abuse material.
Yeah... you can download it; I won't.
The largest dataset is 240 TB, not just 10. I haven't found a normal PC with that much storage space yet. So don't worry ^^
Hmm, I should probably add that the crawling part that was discussed here is done by LAION, not by the AI companies that then use the dataset. The training dataset itself is nothing more than a huge collection of links to freely available data with descriptions. And this can of course also be used for commercial purposes. That's a whole different story that has nothing to do with this court ruling. At least, that's how I understand it.
And of course a link can change or lead to the disturbing content mentioned. The internet is constantly changing. And full of bad content. Be careful when using Google...
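To picture what "a huge collection of links to freely available data with descriptions" means in practice, here's an illustrative sketch; the field names are invented for the example, and the point is that anyone training on such a dataset has to fetch the actual images from the original hosts themselves, with links that may have died or changed since they were indexed:

```python
# Illustrative only: a LAION-style dataset is essentially rows of links plus captions,
# not the images themselves. Field names and URLs here are made up for the example.
import urllib.request

rows = [
    {"url": "https://example.com/photo1.jpg", "caption": "a red bicycle leaning on a wall"},
    {"url": "https://example.com/photo2.jpg", "caption": "sunset over a harbour"},
]

def fetch(url, timeout=10):
    """Try to download one linked image; dead or changed links just get skipped."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as r:
            return r.read()
    except Exception:
        return None

pairs = [(fetch(row["url"]), row["caption"]) for row in rows]
usable = [(img, cap) for img, cap in pairs if img is not None]
print(f"{len(usable)} of {len(rows)} links still resolve")
```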
Yea, it's been a couple of years as an indie dev, and so far I still haven't found any use for AI outside of ChatGPT as an enhanced Google search and Stable Diffusion for placeholder art that I want to replace with human-made art.
Not game dev related, but my husband regularly uses Copilot to help generate meeting notes. It turned a 1.5-hour process of re-listening to a call and taking notes into a 30-minute task where he asks Copilot for summaries for their reports and verifies details against time stamps and the transcription. It does a good job of turning a rambling 10-minute technical explanation of what happened into a few concise sentences that anyone would understand. That's more of what I want to see AI doing for jobs: grunt work no one wants to do but that needs to be done.
He's also had to deal with teams using chatgpt to generate filler nonsense instead of actually providing the information he's asking for.
Yeah it was a bot or a link spammer. When we mark an account as spam, we delete all their posts too. Which breaks continuity (sorry not sorry). It could be a conspiracy though you never know.
Automating simulations like cloth and muscle with actual data from the original sim makes so much sense—it's focused, faster, and avoids the ethical concerns of scraping.
Plus, not relying on an LLM for interpretation sounds like a smarter, more efficient approach if done right. It really comes down to whether the tool developers are genuinely building it with care or just taking shortcuts with open-source models.
The same can be said for [link redacted], where the data sources and training methods often determine how ethical or innovative the output is.
It would seem the committee might've caught the bug, as well...
A joint Nobel Prize awarded for groundbreaking AI research in physics - who'd thunk it
This year's two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today's powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.
John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material's characteristics due to its atomic spin - a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network's energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
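To make the Hopfield part of that description a bit more concrete, here's a minimal numpy sketch of the idea (a toy example of mine, not the laureates' code): store a pattern with a Hebbian weight rule, then repair a corrupted copy by updating nodes so the energy only ever goes down.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: sum of outer products of the stored +/-1 patterns, no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def energy(W, s):
    """The spin-system style energy the network walks downhill on."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=5):
    """Update one node at a time; each update can only keep the energy the same or lower it."""
    s = s.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# store one 25-"pixel" pattern, then hand the network a corrupted copy
rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(1, 25))
W = train_hopfield(stored)

noisy = stored[0].copy()
noisy[:5] *= -1                       # flip a few pixels
repaired = recall(W, noisy)
print(energy(W, noisy), ">", energy(W, repaired))   # the energy drops
print(np.array_equal(repaired, stored[0]))          # True: the saved pattern is recovered
```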
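And for the Boltzmann machine part, in the same spirit, a tiny restricted Boltzmann machine (the cut-down variant that Hinton later made practical) trained with one-step contrastive divergence. Again just an illustrative sketch with made-up sizes and data:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Tiny restricted Boltzmann machine trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1(self, v0):
        """CD-1 update: raise the probability of the data, lower that of a reconstruction."""
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# toy binary "patterns" the machine learns to reconstruct and dream up new samples of
data = rng.choice([0.0, 1.0], size=(64, 16), p=[0.7, 0.3])
rbm = RBM(n_visible=16, n_hidden=8)
for _ in range(200):
    rbm.cd1(data)
_, h = rbm.sample_h(data[:1])
sample, _ = rbm.sample_v(h)   # a new example "of the type of pattern on which it was trained"
```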
Half Life with ultra-realistic graphics Gen-3 video to video Runway ML Artificial intelligence
https://www.youtube.com/watch?v=XBrAomadM4c
At this point the most impressive thing about all this is how quickly the "rEiMaGiNeD wItH AI" shtick became so very boring and predictable. Yawn!
My problem with the up-res reimaginings is that they remove my own imagination.
With the chunky graphics I imagine it as a real world in my own mind and become immersed, same as reading a book.
When the graphics look like a photo, except it's still missing the one billion tiny details that are needed to make it actually convincingly real, all I notice is the one billion things missing that make it feel off and weird.
IMO the best era for 3D graphics was the Sega Dreamcast era.
https://x.com/bornakang/status/1846503795922022706?s=4
Putting on an AI filter as a post-process and getting some hyper-realistic results: https://youtu.be/OeR5fzmdGik
And CSGO, it's 100% AI. It's running at 10 FPS, but it unlocks the question of whether this is possible. Crazy to think a computer trained itself on a level and then runs the images in real time: https://www.youtube.com/watch?v=L-aAD9qATwg&t=126s
I'm a bit confused: he says it's a post-process filter, but then he says that it uses a render queue and isn't real-time? So is the filter a real-time post-process, or did he take a render queue and run it through an AI model?
Be the first one to implement this and you can make millions and millions of dollars! The "hyper realistic" characters all look very different compared to the source, but who cares if you can make millions and millions just like that?
'This is not in real time yet' he says.
He's also putting far too much effort into making us think he made this, saying it's 'utilising a few tools', but yeah this is just pumping the 3d model through a GenAI. And the results are predictably smushy, inconsistent and lacking any art direction.
The ones where it's dreaming up all of the footage based on an existing game also just make me question: why? As a curio, sure. As 'the fuuuuture of gaming brooo', as it's often labeled: huh?
It's really mesmerizing to see how "tech" is leading us straight into cultural stagnation. Nothing is new, everything is just rehashed slop pushed by button-pushers, like these dime-a-dozen "game rendering through AI" videos.
I dusted off a (paper) comic book and entertainment magazine from the 2000s the other day and I was in utter shock at how incredibly rich and varied everything was compared to the infinite boredom of the pop culture landscape of today, AI and all. It felt like a glimpse into some kind of Star Trek utopia where people could just freely explore any creative endeavour.
I also just noticed that an AI scammer in my area worked his way into a local art exhibit organized by cute elderly people, passing himself off as some genius painter invited as a "guest of honor"... even though all the pieces are straight-up Midjourney vomit printed on canvas. Seeing this kind of utter deception cascade all the way down into local events is even more depressing than the widespread art theft this tech relies on. And no one seems willing to speak up, because of course artists are shy and afraid to make a fuss.
At this point the one single good use of AI in the creative field is the animation-assist feature of Cascadeur, which is trained entirely on internal data. Pretty much everything else is derivative slop...
Michael Knubben: He's also putting far too much effort into making us think he made this, saying it's 'utilising a few tools', but yeah this is just pumping the 3d model through a GenAI.
Got flashbacks of a politician doing the same thing ad nauseam.
pior: I dusted off a (paper) comic book and entertainment magazine from the 2000s the other day and I was in utter shock at how incredibly rich and varied everything was compared to the infinite boredom of the pop culture landscape of today ...
Music is suffering from the same tech-driven sameness. This vid is very similar to what's being remarked upon here.