Hm, seems useful, but I'm not sure I understand what they want it to be. From the trailer it looks like an asset editor where you can categorize pre-created assets, and the engine then positions the assets based on the rules that are set; I'm not sure what part of that is AI. People commenting on this seem to think the assets are created automatically by the AI, and the company seems to be doing its best not to explain that part. I might be misunderstanding or missing something, but yeah, it looks like it could be very useful, just not life-changing.
The guy announcing it as if it were some sort of charity or suicide-prevention video really makes this weird.
Very cool advanced snapping; it'll sure speed up the tedium of swapping out assets by hand.
That's just how Andrew Maximov speaks. I don't know if you've ever watched any of his numerous talks, but that's just how he is, even in small venues, like when he did a talk with a few students from my school. He just really loves art and is genuinely excited to share this with the industry.
Better be building a big fracking game to make this useful.
Watching the video, games like Gone Home, What Remains of Edith Finch, Life Is Strange all popped into my head as far as how useful it could be. Smallish environments with high asset density.
Only if you're starting with a huge asset library.
I think they could sell it better if they showed the AI building different bedrooms with those assets. This sounds very cool and promising, but there isn't really a whole lot to go on yet. First impression is that I like it; it sounds like a time sink for small projects, but the learning capabilities could be great (or catastrophic), especially if user data can be sent back to the mothership for analysis.
Given enough assets and metadata (and time spent databasing), it's an easy enough concept to understand. What I'm curious about is the semantics that the AI can work with. Like they say that it doesn't have inherent knowledge of a "bedroom", but can understand that little things go on top of big things, and seems to place the big things against walls before moving on. Now what if we told it to make a Barracks layout in a large room using the same assets?
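Just to make that concrete, here's the kind of naive two-pass rule placer I'm picturing. Totally hypothetical, all names made up, nothing to do with their actual tech:

```python
# Hypothetical sketch: big surface-providing assets go against walls
# first, then small props get scattered on top of them.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    footprint: float   # rough floor area, m^2
    is_surface: bool   # can other things sit on it?

def layout(assets, walls):
    placements = []
    big = sorted((a for a in assets if a.is_surface),
                 key=lambda a: -a.footprint)
    # Pass 1: anchor the biggest surface assets against the walls.
    for asset, wall in zip(big, walls):
        placements.append((asset.name, f"against {wall}"))
    # Pass 2: put the little things on top of the big things.
    for prop in (a for a in assets if not a.is_surface):
        host = big[0].name if big else "floor"
        placements.append((prop.name, f"on {host}"))
    return placements

print(layout([Asset("dresser", 0.8, True), Asset("lamp", 0.05, False)],
             ["north wall"]))
```

Obviously the real thing has to be way smarter than "biggest host wins," but it shows where hand-written rules end and the learned part would have to begin, which is exactly the Barracks question.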
I haven't been in the games industry for a while, but now I'm in archviz and this would save me so much time. If it worked as advertised, that is. I don't know how many hours I've spent arranging and rearranging furniture and stuff. If they make this work for 3ds max, it's an insta-buy from me.
I like the snapping and asset variation tools - obviously, the variation tools only work if you put the work in to create those variations in the first place, or have access to a hopefully not overly-generic library. The "create a room" feature's usefulness will really depend on how it integrates with gameplay requirements. I'm curious how they will solve that.
Finally, I would like to know where the AI comes in and how much of a boost it provides, because some things, like snapping, could be done with a traditional rule-based approach and a good algorithm.
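For reference, by "traditional rule-based" I mean something as plain as nearest-anchor snapping, which needs zero machine learning. A toy sketch, names made up:

```python
import math

def snap(position, anchors, threshold=0.25):
    """Snap a 3D position to the nearest anchor point within threshold.
    Plain geometry, no AI involved."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(anchors, key=lambda a: dist(position, a), default=None)
    if nearest is not None and dist(position, nearest) <= threshold:
        return nearest
    return position

# e.g. drop a prop near a shelf and have it click onto an attachment point
print(snap((1.02, 0.9, 0.48), [(1.0, 0.9, 0.5), (2.0, 0.9, 0.5)]))
```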
The machine learning systems I've seen usually analyze a pre-existing set of scenes, learning which types of props are associated with which types of furniture. Then it can start to extrapolate new layouts.
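The core of that can be as simple as co-occurrence counting over tagged example scenes. A toy sketch of the idea, not anything they've published:

```python
from collections import Counter
from itertools import combinations

# Toy co-occurrence model: learn which asset tags show up together in
# example scenes, then suggest companions for a given anchor asset.
# Purely illustrative; no relation to Promethean's actual model.
def train(scenes):
    pairs = Counter()
    for scene in scenes:  # each scene is a list of asset tags
        for a, b in combinations(sorted(set(scene)), 2):
            pairs[(a, b)] += 1
    return pairs

def suggest(pairs, anchor, top=3):
    scores = Counter()
    for (a, b), n in pairs.items():
        if anchor == a:
            scores[b] += n
        elif anchor == b:
            scores[a] += n
    return [tag for tag, _ in scores.most_common(top)]

model = train([["bed", "nightstand", "lamp"],
               ["bed", "nightstand", "book"],
               ["desk", "lamp", "chair"]])
print(suggest(model, "bed"))  # -> ['nightstand', 'lamp', 'book']
```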
So, they must have fed it a bunch of scenes to learn from.
The weird thing is, no one understands exactly how these AIs come up with their decisions. It's a black box. There's a lot of active research right now trying to figure this out.
If I'm understanding their press release correctly, it's smart enough to learn based on photographs, not just 3d models of rooms.
I once worked for a company that was exploring using machine learning / computer vision to identify human body parts (it was porn, ok!) and track their position on screen during video. Basically, I'd draw rectangles around different parts frame by frame until we had a library of ~10k images for it to parse. I'm imagining it's a very similar concept.
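The labeled data itself was nothing fancy; per frame it boiled down to records roughly like this (simplified from memory, field names illustrative):

```python
# One frame's worth of bounding-box labels for training the detector.
annotation = {
    "frame": 1042,
    "boxes": [
        # (label, x, y, width, height) in pixels
        ("hand", 312, 188, 96, 74),
        ("face", 540, 60, 128, 140),
    ],
}
```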
I'm wondering how they will come up with a good data set, though. If it's your own scenes, then you'll probably have most of the game made by the time the AI gets clever enough. Or they include everyone else's learning, funneling it all through the central app, which ensures some constraints, but then it could all be much too generic and inapplicable to your particular visual and game design goals. I guess they'll have some clever minds looking at that, though... seems like a challenge.
Personally, I'm more in favor of small smart tools that could get better with AI. Something that makes the daily modeling, UVing, LODing, and animation grind more bearable. With a bunch of those, productivity gains could stack up quickly. But it doesn't make for great announcement trailers.
Anyways, this might clarify some things - https://80.lv/articles/promethean-ai-the-tricks-of-learning-game-art/