As shown in the image below, I took a picture of a mushroom and the AI generated a mesh directly from it. Since the result was quite irregular, I used Quad Remesher for retopology. I feel the results are borderline usable. What do you think?
What do you mean by quite irregular? It looks like it's all very evenly spaced triangles. Why the Quad Remesher, and what gains do you think it brought you?
I mean, it sure looks like a mesh of mushrooms, similar to the input, and quite dense. But let's assume it's going to be a Nanite mesh, so that doesn't matter so much. I guess it could be used; it really depends on the use case.
The biggest issue is that the generated model isn't understanding the shapes and forms at a fundamental level: none of the mushrooms have the nice skirt around the top that the drawing is depicting. Even a new 3D modeler would approach that shape better. It might be good enough as a base mesh to start sculpting on, though, or as a blockout.
Yeah, it kind of looks like vague photogrammetry that was then ZRemeshed. If I assigned that image to a modeler and they came back with that model, I'd tell them to take another crack at it.
What is this "AI" tool called? So in this tool you're able to feed in just one 2D image, and it's able to generate a full 3D model?
I'd say for mobile, the mesh seems rather dense. Various optimized mushroom formations could be efficiently built from modules sourced from the same texture. Maybe the generated one can act as an initial reference, but really, this subject is not that complex.
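(For a concrete sense of the density point: below is a minimal sketch of crunching a generated mesh toward a mobile triangle budget with Open3D's quadric decimation. The filename and the 2,000-triangle target are hypothetical placeholders, and decimation alone won't give you the clean modular topology described above.)

```python
# Minimal sketch, assuming Open3D is installed (pip install open3d).
# "mushroom_gen.obj" and the 2000-triangle target are hypothetical.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mushroom_gen.obj")
print(f"input triangles: {len(mesh.triangles)}")

# Quadric edge-collapse decimation reduces triangle count while trying to
# preserve the silhouette; it will not produce clean quads or deliberate
# edge flow -- that still takes retopology or modular modeling.
low = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
low.compute_vertex_normals()
print(f"output triangles: {len(low.triangles)}")

o3d.io.write_triangle_mesh("mushroom_mobile.obj", low)
```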
Cool, who did you learn from, and how much time did you put into it? Oh, this was made as promotional material for not doing anything "ourselves" anymore. Cool, another thing ruined.
Jeez. This is already a 3D model, by Jasmin Habezai-Fekri, Senior 3D Environment Artist @ Airship Syndicate: https://www.artstation.com/artwork/2xxXmB (reverse image search is your friend!)
You could learn how to model this in an efficient manner by directly studying the provided wireframes. While it's cool to explore new techniques, if you haven't developed the fundamental skills, you're not going to actually get hired as a 3D artist.
Thankfully you've stumbled on the best possible site to help you learn how this works! If you want to learn, let us know and we'll be happy to help you every step of the way.
Thank you so much for your help! Actually, I was running a little test, haha. I took a screenshot from ArtStation, then used AI to generate a 3D file and textures. I'm curious to hear everyone's thoughts on its usability. We're in the process of developing an AI tool to help artists boost their efficiency, and this time I mainly wanted to get a gut feeling from the community on its functionality (you know, previous tools like Shap-E haven't been great). Once again, thank you for your kindness and feedback!
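(For reference, the single-image-to-mesh flow in an open model like the Shap-E mentioned above looks roughly like the sketch below, adapted from the sample notebooks in the openai/shap-e repo. The input filename is hypothetical, and the model names and sampler arguments follow that repo and may have changed since.)

```python
# Rough sketch of single-image -> 3D mesh with Shap-E, adapted from the
# sample_image_to_3d notebook in the openai/shap-e repo. Treat as
# illustrative only; check the repo for the current API.
import torch
from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.image_util import load_image
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

xm = load_model("transmitter", device=device)       # latent -> geometry decoder
model = load_model("image300M", device=device)      # image-conditioned diffusion model
diffusion = diffusion_from_config(load_config("diffusion"))

image = load_image("mushroom_screenshot.png")       # hypothetical input image

latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=3.0,
    model_kwargs=dict(images=[image]),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode the sampled latent into a triangle mesh and write an OBJ --
# the kind of dense, unstructured output being discussed in this thread.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open("mushroom_gen.obj", "w") as f:
    mesh.write_obj(f)
```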
If so, do you think it was more or less time-efficient compared to using existing modeling techniques?
Do you think it's viable for non-organic things, or characters? By viable I mean the total workflow considered, not just output quality.
And lastly, everybody is going to want to know: how was the AI that made this mushroom trained?
I feel the accuracy requirements for characters are quite high, and it might not be ready for that yet. The algorithm is still in the iteration phase; we're hoping to release it next month.
Hahaha, the photogrammetry comparison is quite entertaining! I agree; while the quality might not be top-notch, having a base generated by this tool could save some time, allowing for manual refinements afterwards. What's your take on this approach?
As for how much time we've put in: we're still deep in the process. We've been working on this for several months, even dedicating our weekends to it. The dedication is real!
In terms of usability, I think it will only be helpful for people who can't already model. It seems like it's going to be more work and fuss to generate a gooey blob and then have to basically rebuild it if you want something halfway efficient, which will be important if you're working on mobile. It's quicker to just make it efficiently from the onset. I mean, a model like this (just the model, not the textures) could be made in less than half an hour by a modeler, and it's going to be lightweight and set up to actually look like the concept.
I can see practical value for the person who can't model and can't afford to hire a modeler. However, since it looks like the only way you get any training for the AI (or, in this case, the base image to start from) is by theft, it seems like doing so places you in the ranks of basic bottom feeders, and it means you risk your project if you're publishing on Steam.
What dataset are you using to train with? Was it scraped from the web? Curious about consent from creatives.
We didn't specifically collect any individual artist's work. Instead, we primarily used open-source datasets like LAION, similar to what Stable Diffusion has done.
Thank you for your input! We're continuously working on enhancing the usability of our tool, and we acknowledge that the current quality isn't up to par. As for copyright concerns: like Stable Diffusion, our main approach has been to use open-source data. However, there might be some grey areas we need to address, and we'll brainstorm some solutions. In any case, I truly appreciate your feedback!
I think this tech, as it is now, will have to wait until everyone has insanely powerful hardware that makes optimization obsolete. A better use of AI, if you're going to use it, is texture maps.
That LAION approach is a problem, unfortunately. We've talked about this already (https://polycount.com/discussion/comment/2785084/#Comment_278508), but the summary is that LAION and similar datasets are infringing by design; they're not intended for commercial use. They're meant for research and educational purposes.
As soon as you base a commercial application on this, you run into ethical and legal problems.
As creatives, we see the benefits of technological progress, yet we also cannot condone illegal and unethical behavior with our intellectual property.
I'll give the article a thorough read. Thanks for pointing it out!
On the hardware point: it does sound like it'll take a considerable amount of time, and optimization might indeed be an ongoing process.
"open-source data" ... like the image you used for that mushroom ?
Oh, I apologize for the misunderstanding. As Eric pointed out, even open-source data can have its challenges regarding these concerns. We'll certainly have more internal discussions on this matter. Thanks for highlighting it!
The article I linked to is not the primary point I was making with my reply.
My point is that these datasets are not for commercial use.
Only their metadata is licensed via CC BY 4.0.
The associated artworks are still constrained by their own individual copyrights and licenses.
You can't use these research datasets for commercial purposes without consulting the original owners of those scraped images.
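(To make the metadata-vs-image distinction concrete, here's a minimal sketch of inspecting a LAION metadata shard, assuming the LAION-400M parquet layout with URL/TEXT/LICENSE columns; the filename is a placeholder and the exact schema varies by release.)

```python
# Sketch: inspect per-image license info in a LAION metadata shard.
# Assumes the LAION-400M parquet layout (URL, TEXT, LICENSE, ...);
# the filename is a hypothetical placeholder.
import pandas as pd

meta = pd.read_parquet("laion400m_meta_part_00000.parquet")

# The parquet table itself is what LAION releases under CC BY 4.0.
# Each row only points at an image URL; the image keeps whatever
# copyright/license its owner gave it, and the LICENSE field is
# frequently empty or "?" because the scrape can't reliably detect it.
print(meta["LICENSE"].value_counts(dropna=False).head(10))
print(meta[["URL", "LICENSE"]].head())
```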
Hah, as if ANY commercial supplier of AI tools cares, like... ever.
Didn't the US just have some sort of ruling on an upcoming case? IIRC they're not interested in the content of generated images, but they're still quite interested in the datasets.
I can't find a link to it unfortunately, sorry
I'll take a look and see. Indeed, this issue has been bothering me as well. I suspect that GPT-4 might also face this challenge, right? 🤔
Really curious where the current court cases are going to go. It’s going to take a few years for them to wrap up though.
Still, we’re going to keep standing up for what’s right for creatives.