Is this serious? Seems like it might be; kinda funny/condescending tone, but if this is real then things are about to change in a big way. What do you think? I'm sceptical, but it seems legit. I assume you still have to model in polys and then export to a point cloud or something.
http://www.youtube.com/watch?v=Q-ATtrImCx4
Replies
Give polygon video game art enough time and it'll essentially be the same thing. Think of a ZBrush model with tons of tris (points) where all the texture information is baked into it. Isn't that just the same thing? Simply apply a grid system to those points and you have these "atoms" they're talking about, but when you factor in animation and physics, suddenly this world-building method becomes exponentially more costly with current technology. We're already approaching that point, just with polygons as a crutch in the meantime.
Interesting video though, makes you wonder where the next big advancement for game art is going to be.
I was very interested in what you had to say.
So far they've only shown static environments... I'd like to see this with moving lights or animated objects before I'll believe in its possibilities!
As for the video, yeah, it all sounds good, but I can't see this becoming mainstream anytime soon, sadly. Plus, at the moment detail levels go up at around the same pace as artistic talent. We work with our tools in stages of experience, and every "jump" in power gives us a "jump" in what we can do. Giving us total freedom overnight wouldn't produce perfectly realistic visuals, because nobody is skilled enough to do that. It takes a team of people months just to get one FACE looking near-real, so to do an entire game, where everything CAN be different... it's just not gonna happen.
I do not agree with that. The problem with faces in games is mostly related to the lowpoly model, shaders and animation. Animation is limited because of the lowpoly geometry. I mean, the ZBrush sculpts of the faces in Assassin's Creed 2 look totally awesome, yet in the game they are... meh. Sure, a lot of art would not benefit from this, but many limitations on artists would be lifted.
The thing is, a lot of people have got very good with polygons, and extremely comfortable with the programs they use to create them. Even if you just look at some people talking about having to switch from Maya to Max or vice versa, at the start some would rather die. But then again, if this is something that could change the industry, well, time to pull us out of that comfort zone.
That doesn't seem to make a whole lot of sense. I mean, sure, there is going to be a transition period, but to say that nobody is skilled enough to do it? Most things created for realistic next-gen games are first made as super-highpoly sculpts and models that then have to be forced down into lower-poly models. So obviously the skill is already there; they just wouldn't have to crush everything down as far. And look at the film industry: if anything, it just means more people from film could start making games without really changing the models they produce.
What would technology like this mean for texturing ?
I kept asking him how the hell the thing worked and he just avoided giving any details. I remember there was some poster on the side of his booth with 3D model => "Binary" => magic unlimited detail engine. Real helpful.
What was it now... hmmm... oh yeah: Republic: The Revolution.
The website is about as informative as the video...
No need for 2D texture maps anymore?
It would mean the death of UV maps, since you could just paint the colour directly onto the surface.
Like polypaint in ZBrush.
That was what I was wondering.
And even if those points are stored in some grouping structure to keep the workload small... the memory needed to store those giant amounts of data for one object... where would that come from?
Also... only rendering the ones on screen? That must be a joke... downsampling the points of a multi-million point model to a few hundred thousand in the current view sounds like too hard a job to run in software alone.
I think texturing would work like polypaint in ZBrush: painting the points directly in 3D space.
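To make the polypaint idea concrete, here's a minimal sketch (my own illustration, not anything from the Unlimited Detail demo) of colour stored per point instead of in a 2D texture map, with a "brush" that recolours points within reach:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    r: int = 255  # colour lives on the point itself,
    g: int = 255  # so no UV map or 2D texture is needed
    b: int = 255

def paint(points, brush_pos, brush_radius, colour):
    """Polypaint-style: recolour every point within the brush radius."""
    bx, by, bz = brush_pos
    for p in points:
        if (p.x - bx) ** 2 + (p.y - by) ** 2 + (p.z - bz) ** 2 <= brush_radius ** 2:
            p.r, p.g, p.b = colour

cloud = [Point(0, 0, 0), Point(5, 0, 0)]
paint(cloud, (0, 0, 0), 1.0, (200, 40, 40))  # only the first point is in range
```

The renderer would then just read the colour off whichever point lands under a pixel, with no UV lookup step at all.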
What's up with that logo? It's such eye candy!
No, you don't need to keep track of billions of individual points. You just create an octree or similar structure to keep track of the general positions of stuff.
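For anyone who hasn't run into octrees: here's a bare-bones point octree sketch (generic textbook structure, not Euclideon's actual implementation). Each node covers a cube of space and only splits into eight children once it holds more than a handful of points, so you never track billions of points individually, just the occupied regions:

```python
class Octree:
    """Minimal point octree: each node covers a cube, splitting into 8 sub-cubes."""
    MAX_POINTS = 4  # leaf capacity before splitting

    def __init__(self, center, half_size):
        self.center = center        # (x, y, z) of the cube's centre
        self.half_size = half_size  # half the cube's edge length
        self.points = []
        self.children = None        # list of 8 sub-cubes once split

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.MAX_POINTS and self.half_size > 1e-6:
                self._split()
            return
        self._child_for(p).insert(p)

    def _split(self):
        cx, cy, cz = self.center
        h = self.half_size / 2
        # children in fixed (dx, dy, dz) order so _child_for can index directly
        self.children = [
            Octree((cx + dx * h, cy + dy * h, cz + dz * h), h)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        pts, self.points = self.points, []
        for p in pts:
            self._child_for(p).insert(p)

    def _child_for(self, p):
        cx, cy, cz = self.center
        # matches the build order above: dx contributes 4, dy 2, dz 1
        idx = (p[0] >= cx) * 4 + (p[1] >= cy) * 2 + (p[2] >= cz)
        return self.children[idx]

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

tree = Octree((0, 0, 0), 100)
for p in [(1, 2, 3), (-40, 8, 8), (10, -10, 60), (5, 5, 5), (-3, -3, -3), (70, 70, -70)]:
    tree.insert(p)  # sixth insert pushes the root past capacity and splits it
```

The payoff for rendering is that a traversal can skip whole sub-cubes that fall outside the view, which is presumably what makes the "only look at the points on screen" claim plausible at all.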
By the way, wasn't there some tech video up a while ago of John Carmack talking about voxel characters?
Honestly, if the general "shooting guys in the face" next generation looks like the character from NVIDIA's Human Head demo (http://www.youtube.com/watch?v=A838dclFr5U), I would be freaked out.
This unlimited detail is probably a voxel octree thing.
More of that can be seen here.
Samuli Laine
Also I really enjoyed the videos here: Atomontage
If these guys have legit technology they need some better artists helping them portray their tech because everything on their site is blobby and ambiguous.
It's also interesting that all of the scenes are made of a few instanced pieces. I'm sure having "unlimited" detail takes its toll on memory.
But most importantly, the fact that the entire thing is narrated as if he was talking to an idiot has me the most skeptical. New Nintendos! We'll call them Polygon Companies. They didn't like Our Crazy New Technology Much, Did They Mr. Bear? That's because we're too amazing for them, let's go ride our bikes into the unlimited detail forest of mystery!
I don't think the guy could seriously address any concerns someone who works in 3D professionally would raise, so he's spun his entire demonstration to pander to the uneducated and to paint anyone who would question his AMAZING NEW TECHNOLOGY as some sort of 3D Luddite.
If that's the case, then I'm curious what limitations (if any) it has on the number of unique objects it can work with at one time without slowing down. I imagine the speed and memory savings come from caching the required objects for quick searching, so that it derives, say, 100x100 pixels from the same object of maybe 100,000 points, and does so for each of maybe 2000 instances of that object currently on screen, 30 times per second (if it doesn't drop any; there's no reason to update quicker than that because we can only see at like 24, right?), but it's optimized around doing that with a set of maybe only 10 unique objects at once. So at what point would there be too many unique objects for it to search through quickly?
Really though, I guess the calculations involved shouldn't be any more intensive than what games do now to calculate dynamic lighting. That's all about indexing every texel on screen and determining which ones should be shadowed to what extent, right? So we're already packing an object down to maybe 4,194,304 texels (a 2048 map), some of which are wasted because the UVs don't use them, and then running all sorts of fancy math on all those texels, such that each one has to be processed several times to figure out different things about it before finally putting it on screen. The objects are hollow, so looking up the positions of each texel in space without polygons to guide them shouldn't really be a matter of checking a 2048-cubed grid of used and unused spaces (about 8.6 billion voxels) as though it were a real volume that had to be figured. It could just be another texture map that... oh wait, no. There'd be no reason for that to be an image.
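The numbers in that post are worth sanity-checking, since they show why a surface of points is plausible but a dense volume isn't:

```python
# Per-object budget at a 2048 resolution:
texels = 2048 ** 2  # one 2048x2048 colour map (a surface's worth of data)
voxels = 2048 ** 3  # a dense 2048^3 grid at the same resolution (a volume)

print(f"{texels:,} texels")   # 4,194,304
print(f"{voxels:,} voxels")   # 8,589,934,592, i.e. exactly 2048x more
print(voxels // texels)       # 2048
```

So storing only the occupied surface points keeps you near today's texture budgets, while naively storing every cell of the volume costs three orders of magnitude more, which is exactly why sparse structures like octrees come up in this thread.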
Okay, now that I think about it, how is this not just voxels?
If I had to guess, you would create a highpoly model, texture it, and then the model and texture would be "baked", except instead of getting maps, you would get an engine-specific file format containing geo, colour, and material data for every point. I suppose there would be a cap on the number of points per area though, as making everything microscopically detailed wouldn't work out if an artist needed to do it manually. Could work if everything was captured from a real-world object, though.
I don't think it would be limited by object so much as by surface properties. I'd imagine it would search based on groupings, allowing all similar points in a scene to be called and rendered together. So if you have something like D3 where everything is grey plastic, you could have lots of different objects. If you have many objects with completely different materials, it would probably slow down.
The real issue is some of the effects and shaders that the game industry regularly uses, and how this system they describe would take that into account. Specifically, I'm thinking about transparency/opacity. How would an engine like this handle partially transparent objects? It would need to not only take into account which point was under a specific pixel, but also the points behind it, if that particular point is semi-visible.
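The transparency problem above is essentially alpha compositing along a ray: for one pixel you can't stop at the nearest point, you have to blend every point behind it until the pixel is opaque. A minimal front-to-back sketch (standard compositing math, nothing specific to this engine), with colours and alphas as 0..1 floats:

```python
def composite_front_to_back(samples):
    """Blend (colour, alpha) samples ordered nearest-first along one pixel's ray.

    This is why a point renderer can't just take the single nearest point
    when semi-transparent surfaces are involved.
    """
    out_colour = 0.0
    remaining = 1.0  # how much of what's behind can still show through
    for colour, alpha in samples:
        out_colour += remaining * alpha * colour
        remaining *= (1.0 - alpha)
        if remaining < 1e-3:  # early exit: nothing further back contributes
            break
    return out_colour

# A 50% grey glass point in front of a fully opaque white point:
pixel = composite_front_to_back([(0.5, 0.5), (1.0, 1.0)])
```

Note the extra cost this implies: the engine now has to find and sort several points per pixel instead of one, which is exactly the concern raised above.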
If we could get mental ray quality lighting, with 8x Anti-aliasing, motion blur, DoF, dynamic tone mapping on HDR, and all the superawesomesauce post effects, I really think this generation would look incredibly believable.
The animation systems could also use a pretty big boost. More complex rigs, per-poly collision/cloth deformation. Those would help us with that whole 'uncanny valley' look.
Have you guys even stopped to see how fake the actual modelling and texture work looks in some of these big-budget blockbuster movies?
Yes it would: if a pixel returned a 100% black point next to a 100% white point, it would alias. If you tried to draw a power line by simply sampling one point per pixel, you would get a heck of a lot of aliasing. Same goes for textures; there wouldn't be "textures" on the points, but there would be colours, and those would need to be filtered somehow.
I think you're a bit confused; it isn't the polygon that causes the aliasing, it's the fact that the renderer sees one object on one pixel and another object on the next pixel, causing a harsh edge.
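That point about one-sample-per-pixel aliasing is easy to demonstrate. Here's a toy sketch (my own illustration, not the engine's method) where a pixel straddling a black/white edge is either sampled once, giving a hard 0-or-1 result, or supersampled on a sub-pixel grid and averaged into a soft grey:

```python
def render_pixel(scene_colour_at, px, py, samples_per_axis=1):
    """Average a regular grid of sub-pixel samples; 1 sample = hard aliased edge."""
    n = samples_per_axis
    total = 0.0
    for i in range(n):
        for j in range(n):
            # sub-pixel positions at the centre of each grid cell
            total += scene_colour_at(px + (i + 0.5) / n, py + (j + 0.5) / n)
    return total / (n * n)

# A scene that is black on the left half of pixel 0 and white on the right:
edge = lambda x, y: 1.0 if x >= 0.5 else 0.0

hard = render_pixel(edge, 0, 0, samples_per_axis=1)  # single sample: 1.0
soft = render_pixel(edge, 0, 0, samples_per_axis=4)  # 16 samples average to 0.5
```

So whatever the primitive is, polygon or point, an engine still needs multiple samples (or some filtering equivalent) per pixel to avoid jaggies, which cuts into the "one point per pixel" performance claim.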
I doubt something like this would work on its own; too much is invested in the current tech. Polygons are a dead end anyway, I think; we are reaching the limit of their ability. To get true-to-life fidelity in graphics we will have to have raytraced graphics.
Something like this would be good to bolt onto the current tech (if it were possible). Have a layer of abstraction that identifies the level of detail and number of polygons each element in the view requires, and then just hand that off to the graphics card.
http://notch.tumblr.com/
Not to mention animation. That's a fair number of points to weight.
I'm sceptical though, partly because of the whole "unlimited" angle.
Sorry, but no, there is no way it's unlimited. You would not be able to have unlimited enemies or an unlimited array of objects in a level, because regardless of how they are rendered, it takes juice to transform them, cull them, etc., both in processing and memory.
So that puts me off straight away, saying unlimited, it's simply not true.
I guess the caveat is unlimited "detail" per object, and not unlimited amounts of objects/geometry. But still.
Also, their website looks like complete shit and contains no more info than that weird video.
If they want to get people interested and adopting their technology, they need to present themselves better, with more info and MUCH better demos.
Personally I think polys are here to stay. You can achieve Pixar-level graphics with enough polygons. Of course realtime 3D is not at that level yet, but at the rate technology increases, it's getting there.
With graphics cards being able to churn out millions of polygons a frame, the bottleneck is going to start being the CPU, and all the tasks it has to do in a game world.
Right now in games people are used to doing all the rendering on the GPU, and the rest of the stuff on the CPU.
The demos that currently exist pushing software-only solutions are somewhat unrealistic, no? They use a very high-end multi-core processor that's being used 100% for rendering and nothing else.
It would be a hell of a lot more difficult to use a Core 2 Duo to do 50% of the rendering, and make sure you have 50% spare for the logic, physics, transformations, culling, marshalling, etc etc.
I could see GPUs being used for other stuff, like CUDA, but just a CPU doing everything? It seems unlikely.
I could see it more easily in consoles than PCs, ironically. With PCs it would take too long to adopt; you'd have to wait until everyone had a ridiculously fast 12-core processor for software-only rendering to be viable.
But a console? It would be easy. There is nothing stopping Sony or Microsoft from releasing a PS4 or xbox 720 with just one epic multi core processor, rendering voxels.
It's certainly interesting, and voxels have even been used before in commercial games, but I'm not being skeptical on purpose, I just think polygons are here to stay, at least for another good few years.
you could volunteer
Make something that looks better than what we have now and colour me impressed.
http://www.youtube.com/watch?v=HScYuRhgEJw
Pfffft.
Looks like Ecstatica, from like 15 years ago:
VOXELSTEIN IS NEXT NEXT GEN