They've shown moving objects with collision detection too.
Where? Not that woeful toxic neon giraffe shit?
edit-
Just to be clear, Rex: I'm not saying the tech doesn't exist. I'm trying to get across that what they've shown is in no way convincing or conclusive. All they've done is avoid straight answers and keep showing their cactus again and again.
They've likely found ways around many of the issues, and there is also a video that shows collision and animation from these guys; they showed it in the Hard OCP interview... so what are the reasons why this isn't applicable, again?
As noted many times, there are upsides and there are downsides. Countless trips down this road have revealed these downsides, and there just isn't any optimal way to solve them.
Euclideon has not invented an optimal way to solve these issues; they've worked with what they already have, countless times sidestepping the problems of their approach.
Animation issues have been answered with "it's not done yet" and then showing animations in the only feasible way you can actually do it and calling it reality.
Questions about rotational issues and placement have not even been answered in any way.
Memory requirements have been dodged by saying that "it's not an issue".
He said himself there was no collision detection, and just that 'it won't be a problem'.
I'd like to know how it won't be.
Those 'animated' characters that were shown certainly do not prove collision detection. Simply fix their height above the ground and move them around horizontally; that's all it is. They don't hit anything, and the animation is all non-skinned.
I still think, though, that maybe we're all looking at this wrong. OK, so it can't replace modern game pipelines, but *what could you make with it*? If you were clever in how you make your tiled pieces, and you had a game where static lighting is acceptable, it seems like a talented art team could make a fantastic-looking game, if it's designed around the obvious limitations and takes advantage of the obvious strengths.
I'm perfectly aware of raytracing/raycasting advantages. But it doesn't have anything to do with the detail in the scene.
You can't store unlimited detail, so you can't render it.
He tries to bend this definition by rendering many instances of the same geometry. In my book it's still not unlimited; it's just a lot of very limited.
Polygons aren't going to be around forever. Eventually we are going to switch to something else.
Why are you so sure about that? Voxels are even more limited than polygons, a lot harder to work with, and less efficient to store data compared to a base mesh + tessellation + displacement mapping.
There isn't really anything unbelievable about the software, I still cannot really understand the disbelief.
Where are the following:
- proper terrain variation, with changing elevation and such
- arbitrary transformations on every object
- deformable geometry on at least foliage and water
- at least 15-20 animated characters
- dynamic lighting, shading with complex, energy preserving materials (a lot of games already implement it) and shadows; global illumination could be good too.
Knowing the limitations of voxels I find it hard to believe that he alone has been able to overcome the inherent problems that have stopped everyone else from doing the above.
And that wasn't even a sorry excuse for animation. We want skeletal deformation for characters.
So despite all of this, you are really willing to believe that it all works together, without seeing it, just by the word of a guy who's been preaching about this for eight years??
Also, I just read a presentation on how Little Big Planet 2 does lighting using voxels.
Interesting note from the coder on how his time is spent:
- 10% R&D that you're not gonna use
- 10% R&D that you're gonna use
- 30% implementation
- 50% bug fixing
Euclideon is still stuck in the "R&D you're not gonna use" phase.
Assuming that they're just shortsighted because of their lack of experience, eventually they'll realize that this tech can't compete and abandon it altogether... Or maybe they really are a scam in which case they'll try to get more funding in 1-2 years or so.
How do you know Euclideon's progress with development? You state it like you know everything going on with them behind the scenes.
Right on, but the devs of LBP aren't developing the Unlimited Detail technology... obviously since this company has been focusing on this tech for so long and JUST on this tech, they're really not in the same shoes in terms of resource allocation as to where development time is spent.
LBP is made by a game developer. Unlimited Detail is made by a tech company... No video game company can just focus solely on the programming for their games, they've got way more to deal with than that.
Euclideon doesn't have all the other stuff to deal with.
I'm only forming my perspective based on what has been said and shown by the company.
Also no, they aren't using raycasting or raytracing, so your comment there makes no sense; again showing how much actual research you've done into Euclideon.
You still didn't understand what I said. The way the scenes can have unlimited detail is through basically the most efficient occlusion culling possible: the atom counts stay consistent and don't need to change. The tech only fetches the points that it needs to display at any one time.
Not a single point that you cannot see is drawn using this technique.
Just like with screen space effects. Performance for SSAO doesn't change because a scene is more complex.
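The claim above can be sketched in a few lines. This is a toy illustration of the "one point per pixel" idea only, not Euclideon's actual algorithm; `find_nearest_point` is a hypothetical stand-in for whatever spatial query their engine really performs:

```python
# Toy illustration of the "one atom per pixel" claim: per-frame work is
# bounded by pixel count, because exactly one point lookup happens per
# pixel. The point list and the nearest-point query are hypothetical
# stand-ins, not Euclideon's real data structures.

def find_nearest_point(points, ray_dir):
    # Stand-in spatial query: pick the stored point whose direction best
    # matches the ray (a real engine would descend an octree instead).
    return max(points, key=lambda p: p[0] * ray_dir[0]
                                   + p[1] * ray_dir[1]
                                   + p[2] * ray_dir[2])

def render(width, height, points):
    framebuffer = []
    for y in range(height):
        row = []
        for x in range(width):
            # One query per pixel; occluded points are never touched.
            ray_dir = (x - width / 2, y - height / 2, 1.0)
            row.append(find_nearest_point(points, ray_dir))
        framebuffer.append(row)
    return framebuffer
```

The work done is `width * height` lookups no matter how many points are stored, which is the whole of the "detail is free" argument; what it hides is the cost of storing the points in the first place.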
... screen space AO is cheap and consistent because it is a post process - it is based on existing passes (the screen space normals and Zdepth of the scene being displayed), not on thin air. And the reason why it is cheap is that unlike "bouncy" AO, it uses 2.5D tricks to achieve a cool effect.
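To make that comparison concrete, here is a deliberately crude depth-only AO sketch. Real SSAO implementations sample normals and use randomized kernels; this toy version only compares neighbouring depth values, but it shows why the cost tracks resolution and kernel size rather than scene complexity:

```python
# Toy screen-space AO over a depth buffer: each pixel's occlusion is
# estimated purely from neighbouring depth values, so the cost depends
# on resolution and kernel radius only, never on scene complexity.

def ssao(depth, radius=1):
    h, w = len(depth), len(depth[0])
    ao = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            occluders = 0
            samples = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and (dy or dx):
                        samples += 1
                        # A nearer neighbour partially occludes this pixel.
                        if depth[ny][nx] < depth[y][x]:
                            occluders += 1
            ao[y][x] = 1.0 - occluders / samples if samples else 1.0
    return ao
```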
Following your logic, this voxel "pass" would have to be based on and interpolated from some underlying data set, and be applied as some sort of postprocess on top of something cheap. Like ... polygons!
As a matter of fact, the only sensible use I could see with voxels (not "Atoms"...) would be to create voxel based geometry based on polygonal data, just like what 3DCoat does when importing an OBJ to its voxel module. Later on, this new voxelized data could be used for erosion/chipping gameplay effects, Minecraft/Earthworm Jim2 style ...
Also, the reason why people are very doubtful is pretty obvious. Why would any sane "game tech company" use such terrible, misinformed and misleading PR practices? They are clearly not trying to directly market to the real game industry with their terrible Kotaku-level hype. I don't think that any sane graphics programmer would like to collaborate with a company not even able to explain what its tech is really about, while hiding behind nonsensical buzzwords...
There is nothing "Unlimited" about this tech - it is a grid-based unification of 3D assets. The fact alone that they call it "Unlimited" is in itself a good enough reason to doubt them. Maybe they have some good compression going on (like ILM did on Transformers 2) but again, unlike movies, game tech is all about being efficient and budgeted in the first place. Realtime 60fps stuff.
This company sounds a lot like the tech dreamers thinking that one day UVs won't be necessary because "it will all be automated", or that maybe even later in the future we won't have to use lightweight geometry anymore because processing power will just be "free".
Whoever thinks this way most likely never worked on a recent, demanding high quality game asset. Ask artists, and they will tell you that they don't believe in "no UVs" ; instead, they want tools to make the process much smoother and faster.
RexM, I think you're a little misguided and misinformed by what this "unlimited" type of lingo is. Just because you're only bound to the amount of renderable pixels on the screen, it doesn't mean that you don't have to worry about video memory. Not to mention the physical memory needed to hold all this compressed data.
In my experience the only form of "unlimited" detail is Carmack's megatexture technology, which is still a complex system of mipping, compression, organizing and caching. Yes, this virtualized texture technology allows for a nearly unlimited amount of texel density at authoring time, because the technology is brilliantly efficient at utilizing all your display pixels. However, it was still an uphill battle to get it all running in a limited amount of memory on consoles at 60fps. Not to mention the compression methods needed to cram all those authored terabytes of data onto disk.
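For illustration, the core of that virtualized lookup can be sketched roughly like this. Page size, cache size, and class names are all made up here, and real megatexture adds mip chains, feedback passes, and compression on top:

```python
# Toy virtual-texture lookup in the spirit of megatexture: a huge
# logical texture is split into pages, and only pages actually sampled
# are resident in a small LRU cache. All names and sizes are invented.

from collections import OrderedDict

PAGE = 128  # texels per page side (assumed)

class VirtualTexture:
    def __init__(self, load_page, cache_pages=64):
        self.load_page = load_page      # backing store: page coords -> data
        self.cache = OrderedDict()      # resident pages, LRU order
        self.cache_pages = cache_pages

    def sample(self, u, v):
        key = (u // PAGE, v // PAGE)
        if key not in self.cache:
            if len(self.cache) >= self.cache_pages:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[key] = self.load_page(key)
        else:
            self.cache.move_to_end(key)         # mark as recently used
        return self.cache[key]
```

However huge the logical texture, resident memory is capped at `cache_pages` pages; the hard part, as noted above, is keeping the loads fast enough at 60fps.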
Just because someone claims they can virtualize geometry in the same fashion in software mode by throwing around a few buzzwords, it doesn't mean it's ready as a technology, nor will be anytime soon. Besides, it's not as if John Carmack isn't investigating the very idea himself.
Since the demo ran on the laptop in the Hard OCP interview, that also proves that the memory requirements others speculated this tech would need have no basis.
He isn't just claiming it; he has shown various videos that prove it, and then later proved it is real-time.
Yes, he showed tiled geometry. The whole world appears to be made up of maybe 30 items tiled over and over again, which is why it's possible to run it in software alone.
Now try that with a real landscape, where every rock is different, and we'll talk. Unlimited detail doesn't mean anything if you put that unlimited detail into limited objects.
Rex: That's an excuse. If they have a converter for objects, like they say they do, and they have access to a laser scanner, like they say they do, then they can put more than 30 things into their world. Hell, turbosquid's free section is full of models that are useless to modern polygon games, but should be no issue if they can just convert them to point data.
Instancing is wonderful. I can run like 20 times as many polygons in a non-optimized display app like 3ds max if I keep them instanced.
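A rough sketch of why instancing is so cheap memory-wise, with made-up mesh and world sizes:

```python
# Instancing in a nutshell: one copy of the heavy geometry, many cheap
# per-instance transforms. Memory for N instances grows by a few floats
# per instance, not by a full mesh per instance. Sizes are illustrative.

mesh = [(i * 0.1, i * 0.2, i * 0.3) for i in range(10000)]  # shared, stored once

instances = [
    {"mesh": mesh, "position": (x * 5.0, 0.0, z * 5.0)}
    for x in range(20) for z in range(20)
]  # 400 placements of the same rock

def world_vertices(instance):
    px, py, pz = instance["position"]
    # Transform the shared vertices on the fly; nothing is duplicated.
    return [(vx + px, vy + py, vz + pz) for vx, vy, vz in instance["mesh"]]
```

400 rocks cost one mesh plus 400 positions, which is exactly why a tiled demo proves nothing about unique data.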
The point is, all this has to have a compute and storage cost. Even kkrieger, which is 96k on disk and has a whole FPS inside, still takes over 200 MB of RAM when running.
Points can only be compressed so far while still retaining data. 8 GB is a LOT of RAM. You can fit the entirety of many a modern game's hard drive footprint in that space with some left over for the OS.
I think that's a reason why they're being so secretive. It's not that they have something no one's ever thought of; it's multiple existing things that people haven't put together in the same way before.
Like efficient instancing, combined with a google-like search algorithm, and a hypertexture renderer.
And I'll bet the animation we see next time will be either fluid or discrete transforms, but not skinned points.
edit: Btw, that paper is over 10 years old, and a single core of the Intel laptop has about 2x10^3 times the processing power of the high-end server system listed in the paper to do the rendering. So yeah, even old algorithms eventually become realtime, and they've likely improved it. One way or another it's a voxel engine; it just doesn't use the marching cubes or marching tetrahedra surfacing algorithm.
Since the demo ran on the laptop in the Hard OCP interview, that also proves that the memory requirements others speculated this tech would need have no basis.
He isn't just claiming it; he has shown various videos that prove it, and then later proved it is real-time.
It being realtime isn't what people are sceptical about; there hasn't been any concrete proof of anything.
We're talking about the claims of unlimited detail with limited memory, and no details on memory usage (and this is without even looking at any of the other issues involved).
Replace the ray with a search and everything else about the tech remains fundamentally equal to other voxel rendering techniques, with the same limitations.
People can do rough calculations on the memory needed for this, and the need for instanced data becomes apparent. Even Euclideon themselves, further back, talked about how we have tons of storage but limited rendering power, which further shows that even they know the object data will be very heavy if you intend to have unlimited detail (detail up close).
And if you actually want to have unique data, like the ground dirt being pushed away from roots of the trees, instead of just having models intersect (which ironically is another thing they talked about in their videos) you'd have to have that ground data be unique.
But back to their search algorithm: everything points to its very foundation and speed depending on everything being aligned and categorized in that way. There's no confusion for the search, but it also means there's little flexibility in position.
Right on, but the devs of LBP aren't developing the Unlimited Detail technology... obviously since this company has been focusing on this tech for so long and JUST on this tech, they're really not in the same shoes in terms of resource allocation as to where development time is spent.
The LBP guy was talking about graphics programming in general, from his who knows how many years of experience combined with the entire company's experience. Incidentally he was also working with voxels a lot. Also remember how id software demonstrated Rage and virtual texturing almost 5 years ago - it took them that long to work the new, experimental tech into a finished product.
So that 10-10-30-50 division is an industry level average. Makes all kinds of sense too, if you've done any programming work.
LBP is made by a game developer. Unlimited Detail is made by a tech company...
This is totally stupid. All game devs have tech people to create their tech (some of them can even be called scientists, most of them are called engineers) - and Euclideon wants to create tech for games. They'd better work the same way, or Dell won't ever get his stuff into commercial products... which actually seems to be the case though.
Euclideon doesn't have all the other stuff to deal with.
In that case they won't be able to sell their stuff.
I'm only forming my perspective based on what has been said and shown by the company.
They haven't shown lighting, shading, deformations, skeletal animation, realistic (uneven) terrain, collision and so on.
Also no, they aren't using raycasting or raytracing, so your comment there makes no sense; again showing how much actual research you've done into Euclideon.
Realize that what they described is basically raycasting: for every pixel, find the smallest single element that's visible and render only that element. That is the definition of ray tracing without secondary rays, which is also called raycasting. Even Wolfenstein 3D used it, did you know that?
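For reference, here is a minimal Wolfenstein-style grid raycast. The map and step size are toy values, but the "find the first visible element along this pixel's ray" structure is the same one being described, whatever it gets called:

```python
# Minimal Wolfenstein-style raycast: step a ray through a 2D grid until
# it hits a solid cell. "Find the first thing visible along the ray" is
# exactly the per-pixel search under discussion.

def cast_ray(grid, x, y, dx, dy, max_steps=100):
    for _ in range(max_steps):
        x += dx
        y += dy
        cx, cy = int(x), int(y)
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
            return None  # ray left the map without hitting anything
        if grid[cy][cx] == 1:
            return (cx, cy)  # first solid cell; everything behind is ignored
    return None
```

Calling it a "search" instead of a ray changes the name, not the structure.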
Just because Dell calls it a "search" algorithm doesn't really change it, he's just blowing smoke to make his tech appear something new and therefore more interesting, which it isn't.
So apparently it's you who did not do actual research.
This also won't allow unlimited detail. For a start you'd need unlimited data and for that, unlimited storage... but I've already stated that rendering instances does not mean even "nearly unlimited" detail to me.
Yes, he showed tiled geometry. The whole world appears to be made up of maybe 30 items tiled over and over again, which is why it's possible to run it in software alone.
And even that was on a laptop with 8 gigabytes of memory.
Consoles have 512 megabytes, next gen shouldn't have more than 2 gigs either.
If you have a 1920x1080 screen, only 2,073,600 points have to be rendered. That number won't change even if a scene appears to be much more detailed than another scene using the same tech. So, instead of rendering billions/trillions of points, only a couple million are drawn.
Lower the resolution, and even fewer points have to be rendered.
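That per-frame budget is plain arithmetic:

```python
# One point per pixel means the per-frame point budget is just the
# pixel count of the target resolution.

def points_per_frame(width, height):
    return width * height

full_hd = points_per_frame(1920, 1080)  # 2,073,600 points per frame
hd_720 = points_per_frame(1280, 720)    #   921,600 points per frame
```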
Compressed files that only store the surface points of models.
Storing all the points, inside and out would be stupid. That's what makes this different than voxels. Couple that storage method with compression and you have a memory footprint ridiculously smaller than traditional voxels.
Remember: They ran this on the Hard OCP laptop, and I doubt that laptop has 512 Petabytes of hard disk space.
If you have a 1920x1080 screen, only 2,073,600 points have to be rendered. That number won't change even if a scene appears to be much more detailed than another scene using the same tech. So, instead of rendering billions/trillions of points, only a couple million are drawn.
Lower the resolution, and even fewer points have to be rendered.
Imagine that there is a landscape which is completely unique; no tiling. That makes billions of unique voxels, each with location, color, and material properties. Even if the call routine only has to reference a million voxels at once, you still need to store the landscape info somewhere. You can't just pull unlimited detail out of nowhere if you want it to be anything but procedural. Where will you store that kind of information?
512PB is what Notch proposed it would take to store the data for a 1km chunk of unique detail data. The demo is a group of highly repeated instanced objects; who's to say that it isn't eating up the 8GB.
Also, you can't just compress everything in RAM. Decompression is slow, and even if they were able to decompress the data at fast rates, you'd still need the RAM to store it in while it's being traversed. You'd end up in a situation where you are constantly decompressing and dumping data from RAM.
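A quick back-of-envelope makes the scale of the problem concrete. Every number below is an assumption chosen for illustration, and deliberately far more modest than Notch's 512 PB volume estimate:

```python
# Back-of-envelope for unique (non-instanced) surface data only.
# All figures here are illustrative assumptions, not Euclideon's.

point_size_bytes = 8                # assume heavy packing: position + colour
points_per_mm2 = 1                  # one surface sample per square millimetre
area_mm2 = 1_000_000 * 1_000_000    # a flat 1 km x 1 km ground plane in mm^2

total_bytes = area_mm2 * points_per_mm2 * point_size_bytes
total_tb = total_bytes / 1024 ** 4  # roughly 7 TB
```

Roughly 7 TB for a single flat, unique square kilometre, before any overhangs, props, or vertical detail. Instancing is the only reason the demo fits on a laptop.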
The fact that the demo is running on the Hard OCP laptop shows that those memory constraints are simply not true. I seriously doubt that laptop has more than 4 TB of HDD space.
512PB is what Notch proposed it would take to store the data for a 1km chunk of unique detail data. The demo is a group of highly repeated instanced objects.
Notch was assuming that all points were stored for the entire volume of the object. He didn't consider compression, or storing only surface points.
Notch also coded a poorly optimized game in Java. He's hardly an authority on the subject, and was proven wrong.
Carmack said the tech is possible and Carmack has a little bit more expertise on the matter than Notch does.
The claims aren't that extraordinary when you understand the technology though.
Notch and Carmack understand the tech, I understand the tech. Everyone arguing with you understands the tech better than you. The claims are bullshit without actual evidence and NONE has been shown that wasn't possible 10 years ago.
You just keep spewing out the same crap that they've said, because what they say is the truth and industry experts and knowledgeable programmers don't know what they're talking about at all.
The spokesperson can be quoted many times saying that their approach also uses less memory than using polygons and texture sheets.
The spokesperson failed to mention the exact terms of that comparison. Surprisingly, those terms are very important.
What he meant is probably this:
The amount of runtime memory required to display those instances of voxelized geometry stored in a sparse octree is lower than the amount of runtime memory required to display the base models (pre voxelization) with their millions of vertices and huge color textures. This may indeed be true.
What's actually more important are these:
- Using low poly models with tessellation and displacement requires even less memory than using sparse voxel octrees, and allows for similar visual detail.
(bonus points for including the least impressive image possible to show what tessellation is)
- Using the same models and textures requires far, far less background storage space for the game content on HDD / DVD / BR / whatever than storing the entire voxel data. Shipping a game with full voxel content is still impossible on today's distribution formats.
- Also, if building a game world without instancing, the renderer will have to sacrifice a LOT of runtime memory for streaming and caching world data as the player moves around.
Thus in practice, his approach actually requires a LOT more memory. In the very restricted case of this particular demo and compared to the source objects he gets from their artists, he is indeed telling the truth - but in every other case he's not right.
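To put rough numbers on that comparison: the sizes below are invented but plausible for a single detailed prop, and the point is the asymmetry, not the exact figures:

```python
# Invented but plausible sizes for one detailed prop; the asymmetry
# between the two approaches is the point, not the exact figures.

# Raw surface samples: 1M points at ~16 bytes (position + normal + colour)
point_cloud_bytes = 1_000_000 * 16

# Low-poly mesh + displacement: 2k vertices at 32 bytes, plus 1024x1024
# single-byte displacement and colour maps (assuming heavy compression)
mesh_bytes = 2_000 * 32
displacement_bytes = 1024 * 1024
colour_bytes = 1024 * 1024

tessellated_bytes = mesh_bytes + displacement_bytes + colour_bytes
ratio = point_cloud_bytes / tessellated_bytes  # the point cloud is ~7x bigger
```

And that gap only widens once the world stops being instanced.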
See how you're getting misguided by Euclideon?
They've shown lighting, collisions, and skeletal animation so far.
They did not show lighting, every object has fully baked in shading.
They did not show any collisions at all.
Those laughable creatures might have been using skeletal animation, but I was talking about skeletal deformation. As a game artist you should understand the importance of this distinction; I'm willing to explain otherwise.
Still not seeing any reason to disbelieve in this yet, since we don't know anything about their implementation.
That is because you're both misguided and willingly misguiding yourself at the same time.
Lol Rex, they're running tiled objects, and it's obvious. You're missing the point of "What if the landscape isn't tiled? What if it's all unique?"
Even with surface points only (which is more or less a given) and insane compression you're still looking at storing titanic amounts of data. Modern games can make huge worlds because the actual poly-budget for ground is low as hell. Not so if you want to make "unlimited detail" with completely unique geometry.
Storing all the points, inside and out would be stupid. That's what makes this different than voxels.
Seriously, are you reading this topic at all? Sparse voxel octrees are exactly about not storing all the points, just the ones that matter.
There are a lot of actual examples for how much space voxel data requires.
John Olick's demo, Nvidia's cathedral demo... Even sparse octrees mean gigabytes of data per model.
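For anyone following along, the sparse-octree idea in question can be sketched like this. It's the textbook structure, not Euclideon's actual format:

```python
# Textbook sparse voxel octree: nodes exist only where there is surface
# data, so empty space and solid interiors cost nothing. This is the
# general structure under discussion, not Euclideon's actual format.

class OctreeNode:
    def __init__(self):
        self.children = {}  # octant index (0-7) -> OctreeNode; absent = empty
        self.colour = None  # set on leaves holding a surface sample

def insert(root, x, y, z, colour, depth):
    # Coordinates are integers in [0, 2**depth); one child is chosen per
    # level by the corresponding bit of each coordinate.
    node = root
    for level in range(depth - 1, -1, -1):
        octant = (((x >> level) & 1)
                  | (((y >> level) & 1) << 1)
                  | (((z >> level) & 1) << 2))
        node = node.children.setdefault(octant, OctreeNode())
    node.colour = colour

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children.values())
```

Two opposite corner samples in an 8x8x8 volume allocate only 7 nodes where a dense grid would need 512 cells. That sparsity is the saving, but the gigabytes-per-model datasets above show it's a constant-factor saving, not "unlimited".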
Couple that storage method with compression and you have a memory footprint ridiculously smaller than traditional voxels.
Yes, but even non-traditional voxels require a LOT of space.
Remember: They ran this on the Hard OCP laptop, and I doubt that laptop has 512 Petabytes of hard disk space.
No, you remember: there are only about 30-40 individual objects in the demo, and it runs on 8 gigabytes of memory.
Even if polygon meshes were instanced to build a map as big as their demonstration, it would never run.
I suggest doing more research into the tech. Seems too many of you are basing your statements on personal biases and continue to ignore my posts explaining how this tech is possible.
Carmack said the tech is possible and Carmack has a little bit more expertise on the matter than Notch does.
Carmack said it might be implemented in a very limited form in the next 5 to 10 years. As in, still not competitive with even today's AA titles (not to mention how far those games will get in 5 to 10 years).
Maybe you wish to watch the Q&A section at Quakecon and a recent interview where he's talking about this.
Seriously, are you reading this topic at all? Sparse voxel octrees are exactly about not storing all the points, just the ones that matter.
There are a lot of actual examples for how much space voxel data requires.
John Olick's demo, Nvidia's cathedral demo... Even sparse octrees mean gigabytes of data per model.
Yes, but even non-traditional voxels require a LOT of space.
No, you remember: there are only about 30-40 individual objects in the demo, and it runs on 8 gigabytes of memory.
30-40 polygon objects that had no LOD's could never run on that hardware on that island size when using polygon meshes with textures, while matching the geometric complexity of the point models in the demonstration.
Great, it has 8 GB of RAM... but it doesn't have more than 4 TB of HDD space, I can tell you that for a fact. That should be more than enough evidence to know that they've found ways around the memory issues traditionally associated with voxels.
You are an idiot, I won't repeatedly post the same info that both others and I have already posted, instead I give up. Is there a damn ignore function on this forum engine?
Seriously dude, read back, we've explained everything like 5 times already, it's you who refuses to understand and even claim that we're all of us wrong and ignorant and only you and Dell are right. Good for you, be happy about it, I'll go and have a beer instead.
Great, it has 8 GB of RAM... but it doesn't have more than 4 TB of HDD space, I can tell you that for a fact. That should be more than enough evidence to know that they've found ways around the memory issues traditionally associated with voxels.
Great, I can instance in Cryengine 2/3 and still not be able to make islands that large, all rendered in real-time with no LoD's at the same geometry levels of the unlimited detail models.
When you begin using swear words and insults to convey your point, that is when you have nothing more to add and respond out of baseless frustration.
Great, I can instance in Cryengine 2/3 and still not be able to make islands that large, all rendered in real-time with no LoD's at the same geometry levels of the unlimited detail models.
Does it also look like tiled bullshit? Oh no, it doesn't. Euclideon renders a few objects over and over and still takes up a load of resources and runs at low FPS. It's impressive detail, and the best voxel engine I've seen so far, but it's not worth anything in terms of game dev. Tiling a few things around just doesn't cut it anymore.
When you begin using swear words and insults to convey your point, your point is lost.
Same goes for you when you fail to show sense or critical thinking ability. Hell, you can't even understand what we're talking about.
How do you know it takes up loads of resources? We don't know how many resources it takes.
It's running solely on a CPU in software mode, low FPS is understandable.
If I load 10000 rocks at 2000 polygons a piece in Cryengine 3, and then instance them without LoD's, that would take up loads of resources and run at low FPS too.
Replies
where? not that woefull toxic neon giraffe shit?
edit-
just to be clear Rex, im not saying the tech doest exist im trying to get across that what they've shown is in no way convincing or conclusive. all they've done is avoid straight answers and kept showing their cactus again and again.
As noted many times, there are upsides and there are downsides, many a countless travels down this road has revealed these downsides and there just isn't any optimal way to solve it.
Eucledian has not invented an optimal way to solve these issues, they've worked on what they already have, and countless times sidestepping the issues of their approaching.
Animation issues have been answered with "it's not done yet" and then showing animations in the only feasible way you can actually do it and calling it reality.
Questions about rotational issues and placement have not even been answered in any way.
Memoryrequirements have been dodged by saying that "it's not an issue"
I'd like to know how it won't be.
those 'animated' characters that were shown certainly do not prove collision detection. Simply fix their height to above the ground and move them around horizontally.. that's all it is. they don't hit anything, and the animation is all non-skinned
I still think though that maybe we're all looking at this wrong. Ok so it can't replace modern game pipelines, but *what could you make with it* - if you were clever in how you make your tiled pieces, and you had a game where static lighting is acceptable, it seems like a talented art team could make a fantastic looking game- if it's designed around the obvious limitations and takes advantage of the obvious strengths.
I'm perfectly aware of raytracing/raycasting advantages. But it doesn't have anything to do with the detail in the scene.
You can't store unlimited detail, so you can't render it.
He tries to bend this definition by rendering many instances of the same geometry. By my book it's still not unlimited, it's just a lot of very limited.
Why are you so sure about that? Voxels are even more limited than polygons, a lot harder to work with, and less efficient to store data compared to a base mesh + tessellation + displacement mapping.
Where are the following:
- proper terrain variation, with changing elevation and such
- arbitrary transformations on every object
- deformable geometry on at least foliage and water
- at least 15-20 animated characters
- dynamic lighting, shading with complex, energy preserving materials (a lot of games already implement it) and shadows; global illumination could be good too.
Knowing the limitations of voxels I find it hard to believe that he alone has been able to overcome the inherent problems that have stopped everyone else from doing the above.
And that wasn't even a sorry excuse for animation. We want skeletial deformations for characters.
So despite all of this, you are really willing to believe that it all works together, without seeing it, just by the word of a guy who's been preaching about this for eight years??
Interesting note from the coder on how his time is spent:
- 10% R&D that you're not gonna use
- 10% R&D that you're gonna use
- 30% implementation
- 50% bug fixing
Euclideon is still stuck in the "R&D you're not gonna use" phase.
Assuming that they're just shortsighted because of their lack of experience, eventually they'll realize that this tech can't compete and abandon it altogether... Or maybe they really are a scam in which case they'll try to get more funding in 1-2 years or so.
Right on, but the devs of LBP aren't developing the Unlimited Detail technology... obviously since this company has been focusing on this tech for so long and JUST on this tech, they're really not in the same shoes in terms of resource allocation as to where development time is spent.
LBP is made by a game developer. Unlimited Detail is made by a tech company... No video game company can just focus solely on the programming for their games, they've got way more to deal with than that.
Euclideon doesn't have all the other stuff to deal with.
I'm only forming my perspective based on what has been said and shown by the company.
Also no, they aren't using raycasting or raytracing, so your comment there makes no sense; again showing how much actual research you've done into Euclideon.
You still didn't understand what I said. How the scenes can have unlimited detail is through basically the most efficient occlusion culling possible, where the atom counts are consistent and don't need to change. The tech only fetches the points that it needs to display at any one time.
Not a single point that you cannot see is drawn using this technique.
Just like with screen space effects. Performance for SSAO doesn't change because a scene is more complex.
... screen space AO is cheap and consistent because it is a post process - it is based on existing passes (the screen space normals and Zdepth of the scene being displayed), not on thin air. And the reason why it is cheap is that unlike "bouncy" AO, it uses 2.5D tricks to achieve a cool effect.
Following your logic, this voxel "pass" would have to be based on and interpolated from some underlying data set, and be applied as some sort of postprocess on top of something cheap. Like ... polygons!
As a matter of fact, the only sensible use I could see with voxels (not "Atoms"...) would be to create voxel based geometry based on polygonal data, just like what 3DCoat does when importing an OBJ to its voxel module. Later on, this new voxelized data could be used for erosion/chipping gameplay effects, Minecraft/Earthworm Jim2 style ...
Also the reason why people are very doubtful is pretty obvious. Why would any sane "game tech company" use such terrible, misinformed and misleading PR practices? They are clearly not trying to directly market to the real game industry with their terrible Kotaku-level hype. I don't think that any sane graphics programmer would like to collaborate with a company not even able to explain what its tech is really about, while hiding behind nonsensical buzzwords...
There is nothing "Unlimited" about this tech - it is a grid-based unification of 3D assets. The fact alone that they call it "Unlimited" is in itself a good enough reason to doubt them. Maybe they have some good compression going on (like ILM did on Transformers 2) but again, unlike movies, game tech is all about being efficient and budgeted in the first place. Realtime 60fps stuff.
This company sounds a lot like the tech dreamers thinking that one day, UVs won't be necessary because "it will all be automated" or that maybe even later in the future, we won't have to use lightweight geometry anymore because processing power will just be "free".
Whoever thinks this way most likely never worked on a recent, demanding high quality game asset. Ask artists, and they will tell you that they don't believe in "no UVs" ; instead, they want tools to make the process much smoother and faster.
In my experience the only form of "unlimited" detail is Carmack's megatexture technology. Which is still a complex system of mipping, compression, organizing and caching. Yes, this virtualized texture technology allows for a nearly unlimited amount of texel density at authoring, because the technology is brilliantly efficient at utilizing all your display pixels. However, it was still an uphill battle to get it all running in a limited amount of memory on consoles at 60fps. Not to mention the compression methods to cram all those authored terabytes of data onto disk.
Just because someone is claiming that they can virtualize geometry in the same fashion in software mode by throwing around a few buzzwords, it doesn't mean it's ready as a technology, nor will it be anytime soon. Besides, it's not as if John Carmack isn't investigating the very idea himself.
He isn't just claiming it, he has shown various videos that prove it, and then later proves it is real-time.
Yes, he showed tiled geometry. The whole world appears to be made up of maybe 30 items tiled over and over again, which is why it's possible to run it in software alone.
Now try that with a real landscape, where every rock is different, and we'll talk. Unlimited detail doesn't mean anything if you put that unlimited detail into limited objects.
We'll find out soon, in either case.
Instancing is wonderful. I can run like 20 times as many polygons in a non-optimized display app like 3ds max if I keep them instanced.
The point is all this has to have a compute and storage cost. Even kkrieger, which is 96k on disk and has a whole FPS inside, still takes over 200mb of ram when running.
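To put the instancing point in rough numbers, here's a sketch with made-up figures (the mesh size, per-vertex layout, and matrix size are my assumptions, not measurements from any engine). It shows why thousands of instanced placements cost a tiny fraction of the memory of unique copies:

```python
# Made-up but plausible figures: one detailed rock mesh vs. many placements.
VERTS_PER_ROCK = 6000        # ~2000 triangles' worth of vertex data
BYTES_PER_VERT = 32          # position + normal + UV, a common layout
BYTES_PER_TRANSFORM = 64     # one 4x4 float matrix per instance

def unique_cost(n_rocks):
    # Every rock carries its own full copy of the mesh data.
    return n_rocks * VERTS_PER_ROCK * BYTES_PER_VERT

def instanced_cost(n_rocks):
    # One shared mesh plus a small transform per placement.
    return VERTS_PER_ROCK * BYTES_PER_VERT + n_rocks * BYTES_PER_TRANSFORM

n = 10_000
print(unique_cost(n) // 2**20, "MB unique vs", instanced_cost(n) // 2**20, "MB instanced")
```

Same picture whether the primitives are polygons or points: the moment every object is unique, the memory cost explodes.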
Points can only be compressed so far while still retaining data. 8gb is a LOT of ram. You can fit the entirety of many a modern game's hard drive footprint in that space with some left over for the OS.
I think it's a reason why they're being so secretive. It's not that they have something no one's ever thought of, it's multiple existing things that people haven't put together in the same way before.
Like efficient instancing, combined with a google-like search algorithm, and a hypertexture renderer.
here, read this http://cs.swansea.ac.uk/~csmark/PDFS/vg99.pdf
and I'll bet the animation we see next time will be either fluid or discrete transforms, but not skinned points.
edit: Btw that paper is over 10 years old.. and a single core of the intel laptop has about 2x10^3 times more processing power than the high-end server system listed in the paper to do the rendering. So.. yeah, even old algorithms eventually become realtime, and they've likely improved it. One way or another it's a voxel engine, it just doesn't use the marching cubes or marching tetrahedrons surfacing algorithm.
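For what that octree-style storage might look like, here's a minimal sparse-octree toy in Python (my own sketch, not Euclideon's or the paper's actual structure): only occupied cells allocate children, so empty space and hidden interiors cost nothing, and finding the point for a pixel is just a descent from the root - a "search", if you like:

```python
# A minimal sparse octree: only occupied regions get child nodes, so empty
# space costs no memory. Looking up the data under a coordinate is a
# descent from the root, picking one of 8 children per level.
class Node:
    __slots__ = ("children", "color")
    def __init__(self):
        self.children = {}   # octant index 0..7 -> Node; sparse on purpose
        self.color = None    # payload stored at the leaf

def octant(x, y, z, level):
    # Which of the 8 children contains this coordinate at this level?
    return (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)

def insert(root, x, y, z, depth, color):
    node = root
    for level in reversed(range(depth)):
        node = node.children.setdefault(octant(x, y, z, level), Node())
    node.color = color

def lookup(root, x, y, z, depth):
    node = root
    for level in reversed(range(depth)):
        node = node.children.get(octant(x, y, z, level))
        if node is None:
            return None      # empty region: nothing stored, nothing fetched
    return node.color

root = Node()
insert(root, 5, 3, 7, depth=4, color="grey")
print(lookup(root, 5, 3, 7, 4))   # grey
print(lookup(root, 2, 6, 1, 4))   # None
```

Note the catch the thread keeps raising: the descent is fast precisely because everything sits on a fixed grid, which is also why free rotation and placement of such data is hard.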
It seems to me like this tech might be great for medical visualization, virtual surgery type stuff where things need to be super accurate.
This tech actually came from medical visualization. They render point clouds all the time on cpu's with little gpu assistance.
It being realtime isn't what people are sceptical about, there hasn't been any concrete proof to anything.
we're talking about the claims of unlimited detail with limited memory and no details on memory-usage (and this is when not looking at any of the other issues involved)
replace the ray with a search and everything else with the tech remains fundamentally equal to other voxel rendering techniques, with the same limitations.
People can do rough calculations on the needed memory for this, and the need for instanced data becomes apparent. Even Euclideon themselves talked further back about how we have tons of storage but limited rendering power, which further shows that even they know the object data will be very heavy if you intend to have unlimited detail (detail up close).
And if you actually want to have unique data, like the ground dirt being pushed away from roots of the trees, instead of just having models intersect (which ironically is another thing they talked about in their videos) you'd have to have that ground data be unique.
But back to their search algorithm, everything points to the very foundation and speed being that everything is aligned and categorized in that way, there's no confusion for the search, but it also means there's little flexibility in position.
Cool, I didn't know that.
[ame]http://www.youtube.com/watch?v=b-4eqKRBDis[/ame]
The LBP guy was talking about graphics programming in general, from his who knows how many years of experience combined with the entire company's experience. Incidentally he was also working with voxels a lot. Also remember how id software demonstrated Rage and virtual texturing almost 5 years ago - it took them that long to work the new, experimental tech into a finished product.
So that 10-10-30-50 division is an industry level average. Makes all kinds of sense too, if you've done any programming work.
This is totally stupid. All game devs have tech people to create their tech (some of them can even be called scientists, most of them are called engineers) - and Euclideon wants to create tech for games. They'd better work the same way, or Dell won't ever get his stuff into commercial products... which actually seems to be the case though.
In that case they won't be able to sell their stuff.
They haven't shown lighting, shading, deformations, skeletal animation, realistic (uneven) terrain, collision and so on.
Realize that what they described is basically raycasting: for every pixel, find the smallest single element that's visible and render only that element. It is basically the definition of ray tracing without secondary rays, which is also called raycasting. Even Wolfenstein 3D used it, did you know that?
Just because Dell calls it a "search" algorithm doesn't really change it, he's just blowing smoke to make his tech appear something new and therefore more interesting, which it isn't.
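To make the "search is just a raycast" point concrete, here's a deliberately tiny, Wolfenstein-ish sketch (my own toy, nothing to do with Euclideon's real code): for every screen row, walk away from the camera through an occupancy grid and keep the first occupied cell you meet.

```python
# Toy "one element per pixel" renderer: per screen row, walk the grid and
# return the first occupied cell. Call the walk a ray or a search -- the
# per-frame work is the same: one lookup chain per pixel, regardless of
# how much the scene contains.
GRID = [
    "....#",
    "..#..",
    "....#",
]

def cast_column(row):
    for depth, cell in enumerate(GRID[row]):
        if cell == "#":
            return depth    # first visible element along this "ray"
    return None             # nothing in this row

frame = [cast_column(r) for r in range(len(GRID))]
print(frame)   # [4, 2, 4]
```

Swap the linear scan for the octree descent and you have the same idea at scale: one nearest-hit query per pixel.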
So apparently it's you who did not do actual research.
This also won't allow unlimited detail. For a start you'd need unlimited data and for that, unlimited storage... but I've already stated that rendering instances does not mean even "nearly unlimited" detail to me.
And even that was on a laptop with 8 gigabytes of memory.
Consoles have 512 megabytes, next gen shouldn't have more than 2 gigs either.
They've shown lighting, collisions, and skeletal animation so far.
Still not seeing any reason to disbelieve in this yet, since we don't know anything about their implementation.
http://en.wikipedia.org/wiki/Marcello_Truzzi#.22Extraordinary_claims.22
Bruce says some new things here.
http://kotaku.com/5827192/euclideon-creator-swears-infinite-detail-is-not-a-hoax
If you have a 1920x1080 screen, only 2,073,600 points have to be rendered. That number won't change even if a scene appears to be much more detailed than another scene using the same tech. So, instead of rendering billions/trillions of points, only a couple million are drawn.
Lower the resolution, and even fewer points have to be rendered.
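The arithmetic behind that claim:

```python
# The per-frame budget the claim rests on: one sample per screen pixel,
# independent of how much detail the scene contains.
width, height = 1920, 1080
pixels = width * height
print(pixels)                          # 2073600

# Halve the resolution and the work drops to a quarter:
print((width // 2) * (height // 2))    # 518400
```

Of course, this only bounds the number of samples drawn per frame, not the storage for the scene they're drawn from - which is the disputed part.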
Storing all the points, inside and out would be stupid. That's what makes this different than voxels. Couple that storage method with compression and you have a memory footprint ridiculously smaller than traditional voxels.
Remember: They ran this on the Hard OCP laptop, and I doubt that laptop has 512 Petabytes of hard disk space.
Imagine that there is a landscape which is completely unique; no tiling. That makes billions of unique voxels, each with location, color, and material properties. Even if the call routine only has to reference a million voxels at once, you still need to store the landscape info somewhere. You can't just pull unlimited detail out of nowhere if you want it to be anything but procedural. Where will you store that kind of information?
Also you can't just compress everything in RAM. Decompression is slow, and even if they were able to decompress the data at fast rates you still need the RAM to store it in while it's being traversed. You'd end up in a situation where you are constantly decompressing and dumping data from RAM.
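For scale, a back-of-envelope calculation (all figures are my own assumptions, chosen generously in the tech's favour: surface points only, position implicit in the tree, heavy compression):

```python
# One square kilometre of completely unique, non-tiled ground, sampled at
# one surface point per square millimetre.
side_mm = 1_000_000                    # 1 km expressed in millimetres
surface_points = side_mm ** 2          # surface-only, no interior points
bytes_per_point = 4                    # optimistic post-compression cost
                                       # (color/normal; position implicit)
total_gb = surface_points * bytes_per_point / 2**30
print(round(total_gb), "GB")           # ~3725 GB for one unique km^2
```

Even with those friendly assumptions, a single unique square kilometre lands in terabyte territory - which is why everything in the demo has to be instanced.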
Notch was assuming that all points were stored for the entire volume of the object. He didn't consider compression or just storing surface points.
Notch also coded a poorly optimized game in Java. He's hardly an authority on the subject, and was proven wrong.
Carmack said the tech is possible and Carmack has a little bit more expertise on the matter than Notch does.
Notch and Carmack understand the tech, I understand the tech. Everyone arguing with you understands the tech better than you. The claims are bullshit without actual evidence and NONE has been shown that wasn't possible 10 years ago.
You just keep spewing out the same crap that they've said, because what they say is the truth and industry experts and knowledgeable programmers don't know what they're talking about at all.
The spokesperson failed to mention the exact terms of that comparison. Surprisingly, those terms are very important.
What he meant is probably this:
The amount of runtime memory required to display those instances of voxelized geometry stored in a sparse octree is lower than the amount of runtime memory required to display the base models (pre voxelization) with their millions of vertices and huge color textures. This may indeed be true.
What's actually more important are these:
- Using low poly models with tessellation and displacement requires even less memory than using sparse voxel octrees, and allows for similar visual detail.
(bonus points for including the least impressive image possible to show what tessellation is)
- Using the same models and textures requires far, far less background storage space for the game content on HDD / DVD / BR / whatever than storing the entire voxel data. Shipping a game with full voxel content is still impossible on today's distribution formats.
- Also, if building a game world without instancing, the renderer will have to sacrifice a LOT of runtime memory for streaming and caching world data as the player moves around.
Thus in practice, his approach actually requires a LOT more memory. In the very restricted case of this particular demo and compared to the source objects he gets from their artists, he is indeed telling the truth - but in every other case he's not right.
See how you're getting misguided by Euclideon?
They did not show lighting, every object has fully baked in shading.
They did not show any collisions at all.
Those laughable creatures might have been using skeletal animation but I was talking about skeletal deformation. As a game artist you should understand the importance of this distinction; I'm willing to explain otherwise.
That is because you're both misguided and willingly misguiding yourself at the same time.
Even with surface points only (which is more or less a given) and insane compression you're still looking at storing titanic amounts of data. Modern games can make huge worlds because the actual poly-budget for ground is low as hell. Not so if you want to make "unlimited detail" with completely unique geometry.
Seriously, are you reading this topic at all?
Sparse voxel octrees are exactly about not storing all the points, just the ones that matter.
There are a lot of actual examples for how much space voxel data requires.
Jon Olick's demo, Nvidia's cathedral demo... Even sparse octrees mean gigabytes of data per model.
Yes, but even non-traditional voxels require a LOT of space.
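A rough sense of that scaling, with assumed numbers (4 bytes per voxel is my guess, not a measured figure): surface-only storage grows with r^2 rather than r^3 - a big win, but still enormous at "unlimited" resolutions.

```python
import math

# Approximate shell-voxel count for a sphere of radius r (in voxels):
# roughly its surface area measured in voxel-sized cells.
def shell_voxels(r):
    return int(4 * math.pi * r * r)

for r in (256, 4096, 65536):
    gb = shell_voxels(r) * 4 / 2**30    # assumed 4 bytes per voxel
    print(f"radius {r}: ~{gb:.2f} GB")
```

At movie-quality resolutions a single surface-only model climbs past hundreds of gigabytes, which matches the "gigabytes per model" figures from the published demos.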
No, you remember: there are only about 30-40 individual objects in the demo, and it runs on 8 gigabytes of memory.
I suggest doing more research into the tech. Seems too many of you are basing your statements on personal biases and continue to ignore my posts explaining how this tech is possible.
Carmack said it might be implemented in a very limited form in the next 5 to 10 years. As in, still not competitive with even today's AA titles (not to mention how far those games will get in 5 to 10 years).
Maybe you wish to watch the Q&A section at Quakecon and a recent interview where he's talking about this.
30-40 objects with no LODs could never run on that hardware at that island size as textured polygon meshes, while matching the geometric complexity of the point models in the demonstration.
Great, it has 8 GB of RAM.... but it doesn't have more than 4 TB of HDD space, I can tell you that for a fact. That should be more than enough evidence to know that they've found ways around the memory issues traditionally associated with voxels.
Nope, unlike you he's someone who forms opinions based on facts.
This.
The thread should be closed already.
CHEERS!
Thanks
I would be a little bit more skeptical if somebody could answer that.
Otherwise, enough of the company's claims have been verified to take this tech seriously.
They're fucking INSTANCING!
Know what, nevermind, you're dense. [ame]http://www.youtube.com/watch?v=Ja06DJrFe5E[/ame]
When you begin using swear words and insults to convey your point, that is when you have nothing more to add and respond out of baseless frustration.
Does it also look like tiled bullshit? Oh no, it doesn't. Euclideon renders a few objects over and over and still takes up a load of resources and runs at low FPS. It's impressive detail, and the best voxel engine I've seen so far, but it's not worth anything in terms of game dev. Tiling a few things around just doesn't cut it anymore.
Same goes for you when you fail to show sense or critical thinking ability. Hell, you can't even understand what we're talking about.
It's running solely on a CPU in software mode, low FPS is understandable.
If I load 10,000 rocks at 2,000 polygons apiece in CryEngine 3, and then instance them without LODs, that would take up loads of resources and run at low FPS too.