nah those ARE 3d scans you can see the fuckups in the grass...
Yeah I was gonna say, those are the pre-edited scans, before they added really crappy rocks all over the ground, and tried to re-bake some ambient occlusion and drop a shadow into the mix. Also, the "web journalist" aka "technology marketer", John Gatt, along with someone claiming to be Bruce Dell, have apparently been trying to win over the hardcore gamers on HardOCP's forums with more ridiculous statements. http://hardforum.com/showthread.php?t=1628422
I thought people were starting to dislike me because of my opinion, so I figured I would leave.
It seemed that way because it was getting to the point to where instead of discussing things, people started to insult me and I don't really know why... even though I was respectful or at least tried to be.
Some even went as far as to put me on their ignore lists. Pretty childish to ignore somebody because you disagree with a single opinion of theirs.
Well, it started as an educational discussion with experienced people here pointing out the faults and shortcomings, to which you responded with obtuse optimism and denial.
These things are not a matter of opinion. It's empirical evidence backed by technological proof. You approached it from the viewpoint of an opinion and were brought down to earth with legitimate research.
It's like arguing that the earth isn't round. You can get upset about it becoming a personal attack on an innocent opinion. Except that the earth being round is a fact... not a matter of opinion.
It's alright though, we've all had our fun. You can bow out gracefully and make some arts instead.
At this point... it is a matter of opinion considering that nobody here knows the implementation.
They can guess at what they think the implementation is like, but the fact is that a lot of people didn't think it was running in real-time in the first place. Nobody can answer me as to how the demo was running on that laptop, even though they were telling me that 500 terabytes of data would be needed for that scene... which is clearly not the case.
That was proven, so that gives me even less reason to not believe the claims... even though people were saying we'd never be able to do that in real-time in even 10 years.
Animations and collisions were shown in that video as well. I think that people are going too much by traditional misconceptions in terms of 3D.
Back when 3D was first introduced, everything had to be developed for it, such as animation, collisions, everything like that. Similar issues were faced with developing those.
Last word: If this technology was so unrealistic and unfeasible, then why has John Carmack been looking into it for years? If animation and collisions are impossible, then there must be some other work-around for the same type of systems.
Everyone is still thinking too much in terms of their experience with polygons.... when this system deals with many things like memory, drawing the scene, animation, in an entirely different manner.
Hardware wasn't the only thing that had to advance for this to be possible; look how far UIs have come and whatnot. A computer is just a lot easier to work with now than it was 15-20 years ago... so it isn't surprising that the company is just now making huge progress with their technology.
Seriously, IBM has a working chip that can learn. It can run code it is not programmed to run... and Unlimited Detail seems that far out there?
Technology is progressing.
Delta Force's terrain definitely had collision, and that was around 13 years ago.
It could even end up being a hybrid system (maybe terrain and vegetation as voxels/point clouds, as well as other things you won't interact with, then everything else as polygons?).
The better question is, if this technology was realistic and feasible, then why wouldn't Carmack be leading the charge?
The fact that Carmack has been working on it for, in all likelihood, over a decade but is still very skeptical of a real implementation in an actual game engine is extremely telling.
Like Carmack clearly states here, anyone can put out a tech demo that runs at 20-30 fps at 720p with only basic rendering systems active and 5-10 unique assets instanced en masse. To actually get that to work in a game, with lighting, physics, collision, AI, gameplay systems, sound, post effects, and hundreds or thousands of unique assets all in one level, you're going to need to not just optimize it a little, but to run about 100 times faster. We're not talking "this is a little slow"; we're talking years and years and multiple hardware revisions away from being realistic.
And that is only the technical side, you're talking about reinventing the wheel when it comes to workflow and tools for getting assets into an engine like this. Hell, most people still struggle to get clean normal map bakes, and we've had tools for normal mapping for almost 10 years. To think a robust tool pipeline for this would be feasible anytime in the near future is wishful thinking at best.
It's running in software. If they run it in hardware it will run a ton faster... and this isn't even considering that they are using 0% of the GPU currently, which would push it well beyond 100 times faster.
You don't have to reinvent the wheel with this. The programmers do, but they're developing this in such a way that the artists aren't going to have to worry about that. The tools can be made compatible, and that is one of Euclideon's main focuses: making all the tools compatible with everything we're working with now.
I just don't see a company wasting time to go this far if it is all a hoax and is not feasible for games.
Also, I keep on hearing that instancing argument, but considering that one object is supposed to be millions of points which ends up being way more than 8 GB of data, it is clear that they've worked around the memory issues and that they're no longer an issue.
If you can only use 30 unique assets per level, then it's still an issue.
Many people said it couldn't possibly run in real-time. Not only are they running it in real-time, but they're running it in real-time in software. On a CPU. GPU is unused except for putting the image on the screen, and any GPU could do that much.
Now everybody cries instancing when the real-time demo is brought up, even though it is point clouds or voxels and handles data in an entirely different way than polygons do.
Not to mention that even with instancing, no common consumer hard disk is supposed to be able to store the scene.... but there it is. On a laptop.
It seems that every time that someone lists a limitation, the guys do a demonstration that proves them wrong.
So.... the fact that it is a new take on voxels/point clouds is true, otherwise it couldn't have run under the conditions it did.
So, talking to a programmer buddy of mine, the memory restrictions may be grossly exaggerated.
If you're not using textures, that frees up a massive amount of memory. If you break it down into channels, you've got XYZ, RGB, Spec RGB, Gloss, Alpha. That is 11 channels.
With a game asset, you're storing Normal RGB/XYZ, RGB, Spec RGB, Gloss, Alpha. Again, 11 channels.
So if we assume that these channels can be compressed to an equivalent amount as traditional textures can, you're looking at about 4 million "points" you can store in comparison to an asset with a 2048x2048 texture.
If I look at a high resolution FPV weapon model that I've created, I'm using about 1.5 million verts.
Now this is just one asset, the idea that you wouldn't be able to fit a fully unique level with "unlimited" detail onto disk is still very much true, but if you compare it to traditional techniques, and replace your unique textures with unique point cloud objects with color and material properties, it's not too insane.
You lose out in that you can't use hacks like floating geometry in material creation, every flat wall that used to just use a tiling texture would need to have high res geometry, and you wouldn't be able to apply a flat tiling texture to a curved wall, you would need a brand new "curved wall" asset....
There are still a large amount of drawbacks to it, but maybe the memory concerns aren't such a big deal.
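To put rough numbers on that argument, here's a quick back-of-envelope sketch in C++. The one-byte-per-channel figure and the "compresses about as well as a texture" assumption are exactly that, assumptions, so treat the output as illustrative rather than anything Euclideon has stated.

```cpp
// Back-of-envelope only: if a point carries the same 11 one-byte channels
// that the texture set does, the same uncompressed budget holds roughly as
// many points as a 2048x2048 texture set holds texels. All figures assumed.
#include <cstdio>

int main() {
    const long long channels    = 11;                 // XYZ, RGB, spec RGB, gloss, alpha
    const long long texels      = 2048LL * 2048LL;    // one 2048x2048 texture set
    const long long budgetBytes = texels * channels;  // uncompressed budget

    const long long pointsInBudget = budgetBytes / channels;  // same per-point cost assumed

    std::printf("uncompressed budget  : %lld bytes (~%lld MB)\n",
                budgetBytes, budgetBytes / (1024 * 1024));
    std::printf("points in that budget: %lld (~4 million, vs ~1.5M verts on the FPV asset above)\n",
                pointsInBudget);
    return 0;
}
```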
The only reason they don't have a ton of awesome assets is because they have one artist and they made the entire demonstration scene in three weeks.
This is pure speculation on YOUR part. You can't have it both ways. There could be a large variety of reasons, other than the lack of artists, for why they don't have more unique assets in their demo.
but the fact is that a lot of people didn't think it was running in real-time in the first place.
Those of us arguing about the inherent problems have never doubted that it's running in real time. What we have questioned is the stuff NOT in the demo that the guy said was already solved.
Nobody can answer me as to how the demo was running on that laptop, even though they were telling me that 500 terabytes of data would be needed for that scene... which is clearly not the case.
We have never said that the demo would require that much data because it's using INSTANCING, the same 30 objects over and over again.
What we've said is that a REAL GAME LEVEL would require too much data to run on any computer and fit on any media to deliver it to the customers.
Don't post stuff that isn't true, please.
That was proven, so that gives me even less reason to not believe the claims...
Nothing has been proven. You have completely misunderstood nearly everything that's been explained to you multiple times. Either you're a bit dense or you're intentionally denying everything posted here, just for you.
Animations and collisions were shown in that video as well.
No skeletal deformation and no collision was shown.
I think that people are going too much by traditional misconceptions in terms of 3D.
We have no misconceptions but apparently you have a lot.
Last word: If this technology was so unrealistic and unfeasible, then why has John Carmack been looking into it for years?
The more important question is why he repeatedly abandoned it.
The claims he's backed up so far are not impressive at all. Carmack coined the term Sparse Voxel Octree for this kind of thing years ago, youtubing octree and voxels gets you all kinds of similar but less hyperbolic videos, and EVERY SERIOUS PROGRAMMER who responds about this in interviews pretty much says it isn't plausible. A few pages ago someone posted a link to a downloadable voxel engine that can push similar amounts of points on your computer. Download it and see!
Why would two guys who stand to profit off of this be more honest than a bunch of dudes with no vested interest and a lot of programming success?
Carmack, for example, is, I believe, a millionaire? He funds his own rocket program. He's been making and showing 'atom' tech demos for a very, very long time. He put this insane, unwieldy, technologically amazing megatexture technology into his new game just to squeeze a LITTLE more detail into it. If he isn't leading the charge for voxels, some assholes with a dishonest, misleading tech demo certainly aren't. Voxels are very real tech that do EXACTLY WHAT THIS TECH DEMO SHOWS -- carry a LOT of detail, VERY cheap, but offer no solution to all of the fundamental problems that you need to solve to make something realtime.
What I am saying is that the memory footprint everybody keeps mentioning is purely speculative.
They wouldn't be able to run the scenes that they have run if the memory footprint really was as bad as people have said.
Do you even know what instancing is??
It means the object is stored in memory only once but can be drawn as many times as you want. They can store about 30 objects in 8 GB of memory. Storing hundreds or thousands of objects is not going to be possible; but the more important issue is that they have to somehow ship those thousands of objects to the gamers, on something.
And if you look at the surface area of those objects you'll see that it is practically impossible to store even just a 1km x 1km sized outdoor terrain level without repetition. And that's tiny, even for most FPS games.
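For anyone unsure what instancing means in this context, here's a minimal sketch (made-up names and sizes, not Euclideon's code): the heavy point data lives in memory exactly once, and each placed copy is just a reference plus a position.

```cpp
// Minimal instancing sketch: one expensive object, many cheap placements.
#include <cstdint>
#include <vector>

struct PointCloudObject {
    std::vector<std::uint8_t> packedPoints;  // the expensive part: millions of points
};

struct Instance {
    const PointCloudObject* source;          // shared: the heavy data is stored once
    float x, y, z;                           // cheap per-copy placement
};

int main() {
    PointCloudObject tree;
    tree.packedPoints.resize(300u * 1024u * 1024u);   // assume ~300 MB of point data

    std::vector<Instance> scene;
    for (int i = 0; i < 100000; ++i)                  // 100,000 visible trees...
        scene.push_back({&tree, float(i % 300), 0.0f, float(i / 300)});
    // ...but RAM holds one copy of the tree plus 100,000 tiny Instance records.
    return 0;
}
```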
Yeah... you aren't the majority. I can link a ton of posts saying it isn't real-time, or posts saying that the memory requirements would be so large that we couldn't even hope to do this in 8 years.
Your comment about instancing is irrelevant when a single point cloud object with millions of points is supposedly vastly more than 8 GB... so then how did they get more than one object in that scene?
We are not talking about this scene, we are talking about what a real world game requires, damnit.
Yes, we are talking about this scene. Now you are making comments that have nothing to do with the statements I have made because you truly have no technical response.
Your comment about instancing is irrelevant when a single point cloud object with millions of points is supposedly vastly more than 8 GB... so then how did they get more than one object in that scene?
So, how does their scene exist in real-time on a laptop when considering this?
So if we assume that these channels can be compressed to an equivalent amount as traditional textures can, you're looking at about 4 million "points" you can store in comparison to an asset with a 2048x2048 texture.
There are still a large amount of drawbacks to it, but maybe the memory concerns aren't such a big deal.
The problem here is more complex. Level geometry is using a lot of texture repetition, hidden by multiple layers of textures, decals, detail maps, vertex colors etc. The geometric variety is decoupled from the textures, it's a lot less detailed for a start, and even that detail can be localized as needed (ie. you can use smaller polygons where needed).
With voxels or point clouds or anything, you need to cover every surface unit uniquely, or use the highly limited form of instancing seen in the Euclideon demo. So weapons and characters are one thing, but the far, far more problematic issue is terrain, foliage, buildings, indoor areas and so on.
Just think about Rage which has unique surface texturing. The granularity is quite rough, look at the texel sizes and imagine if that'd be the minimum voxel size - it wouldn't be enough. Also, texture data is 2D and far easier to compress, and lossy compression wouldn't distort the geometry. Even with those advantages, Rage requires 22 GB of space for a full X360 install, and that is the compressed version of the megatextures.
A voxel based game with a similar game world size would easily require at least one, but probably two orders of magnitude more data. And that's with a voxel size far, far bigger than what Euclideon promises... do the math.
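A crude way to sanity-check the terrain side of this argument: store a flat 1 km x 1 km ground plane uniquely (no tiling, no instancing), keep only surface voxels, and assume a small per-voxel payload. The voxel sizes and the 8-byte payload are assumptions, and a real level with overhangs, rocks and foliage would only be worse.

```cpp
// Rough surface-coverage estimate for uniquely stored voxel terrain.
#include <cstdio>
#include <initializer_list>

int main() {
    const double sideMeters    = 1000.0;  // 1 km x 1 km level, small for an FPS
    const double bytesPerVoxel = 8.0;     // assumed color + material payload

    for (double voxelMeters : {0.10, 0.01, 0.001}) {          // 10 cm, 1 cm, 1 mm
        const double perSide       = sideMeters / voxelMeters;
        const double surfaceVoxels = perSide * perSide;        // flat plane, best case
        const double gigabytes     = surfaceVoxels * bytesPerVoxel
                                     / (1024.0 * 1024.0 * 1024.0);
        std::printf("voxel %.3f m -> %.0f x %.0f surface voxels -> ~%.1f GB\n",
                    voxelMeters, perSide, perSide, gigabytes);
    }
    return 0;
}
```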
So, talking to a programmer buddy of mine, the memory restrictions may be grossly exaggerated.
If you're not using textures, that frees up a massive amount of memory. If you break it down into channels, you've got XYZ, RGB, Spec RGB, Gloss, Alpha. That is 11 channels.
With a game asset, you're storing Normal RGB/XYZ, RGB, Spec RGB, Gloss, Alpha. Again, 11 channels.
So if we assume that these channels can be compressed to an equivalent amount as traditional textures can, you're looking at about 4 million "points" you can store in comparison to an asset with a 2048x2048 texture.
If I look at a high resolution FPV weapon model that I've created, I'm using about 1.5 million verts.
Now this is just one asset, the idea that you wouldn't be able to fit a fully unique level with "unlimited" detail onto disk is still very much true, but if you compare it to traditional techniques, and replace your unique textures with unique point cloud objects with color and material properties, it's not too insane.
You lose out in that you can't use hacks like floating geometry in material creation, every flat wall that used to just use a tiling texture would need to have high res geometry, and you wouldn't be able to apply a flat tiling texture to a curved wall, you would need a brand new "curved wall" asset....
There are still a large amount of drawbacks to it, but maybe the memory concerns aren't such a big deal.
Yes, this technology can never have unlimited unique objects without procedural generation, but the claims of unlimited detail - unlimited geometry are true.
They never claimed to be able to have unlimited unique detail.
Also, you must tell me how Delta Force and Open Outcast did terrain with voxels? With collision, no less.
So, how does their scene exist in real-time on a laptop when considering this?
They only use about 30 assets which just fit into 8 GB of memory. That's it, what the hell can't you understand about it? Are you intentionally messing with us or is it really that hard to read the words and understand their meaning?
Stop skipping my posts and posting things out of context, vargatom. Also read Earthquake's post that I quoted.
Answer this too.
~ Your comment about instancing is irrelevant when a single point cloud object with millions of points is supposedly vastly more than 8 GB... so then how did they get more than one object in that scene?
So, how does their scene exist in real-time on a laptop when considering this?
The problem here is more complex. Level geometry is using a lot of texture repetition, hidden by multiple layers of textures, decals, detail maps, vertex colors etc. The geometric variety is decoupled from the textures, it's a lot less detailed for a start, and even that detail can be localized as needed (ie. you can use smaller polygons where needed).
With voxels or point clouds or anything, you need to cover every surface unit uniquely, or use the highly limited form of instancing seen in the Euclideon demo. So weapons and characters are one thing, but the far, far more problematic issue is terrain, foliage, buildings, indoor areas and so on.
Just think about Rage which has unique surface texturing. The granularity is quite rough, look at the texel sizes and imagine if that'd be the minimum voxel size - it wouldn't be enough. Also, texture data is 2D and far easier to compress, and lossy compression wouldn't distort the geometry. Even with those advantages, Rage requires 22 GB of space for a full X360 install, and that is the compressed version of the megatextures.
A voxel based game with a similar game world size would easily require at least one, but probably two orders of magnitude more data. And that's with a voxel size far, far bigger than what Euclideon promises... do the math.
Yeah, totally agree, when we get to environment stuff it gets a lot more fuzzy. You could probably do some really clever stuff with a modular type setup, but this is going to eat up a lot more memory than a modular setup where extra pieces are essentially free and the texture usage (for the whole set) is fixed.
You could also do some smart stuff like "instance groups", say you have 20 unique blades of grass, you could link together little groups to make a massive amount of variation, assuming basic rotation/translation works (which hasn't been confirmed). But that would be pretty limited to foliage.
There are a lot of really good workflows we use with traditional modeling to get variation out of limited texture resources, like decals and such, which would have to be entirely uniquely stored. On the other hand, that is sort of what Rage is doing anyway, baking everything down to a massive texture sheet and streaming it in efficiently. I bet there are some creative ways to work with a relatively large amount of unique voxel data as well.
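For what the "instance groups" idea might look like in data terms, here's a hypothetical sketch, under the stated (and unconfirmed) assumption that per-instance translation/rotation is possible at all; none of these types come from Euclideon.

```cpp
// Hypothetical "instance groups" for foliage: ~20 unique blades, hand-built
// clumps of blade references, and the clumps themselves instanced cheaply.
#include <vector>

struct BladeRef {        // points into the small set of unique blade assets
    int   bladeId;
    float x, y, z, yaw;  // local offset/rotation within the clump (assumed supported)
};

struct GrassClump {
    std::vector<BladeRef> blades;
};

struct ClumpInstance {   // clumps are then scattered like any other instance
    const GrassClump* clump;
    float x, y, z;
};

int main() {
    GrassClump clumpA;
    clumpA.blades = {{ 0,  0.0f, 0.0f, 0.00f, 0.3f},
                     { 7,  0.1f, 0.0f, 0.05f, 1.7f},
                     {13, -0.1f, 0.0f, 0.02f, 2.9f}};

    std::vector<ClumpInstance> field;
    for (int i = 0; i < 50000; ++i)
        field.push_back({&clumpA, float(i % 250) * 0.4f, 0.0f, float(i / 250) * 0.4f});
    // A handful of clump layouts over 20 blades gives huge apparent variation
    // while the unique point data stays tiny.
    return 0;
}
```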
~ Your comment about instancing is irrelevant when a single point cloud object with millions of points is supposedly vastly more than 8 GB... so then how did they get more than one object in that scene?
I'm not sure when/where this was ever suggested. Source?
So, how does their scene exist in real-time on a laptop when considering this?
Again - their objects are stored in sparse octrees, which is a type of compression compared to standard voxels. This way they can store the equivalent of, say, a 2048x2048x2048 voxel object at a fraction of the memory cost. They only store the surface points, so their objects are probably in the 100-400MB range each. That way they can fit those ~30 objects in memory and instance the hell out of them.
This is not something that Euclideon invented, just to be clear.
Now, it'd be easy to create multiple versions of palm trees if they could afford to store them. Say, rotate and scale them and create 15 variations. It'd take about an hour at most and then they could voxelize them all and they wouldn't have to use the same damn tree all over the demo. But they aren't doing this, they're only using instanced trees.
It is not plausible that they don't have an artist or anyone to do this.
It is far more likely that they simply don't have enough free memory to store 15 trees.
Especially when you realize that the tree trunk itself isn't a unique object either, but the same 1 meter high trunk repeated 8-10 times, so they probably don't have enough memory even for a complete palm tree.
Also understand that the average console game has 400MB for ALL its data including textures, geometry, animation, sound, music and everything. This engine would never be able to run on a current console.
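For intuition on that 100-400 MB per object guess, here's a quick back-of-envelope comparing a dense 2048^3 grid with a surface-only sparse octree. The shape factor and per-node cost are assumptions, not measurements of Euclideon's format.

```cpp
// Why a surface-only sparse octree can land in the hundreds of megabytes
// while a dense grid of the same resolution would be tens of gigabytes.
#include <cstdio>

int main() {
    const double n            = 2048.0;  // equivalent dense resolution per axis
    const double bytesPerCell = 8.0;     // assumed payload (color, normal, etc.)
    const double treeOverhead = 1.5;     // assumed octree bookkeeping factor

    const double denseCells   = n * n * n;      // full 2048^3 grid
    const double surfaceCells = 6.0 * n * n;    // only the "shell": ~n^2 times a shape factor

    std::printf("dense grid          : ~%.0f GB\n", denseCells * bytesPerCell / 1e9);
    std::printf("sparse, surface only: ~%.0f MB\n",
                surfaceCells * bytesPerCell * treeOverhead / 1e6);
    return 0;
}
```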
Procedural generation of some things could help most likely.
As I have stated earlier in this thread, procedural generation is a means of generation only; it doesn't save you any RAM. Once the object/whatever is generated, it still has to be stored in RAM. You can save disk space here, but it's moot if you can't fit it into RAM.
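A tiny illustration of that point, with made-up numbers: whether an object is loaded from disk or expanded from a procedural seed, the renderer ends up traversing the same buffer in RAM.

```cpp
// Procedural generation saves disk space, not RAM: the expanded points still
// have to live in memory once the renderer needs them.
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

std::vector<float> generateTreePoints(std::uint32_t seed, std::size_t count) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    std::vector<float> xyz(count * 3);     // the RAM cost is paid here,
    for (float& v : xyz) v = dist(rng);    // regardless of the 4-byte seed on disk
    return xyz;
}

int main() {
    auto points = generateTreePoints(42u, 1000000);  // ~12 MB resident, from a tiny seed
    (void)points;
    return 0;
}
```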
You could also do some smart stuff like "instance groups", say you have 20 unique blades of grass, you could link together little groups to make a massive amount of variation, assuming basic rotation/translation works (which hasn't been confirmed). But that would be pretty limited to foliage.
The problem is that you have to make a tradeoff with voxels.
Either you store them in an octree to get the high level of compression and the fast lookups (fast rendering) when raycasting into the scene; or you drop the octree and get the ability to transform the geometry.
Octrees are a data structure that take a lot of time to build (obviously proportional to the amount of geometry, too) and their nature dictates that even the smallest change requires a complete rebuild. You can't do that in real time, that's quite certain, even for the small amount of data that Euclideon uses.
This is why instancing can not change transforms. You basically stop at a level of the tree and point to another branch when you want to draw it, re-using the data there.
The guy with the youtube video about animated voxel octrees shows some stuff but it's always a very limited case. Like he's only moving around pieces of foliage that are probably not a part of the main octree. And he explains the problem with higher amounts of rotation and such quite clearly.
The only way to re-use data would be to build a set of tiles that can connect at the sides. It'd look like the first Super Mario, but in 3D...
Sure, you could do a hybrid renderer, rasterized polygons for characters and other dynamic objects, but that still wouldn't make the problems with voxel terrain go away...
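A minimal sketch of that "point at another branch" form of instancing, with a hypothetical node layout (not Euclideon's): several parent octants can reference the same pre-built subtree, but a rotated or scaled copy can't, because positions are baked into the tree structure itself.

```cpp
// Instancing inside an octree by sharing subtrees; transforms would require
// re-voxelizing and rebuilding the shared branch.
#include <array>
#include <cstdint>
#include <memory>

struct OctreeNode {
    std::uint8_t childMask = 0;                   // which of the 8 octants exist
    std::array<const OctreeNode*, 8> children{};  // may point at a shared subtree
    std::uint32_t payload = 0;                    // packed leaf color/material
};

int main() {
    // Build one palm-trunk segment subtree once...
    auto trunkSegment = std::make_unique<OctreeNode>();
    trunkSegment->childMask = 0b00000001;
    trunkSegment->payload   = 0x8B5A2B;           // brown-ish leaf color, arbitrary

    // ...then let several parent octants reference that same subtree.
    OctreeNode world;
    world.childMask = 0b00001111;
    for (int i = 0; i < 4; ++i)
        world.children[i] = trunkSegment.get();   // same data, drawn four times

    // Placing a *rotated* segment here would mean re-voxelizing and rebuilding
    // the whole subtree, which is exactly the transform limitation described above.
    return 0;
}
```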
One object with about a million points does not require 8 GB of data. Assuming 32 bytes per point, you only need about 31 megabytes of memory to store it. Their claim is just regular instancing: reference the same memory repeatedly instead of repeatedly storing the same points.
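To check that figure, here's one plausible (assumed) 32-byte point layout and what a million such points actually cost:

```cpp
// One assumed 32-byte point layout and the memory for a million points.
#include <cstdint>
#include <cstdio>

struct Point {
    float         x, y, z;     // 12 bytes position
    float         nx, ny, nz;  // 12 bytes normal
    std::uint8_t  r, g, b, a;  //  4 bytes color
    std::uint32_t material;    //  4 bytes gloss/spec index, etc.
};                             // = 32 bytes on typical platforms

int main() {
    static_assert(sizeof(Point) == 32, "packing assumption");
    const double points = 1e6;
    std::printf("%.0f points x %zu bytes = %.1f MB\n",
                points, sizeof(Point), points * sizeof(Point) / (1024.0 * 1024.0));
    return 0;
}
```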
The problem is that you have to make a tradeoff with voxels.
Either you store them in an octree to get the high level of compression and the fast lookups (fast rendering) when raycasting into the scene; or you drop the octree and get the ability to transform the geometry.
Octrees are a data structure that take a lot of time to build (obviously proportional to the amount of geometry, too) and their nature dictates that even the smallest change requires a complete rebuild. You can't do that in real time, that's quite certain, even for the small amount of data that Euclideon uses.
This is why instancing can not change transforms. You basically stop at a level of the tree and point to another branch when you want to draw it, re-using the data there.
The guy with the youtube video about animated voxel octrees shows some stuff but it's always a very limited case. Like he's only moving around pieces of foliage that are probably not a part of the main octree. And he explains the problem with higher amounts of rotation and such quite clearly.
The only way to re-use data would be to build a set of tiles that can connect at the sides. It'd look like the first Super Mario, but in 3D...
Sure, you could do a hybrid renderer, rasterized polygons for characters and other dynamic objects, but that still wouldn't make the problems with voxel terrain go away...
Because in the end when you go "yes but yes but yes but" and have replaced everything that was promising back to polygonal tech due to stuff just being way too expensive to do with this kind of tech, you have already gone back fully to polygons.
As mentioned, Carmack has invented tech where we can actually have unique detail everywhere and actually shape the ground around objects, and this is on five year old hardware with extremely limited memory.
Now, is this worth the trade for a highly detailed but extremely instanced world where interactivity will be zero? Add to that the fact that there's still no proper solution to skinned animation (due to the expense of restructuring the already-structured voxel data), which leaves only polygons for that, not to mention every other limitation that just makes polygons the better choice.
No one has been disputing that it runs in real time; we are disputing the claims about the things NOT shown, which would have to run at any kind of feasible framerate and which are by definition the downsides of the tech.
Replies
On a serious note, improvements in graphics at this point are the least important thing in games development in my opinion.
Yeah I was gonna say, those are the pre-edited scans, before they added really crappy rocks all over the ground, and tried to re-bake some ambient occlusion and drop a shadow into the mix. Also, the "web journalist" aka "technology marketer", John Gatt, along with someone claiming to be Bruce Dell, have apparently been trying to win over the hardcore gamers on HardOCP's forums with more ridiculous statements. http://hardforum.com/showthread.php?t=1628422
http://www.gamingface.com/2011/08/death-of-gpu-as-we-know-it-re-post-from.html
All aboard the bullshit train! There are some amazing quotes in there.
Christ this website is terrible, I'm surprised this isn't a geocities URL.
I thought people were starting to dislike me because of my opinion, so I figured I would leave.
It seemed that way because it was getting to the point to where instead of discussing things, people started to insult me and I don't really know why... even though I was respectful or at least tried to be.
Some even went as far as to put me on their ignore lists. Pretty childish to ignore somebody because you disagree with a single opinion of theirs.
These things are not a matter of opinion. It's empirical evidence backed by technological proof. You approached it from the viewpoint of an opinion and were brought down to earth with legitimate research.
It's like arguing that the earth isn't round. You can get upset about it becoming a personal attack on an innocent opinion. Except that the earth being round is a fact... not a matter of opinion.
It's alright though, we've all had our fun. You can bow out gracefully and make some arts instead.
They can guess at what they think the implementation is like, but the fact is that a lot of people didn't think it was running in real-time in the first place. Nobody can answer me as to how the demo was running on that laptop, even though they were telling me that 500 terabytes of data would be needed for that scene... which is clearly not the case.
That was proven, so that gives me even less reason to not believe the claims... even though people were saying we'd never be able to do that in real-time in even 10 years.
Animations and collisions were shown in that video as well. I think that people are going too much by traditional misconceptions in terms of 3D.
Back when 3D was first introduced, everything had to be developed for it, such as animation, collisions, everything like that. Similar issues were faced with developing those.
Last word: If this technology was so unrealistic and unfeasible, then why has John Carmack been looking into it for years? If animation and collisions are impossible, then there must be some other work-around for the same type of systems.
Everyone is still thinking too much in terms of their experience with polygons.... when this system deals with many things like memory, drawing the scene, animation, in an entirely different manner.
Hardware wasn't the only thing that had to advance for this to be possible; look how far UIs have come and whatnot. A computer is just a lot easier to work with now than it was 15-20 years ago... so it isn't surprising that the company is just now making huge progress with their technology.
Seriously, IBM has a working chip that can learn. It can run code it is not programmed to run... and Unlimited Detail seems that far out there?
Technology is progressing.
Delta Force's terrain definitely had collision, and that was around 13 years ago.
It could even end up being a hybrid system (maybe terrain and vegetation as voxels/point clouds, as well as other things you won't interact with, then everything else as polygons?).
The fact that Carmack has been working on it for, in all likelihood, over a decade but is still very skeptical of a real implementation in an actual game engine is extremely telling.
http://www.youtube.com/watch?feature=player_embedded&v=hapCuhAs1nA
Like Carmack clearly states here, anyone can put out a tech demo that runs at 20-30 fps at 720p with only basic rendering systems active and 5-10 unique assets instanced en masse. To actually get that to work in a game, with lighting, physics, collision, AI, gameplay systems, sound, post effects, and hundreds or thousands of unique assets all in one level, you're going to need to not just optimize it a little, but to run about 100 times faster. We're not talking "this is a little slow"; we're talking years and years and multiple hardware revisions away from being realistic.
And that is only the technical side, you're talking about reinventing the wheel when it comes to workflow and tools for getting assets into an engine like this. Hell, most people still struggle to get clean normal map bakes, and we've had tools for normal mapping for almost 10 years. To think a robust tool pipeline for this would be feasible anytime in the near future is wishful thinking at best.
You don't have to reinvent the wheel with this. The programmers do, but they're developing this in such a way that the artists aren't going to have to worry about that. The tools can be made compatible, and that is one of Euclideon's main focuses: making all the tools compatible with everything we're working with now.
I just don't see a company wasting time to go this far if it is all a hoax and is not feasible for games.
Also, I keep on hearing that instancing argument, but considering that one object is supposed to be millions of points which ends up being way more than 8 GB of data, it is clear that they've worked around the memory issues and that they're no longer an issue.
If you can only use 30 unique assets per level, then it's still an issue.
They wouldn't be able to run the scenes that they have run if the memory footprint really was as bad as people have said.
The only reason they don't have a ton of awesome assets is because they have one artist and they made the entire demonstration scene in three weeks.
assuming that is wrong
Many people said it couldn't possibly run in real-time. Not only are they running it in real-time, but they're running it in real-time in software. On a CPU. GPU is unused except for putting the image on the screen, and any GPU could do that much.
Now everybody cries instancing when the real-time demo is brought up, even though it is point clouds or voxels and handles data in an entirely different way than polygons do.
Not to mention that even with instancing, no common consumer hard disk is supposed to be able to store the scene.... but there it is. On a laptop.
It seems that every time that someone lists a limitation, the guys do a demonstration that proves them wrong.
So.... the fact that it is a new take on voxels/point clouds is true, otherwise it couldn't have run under the conditions it did.
If you're not using textures, that frees up a massive amount of memory. If you break it down into channels, you've got XYZ, RGB, Spec RGB, Gloss, Alpha. That is 11 channels.
With a game asset, you're storing Normal RGB/XYZ, RGB, Spec RGB, Gloss, Alpha. Again, 11 channels.
So if we assume that these channels can be compressed to an equivalent amount as traditional textures can, you're looking at about 4 million "points" you can store in comparison to an asset with a 2048x2048 texture.
If I look at a high resolution FPV weapon model that I've created, I'm using about 1.5 million verts.
Now this is just one asset, the idea that you wouldn't be able to fit a fully unique level with "unlimited" detail onto disk is still very much true, but if you compare it to traditional techniques, and replace your unique textures with unique point cloud objects with color and material properties, it's not too insane.
You lose out in that you can't use hacks like floating geometry in material creation, every flat wall that used to just use a tiling texture would need to have high res geometry, and you wouldn't be able to apply a flat tiling texture to a curved wall, you would need a brand new "curved wall" asset....
There are still a large amount of drawbacks to it, but maybe the memory concerns aren't such a big deal.
Again guy, you can't rag on others for assuming things, and then state your own assumptions as fact. For instance, the "fact" that they only have one artist and made the whole scene in three weeks:
This is pure speculation on YOUR part. You can't have it both ways. There could be a large variety of reasons, other than the lack of artists, for why they don't have more unique assets in their demo.
If it doesn't turn out to be truly feasible for games, it would suck, but oh well. There will be something else that will be as good or better.
Those of us arguing about the inherent problems have never doubted that it's running in real time. What we have questioned is the stuff NOT in the demo that the guy said was already solved.
We have never said that the demo would require that much data because it's using INSTANCING, the same 30 objects over and over again.
What we've said is that a REAL GAME LEVEL would require too much data to run on any computer and fit on any media to deliver it to the customers.
Don't post stuff that isn't true, please.
Nothing has been proven. You have completely misunderstood nearly everything that's been explained to you multiple times. Either you're a bit dense or you're intentionally denying everything posted here, just for you.
No skeletal deformation and no collision was shown.
We have no misconceptions but apparently you have a lot.
The more important question is why he repeatedly abandoned it.
Why would two guys who stand to profit off of this be more honest than a bunch of dudes with no vested interest and a lot of programming success?
Carmack, for example, is, I believe, a millionaire? He funds his own rocket program. He's been making and showing 'atom' tech demos for a very, very long time. He put this insane, unwieldy, technologically amazing megatexture technology into his new game just to squeeze a LITTLE more detail into it. If he isn't leading the charge for voxels, some assholes with a dishonest, misleading tech demo certainly aren't. Voxels are very real tech that do EXACTLY WHAT THIS TECH DEMO SHOWS -- carry a LOT of detail, VERY cheap, but offer no solution to all of the fundamental problems that you need to solve to make something realtime.
Do you even know what instancing is??
It means the object is stored in memory only once but can be drawn as many times as you want. They can store about 30 objects in 8 GB of memory. Storing hundreds or thousands of objects is not going to be possible; but the more important issue is that they have to somehow ship those thousands of objects to the gamers, on something.
And if you look at the surface area of those objects you'll see that it is practically impossible to store even just a 1km x 1km sized outdoor terrain level without repetition. And that's tiny, even for most FPS games.
Your comment about instancing is irrelevant when a single point cloud object with millions of points is supposedly vastly more than 8 GB... so then how did they get more than one object in that scene?
They found a solution is all.
We are not talking about this scene, we are talking about what a real world game requires, damnit.
This is one of his claims, yet to be proven.
Yes, we are talking about this scene. Now you are making comments that have nothing to do with the statements I have made because you truly have no technical response.
Stop skipping my statements and answer them.
So, how does their scene exist in real-time on a laptop when considering this?
The problem here is more complex. Level geometry is using a lot of texture repetition, hidden by multiple layers of textures, decals, detail maps, vertex colors etc. The geometric variety is decoupled from the textures, it's a lot less detailed for a start, and even that detail can be localized as needed (ie. you can use smaller polygons where needed).
With voxels or point clouds or anything, you need to cover every surface unit uniquely, or use the highly limited form of instancing seen in the Euclideon demo. So weapons and characters are one thing, but the far, far more problematic issue is terrain, foliage, buildings, indoor areas and so on.
Just think about Rage which has unique surface texturing. The granularity is quite rough, look at the texel sizes and imagine if that'd be the minimum voxel size - it wouldn't be enough. Also, texture data is 2D and far easier to compress, and lossy compression wouldn't distort the geometry. Even with those advantages, Rage requires 22 GB of space for a full X360 install, and that is the compressed version of the megatextures.
A voxel based game with a similar game world size would easily require at least one, but probably two orders of magnitude more data. And that's with a voxel size far, far bigger than what Euclideon promises... do the math.
Yes, this technology can never have unlimited unique objects without procedural generation, but the claims of unlimited detail - unlimited geometry are true.
They never claimed to be able to have unlimited unique detail.
Also, you must tell me how Delta Force and Open Outcast did terrain with voxels? With collision, no less.
Read my posts, and don't post lies, okay?
They only use about 30 assets which just fit into 8 GB of memory. That's it, what the hell can't you understand about it? Are you intentionally messing with us or is it really that hard to read the words and understand their meaning?
Answer this too.
~ Your comment about instancing is irrelevant when a single point cloud object with millions of points is supposedly vastly more than 8 GB... so then how did they get more than one object in that scene?
So, how does their scene exist in real-time on a laptop when considering this?
Yeah, totally agree, when we get to environment stuff it gets a lot more fuzzy. You could probably do some really clever stuff with a modular type setup, but this is going to eat up a lot more memory than a modular setup where extra pieces are essentially free and the texture usage (for the whole set) is fixed.
You could also do some smart stuff like "instance groups", say you have 20 unique blades of grass, you could link together little groups to make a massive amount of variation, assuming basic rotation/translation works (which hasn't been confirmed). But that would be pretty limited to foliage.
There are a lot of really good workflows we use with traditional modeling to get variation out of limited texture resources, like decals and such, which would have to be entirely uniquely stored. On the other hand, that is sort of what Rage is doing anyway, baking everything down to a massive texture sheet and streaming it in efficiently. I bet there are some creative ways to work with a relatively large amount of unique voxel data as well.
I'm not sure when/where this was ever suggested. Source?
Again - their objects are stored in sparse octrees, which is a type of compression compared to standard voxels. This way they can store the equivalent of, say, a 2048x2048x2048 voxel object at a fraction of the memory cost. They only store the surface points, so their objects are probably in the 100-400MB range each. That way they can fit those ~30 objects in memory and instance the hell out of them.
This is not something that Euclideon invented, just to be clear.
Now, it'd be easy to create multiple versions of palm trees if they could afford to store them. Say, rotate and scale them and create 15 variations. It'd take about an hour at most and then they could voxelize them all and they wouldn't have to use the same damn tree all over the demo. But they aren't doing this, they're only using instanced trees.
It is not plausible that they don't have an artist or anyone to do this.
It is far more likely that they simply don't have enough free memory to store 15 trees.
Especially when you realize that the tree trunk itself isn't a unique object either, but the same 1 meter high trunk repeated 8-10 times, so they probably don't have enough memory even for a complete palm tree.
Also understand that the average console game has 400MB for ALL its data including textures, geometry, animation, sound, music and everything. This engine would never be able to run on a current console.
No one said that.
Notch mentioned it, Carmack mentions it in his voxel .pdf, and others in this thread mentioned it too.
As I have stated earlier in this thread, procedural generation is a means of generation only; it doesn't save you any RAM. Once the object/whatever is generated, it still has to be stored in RAM. You can save disk space here, but it's moot if you can't fit it into RAM.
Please provide a source for your claims, not paraphrasing.
Nah, Notch said the entire world would take something like 500 Petabytes.... xD
I think he was gauging their implementation against his implementation too much.
The problem is that you have to make a tradeoff with voxels.
Either you store them in an octree to get the high level of compression and the fast lookups (fast rendering) when raycasting into the scene; or you drop the octree and get the ability to transform the geometry.
Octrees are a data structure that take a lot of time to build (obviously proportional to the amount of geometry, too) and their nature dictates that even the smallest change requires a complete rebuild. You can't do that in real time, that's quite certain, even for the small amount of data that Euclideon uses.
This is why instancing can not change transforms. You basically stop at a level of the tree and point to another branch when you want to draw it, re-using the data there.
The guy with the youtube video about animated voxel octrees shows some stuff but it's always a very limited case. Like he's only moving around pieces of foliage that are probably not a part of the main octree. And he explains the problem with higher amounts of rotation and such quite clearly.
The only way to re-use data would be to build a set of tiles that can connect at the sides. It'd look like the first Super Mario, but in 3D...
Sure, you could do a hybrid renderer, rasterized polygons for characters and other dynamic objects, but that still wouldn't make the problems with voxel terrain go away...
I see, that makes sense.
As mentioned, Carmack has invented tech where we can actually have unique detail everywhere and actually shape the ground around objects, and this is on five year old hardware with extremely limited memory.
Now, is this worth the trade for a highly detailed but extremely instanced world where interactivity will be zero? Add to that the fact that there's still no proper solution to skinned animation (due to the expense of restructuring the already-structured voxel data), which leaves only polygons for that, not to mention every other limitation that just makes polygons the better choice.
No one has been disputing that it runs in real time; we are disputing the claims about the things NOT shown, which would have to run at any kind of feasible framerate and which are by definition the downsides of the tech.