Kind of off topic, but I'm hoping they combine virtual texturing and regular texture streaming for Doom 4 and the next engine. In a game like Doom 4, with rooms and hallways, you probably don't need unique textures on 90% of stuff, but in some cases it could really help. You could select a few man-made objects where they meet some sand or snow and use virtual texturing on just those objects to get a really good blend between them.
Just had a thought, probably unrelated and stupid, but couldn't we have fractal geometry?
I mean, each object would have self-similarity, so that as you zoom in you see smaller instances of the larger objects, or something like that.
While it saves memory, it is not feasible due to computational complexity. It doesn't work for voxels, since you need an acceleration structure which has to be built beforehand rather than at runtime, and it doesn't work for polygons, since trying to tessellate fractal geometry gets really messy (not to mention that fractals are hard for artists to work with).
While procedural content is cool and works in some cases (non-linear fog with Perlin noise, for example), making a whole game out of it is not very feasible today (of course, there are demos like .kkrieger, but they converted their procedural content to actual geometry at startup and kept it in memory).
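Purely as a toy sketch of that .kkrieger-style approach (made-up noise function and sizes, nothing from their actual code): run the procedure once at load time, keep the expanded geometry in memory, and the renderer never evaluates the procedure again.

[code]
import math

def value_noise_2d(x, y, seed=1337):
    """Cheap hash-based value noise in [0, 1] -- a stand-in for Perlin noise."""
    def hash01(ix, iy):
        h = (ix * 374761393 + iy * 668265263 + seed) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF
    ix, iy = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - ix, y - iy
    # bilinear blend of the four surrounding lattice values
    a = hash01(ix, iy);     b = hash01(ix + 1, iy)
    c = hash01(ix, iy + 1); d = hash01(ix + 1, iy + 1)
    top = a + (b - a) * fx
    bot = c + (d - c) * fx
    return top + (bot - top) * fy

def bake_terrain(size=64, scale=8.0):
    """Run the procedure ONCE at startup and keep plain vertex data in memory."""
    vertices = []
    for j in range(size):
        for i in range(size):
            h = value_noise_2d(i / scale, j / scale)
            vertices.append((i, h * 10.0, j))   # ordinary geometry from here on
    return vertices

if __name__ == "__main__":
    mesh = bake_terrain()
    print(len(mesh), "baked vertices, e.g.", mesh[0])
[/code]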
I probably shouldn't feed the trolls, but I stumbled upon this earlier in the thread and my nerd sense tingled:
Yes, this technology can never have unlimited unique objects without procedural generation, but the claims of unlimited detail - unlimited geometry are true.
They never claimed to be able to have unlimited unique detail.
What... what?! Unlimited geometry is possible, but unlimited detail isn't? Could you elaborate on this paradox? Also, why IS unlimited geometry possible? Even though they only render so many pixels, the running time of their "search algorithm" will still increase the more data is present in the scene (this can actually be proven mathematically).
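To make that point concrete with a toy stand-in (this is plain binary search, not Euclideon's algorithm): even a perfect hierarchical lookup does work proportional to the depth of the structure, and that depth grows with the amount of data, so per-pixel cost can't be independent of scene size.

[code]
import random

def search_steps(sorted_points, target):
    """Count the comparisons a binary descend needs to locate the nearest
    stored value -- a stand-in for descending a spatial hierarchy per pixel."""
    lo, hi, steps = 0, len(sorted_points), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_points[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

if __name__ == "__main__":
    random.seed(1)
    for n in (1_000, 100_000, 1_000_000):
        pts = sorted(random.random() for _ in range(n))
        avg = sum(search_steps(pts, random.random()) for _ in range(50)) / 50
        print(f"{n:>9} points -> ~{avg:.1f} steps per lookup")
[/code]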
In the meantime, I've been watching their latest video and noticed a few other things that might be a dealbreaker for Euclideon:
- There is no transparency whatsoever in any of their videos. Even though depth sorting comes more or less for free with point cloud/voxel based methods, finding all visible points cannot be done so efficiently anymore. Basically, what they do right now is trace intersection tests through their data structure and stop at the first point found. With transparency, they have to blend this point and continue tracing, which, depending on the data structure and, more importantly, the size of their voxels and the thickness of the outer shell of each model, cannot be done efficiently and without artifacts. So basically, transparency will probably look shitty and may slow things down by a considerable amount.
- There is no LOD (level of distance *pffffffthahahahahah*). While Dell thinks this is only employed to save rendering time, it also serves the important purpose of keeping the triangle-per-pixel ratio below 1. If you don't do this, you'll have an awful lot of noise in the distance, which will be very noticeable when you move around. Why this happens is very similar to why mipmaps are needed; basically, if you have more detail per pixel than can be displayed (e.g. 5 triangles that occupy the same pixel, or 1,000 "atoms" that would map to the same pixel), only one of the triangles/atoms will ultimately get selected and presented on the screen. The selection process, however, is not continuous between frames. For rasterization, the triangle that wins the Z-buffer fight or gets drawn last defines the pixel color; for Unlimited Detail, the atom that lies closest to the view ray gets drawn. Slight changes in view direction or position change the selection outcome, meaning that suddenly a different triangle/"atom" occupies the pixel, which leads to rapid color changes between frames (-> noise). SVOs solve this thanks to their hierarchical storage, which allows the tracing algorithm to stop descending down the octree as soon as the projected size of the voxel is below one pixel. The Unlimited Detail algorithm apparently does not allow this, and I think in the last video posted here the guy mentions that there seems to be an awful lot of noise in the distance.
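Here's a rough sketch of that SVO-style cut, with a made-up node layout and camera numbers: stop descending once a node's projected footprint drops below one pixel and shade with its prefiltered average, which is exactly the "mipmap for geometry" behaviour that kills the sub-pixel noise.

[code]
import math
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SvoNode:
    avg_color: tuple                 # prefiltered average of everything below this node
    children: List[Optional["SvoNode"]] = field(default_factory=lambda: [None] * 8)

def projected_pixels(node_size, distance, screen_h=1080, fov_deg=60.0):
    """Approximate on-screen footprint (in pixels) of a cube `node_size` metres
    across at `distance` metres from the camera."""
    if distance <= 0.0:
        return float("inf")
    return node_size / distance * (screen_h / (2.0 * math.tan(math.radians(fov_deg) / 2.0)))

def shade(node, node_size, distance):
    """Descend until the node is smaller than one pixel, then return its
    prefiltered color -- the LOD cut that avoids sub-pixel selection noise."""
    if projected_pixels(node_size, distance) <= 1.0 or not any(node.children):
        return node.avg_color
    # For the sketch, just take the first existing child; a real traversal
    # would pick the children intersected by the view ray, front to back.
    child = next(c for c in node.children if c is not None)
    return shade(child, node_size * 0.5, distance)

if __name__ == "__main__":
    leaf = SvoNode(avg_color=(200, 180, 140))
    root = SvoNode(avg_color=(150, 150, 150), children=[leaf] + [None] * 7)
    print("near:", shade(root, 1.0, 2.0))     # descends to finer detail
    print("far: ", shade(root, 1.0, 2000.0))  # stops at the coarse average
[/code]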
Oh man, I think you hit the nail on the head there. "Alpha testing" of any kind would be astronomically expensive using voxels unless you had a really mean approximation. At the same time though, imagine the complexity of the light modelling you could do if you have the processing muscle to do it - subsurface scattering could be handled very nicely indeed.
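To put that cost in concrete terms, this is the standard front-to-back "over" compositing a tracer has to do per pixel once surfaces can be transparent (a generic sketch, not anybody's engine code): with an opaque first hit the loop exits after one sample, while translucent shells keep it tracing and blending until opacity saturates.

[code]
def composite_front_to_back(samples, opacity_cutoff=0.99):
    """Blend (color, alpha) samples sorted near-to-far along one view ray.
    A fully opaque first hit ends the loop after one sample; transparency
    forces the tracer to keep going until accumulated opacity saturates."""
    out_r = out_g = out_b = 0.0
    out_a = 0.0
    steps = 0
    for (r, g, b), a in samples:
        weight = (1.0 - out_a) * a          # contribution of this sample
        out_r += weight * r
        out_g += weight * g
        out_b += weight * b
        out_a += weight
        steps += 1
        if out_a >= opacity_cutoff:         # early-out only once nearly opaque
            break
    return (out_r, out_g, out_b, out_a), steps

if __name__ == "__main__":
    opaque_wall     = [((0.8, 0.2, 0.2), 1.0)]
    glass_then_wall = [((0.2, 0.4, 0.9), 0.3)] * 6 + [((0.8, 0.2, 0.2), 1.0)]
    for name, ray in (("opaque", opaque_wall), ("glassy", glass_then_wall)):
        color, steps = composite_front_to_back(ray)
        print(f"{name:>6}: {steps} samples blended, alpha={color[3]:.2f}")
[/code]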
Transparency could be solved the same way as foliage and dynamic geometry + characters, by using a hybrid renderer.
Especially if it's using deferred rendering, because then all lighting and shading could be done in a single pass using the same code and generating consistent results (because it only works with G-buffer data).
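A minimal sketch of what "one lighting pass over the G-buffer" means (made-up buffer contents and a single Lambert light): the lighting code neither knows nor cares whether a texel was written by the polygon rasterizer or by a voxel tracer, which is what makes the hybrid consistent.

[code]
import math

def lambert_light(gbuffer_texel, light_dir=(-0.5, -0.7, -0.5)):
    """Shade ONE G-buffer texel with a single directional light. The lighting
    pass only sees albedo/normal data, not where that data came from."""
    albedo, normal = gbuffer_texel["albedo"], gbuffer_texel["normal"]
    length = math.sqrt(sum(c * c for c in light_dir))
    to_light = tuple(-c / length for c in light_dir)
    ndotl = max(0.0, sum(n * c for n, c in zip(normal, to_light)))
    return tuple(a * ndotl for a in albedo)

# Pretend G-buffer: one texel from rasterized geometry, one from voxel tracing.
gbuffer = [
    {"source": "raster", "albedo": (0.8, 0.6, 0.4), "normal": (0.0, 1.0, 0.0)},
    {"source": "voxels", "albedo": (0.3, 0.3, 0.9), "normal": (0.0, 0.0, 1.0)},
]

if __name__ == "__main__":
    for texel in gbuffer:
        print(texel["source"], "->", lambert_light(texel))
[/code]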
The main problem is still the background storage issue IMHO, and that can't be overcome by software solutions. But in the 5 to 10 years it takes to get proper hardware, stuff might once again be easier and better to do using polygons and rasterization...
sounds like an awesome plan, just think of the possibilities for characters! :poly122:
Ha ha, that cracked me up. I see a slight flaw. I'll rub that one off the drawing board and quietly forget it. If anyone says anything I shall deny all knowledge.
Bruce Dell is still at it. More words yet little substance to show anyone else is wrong, more intentional mis-statements made about industry standards, and mis-statements about others like Notch.
Haha, seeing his face trying to argue his point makes it even more obvious that he has no idea what he is talking about. Nice booth too!
He also forgets the third kind of critics - the ones knowing what production realities are like! Go ahead and convert any current gen asset to Haaaatoms and see what happens ...
I think unlimited detail might be a government funded military experiment to send normal healthy people into murderous rage, to make them more effective killing machines, unlimited potential!
I apologize in advance if these questions and ideas are not plausible, but I've tried to find answers, yet haven't been able to. I'm not a programmer so if this isn't possible, sorry for making you read this.
1. Would it not be possible to save color data on the asset itself? In ZBrush, they have a feature called PolyPainting which allows an artist to paint directly onto the mesh with UV maps attributed later in the process. I believe it's called "Vertex Color", but I'm not sure.
If the so-called "Vertex Color" COULD be saved, does that mean the file size would exponentially increase to the point where loading multiple assets is unrealistic? I ask this because, wouldn't loading multiple texture maps be more memory hungry than an .OBJ file with color BUILT IN?
2. Is it not plausible that these guys have figured out a way to reverse the process of a converter?
What I mean is, we already have Point Cloud to Polygon converters. Why wouldn't just reversing that process allow artists to load Polygonal Meshes into the engine and automatically be converted into "Atoms" in real-time? Then they use their Search Algorithm to find whatever data is needed to be displayed.
The reason I thought about these things is because they have said they actually use LESS MEMORY than what is already done in Game Engines.
So, if an artist could paint the mesh like a virtual Ceramic Sculpture and save the color, you would only be loading assets and the converter would be breaking up everything into "Atoms", which would already have color data attributed to each "Atom". Wouldn't that allow developers to continue working in basically the same pipeline they always have, just without a Polygon Limit?
I'm not talking about animation or physics, just the process of the engine itself. I know a lot of people have spoken out against the lack of animation, I'm just trying to figure out if this was plausible or too file heavy to be a solution.
Again, sorry for making this so long, especially if these things aren't even possible.
Most of the answers to those questions are "it depends, they haven't released much more than razzle dazzle videos".
1) There are all kinds of ways to paint on meshes and transfer it to a point cloud, colors and all. The problem becomes storing a wide enough palette per atom to make the item visually appealing. Noss, or Natch (whatever his name is, the guy that created Minecraft) ran some basic numbers that were pretty wild guesses, but followed that train of thought to its ultimate conclusion (a rough sketch of that kind of arithmetic is below). There was some back and forth about the accuracy and the trickery involved, but ultimately it came down to needing to store a lot of data per point and having to load that data at some point.
3D Coat is probably the closest thing to what you're thinking of in terms of points, colors and painting.
2) They have, or at least I remember them saying they worked on a converter so that artists could use tools they already have instead of waiting on these guys to reinvent 30 years of tools.
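For what it's worth, here is that kind of back-of-envelope arithmetic; every number below is an assumption (a lean 15-byte point format and a guessed sampling density), not a figure from Euclideon or from Notch's post.

[code]
def point_cloud_bytes(points, bytes_per_point):
    """Raw storage for a cloud of colored points."""
    return points * bytes_per_point

if __name__ == "__main__":
    # Assumed lean per-point format: 3 x 32-bit position + 3 x 8-bit color = 15 bytes.
    bytes_per_point = 3 * 4 + 3
    # Assumed sampling density: 4 points per millimetre of surface, so one UNIQUE
    # (non-repeated) square metre of ground is (1000 * 4)^2 points.
    points_per_m2 = (1000 * 4) ** 2
    unique_patch = point_cloud_bytes(points_per_m2, bytes_per_point)
    texture_2k = 2048 * 2048 * 3          # a plain uncompressed 2048^2 RGB texture
    print(f"one unique 1 m^2 of 'atoms': {unique_patch / 2**20:7.1f} MiB")
    print(f"one 2048^2 RGB texture     : {texture_2k / 2**20:7.1f} MiB")
[/code]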
It's not surprising that they report less memory consumption when their world is devoid of complex lighting. It recycles the same few assets over and over again, all facing the same direction. Which raises the question: if they rotated some of those objects, would it slow down their search? Being unable to rotate, scale, and manipulate placed objects is a deal breaker on so many levels.
They report lower memory usage while they are running a 10th of what would be required for an actual game. You could report lower memory usage if you stripped a game down to just what they are showing. You could run that game on 2 rocks and a paper clip. Zero AI, no complex lighting, basic shaders, no animating meshes or scripted sequences, and no physics.
Just because you can paint on the model doesn't mean you should... you can do that now, and in most cases it's very limiting and inaccurate. I would imagine any tools they cook up wouldn't be much better. I love using Viewport Canvas in Max, and for certain styles and some things it will take you 80-90% of the way there, and in some cases that's good enough, but for other things like aligning and styling type or overlaying photos and details, most of the paint-on-model apps suck.
Just look at Rage and all the fuss about its release...
- criticized for static environments and low texture res
- huge problems on release because of driver issues (and yet, id gets most of the blame)
- one of the biggest games out there with 22GB of pure data
Id Software has some of the best programmers, with decades of game development experience, huge financial resources, and great influence on hardware vendors. And yet it took them 7 years of development, 4 of which happened after this [ame="http://www.youtube.com/watch?v=dEqeVOZzzAc"]techdemo[/ame], to put together a game, and while I have high hopes for them, it remains to be seen how successful Rage is going to be.
So seriously what the hell do people expect from Euclideon...
I'm sorry, I should have been more clear about the converter theory. I didn't mean converting them INTO a Point Cloud in real time, but more or less breaking a polygon into a certain number of Atoms.
Like if you had a way to make any polygon break equally into 1000 pieces (using 1000 for example purposes), it would be more of a smoke-and-mirrors type of Point Cloud, but without that data stored in the file.
Essentially, using just the polygon file size, which is massively lower, you would break it into Atoms, without needing to load Point Cloud sized files, and you would retain all color attributes. That's what I meant by converter, like a faked Point Cloud system.
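A toy version of that "fake Point Cloud from polygons at load time" idea, just to show the principle (this has nothing to do with whatever their converter actually does): scatter points over each triangle with barycentric coordinates and interpolate the vertex colors onto them, so no separate point-cloud file has to exist on disk.

[code]
import random

def scatter_points_on_triangle(v0, v1, v2, c0, c1, c2, n=1000, seed=7):
    """Turn one colored triangle into n colored 'atoms' at load time.
    Positions and colors are barycentric blends of the three vertices."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        a, b = rng.random(), rng.random()
        if a + b > 1.0:                       # fold the sample back into the triangle
            a, b = 1.0 - a, 1.0 - b
        c = 1.0 - a - b
        pos = tuple(a * p + b * q + c * r for p, q, r in zip(v0, v1, v2))
        col = tuple(a * p + b * q + c * r for p, q, r in zip(c0, c1, c2))
        points.append((pos, col))
    return points

if __name__ == "__main__":
    atoms = scatter_points_on_triangle(
        (0, 0, 0), (1, 0, 0), (0, 1, 0),          # triangle corners
        (255, 0, 0), (0, 255, 0), (0, 0, 255),    # per-vertex colors
    )
    print(len(atoms), "atoms, first one:", atoms[0])
[/code]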
I kind of figured something like that would be far easier to accomplish by painting directly on the highly detailed mesh. It seems like more of a burden to create the UV Map, then bring it into Photoshop and try to match up all the seams correctly.
I'm neither a programmer nor an artist, so I'm actually curious about that. Is it really easier to map a texture than to just paint on the mesh? To me, the latter would give more freedom to an artist. Is it more time consuming to paint like that?
And finally, I've seen quite a lot of people say that the geometry in the video was untouched and not rotated. This is actually incorrect from what I've seen. In the 7 minute demo, you can clearly see multiple rocks, the purple feathers, vines and that female statue facing in different directions.
I understand the repetition part, where only certain assets are loaded over and over, but quite a few are actually rotated. If you think about it, the whole 1 km square world has a ton of curved paths, so automatically, rocks would have to be rotated to adjust for the curved pathways that make up the design of the world.
Most of what you think about here is already implemented with sparse voxel octrees, which is why some of us continue to mention that tech.
You're making the mistake of thinking that the rocks are treated as individual assets. It's actually a roughly 2m x 2m tile of ground with rocks and grass and such, and there are a few variations of this tile repeated over and over.
It's quite evident here:
You can also see how the palm trees are all facing in the same direction, or that there are no arbitrary edges around the water; it's all at 90 degrees because of the use of square tiles.
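That kind of tiling is also exactly why their memory numbers look good. A rough sketch with made-up sizes: the world only stores each unique tile's point data once, plus a small transform per placement, and the total explodes the moment you want unique ground everywhere.

[code]
def instanced_world_bytes(unique_tiles, bytes_per_tile, placements, bytes_per_transform=64):
    """Memory for a tiled world: each unique tile's point data is stored once,
    then every placement only adds a small transform (a 4x4 float matrix)."""
    return unique_tiles * bytes_per_tile + placements * bytes_per_transform

if __name__ == "__main__":
    tile_bytes = 200 * 2**20     # assumed: ~200 MiB of points per unique 2m x 2m tile
    placements = 250_000         # assumed: enough 2m x 2m tiles to pave roughly 1 km^2
    for unique in (8, placements):      # a handful of repeated tiles vs. all-unique ground
        total = instanced_world_bytes(unique, tile_bytes, placements)
        print(f"{unique:>7} unique tiles -> {total / 2**30:9.1f} GiB")
[/code]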
All this stuff is still heavily in development. Before you fuss about anything, pay attention to the narration.
Like any engine, this engine isn't a catch-all for all types of games. But its potential for something along the lines of a 3rd person exploration game would be pretty neat. I know I'd give it a go if a company was willing to take the plunge.
edit: a game like 'From Dust' could probably have been made with a proper voxel engine with some good programmers behind it. Would have been neat to alter the landscape in a way that wasn't limited to raising or lowering a heightmap, instead being able to carve out caverns or having a bit more unique detail on the terrain.
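A tiny sketch of why a voxel/density representation buys you that (toy functions, made-up cavern bounds): a heightmap stores a single surface height per (x, z), so there is nothing "under" the ground to carve, while a 3D field can be solid, then empty, then solid again down the same column.

[code]
def heightmap_solid(x, y, z, surface_height=5.0):
    """Heightmap terrain: solid iff below the one stored surface height for (x, z).
    There is simply no way to represent a cave underneath that surface."""
    return y <= surface_height

def voxel_solid(x, y, z):
    """Voxel/density terrain: any 3D query can independently be solid or empty,
    so a cavern can be carved out of the same column of ground."""
    ground = y <= 5.0
    carved_cavern = (2.0 <= y <= 3.5) and (3.0 <= x <= 7.0) and (3.0 <= z <= 7.0)
    return ground and not carved_cavern

if __name__ == "__main__":
    column = [(5.0, y, 5.0) for y in (1.0, 2.5, 4.0)]   # three samples down one column
    print("heightmap:", [heightmap_solid(*p) for p in column])   # True, True, True
    print("voxels   :", [voxel_solid(*p) for p in column])       # True, False, True
[/code]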
We already have a hard enough time trying to get certain functions and graphical interpretations shown with correct fidelity, so that aesthetics don't suffer from 'limitations', and here we are, 24 pages in, still talking about a tech that is supposedly going to blow all of this away.
Which reminds me, I need to send them my 'folio, however with the reputation these guys have, I wouldn't be surprised if I had to suck a few cocks to get in.
The C4 engine has a very robust voxel terrain implementation with working LOD (the first of its kind, if my memory serves right). So it is certainly feasible for games right now, depending on your use.
I'm a tad curious to know what their programmers think they'll do with this tech (and not just predictable responses from the talking heads).
The issue they seem to be faced with is the dependency on the divergent technology that derived from polygon/texture based games. We managed to find ways to fake SO much stuff that we have middleware, middleware for the middleware, then piggybacked the middleware with supporting hardware to blur the lines of every 'hack' we've created to attain our visual targets.
Keep in mind, how in many ways polygons killed much of the use of NURBS in film/games. NURBS are far more accurate, and still heavily used in industrial design, but Polygons found ways to brute force things, and fake things in much more convincing ways, while the hardware managed to catch up to effectively make polygon crunching a moot point.
It's interesting technology, but I don't think many people believe it will overtake games without the support polygons/textures had. That said, it could still have many other interesting uses.
I could see this maybe being used in the medical field. Patients could see ultrasounds of their 3D voxel based baby, moving and breathing in real time... rather than a crappy black-and-white CRT image of their breathing child. Or even laser eye surgeries or transplants.
I think everyone on here understands the 3D industry is so heavily invested in polygons that you can't just replace it in a year or two. But obviously voxels are amazing for things such as sculpting in 3D Coat, terrain that can have tunnels, cliffs, and overhangs, storing GI lighting info in voxel space in engines such as Crysis, and I'm sure we'll see a lot more uses in the next few years. Don't some fluid effects use voxel-based simulations?
It's the same thing people have been saying about ray tracing: it has its place in the future along with many other techniques, but they will most likely be used together rather than as a replacement for the others.
It can already be done and is already being used; voxels in medical imagery are not something new. It's just that those crappy CRTs are connected to still perfectly working and expensive machinery, and there's very little point in upgrading it for the extra hardcore-gamer baby-fidelity.
But I'd go for the voxel-baby if I had the choice.
That's pretty cool.
Use it to represent tumors, scar tissue, or breast implants.
Use it to see irregular heartbeats etc.
It's cool tech that would take many years and bajillions of dollars to make game-ready.
It's not just "some places" eld. By their nature CT machines and their software can create 3d reconstructions since information is stored in voxels. It's just not necessary or diagnostically useful most of the time, plus for the really nice images you need the top shelf systems like dual energy CTs.
You said it better than me. I honestly have very little knowledge of how it works; I've just had this general experience of things working well enough that there hasn't been any need for 3D imagery.
Just thought about it: why couldn't they make a hybrid system where only static meshes use voxels, and characters and moving assets use the current mesh system?
I'm pretty sure that was discussed 24 pages and 8 months ago...
I think the only real roadblock is that they want someone else (Sony or MS) to do all of the heavy lifting and reinvent the wheel and all of the tools that go with it. Does it outshine Unreal 4? Yeah, didn't think so. It's going to be hard to dethrone the king and rewrite the way the industry works from the ground up, but if they want to try they need to get moving...
If Notch somehow got behind this it would generate millions of interested people. Think if he started developing Minecraft Unlimited based on this sort of voxel technology, it would immediately turn everyone's head. Nobody is going to ignore something like that if Notch was behind it.
Now the perfect way to get it done would be if somehow Notch + Valve joined forces to build something with this technology. They have the bank and the fans to do it and drag the rest of the industry behind them.
Really feels like a scam to get funding and buy a sportscar...
And it's been going on for years... oooh, I fly around some tiling textures and look, I have 60 fps... oh, and let's use the words LASER SCAN so people think it's really high end tech stuff... what a fucking rip off.
They gained credibility from journos and industry experts? Where did they read that? Good luck to them, but that was still a lot of pretty evasive filler; blaming middleware for the lack of progress with respect to animation and physics doesn't really come across too well either. As with much of what can be read about them, it all seems more like they're abusing the reader's lack of technical understanding about how games are made (some of the comments come over like they're 'fudging' the language to obfuscate people's understanding of how polygons and the general rendering process work) to push a product that's not really that innovative. All IMHO of course.
If Notch somehow got behind this it would generate millions of interested people. Think if he started developing Minecraft Unlimited based on this sort of voxel technology, it would immediately turn everyone's head.
Minecraft already pretty much works like that (albeit textured 'voxels' rather than per pixel voxels). It's ample demonstration of how much resolution you can really get in a practical situation.
Replies
Bruce Dell is still at it. More words yet little substance to show anyone else is wrong, more intentional mis-statements made about industry standards, and mis-statements about others like Notch.
p.s. Cocky smile is Cocky.
p.p.s Stop saying Ah-toms, they're voxels.
UNLIMITED!
Yeah, um, I don't know German, but his comment there really came off as kind of offensive. Like native-speaking Germans don't understand "Unlimited"?
Unlimited ultra-real PipeDream levels! It's like 1995 came back in HD!
like so
http://www.flickr.com/photos/genista/4449316/sizes/o/
Also, in regards to Painting on Meshes, I was thinking about something like this. http://www.pixologic.com/zbrush/features/16_PolyPaint/
http://www.tsumea.com/news/190112/3d-artist-opening-at-euclideon
I'm going to send them a portfolio. Why the hell not, eh.
Hulk movies made reality?
http://www.youtube.com/watch?v=Gshc8GMTa1Y&feature=channel_video_title
http://www.youtube.com/watch?v=_CCZIBDt1uM
http://www.youtube.com/user/AtomontageEngine#p/u/5/tnboAnQjMKE
Which reminds me, I need to send them my 'folio, however with the reputation these guys have, I wouldn't be surprised if I had to suck a few cocks to get in.
You mean, UNLIMITED COCKS.
http://www.thebirthcompany.co.uk/our-services/ultrasound-baby-scans/3d-scan.html
And some random images I picked from searching for "3d ct scan"
Super useful for scanning and displaying things like archaeological stuff that you don't want to crack open:
XD thx for the pics eld.
http://www.tsumea.com/news/150212/qa-and-technical-support-officer-opening-at-euclideon
"Object wouldn't rotate"
http://www.euclideon.com/flipbook.html Well well well, looks like they're planning to leverage cloud computing.
[ame="http://www.youtube.com/watch?v=ajB3ejLhfoI"]Euclideon - New Laser Scan Footage - YouTube[/ame] and this is a bit old but i haven't seen it before so
Here's the story that the footage comes from for whatever that's worth.