Well, that was a nice change; I have a bit more hope now. I guess generating buzz is a thing, but wow, they really could have put out a video like this instead of the previous ones.
Time flew by listening to Carmack's 1.5-hour talk, which went over the pros, cons, optimizations, and so on. This guy, by contrast, talked about the same points brought up in all of the previous videos without adding anything remotely new: "Notch is wrong because ours looks better than those videos; Carmack is wrong because he's saying the opposite of Notch."
He more or less just goes over the talking points again in the exact same manner, all the while adding nothing of substance. I'm open-minded about this tech, and I hope he isn't just bullshitting, but the video does little to allay fears that this is just a really nice voxel demo.
Additionally, his hope that artists will once again sculpt and carve items and then scan them is absolutely hilarious. Hmm... take a month to create a full maquette, or just model it in half a week? Ultimately I have the feeling that, as a hobbyist, he actually knows little of the professional game scene, and that's why there are such poor PR decisions evident in the presentation.
Truth is a dilation of time, in my honest opinion; we break it each time new information comes around... However, fuck it: every bloody time I make up my mind and try to reach some kind of middle ground, new evidence for or against comes up in this shitstorm.
It also doesn't help that people who have no tech knowledge, and have only played games without even knowing what a polygon is, keep slinging fighting words and slander back and forth on both sides, as if by doing so they will become some kind of champion of truth.
All I can say is I HOPE what the guy is doing is true; that would be Epic (pun intended). If not, then I guess there will be two people more disappointed in one person than the whole internet could ever be.
Yeah, they just needed an interviewer who wasn't just throwing out industry buzzwords and sucking up like a bitch.
I was waiting for him to at least say "Hey, why is everything in this world built on a grid made up of 90-degree angles?" or something to that effect.
I have to say that despite the numerous drawbacks mentioned here, if you designed a game that took them into account, you could still make something pretty impressive.
Well, it does look like they have something here after all. Whether it's as flexible as they suggest, well, that's another story.
Personally, I do wish them luck. It seems like they are passionate about this thing, so I will just wait and see if they can eliminate potential drawbacks.
Some people mentioned that the environment looks terribly tiled, but I wonder if it's really such a big issue. Depending on how this thing is supposed to work, perhaps it would be possible to create full levels in an external app and simply export them to their "engine".
In any case, I do wish them luck. Even if the chances are slim, if they could free us from the constraints of the HP => LP process, they could really stir things up. Tessellation might offer similar visual quality (as shown by Unigine), but it does not remove the need to create LP models, do the bakes, set up UVs and LODs, fix the vert normals, and do all that technical jibber-jabber. So in principle, what they are aiming for is a good thing and could be a huge time saver, if they are successful.
So maybe they do have something revolutionary, maybe they do not. For now I will remain cautiously optimistic. It's not like it will happen overnight anyway.
It also doesn't help that people who have no tech knowledge, and have only played games without even knowing what a polygon is, keep slinging fighting words and slander back and forth on both sides, as if by doing so they will become some kind of champion of truth.
It doesn't take a tech guru to call bullshit on someone parroting "unlimited detail!" over and over in response to every criticism, when it's both disingenuous and irrelevant.
The tech could be a quantum leap forward, but the lack of any transparency combined with the repeated buzzwords leaves only two real options. Either they're a naive, starry-eyed company with a midas touch, or they're snake-oil salesmen.
You make it sound like this interview actually proved or answered any questions.
I was talking about that realtime demo they showed. It doesn't answer any technical questions, but it proves that the video they showed a week ago was not prerecorded, as some people on the net had suggested. So there is something in there that's up and running, regardless of what that "something" really is.
Unless the interviewer was in league with them and it was an elaborate fake :poly130: The plot thickens.
And dfacto is right on the money:
"The tech could be a quantum leap forward, but the lack of any transparency combined with the repeated buzzwords leaves only two real options. Either they're a naive, starry-eyed company with a midas touch, or they're snake-oil salesmen."
I was talking about that realtime demo they showed. It doesn't answer any technical questions but, it proves that the video they showed a week ago was not prerecorded, as some people on the net had suggested. So there is something in there that's up and running, regardless of what that "something" really is.
Which wasn't something anyone with half a brain was doubting; we wanted answers (and still do) to the real questions.
Also, you guys here are supposed to be intelligent folks, working in game development with 3D technology.
So tell me, how the hell can you believe that a thousand times more detail won't require at least a hundred times more disk space? You've all seen what happens to an image if you try to compress it to 1% of its original size. If you want to keep the information, you need the storage space. Now what's the plan, they'll just magically fit it onto a DVD?
It's no surprise to see gamers get hoodwinked, but actual game devs should be smarter than that.
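To put a rough number on that, here's a quick Python sketch I knocked together (purely illustrative, nothing to do with whatever format Euclideon actually uses): lossless compression only gets you those crazy ratios when the data is redundant, which is exactly why a heavily instanced demo compresses well and a game's worth of unique detail wouldn't.

```python
import os
import zlib

# Toy comparison: the same amount of data, instanced vs. unique.
# (A sketch of the general principle, not of Euclideon's data.)
tile = os.urandom(4096)                  # one 4 KB "asset"
instanced = tile * 1024                  # that asset repeated 1024 times (~4 MB)
unique = os.urandom(len(instanced))      # the same volume of genuinely unique data

for name, data in (("instanced", instanced), ("unique", unique)):
    packed = zlib.compress(data, 9)
    print(f"{name:9s}: {len(data) / 1e6:.1f} MB -> {len(packed) / 1e6:.2f} MB "
          f"({100 * len(packed) / len(data):.1f}% of original)")
```

The repeated data shrinks to a tiny fraction of its size; the unique data barely budges. So both sides here are sort of right, it just depends on how much of the world is actually unique.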
There have been a lot of demos like this, some of them years ago. Heck, Carmack talked in the recent Q&A about running voxelised Quake 2 levels more than a decade ago. It'd be fun to get a screenshot of that.
Pfft, tell me about it. It's bad enough that we can't compress high-quality audio/voice tracks without them ballooning into the 20 GB zone, and most static PC game assets already weigh in at a good 10-20 MB of textures and shaders, not counting any destructible copies or animations.
Having massive amounts of detail would kill any ISP, and if it went MMO, any PC out there without a server on standby would be crushed. That's why I at least hope the idea can be applied to smaller things at lower quality (as many have mentioned: plants and other knick-knacks at that scale).
I wonder if the guy didn't simply take voxels and change a few things about them so he could call them his own. And why the bloody fuck does he still call them atoms?!
But it is voxels; here's what he posted on the Beyond3D forums three years ago:
Hi every one , Im Bruce Dell (though Im not entirely sure how I prove that on a forum)
Unlimited Detail is a sorting algorithm that retrieves only the 3d atoms (I wont say voxels any more it seems that word doesnt have the prestige in the games industry that it enjoys in medicine and the sciences)
So tell me, how the hell can you believe that a thousand times more detail won't require at least a hundred times more disk space? You've all seen what happens to an image if you try to compress it to 1% of its original size. If you want to keep the information, you need the storage space. Now what's the plan, they'll just magically fit it onto a DVD?
Take 2 gigabytes of TGA or PNG file sequences and you might very well be able to RAR them down to 1-0.1% of their original file size without any information loss. Compression of data is most definitely the easy part here (since it's easily scalable). Performance and usage limitations are what really matter.
Them saying "There are no drawbacks, the tech just isn't complete yet" could be applied to rasterization, raycasting, and even raytracing too. It's just not a compelling argument to ignore whatever drawbacks and limitations exist with this technology. Does this take less than a couple of milliseconds per frame to run? If not, it's no better than most of the "realtime" stuff that comes out of academic research. And getting 20-30 fps on what we can only guess is a best-case scenario (a scene created by them to showcase specific features of their tech) on an i7 is not going to cut it.
But it's impossible to say when, or if, this will ever be practical from a technical standpoint. What are the bottlenecks? What are the optimization possibilities? How does this even work? We just don't know enough to draw any certain conclusions.
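And just to spell out why "a couple of milliseconds" is the bar rather than 20-30 fps, here's the back-of-the-envelope math (my numbers, nothing from their demo):

```python
# Frame budget arithmetic: if the renderer alone runs at demo_fps,
# how much of a 60 fps frame is left for everything else a game does?
TARGET_FPS = 60
frame_budget_ms = 1000.0 / TARGET_FPS        # ~16.7 ms per frame total

for demo_fps in (20, 30):
    geometry_ms = 1000.0 / demo_fps          # time spent just drawing the scene
    leftover_ms = frame_budget_ms - geometry_ms
    print(f"renderer at {demo_fps} fps -> {geometry_ms:.1f} ms of drawing, "
          f"{leftover_ms:.1f} ms left for animation, physics, AI, audio, game code")
```

Negative numbers, i.e. the whole frame is blown before the game has done anything at all.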
Take 2 gigabytes of TGA or PNG file sequences and you might very well be able to RAR them down to 1-0.1% of their original file size without any information loss. Compression of data is most definitely the easy part here (since it's easily scalable). Performance and usage limitations are what really matter.
You cannot rely on that type of compression for use in a realtime game. Go ahead and compress some 2+ GB of data and see how long it takes.
So tell me, how the hell can you believe that a thousand times more detail won't require at least a hundred times more disk space? You've all seen what happens to an image if you try to compress it to 1% of its original size. If you want to keep the information, you need the storage space. Now what's the plan, they'll just magically fit it onto a DVD?
That's the biggest issue I have with their tech right now. But, from what I've seen in the demo, it looks like they're using a limited number of heavily instanced assets.
I'm no programmer, nor am I very good at math, but I assume that if each of their objects has a size similar to objects exported from ZBrush/Mudbox, then a small environment like this could fit onto a DVD.
Now, a whole game? I doubt it. But again, it could take years before their tech is finished, so the problem could solve itself with new types of storage like holographic discs.
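For what it's worth, here's a back-of-the-envelope guess at that (every number below is my own assumption, not anything they've stated):

```python
# Could a handful of instanced, sculpt-quality assets fit on a DVD?
# All figures here are guesses for the sake of the estimate.
DVD_BYTES = 4.7e9                    # single-layer DVD
POINTS_PER_ASSET = 10_000_000        # a ZBrush/Mudbox-scale sculpt
BYTES_PER_POINT = 8                  # assuming an octree-ish encoding beats the
                                     # naive ~16 bytes (position + normal + colour)

asset_bytes = POINTS_PER_ASSET * BYTES_PER_POINT
print(f"~{asset_bytes / 1e6:.0f} MB per unique asset, "
      f"roughly {DVD_BYTES / asset_bytes:.0f} unique assets per DVD")
```

So a few dozen unique props, reused everywhere: fine for an island demo, nowhere near a full game's worth of unique content, which is basically the point being made above.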
Take 2 gigabytes of TGA or PNG file sequences and you might very well be able to RAR them down to 1-0.1% of their original file size without any information loss.
But a game world isn't an image sequence. Yeah, you can fit even a 1080p movie into 4-8 GB, but that doesn't mean you could take the same amount of unique texture data and get similar results with no loss of information.
For a hundred minutes that's about, what, 280 GB of data, so the compression ratio down to 8 GB is around 3%, and that's already lossy and probably includes a lot of completely black pixels. I don't think it'd work with 280 GB of textures...
But it's impossible to say when, or if, this will ever be practical from a technical standpoint. What are the bottlenecks? What are the optimization possibilities? How does this even work? We just don't know enough to draw any certain conclusions.
It's still voxels with some kind of acceleration structure, so most of the known drawbacks and bottlenecks should apply just as well.
@Jordan: yes, I'd conclude that it's most likely a preprocessing step, and that that's one of the reasons the world itself is never animated.
No, it's not just the compression that takes a long time; yeah, you do that before you ship. But you have to decompress it as you stream the level in, and that will still take a freaking long time for that much data.
That's the biggest issue I have with their tech right now. But, from what I've seen in the demo, it looks like they're using a limited number of heavily instanced assets.
That's exactly why it works as a demo but is not practical for an entire game.
(Also notice how, because of the acceleration structure, they can't even apply arbitrary rotations, translations, or scaling to the instanced geometry; everything is neatly aligned to tiles and the XYZ axes.)
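That restriction is also exactly what makes the demo so cheap to store. If placements are limited to grid cells and 90-degree turns, each instance is just a few bytes; here's a sketch of the idea (my own toy code, not their actual format):

```python
from dataclasses import dataclass

@dataclass
class GridInstance:
    model_id: int   # which of the handful of unique assets
    tile_x: int     # grid cell index, not a free-form transform
    tile_y: int
    rotation: int   # 0..3 -> multiples of 90 degrees about the up axis

# A 1 km x 1 km "island" of 2 m tiles, one prop per tile:
instances = [GridInstance(i % 30, x, y, (x + y) % 4)
             for i, (x, y) in enumerate((x, y) for x in range(500) for y in range(500))]

# Even stored naively at ~8 bytes each, 250,000 placements is only ~2 MB,
# so the world can look dense while the unique data stays tiny.
print(f"{len(instances):,} instances, ~{len(instances) * 8 / 1e6:.0f} MB of placement data")
```

Which is great for making a demo island look enormous, and tells you nothing about storing a game's worth of unique, freely placed content.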
But again, it could take years before their tech is finished, so the problem could solve itself with new types of storage like holographic discs.
Just take a look at how far our simple poly-based renderers have come in five years: from, say, Battlefield 2 to Battlefield 3. These guys won't stand still either while Euclideon figures out whether they can indeed solve the problems with their voxel tech.
No, it's not just the compression that takes a long time; yeah, you do that before you ship. But you have to decompress it as you stream the level in, and that will still take a freaking long time for that much data.
To be fair, if you render using a sparse octree data structure, you can cut down the amount of data you actually need for a single frame significantly. But you still need to do something like decompress it to an HDD, and a reasonably sized game level with this tech would take hundreds of gigs, probably even more.
So even if the PS4 had a 1 TB drive, you'd be like, okay, next level, let's wait while it erases everything and decompresses the new data to the HDD...
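For anyone who hasn't played with one, here's roughly what "sparse octree" means in this context, as a minimal sketch (my own toy code, certainly not their implementation): nodes are only allocated where there's actually a surface, so empty space costs nothing and a traversal only touches what the camera can see.

```python
class OctreeNode:
    __slots__ = ("children", "color")
    def __init__(self):
        self.children = [None] * 8   # sparse: most of these stay None
        self.color = None            # payload for leaf voxels

def insert(root, x, y, z, color, depth):
    """Walk from the root down to a leaf, creating only the nodes on the path."""
    node = root
    for level in range(depth - 1, -1, -1):
        octant = (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)
        if node.children[octant] is None:
            node.children[octant] = OctreeNode()
        node = node.children[octant]
    node.color = color

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children if c is not None)

DEPTH = 8                        # 256^3 addressable voxels
root = OctreeNode()
for i in range(10_000):          # a thin "surface" of 10k sample points
    insert(root, i % 256, (i * 7) % 256, (i * 13) % 256, (200, 180, 160), DEPTH)

print(f"{count_nodes(root):,} allocated nodes vs {256 ** 3:,} cells in a dense 256^3 grid")
```

The catch is the one raised above: sparse or not, scan-density data for a whole level is still a colossal amount of bytes sitting on disk.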
That's exactly why it works as a demo but is not practical for an entire game.
(Also notice how, because of the acceleration structure, they can't even apply arbitrary rotations, translations, or scaling to the instanced geometry; everything is neatly aligned to tiles and the XYZ axes.)
Just take a look at how far our simple poly-based renderers have come in five years: from, say, Battlefield 2 to Battlefield 3. These guys won't stand still either while Euclideon figures out whether they can indeed solve the problems with their voxel tech.
Yes and yes, I completely agree. My overall point in this thread is that I'm not trying to defend their tech (I don't have any personal attachment to it) but the principle behind it.
It's the idea of being able to use your high-poly meshes directly in games, without having to jump through all the hoops we do now. It could be a great time saver and would allow artists to focus primarily on the art.
If we can achieve that with polygons, then that's fine with me. I'm just hoping that 5 or 10 years from now, artists will no longer have to spend time and energy battling to get their content running at reasonable performance (time and energy that could be better spent elsewhere), or compromising their original vision or the quality of their art due to technical limitations.
For now, however, while we do have tech that allows us higher polycounts (i.e. tessellation), it doesn't make our workflows any smoother.
I doubt we'll get rid of those issues anytime soon, but if there's anything that could remove at least some of them then count me the fuck in :poly136:
To be fair, there's a lot of room for progress in tools, even in the poly world, that could ease the process of getting a high-poly down to a low-poly in game.
To be fair, there's a lot of room for progress in tools, even in the poly world, that could ease the process of getting a high-poly down to a low-poly in game.
That's true, but what worries me a little bit is that we're not seeing significant improvements made to the production pipeline. Even something as old as creating normal maps can be a challenging task. The 3pointstudios guys have been doing some amazing things with their Quality Mode, and yet I haven't heard of it being adopted by major studios (though that could have changed since the last time I looked).
Then again, I'm sure that all those technical issues will solve themselves with time.
I don't know if I missed anything, but how exactly would you unwrap and texture such a huge model? Wouldn't unwrapping it take more time than actually baking onto a low poly? Confused.
The interviewer is John Gatt, who works in brand awareness, marketing and promotion. Make up your own minds there, folks.
For a minute there I thought you said John Galt. Beh!
Anyway, what would everyone LIKE to see out of this as a demo? Animated meshes with actual skin weights (no properly boned meshes so far)? Unique geometry for the terrain (not the 2 m square tiled floor)? Examples of shiny/matte/reflective/etc. surfaces? Realtime skeletons and animated grass and terrain?
I don't know if I missed anything, but how exactly would you unwrap and texture such a huge model? Wouldn't unwrapping it take more time than actually baking onto a low poly? Confused.
Anyway, what would everyone LIKE to see out of this as a demo? Animated meshes with actual skin weights (no properly boned meshes so far)? Unique geometry for the terrain (not the 2 m square tiled floor)? Examples of shiny/matte/reflective/etc. surfaces? Realtime skeletons and animated grass and terrain?
Boob physics of course! That's the only way to measure how good this tech really is.
So if we can convince them to make that tunnel from Duke Nukem with the boobs, but completely out of boobs with physics and hitboxes and have some wet, reflective ones, and have different ones all through the tunnel, they win at graphics forever, is that what you're saying?
I don't know if I missed anything, but how exactly would you unwrap and texture such a huge model? Wouldn't unwrapping it take more time than actually baking onto a low poly? Confused.
You actually don't; each point/voxel contains a color value.
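In other words (my guess at the representation, not their actual format), every sample already carries its colour, like vertex colours taken to the extreme, so there's simply nothing to unwrap:

```python
from dataclasses import dataclass

@dataclass
class PointSample:
    x: float
    y: float
    z: float
    r: int   # 0-255, baked from the scan or sculpt
    g: int
    b: int
    # note: no (u, v) coordinates and no texture page -- nothing to unwrap

stone_chip = PointSample(1.25, 0.0, -3.5, 142, 131, 120)
print(stone_chip)
```

The flip side is that every bit of surface detail has to be paid for in points rather than in a cheap, reusable tiling texture.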
It's the idea of being able to use your highpoly meshes directly in games, without having to go through all the hoops and loops like we do now. It could be a great time saver and would allow artists to focus primarily on art.
That's assuming that all of the material work is done on the high poly. The tools aren't quite there yet. Generally the high poly is used for just surface height detail and possibly some rough diffuse work. The tools to check materials in game don't really sync up with the viewports and shaders in the sculpting apps.
Your spec could look perfect in ZBrush, but once in game it looks different, forcing you to jump in and out of several apps just to view the final material. That long lag time between iterations is a real drag, and a roadblock that a few very dedicated and talented people have been working very hard to remove.
If we can achieve that with polygons then it's all fine with me. I'm just hoping that in 5 or 10 years from now, artists will no longer have to spend time and energy battling with getting their content to run at a reasonable performance; time and energy that could be spent better elsewhere. Or compromising their original vision or quality of art due to technical limitations.
Even if this tech takes off, I doubt it will be limitless. That's part of being an experienced game artist: knowing your boundaries and working as well as you can within the restricted space. A good game artist won't design some amazing piece only to get slapped down by restrictions or hobbled by limitations.
A good game artist knows how much sand is in the sandbox and which corner the cat hid its prizes in.
For now however, while we do have the tech that allows us to have higher polycounts (i.e. tesselation) it doesn't make our workflows any smoother.
Tech always outpaces tools and user-friendly workflows. Right now tessellation is still in the distant future; it's not like the 360 will support it any time soon. The PS3 could possibly do it, but I don't think it was designed to. It will take a new round of hardware for the tech to take hold, and with the way Sony and MS are dragging their feet, that will be a while =/ It hasn't even taken hold in the PC market, which is light years ahead of the consoles.
Whenever tessellation takes hold, it will probably be a smoother addition to the pipeline than normal maps were. That shit was hideous: buggy, slow, full of incomplete programmer-esque tools packed with errors and glitches. We're just now getting to the point where the tools have caught up to the tech...
I doubt we'll get rid of those issues anytime soon, but if there's anything that could remove at least some of them then count me the fuck in :poly136:
They pretty much want to invent a parallel industry that competes with or replaces the current one. It's not just a re-imagining of how we create art (that would probably be the easiest transition). It's a change to the way the hardware is developed, the way games operate on a core level, and how they are delivered.
It's not like Epic, id, or anyone else with current 3D game tech could write in support for this not-so-new method. The way Bruce Dell is talking, everyone would have to throw everything out and start over, on all of it.
Bruce: You can make a game with unlimited detail!!!1
Industry: Cool, how do you make a game with it?
Bruce: That's not our job; we're a technology company, not a games company.
Industry: I'm not asking you to make a game, just show us how we would make a game.
Bruce: Well, that's up to you really, you just need to throw everything out and start from scratch.
Industry: Current dev time on a game is as long as it is with all of the tools advancements largely taken care of. So we ignore all of the advancements that have allowed us to take the focus off of tech dev and focus on gameplay, design, and art direction?
Bruce: Yea.
Industry: Ummm, we just spent the last 10-15 years smoothing out those kinds of bumps, and now you want us to do it again?
Bruce: Now you're talkin! Make the checks out to Bruce D-E-L-L.
Industry: Thanks, but no thanks. How about you reinvent the wheel and we'll see if it rolls any better than the one we have.
Bruce: but... I... and you... the... money... I... ... FINE! I'll be back! /cape twirl
Their demo is running on a single-core laptop processor.
In addition to that, their search algorithm makes it so that only the points facing you are visible. No other points are rendered, and that is why they can do this... making the number of points per scene consistent and predictable, with LoD settings that reduce point complexity as an object gets further away.
I think people are thinking too much in terms of traditional voxels, where a model is filled with points... and Unlimited Detail doesn't work like that. The technology might be based on voxels, but it then goes on to solve many of the issues with voxels and real-time rendering.
It isn't that complex when you look at the actual tech.
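As I understand the claim, the per-frame cost is bounded by the screen, not the scene: one "found" point per pixel, however many points exist in the world. Here's a toy version of that idea (the actual search/traversal is precisely the part they haven't shown):

```python
WIDTH, HEIGHT = 320, 240

def find_point_for_pixel(px, py, world):
    # Stand-in for the undisclosed search: return the nearest point whose
    # projection covers this pixel. Here it's just a dictionary lookup.
    return world.get((px, py), (0, 0, 0))

# A fake "world": one candidate point per pixel position, colour only.
world = {(x, y): (120, 110, 100) for x in range(WIDTH) for y in range(HEIGHT)}

framebuffer = [[find_point_for_pixel(x, y, world) for x in range(WIDTH)]
               for y in range(HEIGHT)]

# Work per frame ~= WIDTH * HEIGHT lookups, no matter how dense the source data;
# whether each lookup can be made cheap enough is the open question.
print(f"{WIDTH * HEIGHT:,} lookups per frame, independent of scene point count")
```

Whether that per-pixel search can actually be made fast enough, and how the data behind it gets stored and streamed, is exactly what the rest of this thread is arguing about.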
Replies
Additionally, his hope that artists will once again sculpt and carve items and then scan them is absolutely hilarious. Hmm... take a month to create a full maquette, or just model it in half a week? Ultimately I have the feeling that, as a hobbyist, he actually knows little of the professional game scene, and that's why there are such poor PR decisions evident in the presentation.
It basically goes: "But can you actually rotate things arbitrarily? Isn't the strength basically just instancing a ton of data?"
And the answer often goes:
- "That is a lie, look, we are realtime!"
- "You don't just scan the shape but the texture and EVERYTHING"
- "yes everything"
Another factually incorrect statement made for the sake of wowing his audience, considering they're not scanning material properties.
And you know, more factual lies:
It also doesn't help that people who have no tech knowledge, and have only played games without even knowing what a polygon is, keep slinging fighting words and slander back and forth on both sides, as if by doing so they will become some kind of champion of truth.
All I can say is I HOPE what the guy is doing is true; that would be Epic (pun intended). If not, then I guess there will be two people more disappointed in one person than the whole internet could ever be.
I was waiting for him to at least say "Hey, why is everything in this world built on a grid made up of 90-degree angles?" or something to that effect.
Personally, I hope it has real applications and brings some damn industry back to Australia. lol :P
Personally, I do wish them luck. It seems like they are passionate about this thing, so I will just wait and see if they can eliminate potential drawbacks.
Some people mentioned that the environment looks terribly tiled, but I wonder if it's really such a big issue. Depending on how this thing is supposed to work, perhaps it would be possible to create full levels in an external app and simply export them to their "engine".
In any case, I do wish them luck. Even if the chances are slim, if they could free us from the constraints of the HP => LP process, they could really stir things up. Tessellation might offer similar visual quality (as shown by Unigine), but it does not remove the need to create LP models, do the bakes, set up UVs and LODs, fix the vert normals, and do all that technical jibber-jabber. So in principle, what they are aiming for is a good thing and could be a huge time saver, if they are successful.
So maybe they do have something revolutionary, maybe they do not. For now I will remain cautiously optimistic. It's not like it will happen overnight anyway.
It doesn't take a tech guru to call bullshit on someone parroting "unlimited detail!" over and over in response to every criticism, when it's both disingenuous and irrelevant.
The tech could be a quantum leap forward, but the lack of any transparency combined with the repeated buzzwords leaves only two real options. Either they're a naive, starry-eyed company with a midas touch, or they're snake-oil salesmen.
You make it sound like this interview actually proved or answered any questions.
The only thing, really, was a stronger confirmation that it's realtime.
The way they brushed off Notch and Carmack was laughable; they didn't address a single technical point that Notch made.
I would have been happier if the interviewer had actually asked some hard questions rather than parroting everything Dell said.
I was talking about that realtime demo they showed. It doesn't answer any technical questions, but it proves that the video they showed a week ago was not prerecorded, as some people on the net had suggested. So there is something in there that's up and running, regardless of what that "something" really is.
Unless the interviewer was in league with them and it was an elaborate fake :poly130: The plot thickens.
And dfacto is right on the money:
"The tech could be a quantum leap forward, but the lack of any transparency combined with the repeated buzzwords leaves only two real options. Either they're a naive, starry-eyed company with a midas touch, or they're snake-oil salesmen."
The interviewer is John Gatt, who works in brand awareness, marketing and promotion. Make up your own minds there, folks.
Which wasn't something anyone with half a brain was doubting; we wanted answers (and still do) to the real questions.
The plot truly thickens...
I heard he murdered like 300 people.
No one ever doubted that the demo itself was real.
The problem is that the demo fails to show how they plan to overcome the well-known drawbacks inherent in this well-known technology.
But I won't get into any explanations again. Just wait and see how they will somehow still fail to solve these issues for another 5 years at least.
I doubted it was realtime.
So tell me, how the hell can you believe that a thousand times more detail won't require at least a hundred times more disk space? You've all seen what happens to an image if you try to compress it to 1% of its original size. If you want to keep the information, you need the storage space. Now what's the plan, they'll just magically fit it onto a DVD?
It's no surprise to see gamers get hoodwinked, but actual game devs should be smarter than that.
Having massive amounts of detail would kill any ISP, and if it went MMO, any PC out there without a server on standby would be crushed. That's why I at least hope the idea can be applied to smaller things at lower quality (as many have mentioned: plants and other knick-knacks at that scale).
I wonder if the guy didn't simply take voxels and change a few things about them so he could call them his own. And why the bloody fuck does he still call them atoms?!
http://forum.beyond3d.com/showthread.php?t=47405
Also, sometime later:
Take 2 gigabytes of TGA or PNG file sequences and you might very well be able to RAR them down to 1-0.1% of their original file size without any information loss. Compression of data is most definitely the easy part here (since it's easily scalable). Performance and usage limitations are what really matter.
Them saying "There are no drawbacks, the tech just isn't complete yet" could be applied to rasterization, raycasting, and even raytracing too. It's just not a compelling argument to ignore whatever drawbacks and limitations exist with this technology. Does this take less than a couple of milliseconds per frame to run? If not, it's no better than most of the "realtime" stuff that comes out of academic research. And getting 20-30 fps on what we can only guess is a best-case scenario (a scene created by them to showcase specific features of their tech) on an i7 is not going to cut it.
But it's impossible to say when, or if, this will ever be practical from a technical standpoint. What are the bottlenecks? What are the optimization possibilities? How does this even work? We just don't know enough to draw any certain conclusions.
You cannot rely on that type of compression for use in a realtime game. Go ahead and compress some 2+ GB of data and see how long it takes.
Come on, that was a joke.
That's the biggest issue I have with their tech right now. But, from what I've seen in the demo, it looks like they're using a limited number of heavily instanced assets.
I'm no programmer, nor am I very good at math, but I assume that if each of their objects has a size similar to objects exported from ZBrush/Mudbox, then a small environment like this could fit onto a DVD.
Now, a whole game? I doubt it. But again, it could take years before their tech is finished, so the problem could solve itself with new types of storage like holographic discs.
But a game world isn't an image sequence. Yeah, you can fit even a 1080p movie into 4-8 GB, but that doesn't mean you could take the same amount of unique texture data and get similar results with no loss of information.
For a hundred minutes that's about, what, 280 GB of data, so the compression ratio down to 8 GB is around 3%, and that's already lossy and probably includes a lot of completely black pixels. I don't think it'd work with 280 GB of textures...
It's still voxels with some kind of acceleration structure, so most of the known drawbacks and bottlenecks should apply just as well.
No, it's not just the compression that takes a long time; yeah, you do that before you ship. But you have to decompress it as you stream the level in, and that will still take a freaking long time for that much data.
That's exactly why it works as a demo but is not practical for an entire game.
(Also notice how, because of the acceleration structure, they can't even apply arbitrary rotations, translations, or scaling to the instanced geometry; everything is neatly aligned to tiles and the XYZ axes.)
Just take a look at how far our simple poly-based renderers have come in five years: from, say, Battlefield 2 to Battlefield 3. These guys won't stand still either while Euclideon figures out whether they can indeed solve the problems with their voxel tech.
To be fair, if you render using a sparse octree data structure, you can cut down the amount of data you actually need for a single frame significantly. But you still need to do something like decompress it to an HDD, and a reasonably sized game level with this tech would take hundreds of gigs, probably even more.
So even if the PS4 had a 1 TB drive, you'd be like, okay, next level, let's wait while it erases everything and decompresses the new data to the HDD...
Yes and yes, I completely agree. My overall point in this thread is that I'm not trying to defend their tech (I don't have any personal attachment to it) but the principle behind it.
It's the idea of being able to use your high-poly meshes directly in games, without having to jump through all the hoops we do now. It could be a great time saver and would allow artists to focus primarily on the art.
If we can achieve that with polygons, then that's fine with me. I'm just hoping that 5 or 10 years from now, artists will no longer have to spend time and energy battling to get their content running at reasonable performance (time and energy that could be better spent elsewhere), or compromising their original vision or the quality of their art due to technical limitations.
For now, however, while we do have tech that allows us higher polycounts (i.e. tessellation), it doesn't make our workflows any smoother.
I doubt we'll get rid of those issues anytime soon, but if there's anything that could remove at least some of them then count me the fuck in :poly136:
That's true, but what worries me a little bit is that we're not seeing significant improvements made to the production pipeline. Even something as old as creating normal maps can be a challenging task. The 3pointstudios guys have been doing some amazing things with their Quality Mode, and yet I haven't heard of it being adopted by major studios (though that could have changed since the last time I looked).
Then again, I'm sure that all those technical issues will solve themselves with time.
For a minute there I thought you said John Galt. Beh!
Anyway, what would everyone LIKE to see out of this as a demo? Animated meshes with actual skin weights (no properly boned meshes so far)? Unique geometry for the terrain (not the 2 m square tiled floor)? Examples of shiny/matte/reflective/etc. surfaces? Realtime skeletons and animated grass and terrain?
Check the latest Mudbox 2012 texturing videos.
Boob physics of course! That's the only way to measure how good this tech really is.
http://www.youtube.com/watch?v=GmtIopU7PM8
And the whole thread just went down the drain... :poly122:
You actually don't; each point/voxel contains a color value.
Your spec could look perfect in ZBrush, but once in game it looks different, forcing you to jump in and out of several apps just to view the final material. That long lag time between iterations is a real drag, and a roadblock that a few very dedicated and talented people have been working very hard to remove. Even if this tech takes off, I doubt it will be limitless. That's part of being an experienced game artist: knowing your boundaries and working as well as you can within the restricted space. A good game artist won't design some amazing piece only to get slapped down by restrictions or hobbled by limitations.
A good game artist knows how much sand is in the sandbox and which corner the cat hid its prizes in. Tech always outpaces tools and user-friendly workflows. Right now tessellation is still in the distant future; it's not like the 360 will support it any time soon. The PS3 could possibly do it, but I don't think it was designed to. It will take a new round of hardware for the tech to take hold, and with the way Sony and MS are dragging their feet, that will be a while =/ It hasn't even taken hold in the PC market, which is light years ahead of the consoles.
Whenever tessellation takes hold, it will probably be a smoother addition to the pipeline than normal maps were. That shit was hideous: buggy, slow, full of incomplete programmer-esque tools packed with errors and glitches. We're just now getting to the point where the tools have caught up to the tech...
They pretty much want to invent a parallel industry that competes with or replaces the current one. It's not just a re-imagining of how we create art (that would probably be the easiest transition). It's a change to the way the hardware is developed, the way games operate on a core level, and how they are delivered.
It's not like Epic, id, or anyone else with current 3D game tech could write in support for this not-so-new method. The way Bruce Dell is talking, everyone would have to throw everything out and start over, on all of it.
Bruce: You can make a game with unlimited detail!!!1
Industry: Cool, how do you make a game with it?
Bruce: That's not our job; we're a technology company, not a games company.
Industry: I'm not asking you to make a game, just show us how we would make a game.
Bruce: Well, that's up to you really, you just need to throw everything out and start from scratch.
Industry: Current dev time on a game is as long as it is with all of the tools advancements largely taken care of. So we ignore all of the advancements that have allowed us to take the focus off of tech dev and focus on gameplay, design, and art direction?
Bruce: Yea.
Industry: Ummm, we just spent the last 10-15 years smoothing out those kinds of bumps, and now you want us to do it again?
Bruce: Now you're talkin! Make the checks out to Bruce D-E-L-L.
Industry: Thanks, but no thanks. How about you reinvent the wheel and we'll see if it rolls any better than the one we have.
Bruce: but... I... and you... the... money... I... ... FINE! I'll be back! /cape twirl
In addition to that, their search algorithm makes it so that only the points facing you are visible. No other points are rendered, and that is why they can do this... making the number of points per scene consistent and predictable, with LoD settings that reduce point complexity as an object gets further away.
I think people are thinking too much in terms of traditional voxels, where a model is filled with points... and Unlimited Detail doesn't work like that. The technology might be based on voxels, but it then goes on to solve many of the issues with voxels and real-time rendering.
It isn't that complex when you look at the actual tech.