
Unlimited Detail


Replies

  • Zack Fowler polycounter lvl 11
    I'm going to be very optimistic for a moment here. Let's consider the possibility that 4 years from now, they will have managed to put together a full rendering pipeline that includes the following:

    1. Full support for static lighting that doesn't look like ass.
    ---a) This means different copies of the same asset can have different lighting, which is noticeably absent from their demos.
    ---b) This also means support for specular lighting, which is also absent from their demos.
    2. Dynamic lighting at all, period. Completely absent right now.
    3. The ability to place assets at arbitrary XYZ coordinates. Right now everything is very conspicuously gridded.
    4. The ability to rotate and scale assets arbitrarily. Right now every copy of an asset is facing the same exact direction and scaled the same exact size.
    5. Particle effects and translucent materials.
    6. Modern post-processing chains.
    7. Animation, whether through polygonal skeletal mesh support or a new voxel-based animation system.

    If they pulled off all of that in 4 years, it would be incredibly impressive. And it still wouldn't be a game engine... it'd be a render pipeline. But anyway. Suppose they managed to pull all that together.

    What would polygon-based rendering look like at that point, 4 years from now? For one thing, tessellation support will most likely be widespread. For another, the sheer number of polygons that graphics cards can crunch will be MUCH higher than now. There will be no more 8-sided-cylinder trees in games of the next generation - if anything, they will be handled so that up close they get heavily tessellated, to the degree that a tessellated tree and a voxel tree might be very hard to tell apart.

    And again, those requirements above? That's just to match the quality of lighting and shaders currently available to polygons. It just so happens that lighting and shaders are two of the biggest focal points for polygon-based renderers right now, with stuff like the UDK Samaritan demo putting as much effort into dynamic reflection shaders, improved depth of field, etc. as into geometry. Even if voxel-based rendering caught up to current lighting and shaders, it would still be far behind the upcoming improvements to lighting and shaders being developed for polygon-based renderers.

    The question isn't really whether voxel-based renderers will ever be viable... the question is whether they will ever really outperform polygon-based renderers. Catching up isn't enough; they need to leapfrog, and to a degree that makes it worth retraining an entire production team.
  • Richard Kain polycounter lvl 18
    I'm still a little confused about why everyone keeps referring to Voxels. From everything I've heard, the technology that this company is pitching isn't voxels, and any similarities are just a coincidence. I was under the impression that voxels were a kind of a 3D "pixel" based on incremented scene graphs. What they are pitching here are point clouds, which are essentially just collections of 3D coordinates. (without the interconnecting normals used in polygonal modeling)

    It makes sense to me that a point cloud would take up less space than a comparable 3D model, as it wouldn't have to store the extra information necessary for triangle definitions.
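    To put that in concrete terms, here's a rough, purely illustrative comparison; this is my own guess at a layout, not anything the company has published:

    [code]
    // Illustrative only -- my own guess at a layout, not Euclideon's format.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Point {                       // one "atom": position + surface attributes
        float   x, y, z;                 // position
        float   nx, ny, nz;              // normal
        uint8_t r, g, b;                 // color
    };

    struct Vertex { float x, y, z, nx, ny, nz, u, v; };

    struct Mesh {                        // indexed triangle mesh
        std::vector<Vertex>   vertices;  // surface attributes, like the points above
        std::vector<uint32_t> indices;   // 3 per triangle -- the connectivity a point cloud skips
    };

    int main() {
        std::printf("bytes per point   : %zu\n", sizeof(Point));
        std::printf("bytes per vertex  : %zu\n", sizeof(Vertex));
        std::printf("extra bytes per triangle (indices): %zu\n", 3 * sizeof(uint32_t));
    }
    [/code]

    Of course, the cloud needs vastly more samples than the mesh needs vertices to describe the same surface, which is where the size argument cuts back the other way.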

    I'm a little surprised at the almost violent reaction from most of you. Such rancor over a few guys playing around with a different approach to 3D rendering. If nothing comes of it, then nothing comes of it. Why all the anger?
  • Zack Fowler polycounter lvl 11
    Because their marketing pitches are deliberately misleading and condescending, basically.
  • Ged interpolator
    Didn't the game Outcast have polygon characters in a voxel world? That was in about 1999.

    [images: outcastd007_640w.jpg, b27.jpg]
  • Richard Kain polycounter lvl 18
    Zack Fowler wrote: »
    Because their marketing pitches are deliberately misleading and condescending, basically.

    But it's a marketing pitch. It's specifically aimed at potential investors, not game industry insiders. They aren't using that video to try to pitch this to game artists, game developers, or even game publishers. The technology hasn't reached a point where it would be viable in practical application within the video game industry. They know that.

    They created a dumbed-down, overly simplistic video with extremely obvious comparisons so that they can show it to potential investors who don't understand game development. The whole point was to highlight the most obvious differences between what they are working on, and the current approach to geometric rendering.

    I suppose my experience at dealing with marketing and advertising has me thinking about this sort of thing in a different fashion. I can automatically filter out marketing speak. I do the same thing for press releases from video game executives. This sort of thing doesn't upset me anymore because I can naturally tune out the parts that I know "aren't for me."

    And when I do tune that out, I see a rendering technology that is better optimized for multi-core processing than the traditional polygon model. Given the direction that most chip manufacturers are taking these days, I would say that this is a forward-thinking approach. I'm not going to dismiss it out of hand just because of a bit of spin.
  • Zack Fowler polycounter lvl 11
    They've gone 8 years without showing us a demo in which multiple copies of a static object are so much as rotated differently from each other. Where is the promise in this if they haven't bothered addressing even the most basic needs of a modern game rendering pipeline?

    Being able to render multiple tiles of extremely detailed geometry with the same exact lighting, rotation, scale, and translation offset may be mathematically better optimized for multi-core processing, but it does hardly anything to demonstrate practical viability.

    Arbitrary rotation, offset, and scale are such basic things to do for almost any other method of 3D rendering that going so long without them strongly implies that this is an inherent limitation of their system, and their videos are just a bunch of smoke and mirrors to obfuscate these major limitations.

    This thread isn't about raging against voxel techniques, or point cloud techniques -- it's about these particular goofs who keep presenting the same stuff we've seen from them over and over with grandiose promises but no meaningful gameplan for carrying them out.
  • eld polycounter lvl 18
    Ged wrote: »
    wasnt the game outcast polygon characters in a voxel world? that was in about 1999

    Yeah, perfect use of both technologies. Interestingly enough, their world was tile-based:

    [image: shamazaar.jpg]
  • Keg polycounter lvl 18
    Might have missed it, but no one seems to have mentioned Lionhead's Mega Meshes. Basically the same idea as the MegaTexture technique, from what I understand.

    [ame]http://www.youtube.com/watch?v=M04SMNkTx9E[/ame]
  • Richard Kain polycounter lvl 18
    Zack Fowler wrote: »
    This thread isn't about raging against voxel techniques, or point cloud techniques -- it's about these particular goofs who keep presenting the same stuff we've seen from them over and over with grandiose promises but no meaningful gameplan for carrying them out.

    1. They are a small company, not a major corporation. They have limited time and resources. You can't expect them to be finished in a few short months, let alone a few short years. Experimental technical developments take time. It took Valve 5 years to create Half-Life 2, and yet you expect this company to revolutionize rendering technology in less time?

    2. They are not a video game company, by their own admission. And even though they are using games as an example, it is quite likely that they don't intend to licence their technology strictly to game companies. Having much faster rendering for point clouds would have numerous applications in other industries, especially those that already regularly use point-cloud data.

    3. They aren't the only ones who feel that the traditional 3D rendering pipeline is behind the times. There are a lot of different companies and developers who are exploring alternative approaches to 3D rendering. And there is a lot of research going into developing more efficient software rendering systems as opposed to the GPU-powered systems currently in use. A lot of this is fueled by multi-core development. And even more of it is fueled by the growing popularity of wireless and mobile devices. (where a separate GPU is not always practical)
  • Zack Fowler polycounter lvl 11
    Months? Where did you even get that from? These guys have been at it for years and years, and make the same promises and show pretty much the same stuff over and over. Address that problem for me. Address the problem of lacking absolutely fundamental things like rotation, translation, and scale for me. You also might want to reconsider suggesting that 8 years is less time than 5 years.

    Your point number 3 is irrelevant. By all means, I am more than fine with people exploring alternative approaches to 3D rendering.

    edit: lolmath
  • Keg polycounter lvl 18
    Ganemi wrote: »
    I wonder if Voxels were used in Noire for the facial animations. Is that even possible?

    ANSWER ME!!

    wtf? NO.

    They used morph targets on a mesh.
  • eld polycounter lvl 18
    Richard Kain wrote: »
    1. They are a small company, not a major corporation. They have limited time and resources. You can't expect them to be finished in a few short months, let alone a few short years. Experimental technical developments take time. It took Valve 5 years to create Half-Life 2, and yet you expect this company to revolutionize rendering technology in less time?

    I recall Carmack revolutionizing rendering tech quite a few times :P

    But seriously, at the risk of repeating myself: understand exactly what they're doing and you'll see why it isn't as flawless as they make it out to be.

    It's one giant tradeoff.
  • Mark Dygert
    Ganemi wrote: »
    I wonder if Voxels were used in Noire for the facial animations. Is that even possible?

    ANSWER ME!!

    http://www.rockstargames.com/lanoire/#!/video:6341
  • Mark Dygert
    Richard Kain wrote: »
    I'm still a little confused about why everyone keeps referring to Voxels.
    Right, it is a point cloud, and I've talked about how they might be able to achieve better results if they used fewer points and drew faces between the few points they keep... oops, now we're back to polygons, heh.

    The idea he talks about in the Kotaku explanation might actually help to cull unseen faces/verts in traditional polygon games... maybe they should look at complementing games and riding alongside instead of firebombing the industry and flipping desks over?

    The examples of voxel-based trials were more of an example of how to promote your project without looking like a douchebag. Also, Unlimited Detail is not only fighting against current tech like polygons but also against voxels; currently they're losing to voxels, which are losing to polygons... They've got a lot of ground to cover and aren't really able to keep up.
  • r_fletch_r polycounter lvl 9
    Keg wrote: »
    wtf? NO.

    They used morph targets on a mesh.

    pretty sure it was streamed mesh data.
  • kat polycounter lvl 17
    Richard Kain wrote: »
    1. They are a small company, not a major corporation. They have limited time and resources. You can't expect them to be finished in a few short months, let alone a few short years. Experimental technical developments take time. It took Valve 5 years to create Half-Life 2, and yet you expect this company to revolutionize rendering technology in less time?
    They received a $2 million AU investment in October 2010, 10 or so months ago; plenty of time to hire some artists and do it right. They first applied for a patent back in 2004. In one of the interviews he's given, he said they've been working on the tech for about 15 years. That's quite some time to be 'developing' a project, and right now they're not exactly short of money.
  • Wrath polycounter lvl 18
    A 1024x1024 2D texture is 4MB uncompressed. Now take that to the third dimension: a 1024x1024x1024 volume is about 4GB uncompressed for just color, and then you also need surface normals and some other info. For the kind of detail seen in that video you'd need at least one 1024x1024x1024 volume per 4 meters or so. You can try to compress that all you want, but it's still way too much data for an actual game. If they use points instead of voxels that might ease up on the size a little, but it's still the same problem, plus they mention indexed searches for rendering, which makes things even worse.
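    A quick back-of-envelope in code, assuming 4 bytes per element (RGBA8); the per-4-meters density is my own rough guess from the demo, so treat the totals as illustrative only:

    [code]
    // Back-of-envelope check of the figures above, assuming 4 bytes per element (RGBA8).
    #include <cstdio>

    int main() {
        const double bytesPerElement = 4.0;                                // assumption
        const double tex2D = 1024.0 * 1024.0 * bytesPerElement;            // 2D texture
        const double vol3D = 1024.0 * 1024.0 * 1024.0 * bytesPerElement;   // 3D volume
        std::printf("1024^2 texture: %.1f MB\n", tex2D / (1024.0 * 1024.0));
        std::printf("1024^3 volume : %.1f GB\n", vol3D / (1024.0 * 1024.0 * 1024.0));

        // One volume per 4 m cube: even a modest 100 x 100 x 16 m play space
        // would need 25 * 25 * 4 = 2500 such volumes for color alone.
        std::printf("volumes for 100x100x16 m: %.0f\n", (100.0 / 4) * (100.0 / 4) * (16.0 / 4));
    }
    [/code]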

    I think people are still confusing this with typical rendering engines. Nothing here is rendered in the usual sense. Everything is already stored as an 'atom', which is essentially a position in space and an RGB value made up of color, pre-calculated lighting and shadows, etc. Nothing is calculated in realtime. All that's being done is that an algorithm figures out which "atom" corresponds to each pixel on screen, and then draws it.
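    Nobody outside the company has published the actual search algorithm, so treat this as a naive stand-in only: the dumbest possible version of "find the atom behind each pixel" is to march a ray through an occupancy grid and take the first filled cell. Their claim is essentially that they can do this kind of lookup through a compressed, indexed structure fast enough for every pixel, every frame.

    [code]
    // NAIVE stand-in for "which atom corresponds to this pixel" -- not the actual
    // Unlimited Detail search, whose details have never been published.
    #include <array>
    #include <cstdio>
    #include <optional>

    constexpr int N = 64;                        // toy 64^3 occupancy/color grid
    struct Vec3 { float x, y, z; };
    struct Atom { unsigned char r, g, b; bool filled; };

    std::array<Atom, N * N * N> grid{};          // zero-initialized: all empty

    std::optional<Atom> firstAtomAlongRay(Vec3 origin, Vec3 dir) {
        // Step the ray in small increments and return the first filled cell.
        // A real renderer would use a hierarchical structure to skip empty space
        // instead of brute-force stepping like this.
        for (float t = 0.0f; t < N * 2.0f; t += 0.25f) {
            int x = int(origin.x + dir.x * t);
            int y = int(origin.y + dir.y * t);
            int z = int(origin.z + dir.z * t);
            if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N) continue;
            const Atom& a = grid[(z * N + y) * N + x];
            if (a.filled) return a;
        }
        return std::nullopt;                     // ray hit nothing: background pixel
    }

    int main() {
        grid[(32 * N + 32) * N + 32] = {200, 50, 50, true};    // plant one red atom
        auto hit = firstAtomAlongRay({0, 32, 32}, {1, 0, 0});  // ray aimed straight at it
        if (hit) std::printf("hit atom: %d %d %d\n", hit->r, hit->g, hit->b);
    }
    [/code]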

    The more I hear from these guys, the more convinced I am that he's talking out of his ass.
    "The present polygon system has got quite a few problems, but not in terms of graphics. Polygons are not really scalable between platforms – if I were to make a character on a PlayStation 3, I can't put him on the Nintendo Wii because he uses too many polygons, so I have to completely rebuild him. Imagine we weren't doing a polygon game, say we were doing a 2D game, if I drew a character on the PlayStation, he's just a bitmap image – this can easily be rescaled. You could do it in Microsoft Paint! ‘Infinite Detail' data is like a 2D bitmap image in that rescaling its size is easy, whereas polygons can't scale like that.

    "The big thing is – if you make a game using the present polygon system, you have to rebuild it to rescale it. You don't have to do that with Unlimited Detail.

    Yeah.
  • Richard Kain polycounter lvl 18
    eld wrote: »
    I recall Carmack revolutionizing rendering tech quite a few times :P
    Over the course of fifteen years. And based on an iterative cycle. (always building off of the work that had already been done) And let's face it, as forward-thinking as John Carmack may be, he didn't attempt anything this fundamentally different.
    But seriously, at the risk of repeating myself: understand exactly what they're doing and you'll see why it isn't as flawless as they make it out to be.

    It's one giant tradeoff.

    I don't view it as a trade-off so much as a new beginning. Of course this technology isn't feature complete. And frankly, it probably won't be when they release it for licencing. Again, you have to sort what this technology actually is from the marketing-speak. This IS NOT A GAME ENGINE. It is a new approach to rendering technology. It will be up to other companies to figure out how it can be applied to games.

    You might as well scoff at Ogre3D for not having built-in networking code.
  • Zack Fowler polycounter lvl 11
    I think I might scoff if Ogre3D forced you to use a uniform grid and didn't let you rotate anything ever.
  • Mark Dygert
    Wrath wrote: »
    "The present polygon system has got quite a few problems, but not in terms of graphics. Polygons are not really scalable between platforms – if I were to make a character on a PlayStation 3, I can't put him on the Nintendo Wii because he uses too many polygons, so I have to completely rebuild him. Imagine we weren't doing a polygon game, say we were doing a 2D game, if I drew a character on the PlayStation, he's just a bitmap image – this can easily be rescaled. You could do it in Microsoft Paint! ‘Infinite Detail' data is like a 2D bitmap image in that rescaling its size is easy, whereas polygons can't scale like that.

    "The big thing is – if you make a game using the present polygon system, you have to rebuild it to rescale it. You don't have to do that with Unlimited Detail.
    Yeah.
    There is a lot of work being done to make engines and assets scalable. Has he completely glossed over id Tech and Unreal running on an iPad?

    Simplygon Skeletal Mesh Simplification
    http://udn.epicgames.com/Three/DevelopmentKitBuildUpgradeNotes.html#July%202011%20UDK%20Beta%20Upgrade%20Notes

    [image: July_image05.jpg]

  • arrangemonk polycounter lvl 15
    Probably they're storing their data as Bink videos, that could be reasonable.
    A 1024x1024 video with 1024 frames isn't that big, though you would need an extra grayscale video for alpha.

    These people look to me like those "nah, I'm not telling, you could copy me" kiddos.
  • Zack Fowler polycounter lvl 11
    Shh, Vig... the potential investors might hear you. Keep it on the down low.
  • Richard Kain polycounter lvl 18
    Mark Dygert wrote: »
    There is a lot of work being done to make engines and assets scalable. Has he completely glossed over id Tech and Unreal running on an iPad?

    Yes, but even that is a LOD method that assumes a base high-poly mesh with maybe 10,000 polygons. Even with that approach, you are limited to that 10,000 poly ceiling.

    What if you were using a Z-Brush sculpted model as the basis for that LOD instead? What if your reference mesh had millions of polys?

    The point of what they are attempting to achieve is to raise the granularity of general 3D rendering. To have the real-time rendering itself using high-density sculpted point clouds instead of "fudging" the complexity through shortcuts in lighting. The tessellation approach used in DirectX 11 can help to make things look smoother, but it can't add details that weren't already there. Wouldn't it be nice to make an extremely detailed digital sculpture, and be able to pop it right into a real-time rendering system without reducing the density of the detail?
  • Zack Fowler polycounter lvl 11
    Richard Kain wrote: »
    The tessellation approach used in DirectX 11 can help to make things look smoother, but it can't add details that weren't already there.

    I think maybe you don't understand how upcoming tessellation tech is going to work. It can take something from a flat plane to a rough cobblestone street. Hint: it's not just a meshsmooth.
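    The displacement half of that is easy enough to sketch on the CPU; this is not real DX11 hull/domain shader code, just an illustration of the idea: subdivide a flat patch and push each new vertex along its normal by a height sampled from a map, which is how a flat plane becomes cobblestones without the cobbles ever being modeled.

    [code]
    // CPU-side sketch of tessellation + displacement: a flat quad subdivided and
    // displaced by a height function. Real DX11 does this in hull/domain shaders;
    // heightField() stands in for sampling a displacement map.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vertex { float x, y, z; };

    float heightField(float u, float v) {
        // Stand-in for a cobblestone height map lookup.
        return 0.1f * std::sin(u * 40.0f) * std::sin(v * 40.0f);
    }

    std::vector<Vertex> tessellateAndDisplace(int segments) {
        std::vector<Vertex> verts;
        for (int j = 0; j <= segments; ++j) {
            for (int i = 0; i <= segments; ++i) {
                float u = float(i) / segments, v = float(j) / segments;
                // The flat plane lies in XZ; its normal is +Y, so displacement goes in Y.
                verts.push_back({u, heightField(u, v), v});
            }
        }
        return verts;
    }

    int main() {
        auto coarse = tessellateAndDisplace(2);    // what the artist authored
        auto fine   = tessellateAndDisplace(64);   // what the GPU sees up close
        std::printf("coarse: %zu verts, fine: %zu verts\n", coarse.size(), fine.size());
    }
    [/code]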
  • equil
    hey look unlimited detail for free.
    [image: unlimiteddetail.png]

    there's no reason to care about unlimited detail. i want unlimited destruction.
  • eld polycounter lvl 18
    Richard Kain wrote: »
    Over the course of fifteen years. And based on an iterative cycle. (always building off of the work that had already been done) And let's face it, as forward-thinking as John Carmack may be, he didn't attempt anything this fundamentally different.

    Except, he has: http://www.tomshardware.com/reviews/voxel-ray-casting,2423-4.html
    I don't view it as a trade-off, so much as a new beginning. Of course this technology isn't feature complete. And frankly, it probably won't be when they release it for licencing. Again, you have to sort what this technology actually is from the marketing-speak. This IS NOT A GAME ENGINE. It is new approach to rendering technology. It will be up to other companies to figure out how it can be applied to games.

    You might as well scoff at Ogre3D for not having built-in networking code.

    I've never complained about the lack of game engine features; I know what a game engine is.

    I've complained about the company not going into detail about the fact that their tech will lack features that a rendering engine should have.
    The tessellation approach used in DirectX 11 can help to make things look smoother, but it can't add details that weren't already there.

    Actually, the basis of geometry shaders is that you can add geometry after you've sent the mesh to the graphics card, which means you get more geometry without any additional memory being used. That also means you can do whatever you wish with that extra geometry, including adding detail that wasn't there to start with.
  • Mark Dygert
    [ame]http://www.youtube.com/watch?v=-uavLefzDuQ[/ame]

    But what about DirectX11 tessellation? Same level of detail, built with existing technology that doesn't require a massive retooling of the industry.

    This is the tech he is competing with... not tech from 5-6-7-8 years ago that he keeps comparing his stuff to. If the development is going to take another 5-15 years then he needs to do a lot more heavy lifting to even catch up.
  • Wrath polycounter lvl 18
    Richard Kain wrote: »
    Yes, but even that is a LOD method that assumes a base high-poly mesh with maybe 10,000 polygons. Even with that approach, you are limited to that 10,000 poly ceiling.

    What if you were using a Z-Brush sculpted model as the basis for that LOD instead? What if your reference mesh had millions of polys?

    The point of what they are attempting to achieve is to raise the granularity of general 3D rendering. To have the real-time rendering itself using high-density sculpted point clouds instead of "fudging" the complexity through shortcuts in lighting. The tessellation approach used in DirectX 11 can help to make things look smoother, but it can't add details that weren't already there. Wouldn't it be nice to make an extremely detailed digital sculpture, and be able to pop it right into a real-time rendering system without reducing the density of the detail?

    DX11 tessellation is not simply mesh-smoothing. You can define exactly where geometry detail is put in 3D space through a non-normalized vector, stored per pixel of a map, that displaces the tessellated geometry.

    Of course, you're limited by texture size and compression, but it's still much less memory intensive and more flexible than an insanely detailed point cloud.
  • Richard Kain polycounter lvl 18
    eld wrote: »
    I've never complained about the lack of game engine features; I know what a game engine is.

    I've complained about the company not going into detail about the fact that their tech will lack features that a rendering engine should have.

    Did Doom have all of the features that you believe a rendering engine should have? Did Quake? Did Half-Life 1?

    We're talking about taking things back to the beginning, and starting over from there. We're talking about a fundamentally different approach to 3D real-time rendering. This is an attempt to change how it is done from the ground up. All of the features you are talking about were added to 3D rendering over a decade and a half of development. By an entire industry worth of different programmers. And yet you expect a single, small team of guys experimenting with a different approach to bring their solution up to those standards, before they even attempt to make any money off of it.

    Frankly, I'm impressed enough with the progress they have made. And I'm quite interested to see what other developers could pull off after playing around with some of this technology. And I see some significant potential for this kind of technology in the portable space, where the CPU is sometimes the only horsepower you have.

    I'm not saying that this company in particular is going to be the ones to shake things up in 3D rendering. It could be that they won't get any additional funding, and will just fade away. But I won't fault them for an attempt at something different.
  • vargatom
    Richard Kain wrote: »
    1. They are a small company, not a major corporation. They have limited time and resources. You can't expect them to be finished in a few short months, let alone a few short years.

    They've been around since 2003, they had plenty of time.

    In 2008 the programmer posted on Beyond 3D, turned out he didn't even know what CPU cache memory is (!). He's been working in a vacuum, having no idea about how game engines work, what they need, what hardware acceleration is.

    It's so far from being a viable solution that it's just not worth all this attention.
    Lots of other, far more qualified programmers - from Carmack to Nvidia researchers - have looked into voxels, so there's probably a reason why it still hasn't picked up momentum, right?
  • vargatom
    Keg wrote: »
    wtf? NO.

    They used morph targets on a mesh.

    One morph target, one color and one normal map, to be precise - one each for every single frame of the animation. Thousands and thousands of them altogether, totally compressed of course.
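    For anyone wondering what playback of one morph target per frame even means in practice, here's a minimal sketch of blending vertex positions between two consecutive captured frames. This is just the generic technique, not Team Bondi's actual pipeline:

    [code]
    // Minimal sketch of per-frame morph target playback: blend vertex positions
    // between two consecutive captured frames. Illustrative only.
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Blend every vertex between frame A and frame B by factor t in [0, 1].
    std::vector<Vec3> blendFrames(const std::vector<Vec3>& a,
                                  const std::vector<Vec3>& b, float t) {
        std::vector<Vec3> out(a.size());
        for (size_t i = 0; i < a.size(); ++i) {
            out[i] = { a[i].x + (b[i].x - a[i].x) * t,
                       a[i].y + (b[i].y - a[i].y) * t,
                       a[i].z + (b[i].z - a[i].z) * t };
        }
        return out;
    }

    int main() {
        std::vector<Vec3> frame10 = {{0, 0, 0}};   // toy data: one vertex, mouth closed
        std::vector<Vec3> frame11 = {{0, 1, 0}};   // same vertex, mouth open
        auto mid = blendFrames(frame10, frame11, 0.5f);
        std::printf("halfway: %.2f %.2f %.2f\n", mid[0].x, mid[0].y, mid[0].z);
    }
    [/code]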
  • vargatom
    Richard Kain wrote: »
    And let's face it, as forward-thinking as John Carmack may be, he didn't attempt anything this fundamentally different.

    As I recall, between Quake 3 and Doom 3 he locked himself away from the internet and everyone for some weeks and wrote about a dozen experimental rendering engines. Including voxel based stuff. Only after that did he decide to go with Doom3's normal mapping and stencil shadows.

    And he did sparse voxel octree based research years ago, too, he talked about it in 2008 at quakecon.
  • Richard Kain polycounter lvl 18
    Mark Dygert wrote: »
    But what about DirectX11 tessellation? Same level of detail, built with existing technology that doesn't require a massive retooling of the industry.

    It distresses me slightly that everyone keeps bringing up DirectX 11 tessellation as an example of an alternative solution. DirectX is a Microsoft initiative, and is tied firmly to only Microsoft platforms. As someone who insists on cross-platform development, I'm a little shocked that you all seem so keen on a technology that's Microsoft exclusive.

    Moreover, DX11 tessellation is still going to be limited by PC hardware. It is not a consequence-free approach, and is not a method designed with multi-threaded processing in mind. It's still GPU-constrained, and only those who have already upgraded to Windows 7 have any chance of seeing it in action. And at the end of the day, it is one more "trick" to try to get around the limitations of polygon rendering.

    I'm still confused as to why you all seem so resistant to this idea. There's nothing wrong with pursuing a different approach. Surely your objections aren't based solely on the sound of the presenter's voice?
  • vargatom
    Richard, seriously, you've been totally hoodwinked by this guy. Try to read up on this stuff before you believe all the b*****t and defend it against people who know what this is really about. It's not like the opinions against this 'tech' are pulled out of thin air.
  • vargatom
    Seriously, what are the limitations of polygon rendering that are so completely surpassed in this video?

    If it's just about rendering instances of 5-6 objects, the same tech demo could be done with far better results, for example with rotated objects and dynamic lighting, while keeping at least the same level of detail...
  • equil
    Richard Kain wrote: »
    It distresses me slightly that everyone keeps bringing up DirectX 11 tessellation as an example of an alternative solution. DirectX is a Microsoft initiative, and is tied firmly to only Microsoft platforms.
    geometry shaders and tessellation work just as well in OpenGL, and in fact I believe it had them first. DX11 is just the current word for "nextgen".
    I'm still confused as to why you all seem so resistant to this idea. There's nothing wrong with pursuing a different approach. Surely your objections aren't based solely on the sound of the presenter's voice?

    again, they still haven't told anyone how this works. What we know: point clouds are involved, compressed data structures are involved, raycasting is involved.

    that's all they've told us, and that's not nearly enough to evaluate the technology.
  • eld polycounter lvl 18
    Richard Kain wrote: »
    It distresses me slightly that everyone keeps bringing up DirectX 11 tessellation as an example of an alternative solution. DirectX is a Microsoft initiative, and is tied firmly to....

    [ame]http://www.youtube.com/watch?v=ZojsR4zwjt8[/ame]
  • leslievdb polycounter lvl 15
    So they get a lot of money -> need proof they are actually doing something with the money -> create a buzz around the project by claiming they invented the wheel -> investors only care about the project getting hyped without knowing what it's really about -> hey guys, another year of spending that money before another vague video is due!

    It all sounds good but why be so fishy if it's legit...
  • eld polycounter lvl 18
    I would draw a parallel to the company currently developing flying cars.

    http://www.moller.com/

    http://en.wikipedia.org/wiki/Moller_Skycar_M400
  • vargatom
    equil wrote: »
    that's all they've told us, and that's not nearly enough to evaluate the technology.

    But we already know the most likely drawbacks:
    - incredibly high background storage requirements (up to hundreds of gigabytes)
    - completely static game world (otherwise you lose all the significant speed advantages of octrees)

    There are other, secondary issues as well, like complex curved surfaces and complicated shaders like reflective materials, but the above ones are problematic enough...
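    To put rough numbers on the storage point above: even a surface-only point set (no volumes) gets enormous the moment you stop instancing everything. Assuming ~16 bytes per point for position, color and normal, which is generous and ignores any index overhead:

    [code]
    // Back-of-envelope for unique (non-instanced) point data. Assumes ~16 bytes
    // per point (position + color + normal); real storage also needs index/octree
    // overhead on top. Illustrative only.
    #include <cstdio>

    int main() {
        const double area_m2         = 100.0 * 100.0;    // a single 100 m x 100 m area
        const double points_per_m2   = 1000.0 * 1000.0;  // one point per square millimetre
        const double bytes_per_point = 16.0;             // assumption, see above
        const double totalBytes = area_m2 * points_per_m2 * bytes_per_point;
        std::printf("unique surface points: %.0f\n", area_m2 * points_per_m2);
        std::printf("raw storage: %.0f GB\n", totalBytes / 1e9);
    }
    [/code]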
  • Mark Dygert
    Richard Kain wrote: »
    It distresses me slightly that everyone keeps bringing up DirectX 11 tessellation as an example of an alternative solution. DirectX is a Microsoft initiative, and is tied firmly to only Microsoft platforms.
    Last time I checked they controlled a pretty sizable share of the gaming hardware and software. Does he (and you?) honestly think that the next round of consoles is going to forsake what they've built upon, put everything on hold, and wait for this guy to catch up and prove his revolutionary claims? If they thought his ideas were worth it, they would either buy him out and we would never hear from him again, or they would develop something very similar.

    "I hate M$, stop drinking the kool-aid" is not a solid reason to trumpet this technology. Yea yea I run firefox and I hate IE, and I like the interface a mac has but you can't dismiss that there are somethings that big money and a big company can put their resources behind and deliver on a reasonable time table. But your argument makes about as much sense as me writting my own linux based os just so I'm free of "teh mans shackles"... heh.

    I'm for David vs Goliath but seriously when David talks a bunch of smack for a few years then shows up with a shoelace and a lump of poo I have to wonder if he knows what he's getting himself into.
    I'm still confused as to why you all seem so resistant to this idea. There's nothing wrong with pursuing a different approach. Surely your objections aren't based solely on the sound of the presenter's voice?
    It really doesn't help... But moreover, I'm just not amazed, given their lack of proof.

    "You can make game out of this!"
    "Oh really?"
    "Yea so you going to make it or what?"
    "Umm a bit busy building this other game, have fun with that, let me know how it turns out"
    "Wait... give me money..."
  • Richard Kain polycounter lvl 18
    vargatom wrote: »
    Richard, seriously, you've been totally hoodwinked by this guy. Try to read up on this stuff before you believe all the b*****t and defend it against people who know what this is really about. It's not like the opinions against this 'tech' are pulled out of thin air.

    Well, here's what I DO know.

    The traditional polygon rendering pipeline currently in use was designed specifically for single-thread operation. This is why an entire industry sprang up around designing and selling independent 3D cards. The current rendering pipeline needed an additional processor to really push performance. (since CPUs at the time had only a single core and could only run a single processing thread) This solution worked for the most part, and the industry embraced it. Initiatives like OpenGL and DirectX came about because of this approach to rendering.

    A few years ago, hardware and chip designers realized that they were reaching a point of diminishing returns on clockspeeds and the size that they could shrink their processors down to. The speed at which computers would run had been increasing rapidly for years, but that processing power was about to hit a brick wall. That was when they came up with the idea of introducing multiple cores into the same CPU and running multiple processing threads using the same CPU. This would allow them to broaden computational power without forcing the processors to run any faster.

    That just about brings us up to the present day, and introduces a whole new problem that multiple industries are still trying to grapple with. We have snazzy new multi-core processors that allow for more computational power than ever before. However, well over 90% of the software in existence was not built for multi-threaded processing. And the traditional rendering pipeline is one of the biggest, and most backwards offenders.

    The game industry is not alone in their difficulty in coming to grips with multi-core programming. Most industries have yet to really explore the potential on display. And multi-threaded processing requires a step back in terms of programming methodology. One of the best languages for multi-threaded coding is C. And a lot of programmers are just unwilling to go back that far.

    I'm very interested to see the potential performance that can be milked from multi-threaded processing. But tightly adhering to the traditional rendering pipeline is going to prevent gaming from taking advantage of this new technology. As such, I am always excited to see anyone attempting a different approach to rendering 3D graphics.
  • vargatom
    Richard Kain wrote: »
    Well, here's what I DO know.

    The traditional polygon rendering pipeline currently in use was designed specifically for single-thread operation.

    False. GPUs run hundreds of threads in parallel at any given clock cycle. Probably even more by now; I know they have many dozens of shading units (complex arithmetic units that do the actual math).
    The current rendering pipeline needed an additional processor to really push performance. (since CPUs at the time had only a single core and could only run a single processing thread)

    No, 3D hardware became widespread because it was specialized, it had actual wiring to perform a lot of small simple calculations like texture coordinate interpolation, filtering etc. Meanwhile they were able to skip a lot of circuits because they did not need to support code branching and other more complex features.
    CPUs were at a disadvantage because more than 90% of the die space was spent on the extra circuits and only the rest was left to do the math. So they had to use these few fully functional arithmetic units on all the small, simple calculations too and that wasted a lot of clock cycles.

    Today GPUs are still specialized to a high level, they still don't have random memory access and such, while they still have hundreds of tiny fixed function arithmetic units to speed up small calculations.

    In short GPUs sacrifice flexibility by specialization, gaining a lot of efficiency for the same silicon space.

    However, well over 90% of the software in existence was not built for multi-threaded processing.

    True.
    And the traditional rendering pipeline is one of the biggest, and most backwards offenders.

    Completely false. Unless you want to do raytracing, graphics is one of the easiest tasks to massively parallelize. Again, GPUs run hundreds of concurrent threads.
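    The "easy to parallelize" part is simple to see: in rasterization-style shading no pixel depends on any other pixel, so the image can be split across as many threads as you like. A toy CPU illustration only (a GPU does the same thing with thousands of hardware threads):

    [code]
    // Toy illustration: no pixel depends on any other, so shading splits across
    // threads trivially. A GPU does the same with thousands of hardware threads.
    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    constexpr int W = 256, H = 256;
    std::vector<unsigned char> framebuffer(W * H);

    unsigned char shadePixel(int x, int y) {
        return static_cast<unsigned char>((x ^ y) & 0xFF);   // stand-in "shader"
    }

    int main() {
        const int numThreads = static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
        std::vector<std::thread> workers;
        for (int t = 0; t < numThreads; ++t) {
            workers.emplace_back([t, numThreads] {
                for (int y = t; y < H; y += numThreads)       // each thread takes every Nth row
                    for (int x = 0; x < W; ++x)
                        framebuffer[y * W + x] = shadePixel(x, y);
            });
        }
        for (auto& w : workers) w.join();
        std::printf("shaded %d pixels on %d threads\n", W * H, numThreads);
    }
    [/code]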
    I'm very interested to see the potential performance that can be milked from multi-threaded processing.

    You've been seeing it ever since the Voodoo 2.
  • Mark Dygert
    ... and this might be the next big thing. I just don't think it will come from these guys. They seem to be moving too slowly, and their PR is too horrible, to move the ball down the field all that far in a reasonable amount of time. How he managed to get the money that he did out of the Australian government is amazing, especially given their view of games as a cancer rather than an advancement.

    Personally I find the guy so annoying I really hope he falls flat on his face... but hey if he makes it big then awesome, he certainly worked hard for it and overcame many obstacles, a few he placed in his own path...
  • Richard Kain polycounter lvl 18
    vargatom wrote: »
    Completely false. Unless you want to do raytracing, graphics is one of the easiest tasks to massively parallelize. Again, GPUs run hundreds of concurrent threads.

    Isn't raytracing one of the features often touted as the future of rendering technology? And isn't the specialized nature of 3D rendering dependent on the current approach to 3D rendering? Doesn't my argument still hold water despite my lack of technical knowledge, and didn't you just prove it for me?

    The extreme specialization of GPUs is based on the traditional approach to 3D rendering. And if multi-core CPUs are now much more capable when it comes to running numerous threads, it would make sense that GPUs would become largely redundant. A CPU that can also run hundreds of concurrent threads could perform rendering functions, even within the traditional rendering pipeline.

    Your entire argument is still based on the assumption that there is one, and only one way to approach 3D rendering.
  • vargatom
    Richard Kain wrote: »
    Isn't raytracing one of the features often touted as the future of rendering technology?

    It is a far more complex question.
    We in offline rendering use a lot of raytracing by now, but with very very heavy optimizations so that we can still run it in parallel. We have render nodes with 8 CPUs, each with 4 threads AFAIK, and they work just fine.

    The general problem with raytracing is that when a reflected ray is bounced back into the scene, a lot of optimizations like view frustum culling, occlusion culling, delayed loading etc. stop working. Basically, you have to try to keep the entire scene with all objects and textures in memory, or you're going to get stalled.
    The past 5 years in offline rendering have been spent on overcoming these problems, and today almost everyone uses some level of raytracing. Computers got faster in the meantime, and that helped too.

    Current GPUs only concern themselves with the actual polygon they're rendering, and since they know all about it, they can pre-load the necessary textures. They also receive their polygons neatly organized from the game engine. This won't work with raytracing, but there are good technology advancements in offline renderers which they can use - once we really start to move on to raytracing.

    I don't expect that to happen in the next console generation though. For now, GPUs do quite well with reflections, global illumination and shadows without raytracing. Battlefield 3, Crysis 2 and Epic's new Unreal Tech look quite fine and they actually work, too.
    And isn't the specialized nature of 3D rendering dependent on the current approach to 3D rendering?

    Elaborate please, I don't get what you mean.
    Doesn't my argument still hold water despite my lack of technical knowledge, and didn't you just prove it for me?

    I don't think so. I'd rather say your argument has been thoroughly invalidated.

    By the way, can you argue about quantum physics in any reasonable way, too?
    The extreme specialization of GPUs is based on the traditional approach to 3D rendering.

    Less and less, actually. One of the advancements was adding more and more programmability; GPUs can, for example, run the very same sparse voxel octrees that this demo probably uses, and they do it just fine.
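    For anyone curious what a sparse voxel octree actually looks like in memory: each node stores a mask of which of its eight children exist, and only existing children are allocated, so empty space costs almost nothing. One common layout, sketched below; not a reconstruction of any particular engine:

    [code]
    // Minimal sparse voxel octree node: an 8-bit mask saying which children exist,
    // plus an index to the first child; leaves carry a color. One common layout,
    // not a reconstruction of any particular engine.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct SvoNode {
        uint8_t  childMask;    // bit i set => child i exists
        uint8_t  isLeaf;       // leaves store a color instead of children
        uint8_t  r, g, b;
        uint32_t firstChild;   // index of this node's first child in the pool
    };

    // Count how many children a node has (population count of the mask).
    int childCount(const SvoNode& n) {
        int c = 0;
        for (int i = 0; i < 8; ++i) c += (n.childMask >> i) & 1;
        return c;
    }

    int main() {
        std::vector<SvoNode> pool;
        pool.push_back({0b00000011, 0, 0, 0, 0, 1});   // root: only 2 of 8 children allocated
        pool.push_back({0, 1, 200, 50, 50, 0});        // leaf child 0 (red)
        pool.push_back({0, 1, 50, 200, 50, 0});        // leaf child 1 (green)
        std::printf("root has %d of 8 children allocated; node size = %zu bytes\n",
                    childCount(pool[0]), sizeof(SvoNode));
    }
    [/code]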
    And if multi-core CPUs are now much more capable when it comes to running numerous threads, it would make sense that GPUs would become largely redundant.

    Intel thought just like that, but Larrabee still didn't work out.
    In practice GPUs are still a LOT faster.
    A CPU that can also run hundreds of concurrent threads could perform rendering functions, even within the traditional rendering pipeline

    Yeah, but general purpose is always going to be less efficient than specialized.
    But what does it all have to do with the complete scam that this video is? Or are you just trolling?

    Your entire argument is still based on the assumption that there is one, and only one way to approach 3D rendering.

    No, but at least I have some related knowledge to base my arguments on, unlike you. Again, are you trolling or trying to get into some meaningful discussion?
  • Richard Kain polycounter lvl 18
    vargatom wrote: »
    I don't think so. I'd rather say your argument has been thoroughly invalidated.

    Well, the thrust of my argument was that the current method of rendering 3D graphics was partially entrenched by the direction that hardware manufacturers had taken with GPU design. You disproved some of the technical specifics, but you thoroughly verified that my main point was valid. In fact, you pointed out that GPUs represented specialized hardware that had been custom designed for the traditional approach to 3D rendering.

    Part of the reason why the industry is reluctant to abandon the current rendering pipeline in favor of something different is because the current hardware is designed around it.
    GPUs can, for example, run the very same sparse voxel octrees that this demo probably uses, and they do it just fine.

    Again, not actually voxels. Whatever, no one is concerned with thinking outside the voxel box anyway.
    Intel thought just like that, but Larrabee still didn't work out.
    In practice GPUs are still a LOT faster.

    And because something doesn't work out once, that means it's not worth pursuing.

    Yeah, but general purpose is always going to be less efficient than specialized.
    But what does it all have to do with the complete scam that this video is? Or are you just trolling?

    I thought the advantage of having multiple cores on the same processor was that the communication between them was much faster? Wouldn't the performance advantage of a 3D card be significantly degraded by the bottleneck between the CPU and the GPU? And wouldn't running all rendering code off of the CPU eliminate that bottleneck?
    No, but at least I have some related knowledge to base my arguments on, unlike you. Again, are you trolling or trying to get into some meaningful discussion?

    (shrug) I find that common sense is far more useful for arguments than technical knowledge. After all, the art of rhetoric is based on influencing others, not proving a point. Besides, you seem more than willing to fill in the blanks. And any discussion is meaningful if the participants find it to be so. I've already learned things that I did not yet know, so I'd say I've already benefited handsomely.
  • vargatom
    Richard Kain wrote: »
    Well, the thrust of my argument was that the current method of rendering 3D graphics was partially entrenched by the direction that hardware manufacturers had taken with GPU design.

    No, you've been arguing about completely different things.

    The correlation is there, though it's more like this: what is the most efficient way of doing 3D rendering on current computer technology?
    (which entails processors, memory bandwidth and background storage as well, so it's not just CPU vs GPU questions)


    For now, rasterization using GPUs has no equal for realtime graphics.


    When rendering times aren't as important, CPUs with their added flexibility take over. The price is an increase of several orders of magnitude - our renders take 30-60 minutes on average.
    In fact, you pointed out that GPU's represented specialized hardware that had been custom designed for the traditional approach to 3D rendering.

    Most efficient approach is the proper term. Traditions aren't as important, it's not like there are a lot of old priests unwilling to change anything because our forefathers used to do so.
    Although there are of course momentum issues, full direction changes will naturally be slower (adopting normal mapping took a few years, for example). But for the right price it would happen - the problem is that no one has managed to come up with anything better.

    Again, not actually voxels. Whatever, no one is concerned with thinking outside the voxel box anyway.

    He himself admitted that it is voxels, but he calls it something else so that the tech appears to be new and can draw more attention.

    And because something doesn't work out once, that means its not worth pursuing.

    It wasn't "once" - research is continuous, there are a lot of universities, game developers, graphics companies who spend on R&D.

    We've already listed the unresolved issues that keep it from being viable, several times. That is why it's still not worth pursuing further.

    I thought the advantage of having multiple cores on the same processor was that the communication between them was much faster?

    Doesn't help with the fact that a general purpose CPU sacrifices about 90% of its transistors to facilitate all those purposes. A specialized processor can do more math from the same amount of transistors, that's not changed by using multiple cores.

    Wouldn't the performance advantage of a 3D card be significantly degraded by the bottleneck between the CPU and the GPU?

    That bottleneck can be overcome in many ways. APIs and hardware have been built around it, there's more room to make things better but PCs aren't that important anyway, as consoles are the more important market for advanced 3D graphics engines.
    And wouldn't running all rendering code off of the CPU eliminate that bottleneck?

    It'd introduce other bottlenecks. Mainly the efficiency issue.

    (shrug) I find that common sense is far more useful for arguments than technical knowledge.

    You are a troll then, indeed.
    I've already learned things that I did not yet know, so I'd say I've already benefited handsomely.

    How about doing it in a more polite way? Asking is a better way than making outrageously stupid arguments and waiting for others to correct you.
  • Zack Fowler polycounter lvl 11
    I think I love you vargatom. Hold me.
  • vargatom
    Yeah, I think I'm gonna need to take a break from this anyway...