You have to format the link without the mobile URL (youtu.be) so it looks more like this, but unfortunately Vanilla Forum software doesn't support timestamped embeds, FML. So you have to copy/paste the URL into a new tab or window.
Virtual texturing generally refers to a mechanism that lets you stream in only the parts of a texture you can actually see, at the mip level you can see them at.
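A toy sketch of that idea - pick the mip level a surface needs from its texel footprint, then work out which single page of that mip has to be resident. The page size and function names here are illustrative, not the DX12 tiled-resources API; in a real renderer the texel footprint comes from UV derivatives in the shader:

```python
import math

PAGE_SIZE = 128  # texels per page side (illustrative page granularity)

def required_mip(texture_size, texels_per_pixel):
    """Pick the mip level where one texel maps to roughly one screen pixel.

    texels_per_pixel: how many base-level texels the current pixel covers
    (in a real renderer this comes from UV derivatives, ddx/ddy)."""
    mip = max(0.0, math.log2(max(texels_per_pixel, 1.0)))
    max_mip = int(math.log2(texture_size))
    return min(int(mip), max_mip)

def page_to_stream(u, v, texture_size, mip):
    """Which page of the chosen mip actually needs to be resident."""
    mip_size = texture_size >> mip
    px = int(u * mip_size) // PAGE_SIZE
    py = int(v * mip_size) // PAGE_SIZE
    return (mip, px, py)

# A distant surface covering ~8 base texels per pixel only needs mip 3,
# and only the one page under the sample point - the rest stays on disk.
print(page_to_stream(0.5, 0.25, 8192, required_mip(8192, 8.0)))  # (3, 4, 2)
```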
Unreal has its own implementation, but everyone else will be building it on top of DirectX 12.
It should be largely transparent to the artist; it really just means you can throw a lot of texture data around more efficiently than with conventional DX11-style streaming.
it looks like they send the terrain to the GPU as a texture and generate the mesh there instead of making a mesh and sending that to the GPU. This isn't exactly revolutionary - in fact, it's generally what you should be doing.
you don't have to store LODs, you can take advantage of your texture streaming system, you get smooth lodding, and you don't have to calculate when your LODs arrive because it's based on mipping - GPUs are far better at handling textures than anything else. There are some disadvantages - like occlusion culling being more difficult - but the advantages far outweigh them. I'm about 99% certain this is how Unreal handles landscapes - it certainly stores them as textures in memory at runtime
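A toy CPU-side sketch of the terrain-as-texture idea. On the GPU the grid positions would be implicit in the vertex shader and only the height texture is real data; the function and layout here are illustrative, not any engine's actual API:

```python
def heightfield_mesh(heights, cell_size=1.0):
    """Turn a 2D height array into grid vertices, the way a terrain
    vertex shader would: the grid positions are implicit, only the
    height texture is real data."""
    rows, cols = len(heights), len(heights[0])
    verts = [(x * cell_size, heights[z][x], z * cell_size)
             for z in range(rows) for x in range(cols)]
    # two triangles per cell, indexing into the implicit grid
    tris = []
    for z in range(rows - 1):
        for x in range(cols - 1):
            i = z * cols + x
            tris.append((i, i + cols, i + 1))
            tris.append((i + 1, i + cols, i + cols + 1))
    return verts, tris

heights = [[0, 1], [2, 3]]
verts, tris = heightfield_mesh(heights)
print(len(verts), len(tris))  # 4 2
```

Nothing but `heights` needs to be streamed; the mesh topology is the same for every patch.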
i think where Star Citizen is being clever is the scale it's working at - it's very, very large
So it's like DX11 tessellation and displacement? I always thought that was more expensive than just doing the mesh and LODs, even if it's simpler to do. Is it the other way around now? As for occlusion culling, it could be done in patches, right? So we'd see unoccluded patches displaced.
Hardware tessellation is modifying a mesh - there's no need for that in this scenario. It'll be a simple, regular heightfield grid with conventional smooth lodding, just like you see in most game engines. The new/clever part is in how they stream texture data.
They mention patches, and it's the obvious way to manage that sort of thing, so I expect you're correct in terms of occlusion culling.
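That "conventional smooth lodding" is often done by geomorphing: each vertex blends between its own LOD's height and the next coarser LOD's height as the patch recedes, so detail fades instead of popping. A minimal sketch, assuming a simple distance-based transition band (names and parameters are illustrative):

```python
def geomorph_height(fine_h, coarse_h, distance, lod_near, lod_far):
    """Smooth lodding for one heightfield vertex: blend between this
    LOD's height and the next coarser LOD's height across a distance
    band, so detail fades in/out instead of popping."""
    t = (distance - lod_near) / (lod_far - lod_near)
    t = min(1.0, max(0.0, t))  # clamp to the transition band
    return fine_h + t * (coarse_h - fine_h)

# halfway through the transition band the vertex sits midway between LODs
print(geomorph_height(10.0, 6.0, distance=150.0, lod_near=100.0, lod_far=200.0))  # 8.0
```

The coarser height is just a sample from the next mip of the same height texture, which is why basing lodding on mipping makes the blend cheap.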
poopipe said: Hardware tessellation is modifying a mesh - there's no need for that in this scenario. It'll be a simple, regular heightfield grid with conventional smooth lodding just like you see in most game engines. The new/clever part is in how they stream texture data.
I'm still not sure I understand the reason. If it's regular grid mesh patches displaced by height textures in real time, that's still a hell of a lot more vertices than a model properly re-meshed around terrain features, with LODs. More data to load, more math to do in real time?
When you're stood on a 1km square tile at 1m resolution, you're handling/rendering a million verts with a static LOD. In the same scenario on a heightfield with smooth lodding, you ditch a shitload of those vertices.
that's before you consider the other reasons
1: you have to get those meshes to the GPU every time they lod
2: you have to generate thousands (if not millions) to cover a planet
3: you have to generate (and store) the lods
4: you have to iterate on all this.
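The vertex numbers above can be sanity-checked with some quick arithmetic. The clipmap-style figures are illustrative defaults, not any particular engine's budget:

```python
def static_tile_verts(tile_m=1000, res_m=1):
    """Verts in one static tile: a 1km tile at 1m spacing is ~1M verts."""
    n = tile_m // res_m + 1
    return n * n

def clipmap_verts(levels=8, ring_size=64):
    """Rough vertex budget for nested LOD rings (clipmap-style):
    each level is a fixed ring_size x ring_size grid at double the
    previous level's spacing, so cost grows with the number of rings,
    not with the area covered."""
    return levels * ring_size * ring_size

print(static_tile_verts())  # 1002001 - one 1 km tile at 1 m, static LOD
print(clipmap_verts())      # 32768 - covers far more ground
```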
Storing it all as textures means you don't have to send shit to the GPU all the time (which is very slow), and you get your LODs for free
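"LODs for free" is nearly literal on the storage side too: a full mip chain only costs about a third extra on top of the base texture, versus authoring and storing separate LOD meshes. A quick check:

```python
def mip_chain_texels(base):
    """Total texels in a full mip chain for a square base x base texture."""
    total, size = 0, base
    while size >= 1:
        total += size * size
        size //= 2
    return total

base = 4096
total = mip_chain_texels(base)
print(total / (base * base))  # ~1.333: the whole LOD chain is ~1/3 extra
```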
Thanks poopipe, it makes sense. Still, a properly tessellated/decimated mesh built around terrain surface features may need far fewer vertices than a regular grid - several times fewer - but my guess is it's still more traffic toward the GPU than just textures.