Hehehe - he's not even agreed to a new UI, and we're designing one...
MoP, that's looking great, may I make a couple of minor suggestions?
The Texture Maps section has a confusing title. Those are the exported data settings, am I right? The title "Texture Maps" makes me think that's where you load your maps in, so why not just change it to "Export settings" or "Texture exports" or something?
The Base output folder, which should be selectable, should also by default be the same folder as the input stuff, or a subfolder of it.
I still don't think you need a section for High Definition Meshes and then another for Low Definition Meshes. "Input meshes" would suffice.
You've probably thought of this already... the width and height for exports could be drop-downs, with standard sizes. Changing one automatically changes the other, unless you turn the constraint off.
Rick, if you can come up with a really good solution for how to choose input meshes and flag them as high and low without losing the ability to sort one from the other, and without cluttering up the interface, I'd be all for it
For the time being I think that 2 separate panels for high and low just make it easier to manage and determine at a glance which meshes you have loaded.
I've updated the pic and post with some of your suggestions (locking aspect ratio, drop-downs for size, better title for rollout), also a mockup of one of the drop-down menus to show how that's been shuffled around from the current version to be more windows-friendly.
MoP - it's to bake from high to low, so having two separate sections seems redundant. It'd be much simpler to have an extra column in one input area (where you now have the colours and textures)...
OR my preferred choice would be that instead of each row being a single asset, it'd be an asset node instead, where you drag in a low and high poly. Choosing which one of those is high/low would be as simple as an HP/LP drop-down, but in 99% of cases the software could determine it automatically based on the file size.
Just to take the drop-down presets and custom config a little further, the size system could be extended to the output naming conventions. It could come with 2 or 3 sets (you've used _lighting and _local), plus custom. Users could define their own preferences and automatically apply them to all fields. This would not only allow for things like _L, _N, but other languages too (_le-normal, _das_purpl-mappen).
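A suffix preset could literally just be a lookup from map type to suffix that gets applied to every output name; a tiny sketch of the idea (all names here are made up for illustration, not anything in xNormal):

#include <map>
#include <string>

// One preset: map type -> filename suffix (e.g. "normal" -> "_N").
using NamingPreset = std::map<std::string, std::string>;

// Hypothetical helper: build an output file name from a base name and a preset.
std::string outputName(const std::string& base, const std::string& mapType,
                       const NamingPreset& preset, const std::string& ext = ".png")
{
    auto it = preset.find(mapType);
    return base + (it != preset.end() ? it->second : "_" + mapType) + ext;
}

// NamingPreset builtIn = { {"normal", "_local"}, {"lightmap", "_lighting"} };
// NamingPreset custom  = { {"normal", "_N"},     {"ao", "_AO"} };
// outputName("crate", "normal", custom) == "crate_N.png"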
Yeah Rick I definitely like the sound of that, but I don't really know how to represent it in a UI.
If you can come up with a nice way of presenting that and organising it all, I'm sure it'd help
"Can I get a job making games?"
"What skills do you have?"
"Oh, I thought I could just be the ideas person. I've got lots of ideas for games."
...
Actually, I'm running out the door for a photosession, so text will suffice I hope.
Anyway, using what you already have, and using dragging instead of browsing, dragging a single model file onto a line adds it as an asset AND turns the line into a Node.
So you start with this:
[ ]
drop something into that empty slot and get
|[HP][+][asset 1]____________________[...]|
|[ ][ ][NO CORRESPONDING ASSET]_____[...]|
|[ ][ ][TEXTURE ASSET SLOT]_________[...]|
You could then browse/d&d an asset onto line two.
You could do it all with Maya-style hyper nodes, but I think that's way too distracting.
MoP's screenshot looks like what I had in mind. But I think I'm gonna move the viewer options (show normals, show wireframe, etc...) into a "Display" menu at the top. I'm also gonna add an option to decide which RGBA channel each output uses (for example, AO in the red channel, thickness in another, convexity in another, etc)... and gonna add a layer system too.
About File->Load/Save: I'm gonna use a custom scene format with all the precomputed data needed. You will be able to import/export, for example, OBJ meshes using File->Import/Export. This is because reading and converting an OBJ each time you render a map is too slow and tedious.
Just one remark... the mesh "scale" should not be done in realtime, just at loading time... that is because scaling a mesh requires updating the spatial structures I'm using (BSP, etc.), so it would be too slow.
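In other words, the scale would be baked into the vertices once at import, and only then would the spatial structure be built; a trivial sketch of that idea (nothing here is actual xNormal code):

#include <vector>

struct Vec3 { float x, y, z; };

// Apply the user's scale once, while loading the mesh.
void applyScaleAtLoad(std::vector<Vec3>& verts, float scale)
{
    for (auto& v : verts) { v.x *= scale; v.y *= scale; v.z *= scale; }
}
// The BSP/octree is then built once over the already-scaled vertices;
// changing the scale later would mean rebuilding that structure, which is
// why it can't be a realtime slider.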
About the file paths/names, I'm gonna allow drag&drop and manual editing (perhaps for the 3.14.0 version too).
About the lowpoly/highpoly sections: they won't exist anymore. Meshes will be loaded and shown in a single list. When projecting, you'll need to specify the meshes to intersect, like Max does (it will use all of them if you don't specify any)... That is because sometimes you don't need projection at all... you just want to bake the AO of the lowpoly (or highpoly) mesh onto itself.
About multiple lights: they're gonna be supported, because you will be able to bake GI lighting into a texture too. Support for this is not very difficult... the new custom rendering will let you define your own rendering shaders like Renderman/MR does... so it's no extra work for me whether you render a normal map, an AO map or GI/lightmaps into the texture... the only difference is selecting a different rendering shader. These shaders could be written in multiple ways and scripting languages.
Custom shaders will be available for the 3D viewer too. I just need to figure out how to integrate the shader params into the UI... probably using some kind of "annotations"... the problem is that GLSL does not support them currently (so I'm gonna need to do it using code "comments"... fugly!).
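One way the comment-based "annotations" could work is to tag each uniform with a structured comment and scan the source for it when the shader is loaded; a rough sketch (the //@ tag format is invented for this example):

#include <regex>
#include <string>
#include <vector>

struct ParamAnnotation { std::string name, widget, defaultValue; };

// Scans GLSL source for lines like:
//   uniform vec3 rgbDiffuse; //@ widget=color default=0.1,0.2,0.3
// and returns what the shader property editor would need to build its UI.
std::vector<ParamAnnotation> scanAnnotations(const std::string& glslSource)
{
    std::vector<ParamAnnotation> params;
    std::regex tag(R"(uniform\s+\w+\s+(\w+)\s*;\s*//@\s*widget=(\S+)\s+default=(\S+))");
    for (std::sregex_iterator it(glslSource.begin(), glslSource.end(), tag), end; it != end; ++it)
        params.push_back({ (*it)[1], (*it)[2], (*it)[3] });
    return params;
}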
About the Linux port... it's almost trivial to do. I just need to change 200-500 lines of code if I plan it well. I won't use Microsoft-Windows-only products anymore (.NET, VStudio, Windows Installer, etc...). I've moved my pipeline to 100% open source and portable software... I can recompile a Linux or Windows version just by changing a few options.
I found Linux quite interesting... It's free, so companies can reduce costs a lot. It's also very efficient at managing multicore CPUs/clusters and uses very little RAM (I hit ctrl+alt+del and got 60Mb on a clean Ubuntu vs 320Mb on Windows).
About the UI: you could use skins, translate/i18n it, change menus, colors, etc... Everything should be customizable and scriptable (like PS actions). I'm still not sure if I'm gonna use GTK+ and Glade or a custom OpenGL UI. If I choose the custom UI system I'm gonna provide a dialog editor too.
Well, currently I'm working on the new 3.14.0 renderer, which will be the base renderer for 4.0. It's focused on super-reduced memory consumption (32Mb vs 2/3Gb) and a 200% speed increase. I'm also preparing a full GPU renderer, but I need to wait for OpenGL 3.0 (which I hope will be released this month).
Once the new 3.14.0 renderer is out I'm gonna start xn4.
I'm playing a bit with subdivision but I got a problem with the ray cage.
The cage is just an extrusion of the lowpoly (subdiv0) mesh. If I subdivide the lowpoly, for example, to subdiv2, then I need to re-create the cage because the topology has changed... So maybe I need to take the subdiv2 mesh's vertex normals and project them until they touch the original subdiv0 cage... then perform the ray tracing as xn3 does. Is that ok?
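For what it's worth, the approach described above could look roughly like this: push every subdivided vertex along its own normal until it hits the original subdiv0 cage and use that hit as the new cage position (rayIntersectCage is only a stand-in for whatever the BSP query ends up being):

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Hit  { bool found; Vec3 point; };

// Stand-in: intersect a ray with the original subdiv0 cage (e.g. via the BSP).
Hit rayIntersectCage(const Vec3& origin, const Vec3& dir)
{
    return { false, origin };   // stub; the real version would query the cage geometry
}

// Rebuild a cage for the subdivided mesh: each subdiv2 vertex is projected along
// its vertex normal onto the subdiv0 cage; ray tracing to the highpoly then
// proceeds exactly as xn3 already does.
std::vector<Vec3> rebuildCageForSubdiv(const std::vector<Vec3>& subdivVerts,
                                       const std::vector<Vec3>& subdivNormals)
{
    std::vector<Vec3> cage(subdivVerts.size());
    for (std::size_t i = 0; i < subdivVerts.size(); ++i) {
        Hit h = rayIntersectCage(subdivVerts[i], subdivNormals[i]);
        cage[i] = h.found ? h.point : subdivVerts[i];  // fall back to the vertex itself
    }
    return cage;
}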
Check boxes to bake Diffuse / Normal / AO in one go, maybe even the possibility to add custom slots where you could add your own map to render by default. Maybe there is a way to do it right now but I don't see any (I liked it in previous versions and kinda miss this).
Just let me put a suffix next to the checkboxes; this way all my objects will have _N or _D at the end of the map name.
Also I'd like to see group baking of some sort. I like to do groups of objects for baking: for example, separate my high res / low res model into different groups that don't intersect with each other (often the highs have a lot of small instanced objects), to avoid baking part by part and to avoid rays penetrating other objects, and at the end I compute the final normal/AO/Diffuse in PS. An option to set up groups and compute the final map for me would be ace. Not sure if I'm clear but I could make a quick example...
Even better, tag objects (by giving them specific names, like a prefix) to identify groups, then import all the groups into xNormal and let it split them up by itself. This way it would calculate the AO on all objects at once but render it separately, with accurate results (right now if I have an object at the very top it's not getting a lot of AO, as the other objects are in another group and the AO isn't calculated on everything). Again, maybe there is a way to do it but no one I know was able to... This would also work for vertex AO, as I use that tool a lot with GREAT results. Maybe an option to subdivide the high before calculating AO, and to save the SBM on top of that, to have really clean vertex AO results?
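The prefix-tagging idea could be as simple as splitting the loaded meshes on the part of the name before a separator and baking each bucket with rays restricted to its own group; a small sketch (the naming rule is just an example):

#include <map>
#include <string>
#include <vector>

// "grp1_arm_high" -> group "grp1"; names without a separator fall into "".
std::map<std::string, std::vector<std::string>>
groupMeshesByPrefix(const std::vector<std::string>& meshNames, char sep = '_')
{
    std::map<std::string, std::vector<std::string>> groups;
    for (const auto& name : meshNames) {
        auto pos = name.find(sep);
        groups[pos == std::string::npos ? "" : name.substr(0, pos)].push_back(name);
    }
    return groups;
}
// Each group would then be baked separately (so rays can't leak into meshes of
// another group) and the per-group maps composited into the final texture.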
Also, a way for xNormal to bake materials that you set up in your 3D proggy... I know it sounds tricky, but the way I do it right now is a long and tiring process... I set my materials in Max on my high to have a base diffuse, then Render To Texture the high mesh after an auto unwrap (sometimes it's a pain, because Max always wants to make everything multi-subs, so when you have multiple objects with multi-subs... anyway...), then export my highs with UVs and bake the diffuse texture onto the low mesh. Why not export from Max (or Maya), maybe just converting the materials to vertex color... that way I wouldn't have to bother with quickly mapping and then baking the diffuse of my highres... I'd just toss some materials on the parts I want and export my highs without any mapping coordinates, but with materials that xNormal could see when baking the low meshes...
Now I'm playing with subdivision and I've hit that cage problem. I don't know how to use the subdiv0 cage... perhaps firing a ray from the lowpoly subdiv1 or 2 until it reaches the subdiv0 cage... but then what? I'm a bit lost. Using a cage with 100k vertices is not great...
I'm not entirely sure I understand what you're asking, but don't see why you'd need to subdivide the game-resolution mesh as I would've thought that you only need subdivision support for the hi-res mesh. It's possible that I'm completely mis-interpreting you though.
Yep, let me explain the problem a bit more. Imagine you have this:
1. Lowpoly mesh without subdivision applied.
2. Highpoly mesh, sculpted with ZBrush.
The problem with the current "height map" is that it's not a real displacement map. To be a real displacement map, and to be useful for reconstructing the sculpted mesh from the lowpoly + displacement map, I need to perform subdivision on the lowpoly model... If not, the height map is just that... heights... not displacement distances.
OK, having said that, imagine this is my lowpoly model unsubdivided (I call this subdiv0) and its cage:
Now I perform subdiv2 on the box, so it ends up like this:
As you can see, the topology has changed... UVs, normals, vertices... all changed.
Both images combined:
Now imagine I want to render the displacement map and use the cage to fire the rays... It's a problem, because the topology of the lowpoly mesh changed!
I'm gonna post some screenshots of xn4 soon on my blog. I've got the Alpha 1 almost done for Linux, Windows and Mac OS X (and OpenSolaris if they solve some problems).
Hey jogshy, I finally started playing around with your program. It's pretty neat. Here is something that I think would help artists, though it may be waaay outside of your design goals for xNormal.
A small window that displays your model in a full DX environment with shader support, that watches your PS files, automatically converts them into PNGs (something that won't take long to compress) and updates the viewport automatically. This viewport could be pinned to always stay in front so you can have it open while having PS fullscreen. You could also rotate the model, have it spin slowly by itself, or zoom in and out.
The main issue with juggling a full 3D program and PS is switching back and forth to see the changes, that and having to create actions in PS to auto-save the updated PS file as a copy in another format that DirectX can understand, in order to use the 3D program's realtime full shader preview.
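That round trip could be automated with something as simple as polling the .psd's timestamp and kicking off a convert-and-reload whenever it changes; a sketch of that loop (the callback is whatever does the PSD-to-PNG/DDS conversion and the viewport refresh; none of this is existing xNormal code):

#include <chrono>
#include <filesystem>
#include <functional>
#include <thread>

namespace fs = std::filesystem;

// Polls the PSD file and fires onChanged whenever Photoshop saves it.
// onChanged would convert the PSD to a PNG/DDS and tell the viewport to reload it.
void watchPsd(const fs::path& psd,
              const std::function<void(const fs::path&)>& onChanged)
{
    auto lastWrite = fs::last_write_time(psd);
    for (;;) {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        auto now = fs::last_write_time(psd);
        if (now != lastWrite) {
            lastWrite = now;
            onChanged(psd);
        }
    }
}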
jogshy what are you doing about the realtime renderer now that you've got it running in OS's other than windows? software & gl? some of the recent stuff you've been pimping's been dx10 specific, so i'm a bit curious.
in any case great news, and glad you've dropped all those dependencies! the update's gonna be slick.
Does it look at all like MoP's mockup, at this stage?
It's a native look-and-feel UI with a 3D viewport embedded (which you can disable completely, so no extra rendering time will be used in case you just want to run the HM2NM tool, etc). I replaced the right-hand collapsible panel with floating/docking windows though... that way you can move the windows around with more freedom, or close them if you want.
jogshy what are you doing about the realtime renderer now that you've got it running in OS's other than windows? software & gl? some of the recent stuff you've been pimping's been dx10 specific, so i'm a bit curious.
The realtime rendering system is completely "abstracted/interfaced/encapsulated" in xNormal 3 (and also in xn4). That means I'm able to plug or unplug any API like OpenGL, Dx9, Dx10, etc.
xn4 is a bit different from xn3 because I'm going to allow the users to define their own shaders and materials.
An xn4 shader is just an XML file with some profile sections (for example DX9 SM3, DX10 and GLSL). Inside these sections you place the .FX (or GLSL) code and the techniques and states (zbuffer, culling, etc)... and there are also some extra XML elements to define the constants and basic Lua-scripted behaviors. For example:
<DX9SM3>
<Script>
gridMaterial = {
-- The user defines here the parameters of the shader.
-- The shader property editor will take a look into all these properties when the user chooses
-- "edit material properties" in xn4
rgbLightDiffuse0 = {0.1,0.2,0.3}, -- default/reset value of 0.1,0.2,0.3. The rgb at start means this attribute is a color
rgbLightDiffuse1 = {0.1,0.2,0.3},
rgbLightDiffuse2 = {0.1,0.2,0.3},
t2dDiffuse = "default.dds",
rtDepth = { width=512, height=512, format=R32F }, -- extra render targets to render
OnPreDraw = function(self)
-- This is used to render shadow maps, etc... For example:
xn4_beginDraw(self.rtDepth); -- specify the render target to use
xn4_activateEffect("DrawShadowMap");
xn4_setTechnique("Draw1");
xn4_setAttribute ( "g_GridTex", self.t2dDiffuse );
xn4_draw();
xn4_endDraw(self.rtDepth);
-- would be possible to access all the meshes in the scene to perform adaptive-shadow maps, etc
end,
OnDraw = function(self) -- This is called to paint the meshes into the framebuffer
-- Activate the FX we want to use
xn4_activateEffect("DrawGrid");
-- List all the lights affecting this mesh and pass their colors to the shader's uniforms/constants.
-- You've read that right... xNormal 4 supports multiple lights, not just one like xn3.
-- The xn4_XXX calls internally set the OpenGL/DX graphics driver's shader uniforms/constants.
for i = 0, math.min(3, xn4_getNLightsAffectingMesh()) - 1 do
local light = xn4_getLightAffectingMesh(i);
xn4_setColorAtt ( "rgbLightDiffuse" .. i, light.col )
end
--setup textures/samplers
xn4_setAttribute ( "g_GridTex", self.t2dDiffuse );
--Draw the mesh.
xn4_draw();
-- If you need multiple RTs or passes, just continue drawing here. Reassign params and
-- call xn4_draw() as many times you need.
-- finish rendering
xn4_endDraw();
end,
OnPostDraw = function(self)
-- this allows computing post effects like SSAO, depth of field, etc.
end
};
</Script>
<FX Name="DrawGrid">
texture2D g_GridTex;
sampler2D GridTexSampler =
sampler_state
{
Texture = <g_GridTex>;
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = NONE;
AddressU = wrap;
AddressV = wrap;
};
void gridVS ( in float4 inPos : POSITION,
in half2 inUV : TEXCOORD0,
out float4 outPos : POSITION,
out half2 outUV : TEXCOORD0 )
{
outPos = mul(inPos,g_mNegZInvCamProjTM); // there are some "global" variables like this one: the concatenated camera-projection matrix
outUV = inUV;
}
half4 gridPS (
in half2 inUV : TEXCOORD0_centroid) : COLOR
{
return tex2D(GridTexSampler,inUV);
}
technique Grid
{
pass P0
{
ZEnable = true;
ZWriteEnable = true;
CullMode = None;
FillMode = Solid;
AlphaTestEnable = false;
AlphaBlendEnable = false;
VertexShader = compile vs_3_0 gridVS();
PixelShader = compile ps_3_0 gridPS();
}
}
</FX>
</DX9SM3>
<GLSL>
<Script>
-- ...Same as before... use Lua script to set up the shader constants, textures, etc... and to edit the shader in the shader property editor.
</Script>
<Technique Name="DrawWireframe">
<VertexShader Name="vsOne">
//Some GLSL code
</VertexShader>
<PixelShader Name="psOne">
//some GLSL code
</PixelShader>
<DepthState>
<ZBufferEnable>true</ZBufferEnable>
</DepthState>
<BlendState>
<AlphaTest Enabled="false"/>
</BlendState>
<RasterState>
<Antialiasing>off</Antialiasing>
</RasterState>
</Technique>
</GLSL>
<DX10>
</DX10>
etc etc
Of course, I'll provide documentation about all this... including the XML schemas for validation.
If the user doesn't want to support a profile... it can be left empty, so the graphics driver will render the mesh using a default shader. The profile sections will allow adding new rendering systems... like DX12 in the future.
I'll support DX9 SM3, DX10, DX11 and OpenGL, but initially I'm implementing just the OpenGL 2 driver because it's the most portable. After all this is done, I also plan to implement a visual node system like UT3's... so the nodes can be compiled automatically to this XML system (or to another "profile" that uses <operation type="add" param1="node128.out.r" param2="node2390.out.a"> instructions).
On the other hand, I also plan to allow custom shaders for the rendering system. The user could choose between some standard ones (normal map, AO map, etc)... but you could make your own render shaders in a similar way to Renderman (gather rays, etc)... you could even bake light maps! It's all gonna be scripted.
Btw, I'm gonna try to support LUAEdit's debugging system... so you can debug the Lua scripts very easily.
And yep... you'll be able to use more than one material per mesh (I call these "face clusters"... they work like 3dsmax's MultiSubObj materials).
You Sir are a god amongst programming men.
xNormal already kicks ass and now you're taking it up another notch or five.
Can hardly wait for what's to come.
(I mean it, kick ass stuff)
Sorry this is a simple Q. How do I set up the renderer to output more than one map type at once? I could have sworn I did it before, but darn if I can remember what I clicked on to choose which maps I want to output at once.
Bump for a good program if nothing else.
Edit: Hmm, I wasn't using the latest version on this computer. Updating to 3.16.3.38 fixed it.
My gosh! This will really rock, Santiago! Looking forward to seeing the node graph built in!
Here is a screenshot showing xn4's shader editor:
Node-based, as promised. You won't need to write a line of code.
I think it's much more intuitive that way, isn't it?
It will be used for both realtime and offline shaders (the node list is currently incomplete... when you select the "offline" mode instead of the "realtime" one, some extra nodes appear... like closestRay, gatherRays, getNearestPhotons, etc).
Ofc, you can scroll the window and move nodes, define groups, perform multiple selections, etc... It's far from complete currently.
I was going to put a preview inside each node... but instead of that, I think I'm gonna just update the selected mesh in the main viewport with the result in realtime. It works as follows: you select a group of the mesh's faces and then you assign a material to them.
On the other hand, perhaps I should add pre-draw, draw and post-draw stages. That way you could use deferred shading, custom shadows, or write to multiple render targets for post-processing effects like SSAO or depth of field... but I'm not sure about that... perhaps it would be better to perform these effects in a fixed pipeline and just let the artist control the framebuffer pass... I really don't want to complicate it excessively... So, probably, I'll create a default material with diffuse, emissive, specular, etc slots, and you will be able to plug some nodes in there (like UE3 does).
OK guys, I'm gonna ask you something important...
A guy on my blog asked about the possibility of using xNormal as an offline renderer for animations (to compute, for example, the AO of a scene's frame).
The way to achieve that is to import/export complete scenes and not only meshes...
Imagine: you load a Shrek scene in Maya... then you export it using the new "SBM" exporter, which saves each mesh's triangle set frame by frame, plus the camera and lights.
Then you load it in xNormal 4, assign the proper materials and render a frame (or bake to textures).
Is that OK with you? Any problems with this approach? I see some:
1. The scene is gonna take up A LOT of space, because all the deformable meshes/patches need to be saved as a triangle "soup". Meshes need to be stored that way because your favourite program can use some non-standard interpolation (like quadric tensors, strange beziers, custom subdivision algorithms, etc).
2. You're gonna rely on my custom scene exporter... but I haven't created an SBM exporter yet for Blender, Lightwave, modo, etc... I'm targeting Max and Maya to start. I could pseudo-implement a Renderman (RIB) scene importer but it won't be 100% compatible and that would be very complex. I also don't like the text-based formats... they're slow to parse and take up a lot of space.
3. What's better? To assign the materials in your favourite modelling/animation program or in xNormal? Assigning them in xNormal is less work for me because I don't need to create "material plugins" for Max/Maya.
Of course, you won't be forced to use this "scene" approach... you'll be able to construct a static scene by importing meshes as you did in xn3.
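Purely to illustrate what "triangle soup per frame, plus camera and lights" would mean, here is a made-up in-memory layout; it is not the real SBM format, just a sketch of why the files would get big:

#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct FrameMesh {                      // one mesh, one frame, already triangulated
    std::vector<Vec3>  positions;       // 3 vertices per triangle, deformation baked in
    std::vector<Vec3>  normals;
    std::vector<float> uvs;             // 2 floats per vertex
};

struct SceneFrame {
    float time;
    Vec3  cameraPosition, cameraTarget;
    std::vector<Vec3>      lightPositions;
    std::vector<FrameMesh> meshes;      // no interpolation needed at render time
};

struct FrameBasedScene {
    std::uint32_t           version;
    std::vector<SceneFrame> frames;     // every frame stores full geometry: hence the size
};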
Hey Jogshy, what great news to see this node-based shader construction! I hope we'll be able to import/export shaders built in xNormal 4 (or even with another shader construction package?!). Would love to be able to share my shaders or give someone else's shader a try.
Yep, well... currently I'm just porting the existing "fixed pipeline". Once it's working I could expand the system.
Btw, if you have any comments about the "scene importer/exporter" I'd like to hear them. I want to solve the problem of using animations with xNormal for films/videos.
I would suggest making the 3d preview menus less cluttered.
EDIT: Never mind, I see it and it looks way better.
I agree. The new 3D viewport is gonna be super-clean. Just a fullscreen button + 3 buttons for the camera movement. Everything else is managed via dialogs.
Do you think it's easy to port over shader knowledge from something like FXComposer/UE3 into xNormal4?
For realtime shaders, my intention is to allow the artists to use GLSL/HLSL OR the node-based tool.
If you don't like to write code then use the node tool.
If you don't like the node-tool, write code
Since it allows both options, you'll be able to use an external tool like RenderMonkey or Mental Mill to generate an HLSL shader and use it in xn4.
For offline shaders currently you have two options: use C++ or use the shader-node tool.
Changing topic, I think I'm definitely gonna support "frame-based scenes". That way you could use xn4 in films or to preview character animations. Btw, xn4 will be able to bake lightmaps too, since it's also a GI (biased/unbiased) renderer.
You could write an interpolation check for exporting an animation format:
First save keyframes, then sample the point in the middle between frames and check whether your interpolation matches the original; if not, save another keyframe taken from the original (maximum 4 times per gap), because at some point the original and your interpolation differ only minimally and it's not noticeable. Or:
Guess the interpolation by picking samples between 2 keyframes, see which interpolation method works best, and then only save keyframes with the resulting interpolation method (this solution should work in most cases).
I only recommend this because the per-frame snapshot method uses loads of memory, and it's only important if you're rendering multiple layers in different software that have to match exactly, to avoid white lines or errors like that.
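The midpoint test could be a small recursive refinement, capped so each gap gets at most a few extra keys; a sketch of the first variant (sample() stands in for evaluating the original animation in the host app):

#include <cmath>
#include <functional>
#include <map>

// Adds extra keyframes between (t0,v0) and (t1,v1) only where linear
// interpolation drifts from the original curve by more than `tolerance`,
// recursing at most `maxExtra` levels per gap (the "maximum 4 times" above).
void refineGap(double t0, double v0, double t1, double v1,
               const std::function<double(double)>& sample,
               std::map<double, double>& keys, double tolerance, int maxExtra)
{
    if (maxExtra <= 0) return;
    const double tMid   = 0.5 * (t0 + t1);
    const double lerped = 0.5 * (v0 + v1);
    const double actual = sample(tMid);
    if (std::fabs(actual - lerped) > tolerance) {   // interpolation doesn't fit
        keys[tMid] = actual;                        // save an extra keyframe
        refineGap(t0, v0, tMid, actual, sample, keys, tolerance, maxExtra - 1);
        refineGap(tMid, actual, t1, v1, sample, keys, tolerance, maxExtra - 1);
    }
}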
You could make "xBake", where you just expose the baking engine via C DLL bindings, and it's fed by the end user with the needed data.
That way it can be integrated into any app, and it also removes the need for you to implement tons of viewport renderers, file formats... as people can build the UI/viewport/data loading themselves. And I am sure open-source/free frontends to your closed-source baker would pop up quickly... which would take quite some work off you. And xBake would be totally focused on being a blazing fast bake engine.
xBake would be a pipeline's dream! And from the sound of it, the stuff that would cause the most work for xNormal 4, and the reasons you were about to quit, were mostly related to "frontend" issues... which would then be moved outside your "main" responsibility.
And xBake would take away a lot of the "pressure" to deliver the ambitious stuff you had in mind for xNormal. You could still do all that, but xBake would allow other coders to support you, while keeping xBake closed source, and you could make sure license-wise that credit has to be given or "paid" or whatnot...
Basically split xNormal into xEngine and xBake, releasing xBake as a pure "data in, no window needed" bake engine. Then others can do the dirty work of integrating it into Max and whatnot (always crediting you), and meanwhile you can have more fun with xEngine, and once that is further along you can marry both again for your own new xNormal.
One issue would be that many programs using your baker could appear, and that might move your name out of the spotlight a bit. But maybe with mandatory logos/splash screens and so on, one could make sure it's still clear whose great and fast bake engine is being used.
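Sketched as a flat C-callable surface it might look like the header below; every name is hypothetical, invented only to make the suggestion concrete:

/* Hypothetical "xBake" C interface - nothing like this exists today. */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct xbake_context xbake_context;    /* opaque handle */

xbake_context* xbake_create(void);
void           xbake_destroy(xbake_context* ctx);

/* The host app feeds raw triangles; file formats, UI and viewports stay its problem. */
int xbake_add_lowpoly (xbake_context* ctx, const float* positions,
                       const float* normals, const float* uvs, int vertexCount);
int xbake_add_highpoly(xbake_context* ctx, const float* positions,
                       const float* normals, int vertexCount);

/* Bake into a caller-owned RGBA float buffer of width * height * 4 floats. */
int xbake_bake_normal_map(xbake_context* ctx, int width, int height, float* outRgba);
int xbake_bake_ao_map    (xbake_context* ctx, int width, int height, float* outRgba);

#ifdef __cplusplus
}
#endif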
I'd like to know if it would be possible to support Cinema 4D, aka BodyPaint 3D. Maybe xNormal and it could be a cool complement to each other, I think... painting here, baking there, and so on.
What do you think..?
greetz
designamyte
Well, I'm a complete C4D noob... That will take some learning time.
Nope. I thought about cancelling it, but due to people's interest I decided to continue its development. The Alpha 1 should be available by June for Windows, Linux, Mac OS X and Solaris.
Replies
Looking forward to more Xnormal goodness!
Even better news for mac-users, I imagine.
Wow!
Awesome !
is this going to be free?
What about just making it payware for professional use?
I think your skills should earn you more time and money.
So take a break and make the right decision. We all have only one life.
To support C4D, here is the "SDK"/developer link:
http://www.maxon.net/pages/support/plugincafe_e.html
They listen to their users/developers. That's pretty cool.
Here is the C4D community where you can ask the users about its usability:
http://forums.cgsociety.org/forumdisplay.php?forumid=47
So if you are interested, and if you have time, take a look...
Hope you can reach more customers and earn more money to develop your great ideas further.
all the best
designamyte
PS: thank you very much for your great tool...!!! (I use it in a lot of projects)
No prob. I'm curious what the future brings... :)
Maybe voxel-file support / import from 3D-Coat..??
http://3d-coat.com/v3_voxel_sculpting.html
only the best
designamyte
Currently the Alpha 0 is not very functional, but I have the basics done and now it will grow exponentially.
thx