Looking to gather some ideas for what people would like to see in an updated version. I am rebuilding from the ground up and planning to use OpenGL this time around instead of DirectX. Would people prefer DirectX or OpenGL? My goal is for Polyviewer to become a one-stop shop for previewing models in a real-time environment, so I want to include most of the things people would like to see and that would make them want to use it.
Things I plan on doing:
- Support custom shaders
- 64-bit version, possibly Linux/Mac if there is demand
- HDR and image-based lighting
- volume shadows instead of shadow maps (better accuracy)
- Standards: normal maps, diffuse, specular (suggestions on how to handle alpha channels?)
- 3 lights, with full control over position and direction, plus presets. Maybe allow the user to click somewhere on the model to set the focus of a 3-point lighting setup.
- backgrounds as cube maps (both as a cross and as separate textures), plus a single background image or spherical map (to be used in combination with HDR maps)
- auto texture reloading upon regaining focus
- text editor for authoring/editing a shader.
- object selection and transformation support
- focus on 1 model or on all
- skeletal support for posing (a maybe; I am curious about building in basic support at some point)
- save out screenshots and 360° rotation movies, or a series of screenshots for those who prefer to edit things themselves
- emissive textures/bloom (controls for amount of bloom)
- depth of field
- Model formats: obj, directx, xsi, fbx (others?)
- Image formats: jpg, gif, png, tga (others?)
- anti-aliasing and anisotropic filtering
If anyone has other things they would like to see, please post suggestions. Also, concerning the interface: if anyone has suggestions for improvements over the current iteration, please feel free to post them.
here's a link to the current interface:
http://craig.young.81.googlepages.com/screenshot.jpg
*note that there is a status bar in the latest iteration.
Replies
A couple of ideas I've had since: plans to add a toolbar and make the render technique selectable through there.
I'm also thinking about realtime ambient occlusion (I'm sorta curious about it and want to learn).
I'm trying to think of a new way to handle all the data, so suggestions/ideas on what you the users would like are, as I stated already, very welcome. Either way I plan to give the data section the option of being hidden.
EDIT: A couple more ideas: post-processing filters, and, as mightypea suggested, selectable texture update checking. Instead of only checking upon regaining focus, the viewer could check every couple of seconds, so someone working on a texture can save and see the changes without switching applications if they're running a dual-monitor setup. Heck, if there's enough interest I'm even tempted to add particle support.
-If you could simplify your material tree, that'd be nice. Right now it's a bit of a chore to drill down to the right bit and change it.
-Definitely consider drag & drop support wherever it makes sense. For instance: dragging tex_N, tex_d and tex_s into the program should put them in the right slots for normal, diffuse and spec. Have a setting where you change it to whatever naming standard you use.
-If you could have a model automatically try to load the right textures when you load the model, using these naming conventions... that'd just be really nice. Load the obj and it looks for some tga's in that folder, you know?
-drag&drop a model onto the viewport loads it?
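The suffix idea above could be as simple as a lookup table; a hypothetical sketch (the suffix table here is invented, and would be whatever naming standard the user configures):

```cpp
#include <cctype>
#include <map>
#include <string>

enum class Slot { Diffuse, Normal, Specular, Unknown };

// Maps a dropped filename like "tex_N.tga" to a material slot based on
// the two-character suffix before the extension: _d / _n / _s.
Slot slotForFile(const std::string& filename) {
    static const std::map<std::string, Slot> suffixes = {
        {"_d", Slot::Diffuse}, {"_n", Slot::Normal}, {"_s", Slot::Specular}};
    auto dot = filename.rfind('.');
    std::string stem =
        (dot == std::string::npos) ? filename : filename.substr(0, dot);
    if (stem.size() >= 2) {
        std::string tail = stem.substr(stem.size() - 2);
        for (auto& c : tail)
            c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        auto it = suffixes.find(tail);
        if (it != suffixes.end()) return it->second;
    }
    return Slot::Unknown;
}
```

The same table would drive the auto-load-on-model-open idea: scan the model's folder and assign anything whose suffix matches.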
(9:35:35 AM) TheKeg: textures auto-update already
(9:35:45 AM) TheKeg: only when the viewer regains focus though.
(9:35:58 AM) mightypea: Is there any way around that?
(9:35:59 AM) TheKeg: less cpu load
(9:36:01 AM) TheKeg: there is
(9:36:08 AM) mightypea: That's the way max works, and it's annoying if you have two screens
(9:36:12 AM) mightypea: Maybe a toggle?
(9:36:21 AM) TheKeg: I could always setup a thread that checks at set intervals
(9:36:28 AM) TheKeg: yea. easy toggle
(9:36:35 AM) mightypea: If it'd be a toggle, that'd be great.
(9:37:12 AM) TheKeg: probably limit the checking time to every 2-3 seconds. cause if you have 20 textures and you're checking the last modified time on each...
(9:38:08 AM) mightypea: Well, how about allowing the user to set that, as well?
(9:38:15 AM) mightypea: That way he could tailor it to his needs/project
(9:38:44 AM) mightypea: And possibly have that toggle per texture, if that doesn't complicate things too much
(9:39:01 AM) mightypea: Since usually you'll only be working on one texture at a time.
(9:39:08 AM) TheKeg: doesn't complicate things much at all
(9:39:35 AM) TheKeg: probably create a simple dialog that'll list all loaded textures with a select all/none deal
I'd say stay away from bones for now so as to keep things a bit more focused. Skeletal stuff is a pain in the ass, especially when dealing with multiple formats or packages. Collada if anything, but this is really less important to me than all the other stuff you have on your plate. If I want to test a pose in an external viewer I'll just pose the model in XSI and then export the mesh from there.
I think it would be great to have a way to tell the viewer what to use as the alpha. So if you had an image that was using four or five channels for, say, spec, alpha, gloss, bump and mask, the user could easily tell the viewer what each channel is supposed to do. Also, being able to preview multiple textures on one object and have blend options for them would be nice. Auto update that checks for texture saves even if the viewer isn't in focus, although that might make things too slow. I'm thinking it might be annoying to have to select the viewer if you have a two-monitor setup, but maybe it's pointless to have it auto update like that. It would also be great if your viewer could export the UV coords as vectors.
A Photoshop-type layer system, so you can easily hide what each map is doing and also easily change how much it affects things, like the opacity slider does.
Gwot: a node-based editor would be nice, but I almost think that would be more of a side project. Simple text-based works for now. If I do a node-based setup, it'll probably be similar to Mental Mill.
Marine: I plan on adding custom shortcuts and allowing the user to modify the controls to their liking.
Sage: the auto-update of textures is planned, as I was discussing with mightypea. I plan on having a dialog that lists all currently loaded textures and lets the user check off which ones they want checked for updates. I'll probably scale the interval so that the more textures you have checked, the longer the interval, to help reduce CPU load. As for the multiple textures and blend options, I will consider and look into it.
It's honestly not an idea I'll implement until later down the road.
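A minimal sketch of that interval-scaling idea (the constants here are placeholders, not from the viewer): a base interval plus a little extra per watched texture, clamped to a ceiling.

```cpp
// Poll interval grows with the number of watched textures so that
// checking 20 last-modified times doesn't hammer the disk/CPU.
// Constants are invented for illustration.
int pollIntervalMs(int watchedCount) {
    const int baseMs = 2000;       // never check more often than every 2 s
    const int perTextureMs = 100;  // small penalty per watched texture
    const int maxMs = 10000;       // but never slower than every 10 s
    int ms = baseMs + perTextureMs * watchedCount;
    return ms < maxMs ? ms : maxMs;
}
```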
diffuse map alpha = transparency
spec map alpha = spec power
spec map rgb = spec color
emissive map alpha = bloom amount
normal map alpha = parallax occlusion
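Those channel conventions could be mirrored in code roughly like this; a CPU-side sketch with hypothetical names (in the viewer itself this mapping would live in the pixel shaders), assuming 32-bit texels with alpha in the high byte:

```cpp
#include <cstdint>

struct MaterialSample {
    float transparency;    // diffuse map alpha
    float specPower;       // spec map alpha (a shader would rescale this)
    float bloomAmount;     // emissive map alpha
    float parallaxHeight;  // normal map alpha
};

// Extracts the alpha channel of a packed 0xAARRGGBB texel as [0,1].
static float alphaOf(std::uint32_t texel) {
    return ((texel >> 24) & 0xFF) / 255.0f;
}

MaterialSample decode(std::uint32_t diffuse, std::uint32_t spec,
                      std::uint32_t emissive, std::uint32_t normal) {
    return {alphaOf(diffuse), alphaOf(spec), alphaOf(emissive),
            alphaOf(normal)};
}
```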
I hope within a week to have at least a test-bed version that loads a model and shows it with a texture, to test obj loading for those who had issues loading files before.
Eric: that's generally how I had it handled before, except for the bloom, and probably how I will start off.
By facilitating shader creation I mean stuff like:
- screen-space-coordinate shadow buffer (btw, why use old-school stencil? what about penumbras and all the obvious advantages of shadow maps?)
- automatic (hidden?) functions for multilamps
- MRT access to Z buffer for postFX
- easy access to mesh info and transforms as parameters (rotation transform for object space normal map for example)
- awesomeness would be the ability to render dynamic (realtime) cubemaps
most of the features you list are easily implemented afterward (bloom, IBL, DOF, ...)
I think in general giving access to most of the engine is a good idea, but it shouldn't be shown too prominently, so as not to confuse the user, while still giving him the ability to do his own stuff if he wants to.
Have you looked at Qt for the interface and OpenGL stuff?
Are you planning on making this cross-platform at all? That's the context in which I hear Qt mentioned most often. Another thing to consider is Juce, although I can't say much about it other than that it's apparently easy to use and popular with audio programmers. I don't know how useful it'd be to make this cross-platform, and I know even less about how hard it'd be, and I'm sufficiently grubby that, if it'd slow down the Windows development at all, I'd gladly go on record to say "fuck those copyleft communists and maccies". Quite the testament to my character, I know.
edit: and here's the website for Juce: http://trolltech.com/products/qt/
I have decided against OpenGL; CrazyButcher made some valid points about it. If there is demand for a Linux/Mac build, it would be OpenGL but would lack a lot of the features I want to add to the Windows version.
Brice: I am looking at different options for shadows; I want nice soft area shadows. I plan on starting with a version similar to what's already available and adding HDR and such to it. I want to enable each light to cast shadows as well. Realtime cubemaps I don't see making much sense; there will be no animation support, though maybe down the road at some point. The purpose is to be a simple viewer people can have open while working in Photoshop, instead of having XSI, Maya, Max, UnrealEd, etc. loaded just to see how the textures will look in a realtime environment.
The current viewer uses shadow maps, which I found easy to implement, but the bias I found a pain to resolve. I haven't looked into supporting object-space maps; I never really thought about that, since tangent-space normal maps are the standard and the extra cost is negligible now.
These 2 points I don't understand fully, so if you could explain them:
- automatic (hidden?) functions for multilamps
- MRT access to Z buffer for postFX
http://developer.nvidia.com/forums/index.php?showtopic=853
so the "show stoppers" I mentioned in the pm, might be gone with the next Cg runtime release.
Still I think picking Dx is the better option for your project. (better support under windows which is what counts)
---
look into "variance shadow mapping" for less bias issues.
About those 2 points by Brice, I would guess:
- multilamps = some system that would easily make it possible to add more lights and modify shaders/passes accordingly?
- afaik, under DX9 it's impossible to access the depth texture's intensity value (which is needed for postfx), therefore you need to render depth to a regular color texture.
You would either render the objects twice: once writing depth as color into texture D, and the second time rendering them fully shaded into texture C. Later you can use D and C to mix postfx over them.
The other option is to render once and store depth in the same pass to a second render target (MRT = Multiple Render Targets). So basically you have both C and D active at the same time and render all color-related stuff into C and depth into D in a single pass.
I would go for the first option (two dedicated passes): for MRT you would need to modify all pixel shaders to output depth to a second render target, which doesn't mix well with custom shaders written by users. The two-pass route is also more backwards compatible and easier to maintain and implement.
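To make the "render depth as color" idea concrete: conceptually it's just fixed-point packing of a [0,1) depth into the color channels, and unpacking it when the postfx pass samples texture D. A CPU-side sketch of the round trip (in the viewer the pack would happen in the depth pass's pixel shader; the 32-bit fixed-point scheme here is one option among several):

```cpp
#include <cstdint>

// Packs a normalized depth d in [0,1) into 32 bits of fixed point,
// which corresponds to spreading it across the four 8-bit channels
// of an RGBA8 render target.
std::uint32_t packDepth(float d) {
    return static_cast<std::uint32_t>(d * 4294967296.0);  // d * 2^32
}

// Recovers the depth from the packed value, as a postfx pass would.
float unpackDepth(std::uint32_t bits) {
    return static_cast<float>(bits / 4294967296.0);
}
```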
ON THE MOUTH.
As CB said, for multilamps it'd be nice for coders not to have to mess around with your internal way of dealing with lights, attenuation, etc. I use KJapi, which basically works like a black box: you call a generic function referring to a compiled shader with a few parameters such as Vnormal and Vpos, and it outputs float3 specular and diffuse contributions. For shadows you just input screen-space UV coordinates and sample the 'shadow buffer'.
As for object-space normal maps: why ask users, if you can just assume they'll use tangent space? Most engines don't support them, and it's often a pain to make them work with custom shaders, since you don't always have an easy way to access the object's transform. Implementing it might not be your goal, but leaving others the ability to do so is a plus.
Realtime cubemaps (or the ability to render one on the fly) are just unneeded eye candy so that you can take nice screenshots. If your goal is a visualization system for texturing, then you don't need HDR or shadows either.
All of the actual shader, lighting, maps, model-viewing stuff is tops: very cool, needed, wanted stuff.
I think it would be slick to have a 3D model viewer with diagnostic features that help point out errors in a model's construction or the efficacy of its execution.
Features such as visual displays of smoothing groups, material IDs, and UV seams that would show the model in an exploded view, plus readouts on how each of these affects your model's performance.
I'd be interested in seeing how my models are drawn, how an engine calculates and processes strips on the geometry, and other graphical processes that I could watch. I'd like to see draw steps and see how those steps affect the asset's footprint, if that is at all possible. Basically I'd want viewer options that would show me a model's DNA.
Sugary features like ZBrush's (I think it was ZMapper's) geo-morph from 3D model to UVs... I think that stuff is cool and would make me actually use a viewer.
Being able to rip Models from other games and view them in the same manner would be dope as well...
omg, it's the internet!
YUS!
PerfHUD by NVIDIA can visualize how a frame was rendered.
The other stuff you ask for basically doesn't apply. Whether the model is made of strips or just flushed as triangle soup depends on the engine, which means you would only see how Keg's engine does it, not how others do, nor would he optimize to death for a model viewer.
I guess the only thing one could visualize is "vertex splits", i.e. some colored spheres with colors depending on what kind of duplicate (uv, normal, color or tangent).
And mesh splits due to different shaders (although you would know that in your 3D app too, as you assign the materials/shaders somewhere).
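Counting those splits is cheap; a hypothetical sketch that tallies how many final vertices share each position (a count above 1 means the position was split because uv/normal/etc. differ, which is what the colored spheres would visualize):

```cpp
#include <map>
#include <tuple>
#include <vector>

// Simplified final-vertex-buffer layout for illustration.
struct Vertex { float px, py, pz, nx, ny, nz, u, v; };

// For each distinct position, counts how many buffer vertices use it.
std::map<std::tuple<float, float, float>, int>
splitCounts(const std::vector<Vertex>& verts) {
    std::map<std::tuple<float, float, float>, int> counts;
    for (const auto& v : verts)
        ++counts[{v.px, v.py, v.pz}];
    return counts;
}
```

A real tool would also record *why* each split happened (mismatched uv vs. normal vs. tangent) to pick the sphere color, but the counting is the core of it.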
Visualizing splits in the UVs would be possible, as would exploding the mesh out into its UV coordinates.
I have trouble seeing the purpose of supporting smoothing groups, as they're only really supported by 3dsmax. I know there are lots of Max users, but when it comes to next-gen assets, I've been led to believe keeping everything in one "smoothing group" is the way to go. Last version I had to do enough hand-holding just to get 3dsmax files to work, since the lovely default obj exporter was crap. That's apparently changed in 2009, but most people don't have that.
The route I had implemented in the old viewer was to simply handle triangles and convert quads into triangles. Not the best route if someone wants to look at their wireframe, but it is how they end up looking anyway. I try to optimize overall, but it's not a major priority since not much is really being rendered other than a model or two.
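The quad handling described above amounts to fanning each quad into two triangles; a sketch (assuming convex quads; the index layout is hypothetical):

```cpp
#include <array>
#include <vector>

// Converts each quad (a, b, c, d) into triangles (a, b, c) and (a, c, d).
// Assumes quads are convex and roughly planar, as obj quads usually are.
std::vector<std::array<int, 3>>
triangulateQuads(const std::vector<std::array<int, 4>>& quads) {
    std::vector<std::array<int, 3>> tris;
    tris.reserve(quads.size() * 2);
    for (const auto& q : quads) {
        tris.push_back({q[0], q[1], q[2]});
        tris.push_back({q[0], q[2], q[3]});
    }
    return tris;
}
```

The wireframe complaint follows directly: the displayed edges are the post-split triangle edges, including the new diagonal, not the artist's quads.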
Thanks for the extra clarification, CrazyButcher; saves me the hassle of explaining it all.
I've been having trouble finding the motivation/interest to really put much effort into this project. I have the GUI built along with basic DirectX running; depending on how much free time I get, I'll try to get a test version out to those who have had trouble with previous versions.
Would people want me to implement support for smoothing groups? If there are enough people wanting it, it can be done. Also, for Rick: I have wxWidgets up and running as the main GUI now. And since I want to do some OpenGL for practice, I think I'll build a decent GLSL version with support for CgFX files at some point down the road, which I should be able to build for Mac and Linux users, although it won't be a priority as the demand is not nearly as strong.
So, I recently downloaded your Polyviewer and tested it out. The first thing I noticed was that I couldn't run it on my main OS, which is Vista 64-bit, so I opened it on my older OS, Windows XP 32-bit, and it worked fine. I know this is something you plan to fix, but I'm sure for many this is one of the most important things you have yet to accomplish. Although I don't know what this requires programming-wise, so it may still be best to continue as you are with updates.
The emissive needs some more work. I tried it on a sword and it looked like the glow was just planes offset from the surface of the sword (it's most clear at the point of the sword). It would be awesome if you could get the emissive working like it does in the Unreal engine.
Of course there is a need for an opacity map. Personally, I prefer to simply create a separate opacity map rather than have an alpha tied to a diffuse that must be saved with it.
It would also be nice to have some normal map options. For example, in xNormal I can invert the Y value to -Y so that my normals display correctly (normals that were generated in 3ds Max). It saves me the time of manually inverting the values in Photoshop.
Lastly, I love the material editor in UnrealEd. Its GUI is the best in my opinion once you know how to use it; unfortunately, besides the emissive and opacity results you get from Unreal, I get better all-around results in other programs like xNormal. So for your program, possibly the last thing you try to accomplish could be something like Unreal's material editor. I think you could keep everything in your program the same interface-wise, but add the option of opening a more complex material editor like Unreal's, as I mentioned.
Finally, I love the way you set up the 3-point light system, but like you mentioned, it would be nice to have more options like setting up spots and so forth. I am really excited about this project, so please keep at it. This could quickly become the standard viewer if you work at it enough. It could easily be the Crazybump of model viewers.