Hello everyone!
My name is Max, and I'm a student at the Interactive Media Design Laboratory, Nara Institute of Science and Technology, Japan (http://imd.naist.jp).
I'm researching future user interfaces for 3D design and animation.
Sadly, researchers in my field hardly know anything about how 3D artists actually work.
I hope to change that, and make future R&D more oriented towards your real-world needs.
Therefore I started a survey, and would like to ask you to participate:
http://survey.makx.org
Your help is very much appreciated!
I know you hate spam, but this is highly relevant to your work (and to mine, since I used to work in the industry and hope to return there some day).
This is an academic research effort and will not be exploited commercially.
I hope to publish the results at an international conference
and thus help to make better user interfaces in the future.
IMPORTANT NOTE:
The research relates to the professional use of 3D software, so it will ask you whether you work in "media production".
This of course includes freelancers and independent artists, but not mere hobbyists.
I'm very sorry, but the requirements are just too different.
Thank you very much for your understanding.
Replies
To be honest, I'd prefer to live in Osaka, because it can be a bit boring here.
But the university is good, and it's not too far to go to Osaka anyway...
Thank you very much!
Plus, since I'm not saying why I brought it up, it might get more people to take the survey to try to figure out what a Peltier has to do with it.
I have to say, anyone who thinks that telling the computer "Move Tool!" is a good idea has never worked in an office environment. Apart from the constant background noise that would be hard to filter out, if the person next to me were essentially narrating their work process, I'd kill them with extreme force.
Thank you very much! This is some great feedback for us!
To explain the background of the question a bit more:
there are a lot of glove-based user interfaces, but they are usually criticized as impractical - by other researchers. Since no available product uses a glove, no one knows what users actually think.
@Rick Stirling:
Thank you very much!
You may be surprised, but exactly this has been suggested by some researchers:
you'd control your software in a "Blade Runner" kind of way - "Create Cube, Move x 10, ..."
@Jonas Ronnegard:
I think so too: during the week it's nice - clean air and a relaxed atmosphere.
But the weekend should not be spent in Nara.
Besides the survey, I would strongly recommend that you (and your fellow researchers!) purchase a few Gnomon DVDs covering 3D modeling. They are recordings with a narrated voiceover, showing artists creating 3D assets just as they would in production.
This one is excellent:
http://www.thegnomonworkshop.com/store/product/544/Character-Design-and-Modeling-for-Next-Gen-Games#.Uwp3S_RdUeY
Here is a sample clip, which I believe is at 2x speed:
[ame="http://www.youtube.com/watch?v=r8AxXG_6jyA"]Character Design and Modeling for Next-Gen Games - YouTube[/ame]
This will certainly put the "tech dream" of 3D voice commands in perspective. In other words: professional 3D artists are extremely fast when it comes to navigating a 3D viewport and editing mesh components. Any UX design slowing down this process would be pretty much a guaranteed failure with the pro 3D crowd (unless we are talking about "easy to use" 3D software for beginners, but that doesn't seem to be the subject of your work).
It doesn't really matter how technically advanced the input device is; if the actual action being performed ends up more convoluted, slower, and less precise than if it were done with a mouse, then it's not worth it.
Hope this helps!
Frankly, I think something simple like a regular mouse with an added axis (you press a thumb button and then you can raise/lower the mouse) would fit much better in most people's workflows.
And what about touch screens? They're so assimilated into popular culture that 5-year-olds are often given their own tablets. Using a tablet as a semi-tactile interface extension seems a fairly viable option as well.
I realize that 3D sounds very interesting, but if you think about it, most of what we do is in 2D anyway. My desk (my physical desk) is very much a 3D environment, but the way I interact with it is almost two-dimensional: I only use what I can see. I don't use extra tools behind my monitor, and I don't have a tablet stacked above my keyboard. So everything I use can be interpreted as a photosphere - something easily navigated as 2D content.
Well, if the paper gets accepted, I'll have a chance to meet them. ;-)
@pior:
That is great feedback! Thank you very much!
I hadn't considered video tutorials as a source yet, but it makes perfect sense to analyze some of them.
@Snader:
Thank you very much for your feedback.
I think you have a good point there: we organize our world very two-dimensionally - even cupboards and drawers are just layered 2D planes if you think about it.
The appeal of 3D UIs is more on the input side:
with a mouse you have 2 degrees of freedom to control 3 dimensions of looking (the viewport) plus 3 dimensions of editing the content.
It seems more feasible to do this with two hands, similar to working with a real model.
@ZacD:
Thanks for the comment. I recently had the chance to test the Oculus Rift myself and found that its greatest advantage over other HMDs is the comfort: it's light and fits the face well. I wouldn't mind wearing it all day.
The problem is that the resolution is currently too low to use it for anything other than gaming...
Voice commands could still be useful, actually, if you don't work in an office (freelancers, for example).
I personally do that often, in a non-practical way, when I'm bored while working on something tedious ("move that there... extract this, delete that").
So it could at least reduce the number of keyboard shortcuts you have to memorize for each program.
As for speed of workflow: humans can always adapt with some practice.
Thank you very much for your feedback. You've got a good point.
I can't participate in the discussion (else I might influence the survey),
however I should point out the difference between human language and computer input:
while terms like "to the back", "a little bit", and "over there" are perfectly fine for humans to understand, computers have no understanding of the content or context.
So commands would have to be much more technical and precise, like:
"move x 3cm, y 6cm", "extrude face in normal direction 3 cm", "delete vertex 23"...
@MDiamond:
Actually, I get more upset about "Google Glass" being brought up whenever I talk about this. I'm glad that hasn't happened here so far ;-)
A possible use for voice at home or when freelancing would be to act as common modifier-key toggles (Shift/Alt/Ctrl) for apps such as ZBrush. You can already customise ZBrush a lot to make it an almost completely pen-tablet-driven program, but you still have to hit modifier buttons on the tablet itself for full functionality, which I find a little awkward.
Single-syllable (or ideally, customisable) voice commands could replace the tablet button presses nicely, I think, so you'd be working solely with the stylus and your voice.
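To make the idea concrete, a rough sketch (hypothetical - the speech recognizer is left out entirely, and the key synthesis assumes the pynput library):
[code]
# Hypothetical sketch: map single-syllable voice commands to modifier
# toggles so the stylus hand never leaves the tablet. Feed on_word()
# from whatever speech recognizer you'd actually use.
from pynput.keyboard import Controller, Key

keyboard = Controller()
TOGGLES = {"shift": Key.shift, "alt": Key.alt, "ctrl": Key.ctrl}
held = set()

def on_word(word):
    """Call with each recognized word; toggles the matching modifier."""
    key = TOGGLES.get(word.lower())
    if key is None:
        return  # not a command word: ignore
    if key in held:
        keyboard.release(key)
        held.discard(key)
    else:
        keyboard.press(key)
        held.add(key)

# on_word("shift")  -> Shift is now held for the next stylus stroke
# on_word("shift")  -> Shift released again
[/code]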
See, that's the problem right there: clearly saying this in the proper voice-recognition tone (that is to say, the "talking to a misbehaving puppy" voice) would take something like 5 seconds, whereas performing the same action using current (and pro-optimized) mouse and keyboard input methods takes something like 1/10th of a second: hit the shortcut for "extrude along normals", type in the number 3, hit Enter.
That is... 50 times faster. Not to mention that in the case of artistic freeform modeling, the extrusion distance will likely need to be adjusted after the fact. That's usually done by quickly dragging the mouse cursor as desired, which also takes no time at all. Yet a voice command to do just that would take the full 5 seconds again.
Now of course there could be uses for voice commands in some specific contexts. For instance, selecting the Move brush in ZBrush is a succession of 3 key presses: B, M, V. In that case I could imagine a voice command being quite fast: "Select move brush", which takes about one second to enunciate. But that is still 10 times longer than hitting B-M-V on a keyboard using muscle memory.
The survey mentions physical puppet rigs for posing, and I think this is an interesting concept. There is one available already, and I must say I would love to try it one day: [ame="http://www.youtube.com/watch?v=PwsTo4LQoGg"]QUMA 3D-CG Motion Capture Device Video in English - YouTube[/ame]
So, what else could be improved? Personally, I would say that the review process for 3D models could be re-imagined. That is to say: being able to manipulate an object easily and smoothly in 3D space from all angles, in a fully natural manner - being able to observe it, spin it around, zoom in on details, and so on.
The 3Dconnexion 3D mice are interesting for that, but they require very slow and careful operation since they work with relative positioning. However, I could imagine a simple wireless device, maybe shaped like a sphere, full of accelerometers and calibrated to provide absolute positioning. This sphere would then act as a proxy, driving the arbitrary position and rotation of an object being reviewed on screen. Rotate the sphere around itself, and the model rotates and spins freely and precisely on all 3 axes on the screen - just like with a trackball. Bring the ball closer to you, and the model gets closer to the screen. Move the ball sideways in space, and the model pans around. That could be neat!
Squeezing the ball once would lock the object at its current angle and position, effectively disabling "ball input". You could then pass the ball to someone else, who squeezes it once more to re-enable spatial input.
Squeezing the ball for an extended time (like, 2 seconds) would reset the scene, putting the 3D object back to the center of the screen, disabling ball input again until it is squeezed once more.
Of course, there would be an option (enabled by a little toggle switch somewhere on the surface of the ball) to switch between two modes: fully free trackball rotation, or rotation locked to y-up, just like in every 3D program.
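Roughly, the mapping could work like this - a purely hypothetical sketch (every name in it is made up, there is no real device API), leaving out the y-up lock for brevity:
[code]
# Hypothetical proxy-ball mapping: the ball reports an absolute
# orientation quaternion plus a position in room space every frame,
# and we mirror that pose onto the model under review.
from dataclasses import dataclass

@dataclass
class ModelPose:
    rotation: tuple = (1.0, 0.0, 0.0, 0.0)  # quaternion (w, x, y, z)
    pan: tuple = (0.0, 0.0)                  # screen-space pan
    distance: float = 1.0                    # distance from the viewer

class ProxyBall:
    def __init__(self, pan_scale=1.0, zoom_scale=1.0):
        self.enabled = True           # squeezing toggles this
        self.pan_scale = pan_scale    # ball metres -> scene units
        self.zoom_scale = zoom_scale

    def on_squeeze(self, duration_s, model):
        if duration_s >= 2.0:
            # Long squeeze: recentre the model and disable ball input.
            model.rotation = (1.0, 0.0, 0.0, 0.0)
            model.pan, model.distance = (0.0, 0.0), 1.0
            self.enabled = False
        else:
            # Short squeeze: lock the pose / hand the ball to someone else.
            self.enabled = not self.enabled

    def on_sample(self, quat, pos, model):
        """Per-frame callback with the ball's absolute pose."""
        if not self.enabled:
            return
        model.rotation = quat                      # 1:1 trackball rotation
        model.pan = (pos[0] * self.pan_scale,      # sideways motion pans
                     pos[1] * self.pan_scale)
        model.distance = pos[2] * self.zoom_scale  # closer to you = zoomed in
[/code]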
Yay for tech dreams!!
Thank you for participating.
That is definitely a good point!
@pior:
Yes, that concept is definitely interesting.
The first commercial case was the Dinosaur Input Device they used for Jurassic Park:
http://www.blep.com/rd/special-effects/dinosaur-input-device/
http://graphics.pixar.com/library/DinoInputDevice/paper.pdf
And some years ago, someone suggested using one of those wooden puppets and Augmented Reality techniques for that:
http://www.barakonyi.net/papers/barakonyi_siggraph06_sketches_0396.pdf
but as far as I know the QUMA is the only ready-for-application product right now.
The drawback of this concept is that it can only be used for straight-ahead rigid animation of one type of figure: you can't go back to an earlier state, you can't animate hair or muscles, you can't use the Dinosaur thingy for humans (or vice versa), and you can't use it for modeling or rigging either.
About the ball input device:
Great idea. It's this kind of tech-dreaming that made me leave my job and get into research ;-)