That is pretty impressive as a proof of concept, and maybe even as a consumer product to sell with Morpheus and PlayStation Move. I don't know if artists want to work like this, but it must still be a really cool creative experience. I can only imagine, since I've never used the Oculus and have no idea how it really looks and feels.
This looks amazing! Since it makes more sense than sculpting with a pen, I can totally imagine that this will one day be the best way to sculpt. I wonder what the perfect input device for such sculpting would look like. Btw, there is also already a device similar to the Hydra that simulates pressure and the feel of touching.
Ironically, the best that might come out of this is not necessarily the VR and stereoscopic aspects, but rather the development of keyboard-free and mouse-free interfaces for a number of programs.
Feature-wise, sculpting apps are great the way they are today, but the paradigm of not being able to sculpt while rotating the model and/or the camera is a big drawback compared to real clay work, even though we've grown accustomed to it by now.
I agree; the part I look forward to the most is being able to move my hands freely and precisely to manipulate objects in a real-life 3D space.
The potential of VR to simulate our very own virtual Cintiq screen and work atmosphere also seems very cool; an image of exactly that was posted on reddit.
Quote:
Ironically, the best that might come out of this is not necessarily the VR and stereoscopic aspects, but rather the development of keyboard-free and mouse-free interfaces for a number of programs.
Feature-wise, sculpting apps are great the way they are today, but the paradigm of not being able to sculpt while rotating the model and/or the camera is a big drawback compared to real clay work, even though we've grown accustomed to it by now.
Well, there are the 3Dconnexion devices (SpaceMouse/SpaceNavigator/whatever they call them this season) that in theory would be able to do just that. None of the standard tools in our field even support the dual-input paradigm, though, so in the best case you end up with a fancy camera controller, if there's even a plugin available for your software.
Hehe yeah, I've had a SpaceNavigator in the past (it was given to me by a coworker). It is a very interesting piece of equipment and actually has some uses and advantages, but its core problem was speed: with this device, the time gained by being able to manipulate the camera independently from the cursor was negated by the unit's awkward input scheme. A modeler with a few years of experience would still be faster with regular cursor-based input shared between viewport manipulation and editing.
It is, however, quite great when it comes to letting a neophyte manipulate your scene for review or presentation. It brings an instant smile to the face of, say, a producer, AD or 2D artist.
Quote:
Hehe yeah, I've had a SpaceNavigator in the past (it was given to me by a coworker). It is a very interesting piece of equipment and actually has some uses and advantages, but its core problem was speed...
I agree completely with everything you said. The SpaceMouse is also really finicky depending on scene scale. I believe all of this comes down to programs being built for regular pointing devices and input schemes, and to the way the camera in 3D apps works/cheats. I reckon the 3Dconnexion drivers could be as good as imaginable and still not be able to change that.
I did, however, first come into contact with SpaceMice on internal tools at a 3D viz company, where it was a mandatory input device for the software and did basically what you want: let you manipulate the object with one hand while still operating a Wacom with the other.
I believe very early versions of Maya also featured dual input (Wacom stylus + puck only?), but perhaps that never made it to the public.
Anyway, I'm just saying that content creation tools would have to catch up at a very basic level before any input devices that evolve out of mass-market VR could make a difference. Technically we already have those devices; we just don't have the software to make proper use of them.
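To make the dual-input idea concrete, here is a minimal Python sketch of a sculpt loop that handles both hands every frame: one stream keeps rotating the model while the other keeps applying a brush, instead of the usual modal "navigate OR edit" scheme. The device reads (fake_6dof_events, fake_stylus_events) are made-up stand-ins, not any real driver API; an actual integration would pull events from the 3Dconnexion driver and a tablet SDK.

```python
# Toy sketch of the dual-input paradigm: navigation and sculpting in one loop.
# All device input is faked so the script runs on its own.

import math
import numpy as np

def fake_6dof_events():
    """Stand-in for a SpaceMouse-style puck: yields small yaw increments (radians)."""
    while True:
        yield 0.01

def fake_stylus_events():
    """Stand-in for a stylus: yields (brush position, pressure) samples."""
    t = 0.0
    while True:
        t += 0.05
        yield np.array([math.cos(t), math.sin(t), 0.0]), 0.5

def yaw_matrix(angle):
    c, s = math.cos(angle), math.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A toy "mesh": a cloud of vertices on a unit sphere.
rng = np.random.default_rng(0)
verts = rng.normal(size=(1000, 3))
verts /= np.linalg.norm(verts, axis=1, keepdims=True)

model_rot = np.eye(3)
puck, stylus = fake_6dof_events(), fake_stylus_events()

for frame in range(200):
    # Hand 1: keep rotating the model...
    model_rot = yaw_matrix(next(puck)) @ model_rot

    # Hand 2: ...while the brush keeps displacing nearby vertices.
    brush_pos, pressure = next(stylus)
    local_brush = model_rot.T @ brush_pos            # bring brush into model space
    dist = np.linalg.norm(verts - local_brush, axis=1)
    mask = dist < 0.3                                # brush radius
    verts[mask] += 0.002 * pressure * (verts[mask] - local_brush)  # inflate-style stroke

print("displaced vertices:", int((np.linalg.norm(verts, axis=1) > 1.001).sum()))
```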
I could see the Oculus combined with two cameras mounted to it that record the surroundings in 3D, which would then be rendered inside the Oculus and augmented with the sculpted mesh, so you can still see your surroundings.
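A rough sketch of what that pass-through setup could look like, just to illustrate the idea: grab two webcams standing in for head-mounted cameras, overlay a placeholder where the sculpted mesh would be composited, and pack the frames as a side-by-side stereo pair. The camera indices and the green circle standing in for the mesh are assumptions; lens undistortion, latency compensation and actual Rift SDK submission are ignored here.

```python
# Minimal two-camera pass-through sketch using OpenCV (assumed camera indices 0 and 1).
import cv2
import numpy as np

left_cam = cv2.VideoCapture(0)
right_cam = cv2.VideoCapture(1)

def overlay_mesh_placeholder(frame):
    """Stand-in for the rendered sculpt: draw a circle where the mesh would be composited."""
    h, w = frame.shape[:2]
    cv2.circle(frame, (w // 2, h // 2), min(h, w) // 4, (0, 255, 0), 2)
    return frame

while True:
    ok_l, left = left_cam.read()
    ok_r, right = right_cam.read()
    if not (ok_l and ok_r):
        break

    # Match resolutions, augment both eyes, then pack them the way a
    # side-by-side stereo HMD mode expects.
    right = cv2.resize(right, (left.shape[1], left.shape[0]))
    left = overlay_mesh_placeholder(left)
    right = overlay_mesh_placeholder(right)
    stereo = np.hstack([left, right])

    cv2.imshow("pass-through sketch", stereo)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break

left_cam.release()
right_cam.release()
cv2.destroyAllWindows()
```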
Quote:
Ironically, the best that might come out of this is not necessarily the VR and stereoscopic aspects, but rather the development of keyboard-free and mouse-free interfaces for a number of programs.
Feature-wise, sculpting apps are great the way they are today, but the paradigm of not being able to sculpt while rotating the model and/or the camera is a big drawback compared to real clay work, even though we've grown accustomed to it by now.
Completely agree. This is just a messy prototype integration with the Razer Hydras, which aren't accurate enough for this work. Now, the STEM controllers on the other hand...
Not to mention stuff like Control VR, which just passed its Kickstarter goal and translates full finger movement on both hands into in-game finger movement.
Replies
Really cool, but looks like such a pain in the ass o.o
Using 2.5D with a 3D headset will be anything but not cool.
Here's a painting one:
[ame="http://www.youtube.com/watch?v=uFWw6hGIKmc"]Tilt Brush Preview Trailer - YouTube[/ame]
https://www.youtube.com/watch?v=3ATQG9mnm34
...sculpting with Hydras looks clumsy as hell... it would be a fun toy, though.