Hey guys, I started 3D modeling a couple of years ago as a hobbyist, but I really want to pursue it as a career, and before that I want to hone my skills to a presentable level. I am really interested in vehicle modeling for games as I really like cars. I typically model using the blueprints technique and there are always some errors and inaccuracies. Even when I try my best to eyeball the details, the final results turn out completely off compared to the real deal.

My question is: how do I camera-match real-life images, such as this one that I found on Google, so that I can get some accuracy into my modeling? It is really bothering me. I've searched the internet and it seems like there used to be a program called ImageModeler by Autodesk, but it has been discontinued. Would you guys please help me out with this?

I tried experimenting the other day with a photo of a car. I tried matching a cylinder to the wheel, but I couldn't get it right, and even when I got close, the rear wheel just wouldn't line up with the cylinder at all, even though I put in the correct focal length that was used to take the picture. Any help would be appreciated.
Replies
First you need to get the tires and rims right. That's a data set you can nail down without image planes, and it gets you started. Then you need a rough base blockout. Once that's ready, it's just a matter of moving the cameras and playing with the field of view. To get the cameras right you need about three days of constantly moving vertices and moving the cameras. There is no easy way.
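One thing that trips people up when playing with the field of view: the FOV a photo was shot with depends on both the focal length and the sensor width, and EXIF data often reports a 35mm-equivalent focal length rather than the physical one, which throws the match off. A minimal Python sketch of the conversion (the sensor widths are placeholder assumptions, not values read from any particular photo):

```python
import math

# Rough conversion from a photo's focal length to the horizontal field of view
# you can type into a 3D camera. The sensor widths below are assumptions --
# check the actual camera model, and note that EXIF often reports a
# 35mm-equivalent focal length rather than the physical one.

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal FOV in degrees for an ideal pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# A 35mm lens on a full-frame sensor (36mm wide):
print(horizontal_fov_deg(35.0))        # ~54.4 degrees
# The same lens on an APS-C sensor (~23.6mm wide) sees much less:
print(horizontal_fov_deg(35.0, 23.6))  # ~37.3 degrees
```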
There are ways to make the camera matching simple-ish actually.
I think what's used in the image you posted is the Camera Match feature in 3ds Max: https://www.youtube.com/watch?v=9cCVWcvgWno
For it you need a base model built from blueprints and measurements, and then, after matching the camera, you adjust the model to fit.
What I'm doing lately is finding a reference turnaround video of the vehicle and sending it through Meshroom to photoscan it, then extracting the cameras it creates, which means I have many cameras already lined up and ready to model against. The mesh from the photoscan is usually very crappy, but the camera matching works quite well and it's quick.
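If it helps, here's a rough Python sketch of pulling the solved cameras out of the cameras.sfm file that Meshroom's StructureFromMotion node writes (it's plain JSON). The field names are an assumption based on the AliceVision SfMData layout and can change between versions, so treat it as a starting point rather than a finished importer:

```python
import json

# Sketch: read the cameras.sfm JSON that Meshroom's StructureFromMotion node
# writes and list each solved camera's position and rotation, keyed by the
# source image. Field names are an assumption based on the AliceVision SfMData
# layout and may differ between Meshroom versions.

def load_cameras(sfm_path):
    with open(sfm_path) as f:
        sfm = json.load(f)

    # Map pose id -> source image path so each camera is easy to identify.
    pose_to_image = {v["poseId"]: v["path"] for v in sfm.get("views", [])}

    cameras = []
    for p in sfm.get("poses", []):
        transform = p["pose"]["transform"]
        cameras.append({
            "image": pose_to_image.get(p["poseId"], "<unknown>"),
            # Values are stored as strings in the file, so convert them.
            "position": [float(c) for c in transform["center"]],
            # 9 values forming a 3x3 rotation matrix; check the row/column
            # order against your target app before trusting it.
            "rotation": [float(r) for r in transform["rotation"]],
        })
    return cameras

if __name__ == "__main__":
    for cam in load_cameras("cameras.sfm"):
        print(cam["image"], cam["position"])
```

The positions and rotations still have to be converted into the target app's coordinate system and units, which is usually the fiddly part.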
iirc ImageModeler got integrated into 3ds Max as the Camera Match thing.
I see. I will definitely try to match the wheels first and then work from there. Thank you for the help!
I see, but what if I don't have access to blueprints from the start? And yes, that video looks pretty neat, but I don't use Max, I use Maya. I guess it's time to learn Max, haha. Nonetheless, thank you so much for your valuable input.
I've got another question. The taillights in the model look like they were added after subdividing the mesh, is that the case? Like, is it normal practice to add details after subdivision when making models for real-time purposes?
SubD does add a lot of extra, unneeded geo. Sometimes it's faster to use SubD, but you need to clean up the extra geo afterwards. Do whatever is needed. There are no rules.
Gotcha. Thank you.
Every automated system we ever tried only got us halfway. At first they look awesome, but after a while of modeling you notice something is off and you have to adjust and match by hand. The three days I mentioned aren't just for matching the image planes; you are doing a lot of modeling during that time. It just takes time to get it right.
Interesting. So far, when I've been able to use photogrammetry as a base, even if it's not necessarily good photogrammetry, the calculated camera positions match really well. The problem is usually finding good sources for the photogrammetry, as you rarely have access to the car, and videos found online are often challenging to work with.
One thing usually not mentioned with this approach is that real-world cameras have barrel distortion, while most virtual cameras are purely linear perspective. Matching is pretty hard without accounting for this. Some renderers can add this distortion, but I haven't seen any realtime viewports that do it.
That is something I have never thought of. Do you have any idea how to overcome such a technical limitation? The picture I've attached looks like a spot-on match, except maybe the roof.
Look up workflows for special effects in movies; they have to match live footage and CG all the time. The Nuke compositing software, for example, is used to match plates.
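In that spirit, one way to sidestep the distortion problem without a full compositing setup is to go the other way around: undistort the reference photo so it really is linear perspective, then match a normal viewport camera against that. A hedged OpenCV sketch; the camera matrix and distortion coefficients are made-up placeholders, since real values would come from a calibration or a lens-profile database:

```python
import cv2
import numpy as np

# Sketch: remove radial (barrel) distortion from a reference photo so that a
# plain linear-perspective viewport camera can be matched against it. The
# camera matrix and distortion coefficients below are placeholders; real
# values would come from cv2.calibrateCamera() or a lens-profile database.

img = cv2.imread("reference_photo.jpg")
h, w = img.shape[:2]

fx = fy = 0.8 * w  # assumed focal length in pixels
camera_matrix = np.array([[fx, 0.0, w / 2.0],
                          [0.0, fy, h / 2.0],
                          [0.0, 0.0, 1.0]])

# (k1, k2, p1, p2, k3) -- placeholder radial/tangential coefficients;
# the sign and magnitude depend entirely on the actual lens.
dist_coeffs = np.array([-0.15, 0.02, 0.0, 0.0, 0.0])

undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("reference_undistorted.jpg", undistorted)
```

The undistorted image can then be used as a regular image plane behind a plain linear-perspective camera.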