
Architectural Interior - Cathedral (Maya and Maxwell)

mattyinthesun
polycounter lvl 4
Hi everyone

First-time poster here, but I've been lurking and looking at the awesome work on this site for some time.

I did this work as a personal project a few years ago now. The idea was to accurately model a real cathedral interior from photographs, using photogrammetry as a basis, with the main modeling work done in Maya. I still have to make a low-res game version of the interior and texture it, but thought I would share nonetheless.

Modeling is a mix of NURBS, polys and sub-d, all done in Maya. Rendering was done in an early beta of Maxwell Render. The photogrammetry stage was done in PhotoModeler - now I would most likely use Autodesk ReCap Pro (no real automatic photo-to-3D solution was out when I did this model), and perhaps switch back to PhotoModeler for the trickier stuff.

cathedral01_detail.jpg

cathedral01_dome.jpg

cathedral01_organ.jpg

Here is a wire over photo, to check the accuracy of the main geo - I did a lot of these overlays as I was working, just to verify things matched as I went.

cathedral01_wirephoto.jpg

Replies

  • fab-camp polycounter lvl 6
    Hey,
    I like that a lot. I might have to take a deeper look at the possibilities of a photogrammetric modeling process. Waiting to see more.
  • Cibo polycounter lvl 10
    How the hell?
    So much precision.
    Wondering what your workflow is. :poly124:
  • mattyinthesun polycounter lvl 4
    As for the process - photogrammetry has changed a lot since I did this project, so here is how I did it back then, and how it could be done now:

    First off, the camera and lens - with photogrammetry (even current tools) it is best to use a decent dSLR with a FIXED lens. If anything changes in the lens between shots - zoom, or even focus to an extent - the focal length and optical center shift, and the camera calibration the solver relies on no longer holds, which will mess up the photogrammetry. Whilst modern methods can negotiate this somewhat, it is still worth using a fixed lens (prime lens). I like a 20mm for architecture.

    Take a bunch of photos all around the object. Try to orbit around, NOT pan, get high and low, total coverage. For the inside of the cathedral, I took about 1000 images if I remember correctly. I only used about 60 of them in the photogrammetry process.

    Photogrammetry works by computing "like points" (corner of a molding, wall corner, any point that can be recognized in three or more pictures) between images, then using the differences in where those points appear in each image to work out the camera positions. The more "like points" that can be found, the more accurate the placement of the cameras in 3D will be - and the points need to be well distributed across an image, not just bunched in the center.
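
    To illustrate the general idea (not what PhotoModeler does internally, just the technique), here is a rough Python/OpenCV sketch of the automatic version of that "like point" matching: detect features in two photos, match them, and estimate the relative camera pose from the matches. The filenames and the focal length are placeholder assumptions.

    ```python
    # Rough sketch of automatic "like point" matching between two photos and a
    # relative camera-pose estimate (OpenCV). Filenames, focal length and
    # principal point are placeholder assumptions.
    import cv2
    import numpy as np

    img1 = cv2.imread("cathedral_shot_01.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("cathedral_shot_02.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and describe features ("like points") in each image
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors between the two images and keep the strongest matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix (needs rough intrinsics - a guessed focal
    # length in pixels and the image center), then recover the relative
    # rotation/translation between the two camera positions.
    focal = 2400.0                               # placeholder, in pixels
    pp = (img1.shape[1] / 2, img1.shape[0] / 2)  # principal point (image center)
    E, _ = cv2.findEssentialMat(pts1, pts2, focal=focal, pp=pp,
                                method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, focal=focal, pp=pp)

    print("relative rotation:\n", R)
    print("relative translation direction:\n", t.ravel())
    ```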

    Back when I did this cathedral, those "like points" had to be placed by hand across all images. The software I used (PhotoModeler) had a slick UI for placing them, so it wasn't too laborious - you would place say 40 points, then run the solution and check your error deviation. Nowadays you just load in a ton of images (preferably in some logical order), and most photogrammetry software can automatically find points that match between images. The user can go in and add additional points, which can be helpful for modeling work - having locator points that mark the corner of a molding, the tip of an ornate column, etc.

    Below is an image that shows a midway point in the placement of cameras and locators in PhotoModeler:
    photomodeler01.jpg

    You can see the camera positions there, and the points and lines that were created in 3D space from the images. These were my modelling aids once I got all this into Maya.

    To get all this data into Maya, I exported CSV files of the camera locations and camera information (film back size, image name, etc.), and wrote a quick MEL script to import it all into Maya. I then wrote another MEL script to hotkey-tab through all the camera locations. What was pretty cool is that I now had all of the cameras in the right positions in Maya, with the right photographs loaded in as background images, and all the point locator details.
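
    My original script is long gone, but it did roughly the following - sketched here in Maya Python rather than MEL, and the CSV column layout and file path are made up for the example:

    ```python
    # Rough reconstruction of the CSV camera import, in Maya Python (the
    # original was MEL). Column layout and file path are assumptions.
    import csv
    import maya.cmds as cmds

    CSV_PATH = "C:/cathedral/cameras.csv"
    # Assumed columns: name, tx, ty, tz, rx, ry, rz, focal_mm,
    #                  haperture_in, vaperture_in, image

    with open(CSV_PATH) as f:
        for row in csv.DictReader(f):
            # Create a camera with the solved focal length and film back
            cam, shape = cmds.camera(focalLength=float(row["focal_mm"]),
                                     horizontalFilmAperture=float(row["haperture_in"]),
                                     verticalFilmAperture=float(row["vaperture_in"]))
            cam = cmds.rename(cam, row["name"])
            shape = cmds.listRelatives(cam, shapes=True)[0]

            # Place it at the solved position/orientation
            cmds.xform(cam, worldSpace=True,
                       translation=[float(row["tx"]), float(row["ty"]), float(row["tz"])],
                       rotation=[float(row["rx"]), float(row["ry"]), float(row["rz"])])

            # Load the matching photograph as an image plane on that camera
            # (flag names vary slightly across Maya versions)
            cmds.imagePlane(camera=shape, fileName=row["image"])
    ```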

    From here it was just a question of building the geometry. This was mainly done with NURBS and straight polys, with some sub-d work - mainly the column tops.

    Nowadays things are quite different for photogrammetry. One of my favorite tools is Autodesk ReCap Pro, which is the pro version of their 123D Catch software. This will, pretty much automatically, take a series of your photographs, find matching points, line up the cameras, build a rough mesh of what is in the photographs, texture it by projecting the photographs onto it, and kick it out as an FBX that you can load into Maya - camera positions and all. In short: the photogrammetry for the cathedral took about two weeks of manual work, just in the evenings. With the new tools available, it would most likely be less than an hour of manual work plus an hour or two of computation time, and I would get a rough mesh out of it, not just some points and lines. Not bad, not bad at all.

    Here is a shot of the project in Maya - you can see the cameras came through. I did a lot of modelling in those camera viewports. With the correct background image in place, it really just became a job of extruding curves until stuff matched up over the images. Tab through to another camera looking at the same spot from a different angle to check everything looks OK, and off you trot.

    maya_cathedral.gif
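
    The hotkey-tab-through-cameras trick mentioned above could look something like this in Maya Python (again, the original was MEL; I'm assuming the imported cameras are the only perspective cameras besides the default):

    ```python
    # Rough sketch of cycling the active viewport through the imported
    # photogrammetry cameras (Maya Python; the original was MEL).
    import maya.cmds as cmds

    _index = {"i": 0}

    def cycle_cameras():
        """Look through the next imported camera in the active model panel."""
        cams = sorted(cmds.listCameras(perspective=True))
        # Skip Maya's default persp camera; assume the rest were imported
        cams = [c for c in cams if c != "persp"]
        if not cams:
            return
        cam = cams[_index["i"] % len(cams)]
        _index["i"] += 1
        panel = cmds.getPanel(withFocus=True)
        if cmds.getPanel(typeOf=panel) == "modelPanel":
            cmds.lookThru(panel, cam)
            print("Looking through:", cam)

    # Bind cycle_cameras() to a key via Windows > Settings/Preferences > Hotkey Editor
    ```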

    So, fast forward 5 years or so, and this is now what you get:

    I recently shot some pictures of a URAL motorcycle and sidecar as a modelling project. Before I left their garage area, I took a quick movie with my dSLR walking around the bike. I extracted every 10th frame from that movie and loaded those into Autodesk ReCap. Those got sent off to the cloud, and a few hours later I got my FBX back, with great camera alignment and a rough 3D model of the bike, with all the cameras ready to load into Maya and start modelling from:

    (photo and the rough autogenerated model in modo)
    ural_photo_to_model.jpg

    cameras imported into Maya:
    ural_maya.jpg
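
    The frame-extraction step above is easy to script; for example, something like this in Python with OpenCV (the video path and output folder are placeholders):

    ```python
    # Rough sketch of pulling every 10th frame out of the walk-around movie
    # (Python + OpenCV). The video path and output folder are placeholders.
    import os
    import cv2

    VIDEO = "ural_walkaround.mov"
    OUT_DIR = "ural_frames"
    STEP = 10  # keep every 10th frame

    os.makedirs(OUT_DIR, exist_ok=True)
    cap = cv2.VideoCapture(VIDEO)
    frame_no = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % STEP == 0:
            cv2.imwrite(os.path.join(OUT_DIR, "frame_%04d.jpg" % saved), frame)
            saved += 1
        frame_no += 1
    cap.release()
    print("Saved %d frames for photogrammetry" % saved)
    ```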

    Maya handles the camera data much more reliably than Modo, which for this project is a big deal. For some reason the cameras, once in Modo, didn't line up correctly with the model, however in Maya they did. I imagine it is something with the film back settings not coming through properly, but I have no idea. Anyhow, this modeling project is on hold for a little bit.

    Anyhow, hope that wasn't too boring. I honestly see photogrammetry as the way forward for a lot of realistic modeling - at least to help get rough proportions down. I have used automatically generated photogrammetry geo straight out of the tool before - rock surfaces and so on - and it works quite well for that. Another GREAT use is cloth - take photos of your cloth object (pants, cloth over a box, etc.), generate the 3D mesh for it automatically, run it through ZRemesher in ZBrush, do a little cleanup work in there, and you have some really good-looking cloth much quicker than you could sculpt it manually from a blank start.
  • JavierP
    Very nice, I like it.