I discovered photogrammetry while browsing online, and was immediately intrigued by the subject. So, research time!
First of all, the principle of operation: you basically take tons of photos of an object from different angles, feed the pictures to a program, and it spits out a 3D model of the object with the corresponding texture!
Well it’s certainly more complicated than that, but oh well…
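One of the core ideas behind all of this is surprisingly simple geometry: if the same feature appears in two photos taken from known positions, you can intersect the two sight lines to figure out where that feature sits in space (real programs do this for thousands of features at once, in 3D, while also estimating the camera positions). Here's a toy 2D sketch of that triangulation step in plain Python — the function and setup are made up for illustration, not taken from any actual photogrammetry package:

```python
import math

def triangulate(cam1, angle1, cam2, angle2):
    """Intersect two 2D sight lines to locate a feature point.

    cam1, cam2: (x, y) camera positions; angle1, angle2: bearing
    toward the feature, in radians, measured from the x-axis.
    """
    # Unit direction of each camera's sight line
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 (2x2 system, Cramer's rule)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("sight lines are parallel, cannot triangulate")
    bx, by = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    # Walk t1 units along the first sight line to reach the feature
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Two "cameras" on the ground, both sighting the same feature:
point = triangulate((0, 0), math.radians(45), (10, 0), math.radians(135))
print(point)  # the feature sits at roughly (5, 5)
```

This is also why you need photos from many angles: a feature seen from only one viewpoint gives you a single sight line, not an intersection.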
What about real-life applications? Why would anyone ever want a 3D model of a specific object?
Well, there are tons of applications:
- Save yourself the time of modeling a 3D object: just make it in real life and photograph it, done. You don't have any of the expensive equipment? It's fine, just edit the object on the computer later!
- Need to create a scene for a 3D animation or video game? Just capture real life. And you're not even restricted to what already exists: capture a plant, a wall, and a piece of floor, then bring them together to create a custom scene!
- You can also create 3D maps of places from pictures taken with a drone or an RC plane.
A very simple program that does this is 123D Catch. It lets you capture photos with your phone, runs all the calculations on remote servers, and sends you back the model. Cool. But I tried it a couple of times and the results were never that awesome… probably because of the poor lighting I used and the fact that the photos came from my phone. Some cool results can be obtained, though, as you can see on the site.
So then I wanted to play around some more and found a program named “Agisoft Photoscan”.
After seeing some of the results it could produce online, I decided to give it a try.
I went into my garden with my mobile phone on a cloudy day (very important) and took a bunch of photos of various objects and scenes, making sure I covered them from all sorts of angles:
The cloudy day was very helpful: the uniform gray mass of clouds diffused the sunlight in such a way that there were no shadows, which would have confused the program.
Running the pictures through Photoscan led to mixed results; some objects were recognizable, but that was it, while others turned out really nice, in my opinion:
Here I took pictures of this bench:
And the 3D model that was generated wasn’t that bad (for a home program and phone pictures, anyway):
As you can see, the underside of the bench wasn’t modeled all that well, simply because it wasn’t visible in any of the pictures. As for the tree, the transparency between the leaves wasn’t recognized at all and got replaced with the white sky behind it. The bushes are a little bulky because they were moving between shots due to the light wind.
I then tried to scan a transparent glass table:
But it went terribly wrong because of the transparency and the reflections on the glass top… the program got confused about what was where:
Now for the successful try: I photographed a little Buddha sitting under a tree, and it turned out very well! Here is the original image taken with my phone (a bit blurry, with poor colors):
And the resulting 3D model:
Not bad, but I thought it could be improved with better image quality, since my phone takes kinda blurry pictures. So I repeated the process with a proper DSLR, which gave noticeably better results, especially on the colors:
“Now what?”, you might ask. Well, it’s possible to play with the model however you wish. For example, why not run a fluid simulation and pour some water on top of it all?
Or, more simply, place random virtual objects in the scene?
But remember, this is a 3D model, so you can look at it from different angles.