I just took a glance at Photosynth and am really impressed with how computer vision and image processing techniques have been applied to create a truly unique application.
The basic idea is to take a pile of photos that are related to each other somehow (imagine taking zillions of pictures of the Taj Mahal from tons of different places), find similar features across all the images, and reconstruct a mock 3D space that shows the spatial relation between all of your photos. This is really cool, since you could create a very interesting photo tour from your photo collection in a navigable 3D space.
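To give a rough sense of the "find similar features" step, here is a minimal sketch using OpenCV's ORB detector and brute-force matcher. This is only my illustration, not how Photosynth actually does it; the real system presumably relies on much more robust feature matching plus full 3D reconstruction across many photos, and the file names below are placeholders.

# Minimal sketch of matching features between two photos of the same
# scene -- illustrative only, not Photosynth's actual pipeline.
import cv2

img1 = cv2.imread("taj_mahal_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("taj_mahal_view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors for each photo.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two photos; good matches suggest the
# images show overlapping parts of the same scene.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate feature correspondences found")
# From correspondences like these, a structure-from-motion pipeline
# would estimate camera positions and a sparse 3D point cloud.

Chaining this kind of matching across an entire photo collection is what would let the system place every picture relative to the others.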
Oddly enough, I had been trying to come up with a similar idea: linking video stills in QuickTimeVR movies and using QuickTimeVR's linkable features to provide clickable hotspots that would take you to another photo of the same scene. This is far slicker, though, and if it works with very little intervention from the user beyond pointing it at a pile of photos and letting it do its job, that would be great.
However, there are still caveats. The whole process currently takes hours or days, and the current technology preview covers only a pre-rendered project. The true acid test, in my opinion, will be the ability to just point it at a folder of pictures and have it do its job with as little human intervention as possible. That is not a trivial problem, but I'm sure we'll see something interesting, especially since the project has two very well-known researchers in the computer vision field behind it. I'm really looking forward to the results of their labor. My last question is how many of the technologies behind this are already patented. It'd be great if an OSS implementation inspired by this project could be made, but patents are a sticky problem.