
Tag Archives: c++

Over Christmas I had a bit of spare time on my hands, so I decided to get to grips with a few new C++ libraries. I already had some experience with wxWidgets, a cross-platform widget toolkit, from my Hugin/GSoC endeavours, but I wanted to write my own GUI from scratch. Similarly, I'd played with libgphoto2/gPhoto2 a little, using Perl scripts to make timelapse movies (of my dog) with my Canon G7. These tools and libraries let you control a wide range of cameras over USB from UNIX-like operating systems.

When a quick trawl for Linux software to create timelapse movies came up with nothing, I decided to use these two libraries to write my own. This is what I've come up with so far.


I based this project on the notebook example that comes with the wxWidgets source code. It's fairly easy to add tabs (or pages); at the moment the main one (above) shows the timelapse settings – interval, max time/frames, start, stop and preview capture. There's also an option to change the number of frames taken at each interval – useful if you want to make an HDR timelapse film. The second tab (below) shows the camera settings – ISO, shutter speed, aperture etc. The options available here depend on your camera – an SLR will have lots to choose from, whereas a point-and-shoot might have very little, if anything.


To start capturing images, you just have to set a few values on the timelapse page (e.g. max frames/run time – leaving these at zero will make it run until your camera or laptop runs out of battery) and hit start. You can set the camera up using the second tab or just use the camera's current values (which are the values loaded on initialisation). Hit stop when you're done and your working directory should be full of images. I then use mencoder or ffmpeg to create the video, though I'm thinking of incorporating this stage into another tab. Other things I may add include some HDR creation/tone-mapping functions, a preview tab showing a grid of captured images, and some resize/scaling functions. When I've ironed out a few minor bugs and added some new functions I'll probably start a project at SourceForge.

The weekend before last my friend Sarah threw a big party in a Chapel she’d hired close to Glastonbury in the west of England. At about 2am I decided it would be a good idea to get my laptop, camera and new Gorillapod out and give the program a test drive. After 5 more hours of partying, here’s what I came up with. With hindsight though, the real success is that none of my gear got trashed!


(Btw that’s me in the rabbit ears)


You can download the code via subversion from here:

Compile with ./configure, then make, then make install. Good luck!

My HDR panoramic workflow goes something like this: stitch the first bracketed set, save the image and the .PTO. Open the .PTO file with a text editor and replace the images with the next ones in the set. Load it into Hugin, stitch and save the image, then edit the .PTO again and swap in the final set of images for the last stitch (I usually bracket at 0EV, -2EV and +2EV). This results in three panoramas which should line up perfectly, since the control points – and thus the warping of all images – are identical. The three images are then loaded into Dynamic Photo HDR or Qtpfsgui, where they're aligned (without the need for any intervention) and merged to HDR prior to tone mapping. I always stitch first and tone map second, so I get a global rather than local preview of the tone-mapped output.

The other day I made a mistake when editing the .PTO files and failed to replace a couple of the images. Only after I'd stitched the pano did I realise my mistake. So I decided it was time for a little hack to automate the image replacement process from within Hugin.

I’ve added a few extra buttons to the images panel, ‘Bracket up’ and ‘Bracket down’:

images panel bracket buttons

Once you've finished your first pano, simply hit the 'Bracket up' button. My function scans each image file name for a number, increments it by one, then searches the original image's directory for the new image. If it finds all the images in the next series it replaces the current ones and updates the list, but does not alter any of the panorama's other settings. Hit 'Bracket down' and it does exactly the same but looks for the previous images in the set. If it succeeds a message will pop up:

all images found

Open up the preview window and you should see a darker or lighter pano with all settings intact. Hit stitch, repeat for all bracketed sets, stitch again, and you're ready for HDR – and no messy editing of .PTO files in sight.

You can download this patch from here. I’ve tested it under Ubuntu and Centos using SVN 3555. For Ubuntu, I had to install libboost-regex and libboost-regex-dev packages; under Centos I had to add some Boost suffixes to the relevant section in FindBoost.cmake (since I’d built Boost from source). Since Boost is a prerequisite for building Hugin (it uses the Thread library), other platforms might not need any modification.

Let me know if this is useful to you – it has definitely shortened my workflow in the last few days.

Celeste is my little contribution to the Hugin project. About two days before the GSoC deadline I noticed that Hugin was on the project list. I looked at the list of project suggestions and noticed one titled 'sky identification'. A bit of Googling around subjects like 'texture discrimination' and 'Gabor filtering' and I was ready to propose a solution.

The problem is this: photo-stitching software relies on objects within different images remaining in the same place. Control points are used to match corresponding positions in each photo. However, panoramas are often captured over a number of minutes, during which non-static features such as clouds, water, and other objects (usually influenced by wind) may move significantly, creating problems for automatic alignment tools. If a control point is added to a cloud in photo A, the same cloud may have moved by the time photo B is taken, so the corresponding control point on that cloud in photo B will actually be in the wrong position.

To tackle this problem, I used a classification algorithm called a Support Vector Machine. SVMs are binary classifiers, so the objective is to use one to make a simple call: is this control point on a cloud or not? The SVM uses textural and colour information around the control point to make this judgement, based on what it has been trained to recognise. The training process involves providing labelled examples of each category (cloud/non-cloud) so the SVM can learn to discriminate between them. Simple, eh?

OK, enough theory – here it is in action. You'll need a recent version of Hugin (I'm using SVN version 3545 on Ubuntu here) and you can download the examples from here. Load some images and generate control points. There are a few ways of running Celeste; I'll show you how to do so via the control point editor panel first.

pre-celeste cp editor

Select a pair of images. For these two in the example, there are 15 control points connecting the two images, 6 of which are on clouds – we want to get rid of these. Hit the ‘Run Celeste’ button and wait a few seconds. A message box should pop up saying how many control points have been removed:

celeste done

Click OK then have another look at the control points. Of the 6 on clouds, 5 have been removed. In this example none of the non-cloud control points have been removed:

post-celeste cpeditor

The only cloud control point remaining is actually on an aircraft exhaust trail (which, being a fairly straight line, is not very cloud-like). So we do pretty well here – 14 of the 15 control points correctly classified is about 93% accuracy. This is a good example; under stringent 10-fold cross-validation, Celeste was 82% accurate.

You can also run Celeste from the images panel. Simply select the images that you want to run Celeste on (or select all of them) and hit the ‘Run Celeste’ button again. Exactly the same thing should happen – a message will pop up saying how many control points have been removed:

celeste images panel

There’s one more way of running it – right at the start on the assistant panel. After control points have been generated, Celeste will scan all the images and remove cloud-like control points from the whole set. To run Celeste this way open up the preferences panel and activate the ‘Automatically run Celeste..’ option. Then hit ‘Align’ on the assistant panel, and Celeste will run after control point generation:

celeste preferences

And that's it! There are a few other options on the preferences tab; to alter the sensitivity you can adjust the threshold value. The SVM generates a score for each control point, where greater than 0.5 indicates a cloud. If you want to remove more control points, reduce the threshold (try 0.4 to begin with); raise it (e.g. 0.6) if you want to remove fewer. If there are lots of control points close to the image border, you may have more luck using the small filter size. In most cases you won't need to adjust this though.

I found a great planetoid on Flickr – check this out. Fpsurgeon did a great job on it but I think he’ll have an easier time using Hugin/Celeste:

“The biggest pain in this shot was shooting the pano quickly enough that the clouds didn’t have the chance to drift much, since that makes stitching a pain, and I had to manually choose the control points since autopano-sift-c wanted to put them all over the clouds.”

Special thanks to Yuv Levy, Harry van der Wolf, Simon Prince and Google for making Celeste happen. Cheers 🙂