Spring 2013 Progress and Summer Plans

This is an overview of progress from March through May, and a summary of research directions for the summer.

Progress summary, Spring 2013

Since March, we've made significant progress toward a fully automatic system for reconstructing 3D stem models from 2D images.

  • Completely redeveloped the point-correspondence algorithm to handle the output of noisy curve detectors, including:
    1. reversals
    2. fragmented 2D curves
    3. gaps
    4. many-to-one correspondences
  • Developed a model for background noise, to allow automatic classification of foreground vs. background.
  • Optimized the 3D reconstruction algorithm, improving the asymptotic running time from O(n^3) to O(n) and achieving speedups of up to 40x in practice.  This is essential, because it will be called thousands of times during inference.
  • Developed a statistical inference engine that produces promising results (see Results below), but it is currently too sensitive to noise and tends to get stuck in sub-optimal solutions.  Our continuing development focuses on improving these results.
  • Developed a data-driven system for proposing matching curves, improving running times by 10-20x (a sketch of this idea follows the list).
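
To make the data-driven proposal idea concrete, below is a minimal sketch (in Python) of one way such a step could work: bucket 2D curve fragments by their endpoints on a coarse grid, so that only fragments whose endpoints fall in the same or neighboring cells are ever proposed as matches, rather than scoring every pair.  The class and function names, the grid heuristic, and the cell size are illustrative assumptions, not the actual implementation.

    from collections import defaultdict
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Curve2D:
        curve_id: int
        points: List[Tuple[float, float]]  # ordered 2D points of one detected fragment

    def _cell(p: Tuple[float, float], cell_size: float) -> Tuple[int, int]:
        # Quantize a 2D point onto a coarse grid.
        return (int(p[0] // cell_size), int(p[1] // cell_size))

    def propose_candidate_matches(curves: List[Curve2D], cell_size: float = 20.0):
        """Return pairs of curve ids whose endpoints land in the same or adjacent
        grid cells; only these pairs are ever proposed as matches."""
        grid = defaultdict(list)
        for c in curves:
            for endpoint in (c.points[0], c.points[-1]):
                grid[_cell(endpoint, cell_size)].append(c.curve_id)

        candidates = set()
        for (cx, cy), ids in grid.items():
            # Gather ids from this cell and its 8 neighbors.
            nearby = []
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nearby.extend(grid.get((cx + dx, cy + dy), []))
            for i in ids:
                for j in nearby:
                    if i < j:
                        candidates.add((i, j))
        return candidates

The point of the sketch is only that candidate generation is cheap and driven by the data; the criteria used in the actual system differ.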

Results

Video: http://www.youtube.com/watch?v=i818pW4FFho

Future Directions - Summer 2013

We currently need to improve the quality of results produced by the inference engine.  The two directions of research we will pursue concurrently are:

  1. Improved models for foreground and background, to increase robustness to noise.
  2. Improved inference to explore the solution space more efficiently.

Background curve model

I've been working on a model for curves that don't move between views (background curves).  Prior to this, I struggled with a posterior that prefers to classify clutter as "bad 3D curves" rather than as "2D noise curves"; tweaking parameters to reduce the number of bad 3D curves caused a significant loss in recall of good 3D curves.  A stronger noise model should make it easier to distinguish foreground from background using the Bayesian posterior.
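
As a rough illustration of how that classification could work (not the actual model), the sketch below scores a detected curve under a simple "static background" model, in which each view's detection is iid isotropic Gaussian noise around a common mean curve, compares that against a foreground score supplied by the 3D curve model, and labels the curve by the larger unnormalized log posterior.  The function names, priors, and the isotropic-Gaussian assumption are placeholders.

    import math

    def log_background_likelihood(points_per_view, sigma_bg=1.0):
        """Static-background model: the curve does not move, so each view's detection
        is treated as iid isotropic Gaussian noise around the mean curve across views.
        points_per_view[v][i] is the i-th 2D point of the curve in view v (same length
        and ordering in every view -- a simplifying assumption for this sketch)."""
        n_views = len(points_per_view)
        log_lik = 0.0
        for i in range(len(points_per_view[0])):
            mean_x = sum(v[i][0] for v in points_per_view) / n_views
            mean_y = sum(v[i][1] for v in points_per_view) / n_views
            for v in points_per_view:
                r2 = (v[i][0] - mean_x) ** 2 + (v[i][1] - mean_y) ** 2
                log_lik += -r2 / (2 * sigma_bg ** 2) - math.log(2 * math.pi * sigma_bg ** 2)
        return log_lik

    def classify_curve(points_per_view, log_foreground_likelihood, prior_fg=0.5):
        """Label the curve by the larger unnormalized log posterior; the foreground
        likelihood is supplied by the 3D curve model (placeholder argument here)."""
        log_post_fg = log_foreground_likelihood(points_per_view) + math.log(prior_fg)
        log_post_bg = log_background_likelihood(points_per_view) + math.log(1.0 - prior_fg)
        return "foreground" if log_post_fg > log_post_bg else "background"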

Revamped index-set estimation

My early experiments with this "background curve" model revealed that I was estimating the index set in a way that overestimated the spacing between points.  The side effect was that the model posterior was overly permissive and would improve continuously as the noise sigma was driven toward zero during training.  Besides being nonsensical, this prevented us from using the noise level to distinguish between foreground and background curves.  I revamped both the point-correspondence algorithm and the index-set estimation algorithm, which appears to have fixed the problem.
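
The failure mode is easy to reproduce with made-up numbers: if the correspondence/index-set step is allowed to absorb almost all of the detector noise, the residuals it reports are nearly zero, and the Gaussian log-likelihood then increases without bound as sigma shrinks, so training happily drives sigma toward zero.  The toy values below are purely illustrative.

    import math

    def gaussian_log_lik(residuals, sigma):
        # Log-likelihood of 1D residuals under N(0, sigma^2).
        n = len(residuals)
        return (-sum(r * r for r in residuals) / (2.0 * sigma ** 2)
                - n * math.log(math.sqrt(2.0 * math.pi) * sigma))

    honest_residuals = [0.8, -1.1, 0.5, 1.3, -0.7]        # realistic detector noise
    overfit_residuals = [1e-4, -2e-4, 5e-5, 3e-4, -1e-4]  # correspondence absorbed the noise

    for sigma in (1.0, 0.1, 0.01, 0.001):
        print(sigma,
              round(gaussian_log_lik(honest_residuals, sigma), 1),    # peaks near the true sigma
              round(gaussian_log_lik(overfit_residuals, sigma), 1))   # keeps increasing as sigma -> 0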

Foreground curve model

I've also started theoretical work on improving the foreground model (i.e., 3D curves).  The current problem is that the deviations between observed points and their corresponding projected model points are not conditionally independent, yet the likelihood function treats them as if they were.  I believe we can model these deviations with a Gaussian process per view without changing the posterior's asymptotic running time.  This should yield significantly higher posteriors for good foreground curves and lower posteriors for bad ones.  A nice side effect is that we will also be able to infer the deviations in 3D position between images.
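
Below is a minimal sketch of what a per-view Gaussian-process deviation model could look like, assuming an RBF kernel over curve arc length plus an independent noise term.  The kernel choice, parameter values, and function names are assumptions rather than the final formulation, and the real model would need a kernel or structure that preserves the running-time properties mentioned above.

    import numpy as np

    def rbf_kernel(t, lengthscale=5.0, signal_var=1.0):
        # Squared-exponential covariance over curve arc length t (shape (n,)).
        d = t[:, None] - t[None, :]
        return signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)

    def gp_log_likelihood(deviations, t, lengthscale=5.0, signal_var=1.0, noise_var=0.25):
        """log N(deviations | 0, K + noise_var * I) for one view and one image axis.
        Deviations that vary smoothly along the curve score much higher than they
        would under an iid Gaussian with the same marginal variance."""
        K = rbf_kernel(t, lengthscale, signal_var) + noise_var * np.eye(len(t))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, deviations))
        return (-0.5 * float(deviations @ alpha)
                - float(np.sum(np.log(np.diag(L))))
                - 0.5 * len(t) * np.log(2.0 * np.pi))

    # Smoothly correlated deviations (plausible for a real 3D curve seen through a
    # slightly wrong model) vs. the same values randomly shuffled.
    t = np.linspace(0.0, 50.0, 40)
    smooth = 0.8 * np.sin(t / 8.0)
    shuffled = np.random.default_rng(0).permutation(smooth)
    print(gp_log_likelihood(smooth, t), gp_log_likelihood(shuffled, t))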

Split/merge/swap move

In addition, I'm working out the math for a split/merge/swap move.  Recall from our earlier conversation that naive merge moves are unlikely to be accepted under Metropolis-Hastings, because the reverse split move has a combinatorial number of possibilities, resulting in a small q() term in the numerator of the acceptance ratio.  I think Zhu/Barbu's Generalized Swendsen-Wang provides an answer here, and I believe I've found a way to apply it to this model.  I have a rough sketch of the algorithm, but I still need to sit down and formalize it.
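
To spell out the acceptance problem with rough numbers (all made up): if a merge fuses k fragments into one curve and the reverse split proposal must pick that exact grouping uniformly from all B(k) set partitions, the log reverse-proposal probability is about -log B(k), which easily swamps a sizeable gain in log posterior.  The sketch below just evaluates the log acceptance ratio under these simplifying assumptions; it is not the move we will actually implement.

    import math

    def bell_number(k):
        # Number of set partitions of k items (Bell number), via the Bell triangle.
        row = [1]
        for _ in range(k - 1):
            new_row = [row[-1]]
            for v in row:
                new_row.append(new_row[-1] + v)
            row = new_row
        return row[-1]

    def log_accept_merge(log_post_merged, log_post_split, k):
        """Metropolis-Hastings: log alpha = log p(x') + log q(x|x') - log p(x) - log q(x'|x),
        with x = split state and x' = merged state.  The forward merge proposal is assumed
        deterministic given the chosen fragments (log q(x'|x) = 0), and the reverse split
        proposal is assumed uniform over all B(k) partitions of the k merged fragments."""
        log_q_reverse = -math.log(bell_number(k))
        return min(0.0, log_post_merged + log_q_reverse - log_post_split)

    # Even when merging improves the log posterior by 20, the reverse-proposal term
    # for k = 20 fragments (log B(20) is roughly 31.6) makes acceptance very unlikely.
    print(log_accept_merge(log_post_merged=-100.0, log_post_split=-120.0, k=20))

This asymmetry is what a Swendsen-Wang-style move is designed to sidestep, since the same stochastic clustering mechanism generates both the forward and reverse proposals.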