Re: [Insight-users] Shape prior level sets: question about MAPCostFunction
Zachary Pincus
zpincus at stanford.edu
Wed Feb 2 15:32:39 EST 2005
Karl,
I hope you don't mind, but I'm cc'ing the insight-users list in case
anyone else is interested in my very preliminary findings.
> Among others we are also using the ITK implementation of Michael
> Leventon's work. Therefore I have been very interested in your
> thoughts about the different itk classes which are used for the shape
> prior level set segmentation. I share your opinion, especially
> about the itkShapePriorMAPCostFunction class, and would really be very
> interested in your modifications, if you are willing to code a new
> class. Implementing the modified cost function in our framework and
> comparing the results/performance would definitely be very interesting
> for us.
I've been looking at the ShapePriorMAPCostFunction class to try to
determine what I could do to make the shape-model part of the level set
segmentation a bit more robust. (This class computes the term described
in section 3.2 of Leventon's paper.)
I had found that even in my toy examples, there were knife-edge thin
margins for success, depending on the initial level set and parameter
choices, so I wanted to see if I could beef that up. So I looked at
each term in isolation and tried to figure out how to make them
individually robust, so that in combination they would hopefully be
much less initialization-dependent. Note carefully that I have been
optimizing for the type of images I use in my work (microscopy) -- some
or all of these findings may not apply to your setting. However, I
think some of these observations are general enough to be useful.
Firstly, I found that the "Inside Term" as defined by Leventon and
calculated by ITK can be a potent inflationary factor. Basically, this
term tries to keep the shape model *outside* of the evolving curve.
This can lead to a nasty feed-forward loop: if the shape model starts
overlapping the current curve, then the optimization of the MAP
function will drive the shape to expand in that region (if possible).
But, the current curve is then influenced by the MAP shape model, so it
likewise expands in that region, setting off this feed-forward chain.
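For context, the stock term (per the class documentation reproduced in
the attached header) essentially counts active-region pixels that are
inside the current curve but outside the shape. A minimal sketch of that
computation, with activeRegion, featureImage and shapeFunction standing
in for the quantities the cost function already holds -- this is my
paraphrase, not the verbatim ITK source:

  // Count active-region pixels inside the evolving curve (node value <= 0)
  // but outside the shape model (signed distance > 0).
  MeasureType counter = 0.0;
  typename NodeContainerType::ConstIterator iter = activeRegion->Begin();
  while ( iter != activeRegion->End() )
    {
    NodeType node = iter.Value();
    if ( node.GetValue() <= 0.0 )  // inside the current curve
      {
      typename ShapeFunctionType::PointType point;
      featureImage->TransformIndexToPhysicalPoint( node.GetIndex(), point );
      if ( shapeFunction->Evaluate( point ) > 0.0 )  // outside the shape
        {
        counter += 1.0;
        }
      }
    ++iter;
    }
  return m_Weights[0] * counter;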
I decided that for my work, I would prefer to change the definition of
the inside term from Leventon's work. He states that P(current curve |
estimated shape) is inversely proportional to the volume of the current
curve outside the estimated shape. This is just one possible model for
that probability. I noticed that both the evolving curve and the
estimated shape are signed distance functions, so they can be directly
compared. As such, I use the L1-norm of the difference between these
functions (in the narrow band active region) as a similarity metric,
and use that as a proxy for the probability. L2 and L-infinity norms
(RMSD and maximum deviation) seem good too. In this way, I encourage
the shape model to "stay near" the evolving curve. This is bad when the
curve is very small compared to its desired final size, however -- in
these regimes Leventon's initial suggestion is better. I am examining
ways to switch between the two probability definitions based on the
size of the curve.
Also, I noted that the Inside Term was proportional to the number of
pixels in the active set, so as the curve expands, this term carries
proportionally more weight, which made the feed-forward effects above
worse. I took the step of normalizing the final value by the number of
pixels in the active set. This step may not precisely map on to a
probabilistic interpretation, but it seems pragmatically useful.
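Here is a minimal sketch of the modified inside term, using the same
placeholder names as above (activeRegion, featureImage, shapeFunction);
again a paraphrase of my code, not something to drop in verbatim:

  // Mean L1 difference between the evolving level set and the shape model's
  // signed distance function, over the narrow-band active region.
  MeasureType sum = 0.0;
  MeasureType count = 0.0;
  typename NodeContainerType::ConstIterator iter = activeRegion->Begin();
  while ( iter != activeRegion->End() )
    {
    NodeType node = iter.Value();
    typename ShapeFunctionType::PointType point;
    featureImage->TransformIndexToPhysicalPoint( node.GetIndex(), point );
    double curveValue = node.GetValue();                   // SDF of the current curve
    double shapeValue = shapeFunction->Evaluate( point );  // SDF of the shape model
    sum += vnl_math_abs( curveValue - shapeValue );
    count += 1.0;
    ++iter;
    }
  // Normalize by the number of active pixels so the term doesn't gain weight
  // simply because the curve (and hence the active set) has grown.
  return m_Weights[0] * ( sum / count );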
Next, I looked at the "Gradient Term" which calculates how well a shape
model fits to an image. This term assumes that there is a gaussian
distribution of edge intensities (i.e. 1-FeatureImage) normal to image
edges, and tries to fit the shape model to this (i.e. make the
zero-level-set of the shape model line up with the peak of the
gaussian). Basically, this is supposed to encourage the shape model to
sit squarely on image edges.
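In other words (my paraphrase of the idea, not the exact ITK code), the
per-pixel contribution is roughly a squared mismatch between the
gaussian height predicted from the shape model's signed distance and the
observed edge strength:

  // For each active-region pixel: compare the observed edge strength against
  // the height of a zero-mean gaussian evaluated at the shape model's signed
  // distance, and accumulate the squared mismatch.
  double predicted = m_GaussianFunction->Evaluate( shapeValue );  // gaussian height at this signed distance
  double observed  = 1.0 - featureValue;                          // edge strength at this pixel
  fitError += vnl_math_sqr( predicted - observed );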
The most useful modification that I did make was to rescale the
(1-FeatureImage) values so that they were in the range of heights of a
gaussian curve. Note that (1-FeatureImage) is in the range [0,1] but a
unit gaussian is [0, 0.4]. Practically speaking, this means that there
was an asymmetry in error when the shape model was misaligned with the
center of the gaussian: too little edge intensity (e.g. 1-FeatureImage
is zero where the gaussian would expect it to be peaked) would be
penalized a maximum of 0.4, while too much edge intensity was penalized
at a maximum of 0.6 per pixel. This difference effectively drove the
shape model to an edge-poor region of the image, the penalty for
misalignment to an edge was higher than the penalty for misalignment to
a not-edge. By rescaling, this was alleviated.
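Concretely, the rescaling is just a multiplication by the peak height of
the unit gaussian before making the comparison above (sketch):

  // Scale the edge map into the range of unit-gaussian heights, so "perfect
  // edge" maps to the gaussian's peak (1/sqrt(2*pi) ~= 0.3989) rather than 1.
  const double gaussianPeak = 1.0 / vcl_sqrt( 2.0 * vnl_math::pi );
  double observed = gaussianPeak * ( 1.0 - featureValue );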
Another problem was the assumption that the 1-FeatureImage values had
unit variance. With a GradientMagnitudeRecursiveGaussianImageFilter
sigma of 1 and using a bounded reciprocal to calculate the feature
image, the 1-FeatureImage values have a true variance of about 2.7. By
assuming that the variance was 1, the MAP cost function basically put a
floor on how well the gaussian could fit. This made the "proper fit"
much more unstable and sometimes unattainable: if the 1-FeatureImage
values had too-wide a spatial spread (too large a sigma in the
derivative-of-gaussian filter), then it would be lower-error overall to
push the shape model to an edge-poor region where the tails of the
gaussian fit well, instead of having the shape model centered in the
middle of the edge, where the too-narrow gaussian peak fit well, but
the tails fit very poorly.
I thus modified the MAP cost function to properly estimate the
variance, and then correct for that. The variance estimate is a bit
funny, because you aren't really estimating variance from some sampled
population. Instead, you have input values (the values of the shape
model), and output values (the 1-FeatureImage pixel intensities at
those points). Plotted on the Cartesian plane, these look a bit like a
non-normalized gaussian. (Cf. Leventon's Figure 7.) So the way to find
the variance is to treat the input values as samples *weighted by the
output values*, and take the variance of those weighted samples.
Think of it like this: Say the 1-FeatureImage had value 10 at shape
model value 1: then we would pretend that there were ten "1" samples
recorded. If the 1-FeatureImage had value 2 at shape model value -4, we
would record two "-4" samples. Then we would take the variance of these
samples. Now, there's a formula for weighted variance that does this
directly and in one pass, so that's what I really do.
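Here is a sketch of that one-pass weighted-variance estimate as a
standalone helper (my own naming, not part of ITK; it only needs
<vector>):

  #include <vector>

  // Treat each shape-model value x_i as a sample weighted by the edge strength
  // w_i = (1 - FeatureImage) at the same pixel, and compute the variance of
  // those weighted samples in one pass as E_w[x^2] - (E_w[x])^2.
  double WeightedVariance( const std::vector<double> & shapeValues,
                           const std::vector<double> & edgeWeights )
  {
    double sumW = 0.0, sumWX = 0.0, sumWXX = 0.0;
    for ( unsigned int i = 0; i < shapeValues.size(); i++ )
      {
      const double x = shapeValues[i];
      const double w = edgeWeights[i];
      sumW   += w;
      sumWX  += w * x;
      sumWXX += w * x * x;
      }
    const double mean = sumWX / sumW;
    return sumWXX / sumW - mean * mean;
  }

The shape-model values can then be divided by the resulting standard
deviation before the unit gaussian is evaluated, which removes the
unit-variance assumption.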
I also noted that with too few "layers" in the active set (too narrow a
narrow band), the shape fit was rather unstable: by only seeing a
"truncated" gaussian under the narrow band, the optimizer has a much
harder time finding and staying at the true gaussian peak, and can
slide more easily to local optima. This is the case regardless of
whether you estimate true variance or assume a unit gaussian. Basically
the problem is that the metric is "fit-to-gaussian" but the image
provides a truncated gaussian. You could modify the fit term to try to
fit to truncated gaussians, or to be more nonparametric. I haven't tried
any of this -- I just widened the narrow band with a larger number of
layers, so that the optimizer is looking at the full gaussian.
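For the record, widening the band just means asking the sparse-field
solver for more layers; something like the following (4 is an arbitrary
example value, and the filter name assumes the usual shape-prior
geodesic active contour pipeline):

  // Ask the sparse-field level set solver to maintain a wider narrow band,
  // so the cost function sees more of the gaussian edge profile.
  geodesicActiveContour->SetNumberOfLayers( 4 );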
Finally, I also scale this value by the size of the active region, to
keep it from blowing up as the region gets large.
I've attached my modifications, so you can look at them directly. The
code isn't perfectly clean, but it's a start.
Zach Pincus
Department of Biochemistry and Program in Biomedical Informatics
Stanford University School of Medicine
-------------- next part --------------
#ifndef __myShapePriorMAPCostFunction_h
#define __myShapePriorMAPCostFunction_h
#include "itkShapePriorMAPCostFunctionBase.h"
#include "itkGaussianKernelFunction.h"
namespace itk
{
template <class TFeatureImage, class TOutputPixel>
class MyShapePriorMAPCostFunction :
public ShapePriorMAPCostFunctionBase< TFeatureImage, TOutputPixel>
{
public:
/** Standard class typedefs. */
typedef MyShapePriorMAPCostFunction Self;
typedef ShapePriorMAPCostFunctionBase<TFeatureImage,TOutputPixel> Superclass;
typedef SmartPointer<Self> Pointer;
typedef SmartPointer<const Self> ConstPointer;
/** Method for creation through the object factory. */
itkNewMacro(Self);
/** Run-time type information (and related methods). */
itkTypeMacro( MyShapePriorMAPCostFunction, ShapePriorMAPCostFunctionBase );
/** ParametersType typedef.
* It defines a position in the optimization search space. */
typedef typename Superclass::ParametersType ParametersType;
/** Type of the feature image representing the edge potential map. */
typedef typename Superclass::FeatureImageType FeatureImageType;
typedef typename Superclass::FeatureImagePointer FeatureImagePointer;
/** Type of the return measure value. */
typedef typename Superclass::MeasureType MeasureType;
/** Dimension constant. */
itkStaticConstMacro( ImageDimension, unsigned int, TFeatureImage::ImageDimension);
/** Type of pixel used to represent the level set. */
typedef typename Superclass::PixelType PixelType;
/** Type of node used to represent the active region around the zero set. */
typedef typename Superclass::NodeType NodeType;
/** Type of container used to store the level set nodes. */
typedef typename Superclass::NodeContainerType NodeContainerType;
/** Type of the shape signed distance function. */
typedef typename Superclass::ShapeFunctionType ShapeFunctionType;
/** Type of the array for storing shape parameter mean and standard deviation. */
typedef Array<double> ArrayType;
/** Set/Get the array of shape parameters mean. */
itkSetMacro( ShapeParameterMeans, ArrayType );
itkGetMacro( ShapeParameterMeans, ArrayType );
/** Set/Get the array of shape parameters standard deviation. */
itkSetMacro( ShapeParameterStandardDeviations, ArrayType );
itkGetMacro( ShapeParameterStandardDeviations, ArrayType );
/** Set/Get the weights for each term. Default is a vector of all ones.
* The weights are applied to terms in the following order:
* LogInsideTerm, LogGradientTerm, LogShapePriorTerm and LogPosePriorTerm.*/
typedef FixedArray<double,4> WeightsType;
itkSetMacro( Weights, WeightsType );
itkGetConstReferenceMacro( Weights, WeightsType );
/** Compute the inside term component of the MAP cost function.
* In particular, the method sums the number of pixels inside
* the current contour (defined by nodes of the active region
* that are less than zero) which are outside the shape
* specified by the input parameters. */
virtual MeasureType ComputeLogInsideTerm( const ParametersType & parameters ) const;
/** Compute the gradient term component of the MAP cost function.
* In particular, this method assumes that ( 1 - FeatureImage ) approximates
* a Gaussian (zero mean, unit variance) along the normal of the evolving contour.
* The gradient term is then given by a Laplacian of the goodness of fit of
* the Gaussian. */
virtual MeasureType ComputeLogGradientTerm( const ParametersType & parameters ) const;
/** Compute the shape prior component of the MAP cost function.
* In particular, the method assumes that the shape parameters come from
* independent Gaussian distributions defined by the ShapeParameterMeans
* and ShapeParameterStandardDeviations arrays. */
virtual MeasureType ComputeLogShapePriorTerm( const ParametersType & parameters ) const;
/** Compute the pose prior component of the MAP cost function.
* In particular, this method assumes that the pose parameters are
* uniformly distributed and returns a constant of zero. */
virtual MeasureType ComputeLogPosePriorTerm( const ParametersType & parameters ) const;
/** Initialize the cost function by making sure that all the components
* are present. */
virtual void Initialize(void) throw ( ExceptionObject );
protected:
MyShapePriorMAPCostFunction();
virtual ~MyShapePriorMAPCostFunction() {};
void PrintSelf(std::ostream& os, Indent indent) const;
private:
MyShapePriorMAPCostFunction(const Self&); //purposely not implemented
void operator=(const Self&); //purposely not implemented
ArrayType m_ShapeParameterMeans;
ArrayType m_ShapeParameterStandardDeviations;
WeightsType m_Weights;
typename GaussianKernelFunction::Pointer m_GaussianFunction;
};
} // end namespace itk
#ifndef ITK_MANUAL_INSTANTIATION
#include "myShapePriorMAPCostFunction.txx"
#endif
#endif
-------------- next part --------------
A non-text attachment was scrubbed...
Name: myShapePriorMAPCostFunction.txx
Type: application/octet-stream
Size: 9000 bytes
Desc: not available
Url : http://public.kitware.com/pipermail/insight-users/attachments/20050202/5636e4d2/myShapePriorMAPCostFunction.obj