[Insight-users] grey value binning in mutual information

Lydia Ng lng at insightful.com
Mon, 1 Mar 2004 13:21:02 -0800


Hi Jorn,

There are now several flavors of mutual information in ITK.
I can provide details on the two I am familiar with.

Typically there are two steps in computing mutual information:
[1] Estimating the probability density function (pdf) for the moving
image intensity, the fixed image intensity, and the joint moving/fixed
image intensity.
[2] From the pdf estimates, estimating the marginal and joint entropies.

----------------------
MutualInformationImageToImageMetric is based on Viola and Wells's paper.

This implementation *does not* use histograms at all.

[1] The pdf is computed using Parzen windowing. Roughly speaking, the
pdf at a particular value (x say) is computed as a weighted sum of the
samples that fall within a particular bandwidth of x. The weighting is
defined by a Gaussian kernel. The quality of the pdf estimate depends on
the kernel's standard deviation: too large and the pdf may get
over-smoothed; too small and the pdf is very noisy.
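
For illustration, a minimal sketch of a Parzen-window pdf estimate with
a Gaussian kernel (this is not ITK's internal code, just the idea):

  #include <cmath>
  #include <vector>

  // Parzen-window pdf estimate at value x, using a Gaussian kernel of
  // standard deviation sigma over the given intensity samples.
  double ParzenPdf(double x, const std::vector<double>& samples,
                   double sigma)
  {
    const double norm = 1.0 / (sigma * std::sqrt(2.0 * 3.141592653589793));
    double sum = 0.0;
    for (unsigned int i = 0; i < samples.size(); ++i)
    {
      const double u = (x - samples[i]) / sigma;
      sum += norm * std::exp(-0.5 * u * u);   // Gaussian kernel weight
    }
    return sum / samples.size();              // average of contributions
  }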

Note that a new set of samples is generated at each call of GetValue()
etc.

The number of samples and the standard deviation of the kernels can be
specified by the user.
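
For example, assuming FixedImageType and MovingImageType are your image
typedefs (the parameter values below are just placeholders):

  typedef itk::MutualInformationImageToImageMetric<
                   FixedImageType, MovingImageType >  MetricType;
  MetricType::Pointer metric = MetricType::New();

  metric->SetNumberOfSpatialSamples( 50 );        // samples per evaluation
  metric->SetFixedImageStandardDeviation( 0.4 );  // Gaussian kernel sigma
  metric->SetMovingImageStandardDeviation( 0.4 );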

[2] The entropy involves integrating -log(p(x))*p(x) over x. Viola and
Wells use a stochastic sampling technique: you draw samples x_i over x
(or the image) and approximate the integral by the sample average of
-log(p(x_i)). (Because the x_i are drawn from p itself, the p(x_i)
weighting is implicit in the sampling.)
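
A minimal sketch of that estimate, reusing the ParzenPdf() helper above
(illustrative only, not ITK's actual code):

  // Stochastic entropy estimate: average -log(p(x_i)) over samples x_i
  // drawn from the image. (Viola and Wells actually use one sample set
  // to build the Parzen estimate and a second set to evaluate it.)
  double EntropyEstimate(const std::vector<double>& samples, double sigma)
  {
    double sum = 0.0;
    for (unsigned int i = 0; i < samples.size(); ++i)
    {
      sum -= std::log( ParzenPdf(samples[i], samples, sigma) );
    }
    return sum / samples.size();
  }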

------------------------
MattesMutualInformationImageToImageMetric is based on Mattes et al.'s
paper.

[1] The pdf is also computed using Parzen windowing. The main
differences are:
- the image intensities are rescaled to lie between 0 (min intensity)
and 1 (max intensity)
- a boxcar kernel is used for the fixed image pdf and a third-order
B-spline kernel is used for the moving image pdf (see the sketch after
this list)
- due to the intensity rescaling, a fixed bandwidth is used for the
B-spline kernel
- samples are drawn in Initialize() and the same samples are used for
each call of GetValue() etc.
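
For reference, the third-order (cubic) B-spline kernel has a simple
closed form; a sketch:

  // Cubic B-spline Parzen kernel, evaluated at u measured in units of
  // the (fixed) bin width. Its support is |u| < 2.
  double BSpline3(double u)
  {
    const double a = std::fabs(u);
    if (a < 1.0)
    {
      return ( 4.0 - 6.0 * a * a + 3.0 * a * a * a ) / 6.0;
    }
    if (a < 2.0)
    {
      const double t = 2.0 - a;
      return t * t * t / 6.0;
    }
    return 0.0;
  }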

[2] The entropy is computed by dividing the intensity range x uniformly
into bins and summing -log(p(x))*p(x) evaluated at (I believe) the
centers of the bins.
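
Schematically, assuming p[] holds the pdf mass in each bin (so the
entries sum to about 1):

  // Binned entropy: sum of -p*log(p) over the non-empty bins.
  double BinnedEntropy(const std::vector<double>& p)
  {
    double H = 0.0;
    for (unsigned int b = 0; b < p.size(); ++b)
    {
      if (p[b] > 0.0)
      {
        H -= p[b] * std::log(p[b]);
      }
    }
    return H;
  }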

The number of samples and the number of bins used can be specified by
the user.
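
For example (again, placeholder values):

  typedef itk::MattesMutualInformationImageToImageMetric<
                   FixedImageType, MovingImageType >  MetricType;
  MetricType::Pointer metric = MetricType::New();

  metric->SetNumberOfHistogramBins( 50 );       // bins for the pdfs
  metric->SetNumberOfSpatialSamples( 10000 );   // drawn once in Initialize()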

- Lydia


> -----Original Message-----
> From: J. Van Dalen [mailto:j.vandalen at rad.umcn.nl]
> Sent: Monday, March 01, 2004 1:48 AM
> To: insight-users at itk.org
> Subject: [Insight-users] grey value binning in mutual information
>
> Dear ITK-users,
>=20
> Using mutual information, I cannot get insight into the way grey
> values are binned to optimize the method. There are all kinds of
> parameters you can set, e.g., learning rate, scale factor, etc., but
> not a binning parameter. Is this fixed? And if so, how is it fixed
> (does it depend on the histogram size)? If it is not fixed, which
> parameter controls the binning of grey values?
>
> Hope someone can help me,
> Thanks in advance,
> Cheers,
> Jorn.
>
> ----J.A. van Dalen, Physicist, PhD
> ----UMCN St. Radboud, Room: 0.141
> ----Dept. Radiology, Telephone: +31 24 361 4766
> ----The Netherlands, Telefax: +31 24 354 0866
>
> _______________________________________________
> Insight-users mailing list
> Insight-users at itk.org
> http://www.itk.org/mailman/listinfo/insight-users