Hi,

Thank you for your advice on my work, Luis. Believe me, I do know that it is
impossible to "rank" the metrics, or even to "characterize" them correctly
(or completely). Even though I am only including four metrics in my research,
the possibilities seem endless. But in fact, my goal is not to make a
theoretical study of the metrics, but to find ONE that works for my
particular US-MR 2D-3D registration problem. From that point of view, it
doesn't bother me to perform preprocessing on some metrics and not on
others, as long as it improves the behavior of the metric in question.

Your advice on using rescaling or windowing is new to me; it seems
applicable to all metrics, no?

Just to make sure I understand what you're saying:

The idea is to retain only the range of intensities in the image that will
contribute in a useful way to the calculation of the metric? And this range
can be different for the two images?

Is there any way of easily verifying what an adequate range would be (by
checking the contributions to the metric), or do I have to visually inspect
images that have been "windowed" to see whether the desired anatomical
structures are well represented?
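For instance, would inspecting each image's intensity histogram be a
reasonable first check before picking a window? A minimal sketch of what I
mean, assuming itk::Statistics::ScalarImageToHistogramGenerator (the bin
count and marginal scale are just illustrative values):

#include <iostream>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkScalarImageToHistogramGenerator.h"

// Usage: printHistogram inputImage
int main( int argc, char * argv[] )
{
  typedef itk::Image< unsigned short, 2 >    ImageType;
  typedef itk::ImageFileReader< ImageType >  ReaderType;
  typedef itk::Statistics::ScalarImageToHistogramGenerator< ImageType >
                                             GeneratorType;
  typedef GeneratorType::HistogramType       HistogramType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName( argv[1] );
  reader->Update();

  GeneratorType::Pointer generator = GeneratorType::New();
  generator->SetInput( reader->GetOutput() );
  generator->SetNumberOfBins( 128 );     // illustrative value
  generator->SetMarginalScale( 10.0 );   // illustrative value
  generator->Compute();

  const HistogramType * histogram = generator->GetOutput();

  // Print bin center and frequency for every bin.
  for( unsigned int bin = 0; bin < histogram->Size(); ++bin )
    {
    std::cout << histogram->GetMeasurement( bin, 0 ) << "\t"
              << histogram->GetFrequency( bin, 0 ) << std::endl;
    }
  return 0;
}

Bins with near-zero frequency at the extremes of the range would
presumably be candidates for exclusion by the window.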
Thanks again for all your advice, ITK is an excellent toolkit! I would
call it my bible but I don't want to piss off the new pope ;-)

Jef

>From: Luis Ibanez <luis.ibanez@kitware.com>
>To: Jef Vandemeulebroucke <jvdmb@hotmail.com>
>CC: insight-users@itk.org
>Subject: Re: [Insight-users] Normalisation of images necessary?
>Date: Sat, 23 Apr 2005 17:27:25 -0400
>
>
>Hi Jef,
>
>Normalization of the image intensities is not required for
>the Image Metrics:
>
>  > MutualInformationHistogramImageToImageMetric
>  > NormalizedMutualInformationHistogramImageToImageMetric
>
>However, what is *VERY* important is to make sure that you use
>the range of intensities that is relevant to the anatomical
>structures that you care to register.
>
>In other words, your image will have sections of the dynamic
>range of intensities that are not contributing to (and may even
>disturb) the evaluation of the Metric. You should then apply
>a filter such as
>
>    RescaleIntensityImageFilter
>
>or
>
>    IntensityWindowingImageFilter
>
>for preprocessing the images.
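For reference, a minimal sketch of what that windowing step could look like
with itk::IntensityWindowingImageFilter; the window and output bounds below
are illustrative placeholders, not values recommended in this thread:

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkIntensityWindowingImageFilter.h"

// Usage: windowImage inputImage outputImage
int main( int argc, char * argv[] )
{
  typedef itk::Image< unsigned short, 2 >                            ImageType;
  typedef itk::ImageFileReader< ImageType >                          ReaderType;
  typedef itk::ImageFileWriter< ImageType >                          WriterType;
  typedef itk::IntensityWindowingImageFilter< ImageType, ImageType > WindowingType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName( argv[1] );

  WindowingType::Pointer windowing = WindowingType::New();
  windowing->SetInput( reader->GetOutput() );

  // Keep only the intensity band that covers the anatomy of interest;
  // intensities outside the window are clamped to the output bounds.
  windowing->SetWindowMinimum(  100 );   // illustrative value
  windowing->SetWindowMaximum( 3000 );   // illustrative value
  windowing->SetOutputMinimum(    0 );
  windowing->SetOutputMaximum(  255 );

  WriterType::Pointer writer = WriterType::New();
  writer->SetInput( windowing->GetOutput() );
  writer->SetFileName( argv[2] );
  writer->Update();   // the windowed image is what gets fed to the metric

  return 0;
}

itk::RescaleIntensityImageFilter is used the same way, except that it only
takes SetOutputMinimum()/SetOutputMaximum() and maps the full input range
rather than a selected window.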
>Note that these filters (and their parameters) bring uncertainty
>to your comparison of Image Metrics. For the sake of fairness
>you probably want to apply *exactly* the same preprocessing to
>the images that are fed into all your registration metrics.
>
>
>Note that, in the end, any comparison of algorithms is pointless
>and useless if you don't provide the entire set of material that
>you used for your comparison. That includes:
>
>    - Source code
>    - Input images
>    - Full sets of parameters
>
>
>Only in this way will other people be able to repeat your
>evaluations and tweak them in different ways. The fact that
>each metric has many parameters makes it very difficult (if not
>impossible) to define a "fair" comparison. For example, for
>Viola-Wells you are selecting parameters such as:
>
>    - Number of Bins
>    - Number of Samples
>    - Standard Deviations
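For concreteness, a minimal sketch of where those knobs are set on the
Viola-Wells metric, itk::MutualInformationImageToImageMetric (the numeric
values are placeholders, not recommendations from this thread; the number
of bins belongs to the histogram-based metrics, via their SetHistogramSize()
method, rather than to this one):

#include "itkImage.h"
#include "itkMutualInformationImageToImageMetric.h"

int main()
{
  typedef itk::Image< float, 2 >  ImageType;
  typedef itk::MutualInformationImageToImageMetric< ImageType, ImageType >
                                  MetricType;

  MetricType::Pointer metric = MetricType::New();

  // Parzen-window widths for the fixed and moving intensity densities.
  // The ITK examples use ~0.4 together with intensity-normalized images;
  // treat these as starting points only.
  metric->SetFixedImageStandardDeviation( 0.4 );
  metric->SetMovingImageStandardDeviation( 0.4 );

  // Number of randomly drawn sample pairs used to estimate the densities.
  metric->SetNumberOfSpatialSamples( 100 );   // illustrative value

  // The metric would then be handed to the registration framework
  // together with the images, a transform and an interpolator.
  return 0;
}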
>Changes in any of those parameters will result in dramatic
>changes in the outcome of the Metric, and therefore will change
>how this metric performs compared to other metrics.
>
>
>Conclusions of the sort:
>
>    "Metric A is better than Metric B"
>
>are useless and, worst of all, misleading.
>
>
>They are only of interest for writing papers in the Dark Side
>of the current publishing system, where reproducibility is not
>supported or even welcomed, and where conclusions are not derived
>from one's own experience but from subjective judgement, such as
>the ones provided by the decadent peer-review system.
>
>
>Unfortunately, those practices still percolate through the entire
>community of medical image processing.
>
>
>    Regards,
>
>
>       Luis
>
>
>-----------------------------
>Jef Vandemeulebroucke wrote:
>
>> Hi,
>>
>> I am testing several mutual information metrics, plotting their
>> behavior. Among the metrics are the two based on histograms:
>>
>>    MutualInformationHistogramImageToImageMetric
>>    NormalizedMutualInformationHistogramImageToImageMetric
>>
>> Do these metrics give better results when the images have been
>> normalised, as is the case for the Viola-Wells implementation of MI,
>> or is this of no importance?
>>
>> Thank you,
>> Jef