[Insight-developers] GaussianFilter::NormalizeAcrossScale not good for scale space analysis

Ivan Macia imacia at vicomtech.org
Sun Jan 9 12:46:34 EST 2011


Hi again,

The other day I was a bit confused, probably because I had some flu at the
time, which probably affected my brain even more :) In your last message there
is a misunderstanding of what I call the "analytical" implementation and the
way I calculate it. My first reference for this was of course:
*Discrete Derivative Approximations with Scale-Space Properties: A Basis for
Low-Level Feature Extraction*
http://www.nada.kth.se/~tony/abstracts/Lin93-JMIV.html

In this paper, Lindeberg computes some high-order discrete derivative
kernels (see the figures on page 16). On page 10 he starts discussing the
implementation. I see two ways of computing derivatives of the Gaussian.

1) The finite difference approach. This is clear: one computes the finite
difference kernel for the given derivative order and then convolves it
with the discrete Gaussian kernel of the desired precision. After that we
can perform scale-space and spacing normalization. This is the standard way
of calculating derivatives of discrete signals.
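As a rough sketch of this approach (hypothetical Python/NumPy, not the ITK or
Matlab code; here a sampled, renormalized Gaussian stands in for Lindeberg's
Bessel-based discrete Gaussian, and the stencils and radius are my own
illustrative choices):

```python
import numpy as np

def sampled_gaussian(sigma, radius):
    # Sampled Gaussian normalized to sum to 1 -- a crude stand-in for
    # Lindeberg's discrete Gaussian (which uses modified Bessel functions).
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def central_difference_kernel(order):
    # Order-n central difference built from standard stencils:
    # one [1, -2, 1] factor per even step, plus [0.5, 0, -0.5] for odd orders.
    k = np.array([1.0])
    for _ in range(order // 2):
        k = np.convolve(k, [1.0, -2.0, 1.0])
    if order % 2:
        k = np.convolve(k, [0.5, 0.0, -0.5])
    return k

def gaussian_derivative_kernel(order, sigma, spacing=1.0, radius=10):
    # Convolve the difference kernel with the Gaussian, then apply spacing
    # normalization (1/spacing^n) and scale normalization (sigma^n).
    g = sampled_gaussian(sigma / spacing, radius)
    k = np.convolve(central_difference_kernel(order), g)
    return k * sigma ** order / spacing ** order
```

Applying the order-1 kernel to a linear ramp should return the (normalized)
slope everywhere in the interior, which is a quick sanity check.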

2) What I call the "analytical" approach. The idea came from formula (38)
(which, btw, I don't understand very well where it comes from), which shows
a parallelism between continuous and discrete Gaussian derivatives. The idea
is that you can compute *exactly* the formula of the n-th derivative of the
Gaussian, since it corresponds to a polynomial series; in fact these are the
Hermite polynomials (http://en.wikipedia.org/wiki/Hermite_polynomials),
modified by some sigma multipliers, divided by a power of sigma and
multiplied by -1 when the order is odd. That is,

  d^n G / dx^n = ( (-1)^n / sigma^n ) H_n(x, sigma) G(x, sigma)

(you may compute the derivatives and obtain this easily). The polynomial
coefficients are *not* a convolution kernel. The idea is that you can simply
discretize the polynomial at the kernel locations by substituting the values
of x and sigma (in a kernel of size 7, x goes from -3 to 3) and *multiply
them one-by-one* by the discrete Gaussian kernel. That is, you multiply the
value at position x = -3 in the discrete Gaussian kernel by the value of the
polynomial at x = -3, and so on across the whole kernel. But note that this
*is not* a convolution, simply a pointwise multiplication. Of course this
works only if the kernel is large enough, according to the Nyquist theorem,
for the polynomial: the higher the order, the more kernel elements are
necessary to keep all the peaks and valleys appropriately sampled. The
advantage is that no convolution is necessary. I don't know whether someone
else has done this before, but the results seem quite correct and I cannot
discard them without any further reason. Maybe an expert could provide more
insight. For example, are scale-space properties preserved, in Lindeberg's
sense, with this calculation?
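A minimal sketch of this pointwise construction (hypothetical Python/NumPy; I
am assuming H_n(x, sigma) above corresponds to the probabilists' Hermite
polynomial He_n evaluated at x/sigma, which absorbs the sigma multipliers,
and a sampled continuous Gaussian again stands in for the discrete one):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' He_n

def analytical_gaussian_derivative(order, sigma, radius=10):
    x = np.arange(-radius, radius + 1, dtype=float)
    # Sampled Gaussian at the integer kernel locations.
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    # Evaluate He_n(x / sigma) at every kernel location.
    coeffs = np.zeros(order + 1)
    coeffs[order] = 1.0
    h = hermeval(x / sigma, coeffs)
    # Pointwise multiplication with the Gaussian samples -- NOT a convolution.
    return ((-1.0) ** order / sigma ** order) * h * g
```

For order 1 this reduces to -x/sigma^2 times the Gaussian samples, i.e. the
exact G'(x) evaluated at the kernel locations, which is easy to verify.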

In order to check the results again and try to figure out what was wrong in
my reasoning, I performed some experiments in Matlab. First I implemented
again all the Hermite polynomial calculations and all the finite differences
as calculated in DerivativeOperator. Then I performed the normalizations and
computed both approaches. Again, the results are quite similar, and I
could not tell which of them is the better approximation and why. Maybe you
can see something in the results or the code that provides some more
insight. Btw, I get a bit lost in your last paragraph about bounding errors.
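Not a substitute for the attached Matlab code, but the kind of comparison it
performs can be sketched self-contained for the first derivative (hypothetical
Python; both kernels approximate G'(x), the first by convolving a central
difference with the sampled Gaussian, the second by pointwise multiplication):

```python
import numpy as np

def compare_first_derivative(sigma, radius=12):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()  # sampled Gaussian, normalized to sum to 1
    # Approach 1: convolve a first-order central difference with the
    # Gaussian, then trim back to the support of g.
    k_fd = np.convolve(g, [0.5, 0.0, -0.5])[1:-1]
    # Approach 2: pointwise multiply by the Hermite factor -x/sigma^2.
    k_an = -x / sigma ** 2 * g
    # Maximum pointwise disagreement between the two constructions.
    return np.max(np.abs(k_fd - k_an))
```

As expected, the two kernels agree more closely when sigma is large relative
to the grid spacing, since the central difference then samples the Gaussian
more finely.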

Attached you can find the Matlab code I used. The links below show some
plots, calculated for sigma = 1.0 and spacing = 1.0 and 0.25, for both
approaches. If you have Matlab installed, you can modify
GaussianDerivativeTest.m to change the values of sigma and spacing.
Otherwise, the code is very easy to follow, and it is even documented ;)

http://picasaweb.google.com/imacoli/Research#5560242405918631634 derivative
operator, sigma = 1.0, spacing = 1.0
http://picasaweb.google.com/imacoli/Research#5560242405013272386 analytical,
sigma = 1.0, spacing = 0.25
http://picasaweb.google.com/imacoli/Research#5560242410248294578 derivative
operator, sigma = 1.0, spacing = 0.25
http://picasaweb.google.com/imacoli/Research#5560242411446359202 analytical,
sigma = 1.0, spacing = 0.25

Regarding ITK, as I told you, for now we can keep the Derivative operator
approach, at least until it is clearer what is happening above.

Thanks again for your patience

Ivan



> 2011/1/4 Ivan Macia <imacia at vicomtech.org>
>
> Brad,
>>
>> Your arguments seem solid. I just wanted to understand the reason why it
>> was not working. Some of the things you mention make sense, but I still
>> need to understand the underlying problem. I did not find any publication
>> that recommends this approach, nor did I read anything against it. Probably
>> the problem is related to the fact that high-order polynomials usually give
>> problems; for example, in the analytical approach they result in very large
>> kernels (at least large enough to hold all the peaks and valleys according
>> to the Nyquist theorem). I will have a look and tell you if I find
>> something. This is more scientific curiosity :)
>>
>> For the moment, I would keep your current implementation, which seems to be
>> more trustworthy.
>>
>> Best regards
>>
>> Iván
>>
>>
>> 2011/1/4 Bradley Lowekamp <blowekamp at mail.nih.gov>
>>
>> Ivan,
>>>
>>> Yes, I did remove the analytical computation because I found it had very
>>> poor to incorrect results. I posted to this list because I thought a
>>> discussion of the methods might be needed.
>>>
>>> The worst case I found was with a sigma of 0.5 and order 4, where the
>>> coefficients summed to nearly 100, when they are supposed to sum to zero
>>> (as all good derivative operators should). Under many conditions the
>>> resulting derivative kernel does not sum to zero, particularly with
>>> even-ordered derivatives. Additionally, under some conditions the
>>> polynomial is not sampled sufficiently, producing incorrect results as
>>> well.
>>>
>>> Additionally, the boundary condition of the central difference approach
>>> was incorrect, which produced the errors at the edges in figure 2.d of the
>>> IJ paper. In the same figure, I figured the discontinuity at the center of
>>> O4 was caused by the incorrect computation of the Bessel function
>>> I_0(t >= 3.75). There were many errors in this implementation, which makes
>>> the figures not so relevant.
>>>
>>> I did implement a shift normalization of the derivative operators;
>>> however, it made no difference with the central difference method, but it
>>> did improve the analytical one. Still, for my application of analyzing the
>>> eigenvalues of blobs, the central difference approach produced eigenvalues
>>> much closer to the recursive ones. Given the amount of shift needed for
>>> some operators, I do not trust that approach. The 0-order Gaussian
>>> operator needs to be scaled on the order of 1 + MaxError, where MaxError
>>> is the accuracy provided by the user. The analytic approach does not
>>> appear to be bounded in its error and the needed shift, which makes me
>>> rather uncomfortable.
>>>
>>> I also did not find any literature which recommends the analytic approach
>>> for scale-space. Are you aware of any?
>>>
>>> I will work on reproducing the graphs from the paper to instill
>>> additional confidence in the central difference method.
>>>
>>> Brad
>>>
>>>
>>> On Jan 4, 2011, at 10:23 AM, Ivan Macia wrote:
>>>
>>> Hi Brad,
>>>
>>> Excellent work. My only concern is that you have removed the analytical
>>> computation of the coefficients completely. From my experiments (I cannot
>>> confirm this, it was long ago, but something can be seen in the graphics
>>> at the end of the IJ paper,
>>> http://www.insight-journal.org/browse/publication/179) it seemed that
>>> the analytical approach had more precision, especially for larger
>>> derivative orders. What was the ultimate reason to remove it? The
>>> normalization not working? Do you think we could fix the normalization of
>>> the analytical coefficients in the same way as with the Derivative
>>> operator?
>>>
>>> Tell me if you need any further help with this.
>>>
>>> Thanks in advance
>>>
>>> Iván
>>>
>>> On January 3, 2011 at 22:53, Bradley Lowekamp <blowekamp at mail.nih.gov> wrote:
>>>
>>>> Hello all,
>>>>
>>>> I have completed a major revision to itkGaussianDerivativeOperator and
>>>> submitted a couple of patches to gerrit:
>>>>
>>>> http://review.source.kitware.com/#change,660
>>>> http://review.source.kitware.com/#change,661
>>>>
>>>> This patch is a needed algorithmic revision to produce accurate results.
>>>>
>>>> I still need to modify my recursive Gaussian scale-space test to test
>>>> the discrete functions. But even without that, I am extremely confident
>>>> that the results from this operator are correct and robust with respect
>>>> to its parameters, as its values are extremely close to the values I am
>>>> getting with the recursive Hessian filter for my scale-space program.
>>>>
>>>> Brad
>>>>
>>>> ========================================================
>>>> Bradley Lowekamp
>>>> Lockheed Martin Contractor for
>>>> Office of High Performance Computing and Communications
>>>> National Library of Medicine
>>>> blowekamp at mail.nih.gov
>>>>
>>>>
>>>>
>>>
>>> ========================================================
>>> Bradley Lowekamp
>>> Lockheed Martin Contractor for
>>> Office of High Performance Computing and Communications
>>> National Library of Medicine
>>> blowekamp at mail.nih.gov
>>>
>>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: DiscreteGaussianDerivativesMatlabTests.zip
Type: application/zip
Size: 6249 bytes
Desc: not available
URL: <http://www.itk.org/mailman/private/insight-developers/attachments/20110109/b8b00b15/attachment.zip>
