[Insight-users] Re: Using KalmanLinearEstimator
Luis Ibanez
luis.ibanez at kitware.com
Mon May 7 13:03:19 EDT 2007
Hi Mathieu,
Thanks for the update.
Yes, if you have a problem with a small number of samples,
then Kalman estimation may not be the right tool for you.
You may be better off using a matrix solver,
such as the ones available in VNL.
Kalman estimation is more appropriate for problems with
very large numbers of samples, for which a direct matrix
solution would require a significant amount of memory.
A second advantage of the Kalman estimator is that it
continuously gives you a value of the "result so far".
This is particularly useful in "tracking" problems.
Regards,
Luis
-------------------------
Mathieu Malaterre wrote:
> As usual Luis thanks a bunch for your help. I will look into your
> suggestion.
>
> Just for reference, I was able to get a correct approximate solution by
> doing:
>
> for(unsigned int duplicate = 0; duplicate < 100000; ++duplicate)
>   for(unsigned int ax=0; ax < N; ax++)
>     {
>     predictor(0) = points[ax][0];
>     measure = points[ax][1];
>     //std::cout << predictor(0) << "," << measure << std::endl;
>     filter.UpdateWithNewMeasure(measure,predictor);
>     }
>
> If you plot the values using Excel or gnuplot you can see the linear
> regression is pretty obvious. Anyway, I do not know enough about the
> Kalman algorithm.
>
> Thanks
> -Mathieu
>
> On 5/7/07, Luis Ibanez <luis.ibanez at kitware.com> wrote:
>
>>
>> Hi Mathieu,
>>
>>
>> 1) Do you have only 6 data points in your real problem?
>>
>> That may not be enough data to bring the
>> Kalman Estimator to convergence on a good
>> estimate.
>>
>> I was assuming that you had on the order
>> of hundreds to thousands of values.
>>
>>
>> 2) You may find it useful to print out the estimates
>> as you add new points. One of the interesting
>> advantages of the Kalman estimator is that it
>> provides an estimate using only the samples
>> provided so far.
>>
>> E.g., move the line:
>>
>> > VectorType estimation = filter.GetEstimator();
>>
>> inside the for loop and print out the value of
>> the estimate. Then you will see how it gets
>> close to the [A,B] values.
>>
>> Note that the estimator is initialized with [0,0],
>> so it may take a while (many measurements) to shift
>> the estimate to your real A,B values.
>>
>> Maybe we should add a method to initialize the
>> Estimator to a known value. In this way, the
>> Kalman estimator would focus on refining the
>> estimation.
>>
>>
>>
>> Regards,
>>
>>
>> Luis
>>
>>
>> -------------------------
>> Mathieu Malaterre wrote:
>> > Hello,
>> >
>> > I must be doing something obviously wrong when using the
>> > KalmanLinearEstimator class, because the results are way off. Could
>> > someone please help me find the obvious mistake?
>> >
>> > thanks !
>> > -Mathieu
>> >
>> > Code is:
>> >
>> > #include "itkKalmanLinearEstimator.h"
>> > #include <iostream>
>> >
>> > int main(int, char* [] )
>> > {
>> >   typedef itk::KalmanLinearEstimator<double,2> KalmanFilterType;
>> >
>> >   typedef KalmanFilterType::VectorType VectorType;
>> >   typedef KalmanFilterType::MatrixType MatrixType;
>> >   typedef KalmanFilterType::ValueType  ValueType;
>> >
>> >   KalmanFilterType filter;
>> >
>> >   filter.ClearEstimation();
>> >   filter.SetVariance(1.0);
>> >
>> >   ValueType  measure;
>> >   VectorType predictor;
>> >
>> >   // A ~ 0.323
>> >   // B ~ 172.4
>> >   const double points[][2] = {
>> >     {-16.6817,167.095},
>> >     {-44.4257,158.118},
>> >     {-68.0884,150.462},
>> >     {-87.7176,144.111},
>> >     {-103.447,139.021},
>> >     {-115.437,135.142},
>> >   };
>> >   const unsigned int N = sizeof(points) / ( 2 * sizeof( double ));
>> >
>> >   predictor(1) = 1.0;
>> >   for(unsigned int ax=0; ax < N; ax++)
>> >     {
>> >     predictor(0) = points[ax][0];
>> >     measure = points[ax][1];
>> >     //std::cout << predictor(0) << "," << measure << std::endl;
>> >     filter.UpdateWithNewMeasure(measure,predictor);
>> >     }
>> >
>> >   VectorType estimation = filter.GetEstimator();
>> >
>> >   std::cout << std::endl << "The Estimation is : " << std::endl;
>> >   std::cout << estimation;
>> >
>> >   std::cout << std::endl << std::endl;
>> >   return EXIT_SUCCESS;
>> > }
>> >
>> >
>> > On 5/6/07, Luis Ibanez <luis.ibanez at kitware.com> wrote:
>> >
>> >>
>> >> Hi Mathieu,
>> >>
>> >> You may want to try the KalmanLinearEstimator.
>> >>
>> >>
>> >> http://www.itk.org/Insight/Doxygen/html/classitk_1_1KalmanLinearEstimator.html
>> >>
>> >> Your estimator will be the vector [a,b],
>> >>
>> >> and your predictor will be the vector [1,Y],
>> >> with X as the scalar measure.
>> >>
>> >>
>> >> The Kalman Linear Estimator should converge to the vector [a,b] that
>> >> minimizes the sum of squared errors between X and (a+bY).
>> >>
>> >> For an example, you may want to look at:
>> >>
>> >>
>> >>
>> >> Insight/Testing/Code/Algorithms/
>> >> itkKalmanLinearEstimatorTest.cxx
>> >>
>> >>
>> >>
>> >> Regards,
>> >>
>> >>
>> >> Luis
>> >>
>> >>
>> >> ---------------------------
>> >> Mathieu Malaterre wrote:
>> >> > Hello,
>> >> >
>> >> > I have a pretty simple problem to solve and I was wondering if I
>> >> > could reuse any brick already existing in ITK.
>> >> >
>> >> > I am working with a 4D dataset (x, y, z and theta), where I can
>> >> > express, for each pixel, that:
>> >> >
>> >> > I(x,y,z)/sin(theta) = F(x,y,z) + b*I(x,y,z)/tan(theta)
>> >> >
>> >> > My goal here is to find the constant b; I do not need the
>> >> > F(x,y,z) part. Obviously all I need to do is fit the line
>> >> >
>> >> > X = a + b Y
>> >> >
>> >> > where X = I(x,y,z)/sin(theta) and Y = I(x,y,z)/tan(theta) and the
>> >> > slope will give me 'b'.
>> >> >
>> >> > Any suggestions on which class to reuse ?
>> >> > thanks !
>> >>
>> >
>> >
>>
>
>