[Insight-developers] RE: Demons voodoo

Lydia Ng lng at insightful.com
Thu, 8 Apr 2004 17:51:24 -0700


Jim,

Are you able to use the WarpImageFilter as is? Or do you have to modify it?

Also, have you seen http://www.inria.fr/rrrt/rr-3706.html?

BTW: Could you send me your test code and the toy images? I am wondering
whether, with the current implementation using the moving image derivative
instead of the fixed image derivative, the behavior may be the same as with
your suggested changes.

-Lydia

-----Original Message-----
From: Miller, James V (Research) [mailto:millerjv at crd.ge.com]
Sent: Thursday, April 08, 2004 11:55 AM
To: Lydia Ng; Miller, James V (Research); Insight-developers (E-mail)
Subject: RE: Demons voodoo

The mapping in Thirion's paper looks like it is from a position in g (moving
image) to a position in f (fixed image). When the deformation field is used
by the WarpImageFilter, it (always) looks like a reverse mapping. I consider
what I am proposing for the Demons as the reverse mapping and what the
current implementation does as a forward mapping :)

By swapping which image is the fixed and which is the moving image, you can
always produce a registration that performs the mapping that you want. Of
course, the two deformation fields are not inverses of each other.
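A toy 1D example (mine, not ITK code; the field is made up) of why the two
fields are not inverses: the negated vectors would have to be stored at the
mapped positions, not the original ones, so simply flipping the sign of a
non-uniform field does not invert it.

```python
# Sketch: for a non-uniform 1D displacement d, the forward map is
# phi(x) = x + d(x). Its inverse field dinv must satisfy
# dinv(phi(x)) = -d(x), which is NOT the same as dinv(x) = -d(x).

def d(x):
    # a hypothetical non-uniform displacement
    return 0.5 * x

def phi(x):
    return x + d(x)

x = 2.0
# Naively applying -d at the mapped position does not return to x:
naive = phi(x) - d(phi(x))
print(naive)      # 1.5, not 2.0

# The true inverse of phi(x) = 1.5 * x is x / 1.5:
true_inv = phi(x) / 1.5
print(true_inv)   # 2.0
```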

The warp that is calculated in the Demons paper "warps" the coordinate frame
of g onto the coordinate frame of f. The WarpImageFilter assigns pixel
values at the prescribed positions in the output image by sampling the input
image at positions dictated by transforming a position from the output
coordinate frame to the input coordinate frame, as defined by the prescribed
deformation field.

I did do the experiment swapping the circle and the square; the results are
attached.

For the scheme I am using now, you interpolate f (and grad f) instead of
interpolating g.
Again, my concern with the current implementation is that it does not match
the original paper. It may be ugly (though perhaps not so ugly that we
shouldn't do it) to support both approaches. The tricky part is that some of
the types are defined via the fixed image in one implementation and via the
moving image in the other. Most of the changes are within the
DemonsRegistrationFunction. However, there are some changes to
PDEDeformableRegistrationFilter to propagate the appropriate information to
the pipeline, which is based on the fixed image in one case and on the
moving image in the other.

-----Original Message-----
From: Lydia Ng [mailto:lng at insightful.com]
Sent: Thursday, April 08, 2004 2:00 PM
To: Miller, James V (Research); Insight-developers (E-mail)
Subject: RE: Demons voodoo
Hi Jim,

From my understanding of your email, what you are proposing is to estimate
the "forward mapping" instead of the "reverse mapping".
Both allow you to warp the moving image to the fixed image; the difference,
as you pointed out, is that one keeps track of where a pixel goes to and the
other keeps track of where a pixel comes from.

Out of curiosity, could you also repeat the experiment with the circle and
square swapped? I would be interested in the reverse behavior as well.

For your forward mapping scheme, how do you compute (f - g)? Do you
interpolate f?

As previously discussed on the list, "reverse mapping" is essential for
image resampling to avoid problems of holes (where no points map to an
output pixel; a simple example is uniform scaling) and overlap (where
several points of different intensity map to the same output pixel). While
it is not strictly essential for registration (because you can always
interpolate f instead of g), I have also been doing "reverse mapping" for
registration so that the mapping can be used for resampling.
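The uniform scaling case above is easy to see with a made-up 1D "image"
(this is an illustration of the hole/overlap argument, not ITK code):

```python
# Pushing pixels forward through a 2x uniform scaling leaves holes in the
# output, while pulling values back (reverse mapping) fills every pixel.

src = [10, 20, 30, 40]

# Forward mapping: send src[i] to output position 2*i ("where a pixel goes").
fwd = [None] * 8
for i, v in enumerate(src):
    fwd[2 * i] = v
print(fwd)   # [10, None, 20, None, 30, None, 40, None] -- holes

# Reverse mapping: for each output pixel, look up where it "comes from"
# (nearest-neighbour sampling of the source); every output pixel is filled.
rev = [src[min(j // 2, len(src) - 1)] for j in range(8)]
print(rev)   # [10, 10, 20, 20, 30, 30, 40, 40] -- no holes
```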

Although I don't have Thirion's technical report with me right now, I
believe it did mention somewhere that the "reverse mapping" might be more
convenient to estimate (I will need to find the reference).

It could well be that "reverse mapping" is my personal soapbox, but since
the implementation has proven useful to me for inter-subject brain mapping,
I would really loathe to have it yanked out. Would it be possible to somehow
support both forward and reverse mapping?

- Lydia


-----Original Message-----
From: Miller, James V (Research) [mailto:millerjv at crd.ge.com]
Sent: Thursday, April 08, 2004 8:12 AM
To: Lydia Ng; 'Insight-developers (E-mail)'
Subject: Demons voodoo

(Original was bounced by the message size filter on the list.)

Lydia,

We were having some problems reconciling the output of the demons
registration filter with our understanding of the demons paper. I think I
have things worked out, but it would require changing the code/behavior. I
have made the modifications locally and the effect on the results is in the
attached image. All of this is very complicated, having to keep track of
what the warp fields actually do, so I'll break my discussion up into stages
and use the "brain index".

(1 brain needed to read this section): Demons paper

According to the demons paper (page 15 and equation 4 on page 17), the goal
of the demons algorithm is to "make an image g appear more like an image f".
Equation 4 is essentially

    v = (g - f) grad(f) / (gradmag(f)^2 + (g - f)^2)

I interpret this to mean that the vector v is the displacement needed to
move a position in image g so that it lands on a pixel in image f that has
the same intensity as the pixel at the original position in g (this last bit
is just the basic premise of the demons algorithm).

In other words, the displacement field calculated will tell you, for each
pixel in g, where the corresponding pixel in f is.

(2 brains needed to read this section): ITK implementation

The demons algorithm is implemented using the finite difference engine,
where the output is the vector field of displacements. The finite difference
engine iterates over the output vector image, passing the index of each
vector pixel to the DemonsRegistrationFunction. In pseudo-code, this
function computes the following:

   Calculate:
   f(pi)
   grad(f(pi))
   g(pi + vi)

   dvi = (f - g) grad(f) / (gradmag(f)^2 + (f - g)^2)

Mapping the code to the equations above, f is the fixed image and g is the
moving image. Note that this is calculating a vector at a point pi that is
associated with image f. This seems to be calculating a vector field that
keeps track of "where a pixel in f goes" as f is warped to g.
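As a sanity check, here is the current-style update at a single pixel of a
made-up 1D example (a sketch, not the actual DemonsRegistrationFunction; the
images, index, and displacement are invented):

```python
# One demons update at pixel pi of the fixed image f: f and grad(f) are
# evaluated on f's grid, and the moving image g is interpolated at pi + vi.

f = [0.0, 1.0, 4.0, 9.0, 16.0]   # fixed image (made up)
g = [0.0, 2.0, 8.0, 18.0, 32.0]  # moving image (made up)

def lerp(img, x):
    # linear interpolation, clamped to the image extent
    x = max(0.0, min(x, len(img) - 1.0))
    i = min(int(x), len(img) - 2)
    t = x - i
    return (1 - t) * img[i] + t * img[i + 1]

pi, vi = 2, 0.5                          # pixel index in f, current displacement
fval = f[pi]                             # f(pi)
gval = lerp(g, pi + vi)                  # g(pi + vi), interpolated
gradf = (f[pi + 1] - f[pi - 1]) / 2.0    # grad(f(pi)), central difference

diff = fval - gval
dvi = diff * gradf / (gradf ** 2 + diff ** 2)   # update to the displacement
print(dvi)
```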

(2 brains needed to read this section): Proposed ITK implementation

Given the equation in the demons paper, I interpret the vector field to map
a position in image g to where in image f the corresponding pixel lies.
Therefore the output of the filter should be a vector field where the pixel
indices correspond to pixel indices in the moving image g and the vector
values at these indices say where in f to sample the values. I propose,
therefore, that the code should be

   Calculate:
   g(pi)
   f(pi + vi)
   grad(f(pi + vi))

   dvi = (g - f) grad(f) / (gradmag(f)^2 + (g - f)^2)

Again, the fixed image is f and the moving image is g. The output vector
field is index aligned with g and each vector gives a displacement to a
corresponding pixel in f. In essence, the vector field keeps track of
"where a pixel in g came from" as g is warped to f.
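The same made-up single-pixel sketch for the proposed scheme (again an
illustration, not ITK source): the displacement now lives on g's grid, and
f and grad(f) are evaluated at the displaced position.

```python
# One proposed-style update at pixel pi of the moving image g: g is read
# on its own grid, f is interpolated at pi + vi, and grad(f) is taken at
# the nearest pixel to pi + vi (as the text below describes).

f = [0.0, 1.0, 4.0, 9.0, 16.0]   # fixed image (made up)
g = [0.0, 2.0, 8.0, 18.0, 32.0]  # moving image (made up)

def lerp(img, x):
    # linear interpolation, clamped to the image extent
    x = max(0.0, min(x, len(img) - 1.0))
    i = min(int(x), len(img) - 2)
    t = x - i
    return (1 - t) * img[i] + t * img[i + 1]

def grad_nearest(img, x):
    # central difference evaluated at the nearest interior pixel
    i = max(1, min(int(round(x)), len(img) - 2))
    return (img[i + 1] - img[i - 1]) / 2.0

pi, vi = 2, 0.5                   # pixel index in g, current displacement
gval = g[pi]                      # g(pi)
fval = lerp(f, pi + vi)           # f(pi + vi)
gradf = grad_nearest(f, pi + vi)  # grad(f(pi + vi)) at the nearest pixel

diff = gval - fval
dvi = diff * gradf / (gradf ** 2 + diff ** 2)
print(dvi)
```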

(3 brains needed to read this section): Warping using the demons
displacement field

ITK's WarpImageFilter takes a displacement field that is index aligned with
the output of the WarpImageFilter and uses the vector at each pixel to
determine where in the input image to probe to produce the output.
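In sketch form (a made-up 1D example of the reverse-mapping idea, not the
ITK implementation), the filter does output(p) = input(p + d(p)):

```python
# For each output pixel p, sample the input image at p + d(p), where d is
# index aligned with the output. Input and field values are invented.

inp = [0, 10, 20, 30, 40]

def lerp(img, x):
    # linear interpolation, clamped to the image extent
    x = max(0.0, min(x, len(img) - 1.0))
    i = min(int(x), len(img) - 2)
    t = x - i
    return (1 - t) * img[i] + t * img[i + 1]

# displacement field, index aligned with the *output* image
d = [0.0, 0.5, 1.0, 1.0, 0.0]

out = [lerp(inp, p + d[p]) for p in range(len(d))]
print(out)   # [0.0, 15.0, 30.0, 40.0, 40.0]
```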

In the current implementation, we pass the moving image (g) and the demons
displacement field (d) to the WarpImageFilter to produce an "estimate" of
the fixed image.

In my proposal, the deformation field is in the other direction, so we would
need to pass the fixed image (f) and the demons displacement field (d) to
the WarpImageFilter to produce an "estimate" of the moving image.

Summary/Impact

My main concern with the current implementation is that the vector field
computed does not match the mapping (g -> f) of the original demons paper.
So people using the paper as a reference for how the filter should behave
can be confused (we have already proven that). The filter is currently
calculating vectors of where a pixel in f should go, as opposed to where a
pixel in g came from. I don't think these are quite equivalent. As an
example, see the attached png image. I took a circle and warped it to a
square using the current ITK implementation and my proposed modifications.
Because the sense of the vector fields is different in the two
implementations, which image is considered the fixed or moving image is
different. However, you can accomplish the end task with both
implementations. The first row of the figure shows the current
implementation. As the circle deforms into a square it actually changes
topology twice (4 small holes appear in the corners of the square and are
then filled in). The second row of the figure shows my alterations, which
seem more "regularized". However, the new implementation requires more
iterations to reach the destination.

So the new implementation needs more iterations. It also requires that
grad(f(pi + vi)) be calculated. Currently, I am using the same central
difference image function as before, but now I pass in a physical coordinate
instead of an index. This image function will simply evaluate the gradient
at the nearest pixel. So overall, the proposed implementation requires more
computation. However, it looks to me like the deformation path is more
stable and the computed deformation field seems to agree (with me, anyway)
with the original demons paper.

But perhaps more disruptively, the "sense" of the deformation field (moving
to fixed as opposed to fixed to moving) is different.

Jim Miller
_____________________________________
Visualization & Computer Vision
GE Research
Bldg. KW, Room C218B
P.O. Box 8, Schenectady NY 12301

millerjv at research.ge.com
james.miller at research.ge.com
(518) 387-4005, Dial Comm: 8*833-4005,
Cell: (518) 505-7065, Fax: (518) 387-6981