[Insight-developers] two-input filters don't work
Michael Xanadu
xanadu.michael at googlemail.com
Mon Sep 21 10:45:51 EDT 2009
Kevin, I finally found the solution. It turned out that the segmentation
had worked the whole time. What I didn't know was that the quality of the
segmentation depends on the number of slices. I usually used just four
slices of DICOM data as input because of performance (all the other filters
worked fine with a low number of slices). But if I use more slices (today I
did that for the first time with the two-input filters) I get an amazing
segment (see attachment)! My mistake was to believe that the segment grows
through the iterations in the x- and y-directions even if z (the number of
slices) is low. Can you verify that? But I still wonder:
1. Why does the segmentation work for 2D image files (PNG), even though such
an image is only one slice? In 3D it doesn't work with only one slice, as
mentioned above. Are there differences in the algorithm?
2. I never get a segment in the first slice of the data, no matter how many
slices I use, in which slice I set the seed point, or how many iterations I
use. In the 16-slice example, which you can find in the attachment, I set
the seed point in the first slice ("00_slice.png"). There you can see our
cute white spot. And in the second slice the segment suddenly appears! If I
set the seed point in a slice in the middle of the data, both outer slices
come out black. That led me to believe that the algorithm doesn't work for
the outer slices. Can you verify that?
3. Why does my application crash if I connect the sigmoid output to the
fastMarching input? I already wrote about that in my last mail. I still use
fastMarching without an input.
Kevin, I just want to thank you for your help. You pushed me in the right
direction to find the answer to my question.
Regards, Michael
2009/9/18 Kevin H. Hobbs <hobbsk at ohiou.edu>
> The first problem that I see is that you only sent a part of the code so
> nobody will be able to tell if your DICOM reader is set up correctly.
>
> You are still doing too much at once. That is fine for the example,
> which must be a self-contained program, and for a final segmentation
> application, but it's an awful way to learn what's going on. I also find
> that for large 3D images the first smoothing step can take a very long
> time, and it would be a shame to have to redo it every time you want to
> change a downstream parameter. I've broken the example up into several
> small programs.
>
> Writing to PNG files is a problem for two reasons :
>
> The files are only 2D, so you will not be able to examine the
> intermediate results for a whole 3D image. That is, unless you replace
> the writer with a series writer and write to a series of PNG files, but
> it's easier to just use the MHD format.
>
> When you write to a PNG file the pixel type is restricted to unsigned
> char. Not only do you lose precision, but you also lose the sign of the
> data. This matters when you set the value of the fast marching seed
> to a negative number. You should be able to confirm that the zero level
> is near the boundary of the object you want to segment.
>
> Now in the GIMP I converted the input image you posted before to
> grayscale and cut off the excess.
> http://crab-lab.zool.ohiou.edu/kevin/two-input_filters/0_input.png
>
> I smoothed it with the program I sent
> ./Smooth 0_input.png 0.125 5 9.0 tmp/smoothed.mhd
> and rendered it in paraview
> http://crab-lab.zool.ohiou.edu/kevin/two-input_filters/smoothed.png
>
> I took the gradient magnitude
> ./GradMag tmp/smoothed.mhd 1.0 tmp/grad_mag.mhd
> and rendered it in paraview
> http://crab-lab.zool.ohiou.edu/kevin/two-input_filters/grad_mag.png
>
> I flipped and scaled the gradient magnitude with the sigmoid filter
> ./Sigmoid tmp/grad_mag.mhd -0.5 1.0 tmp/sig.mhd
> and rendered it in paraview
> http://crab-lab.zool.ohiou.edu/kevin/two-input_filters/sig.png
>
> I used the fast marching filter to get a time crossing map
> ./Fast tmp/sig.mhd 263.36 184.04 tmp/fast.mhd
> and rendered it in paraview
> http://crab-lab.zool.ohiou.edu/kevin/two-input_filters/fast.png
> where I chose the level t=100 shown in green as the IsoSurfaceValue for
> the initial level set.
>
> I used the shape detection filter to bring the initial levelset closer
> to the boundaries.
> ./Shape tmp/sig.mhd tmp/fast.mhd 100 0.05 1 tmp/shape.mhd
> and rendered it in paraview
> http://crab-lab.zool.ohiou.edu/kevin/two-input_filters/shape.png
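For orientation, the shape detection filter evolves the level set function
psi with a propagation term and a curvature term, both modulated by the
sigmoid speed image g (a sketch following the general ITK level-set
formulation; presumably the weights alpha and beta correspond to the 1 and
0.05 on the command line above):

```latex
\frac{\partial \psi}{\partial t}
  = -\,\alpha \, g(\mathbf{x}) \, |\nabla \psi|
    \;+\; \beta \, g(\mathbf{x}) \, \kappa \, |\nabla \psi|
```

where kappa is the mean curvature of the level sets, alpha weights the
outward propagation, and beta weights the curvature-based smoothing that
keeps the zero level set from leaking through weak boundaries.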
>
> In paraview I rendered the smoothed image, the t=100 level in green, and
> the zero-level from the shape detection filter in blue.
> http://crab-lab.zool.ohiou.edu/kevin/two-input_filters/compare.png
>
> I hope this gets you closer to your goal.
>
> Keep us updated!
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 4 slices input.zip
Type: application/zip
Size: 1182 bytes
Desc: not available
URL: <http://www.itk.org/mailman/private/insight-developers/attachments/20090921/f5bb3940/attachment.zip>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 16 slices input.zip
Type: application/zip
Size: 12596 bytes
Desc: not available
URL: <http://www.itk.org/mailman/private/insight-developers/attachments/20090921/f5bb3940/attachment-0001.zip>