[Insight-users] Registering US -> CT

N.E. Mackenzie Mackay 9nem at qlink.queensu.ca
Fri Sep 17 12:21:40 EDT 2004


That sounds like a good idea.

If I read the 2D image in as a 3D image and applied a blurring filter, 
would that cause 3D blurring ( blur the pixels outside of the plane )?  If 
so, I could set the "thickness" by setting the radius of the 
blurring mask.
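That idea can be sketched with plain NumPy: embed the 2D slice as the middle slice of an otherwise empty volume and blur only along the out-of-plane axis, so the blur radius plays the role of the thickness. (In ITK one would reach for a Gaussian smoothing filter with per-axis variances; the helper below is a hypothetical illustration, not ITK API.)

```python
import numpy as np

def thicken_slice(slice2d, n_slices=7, sigma=1.5):
    """Give a 2D slice an out-of-plane 'thickness' (hypothetical helper).

    The slice sits at the centre of an otherwise empty volume; blurring
    along z with a normalised Gaussian kernel spreads it over neighbouring
    slices, so `sigma` acts as the thickness parameter.
    """
    z = np.arange(n_slices) - n_slices // 2
    kernel = np.exp(-0.5 * (z / sigma) ** 2)
    kernel /= kernel.sum()
    # Only one input slice is non-zero, so the 1D convolution along z
    # reduces to weighting copies of the slice by the kernel:
    return kernel[:, None, None] * np.asarray(slice2d, float)[None, :, :]
```

Because the kernel is normalised, summing the volume along z recovers the original slice, so no intensity is invented, only spread out.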

Just a thought.
Neilson

On Sep 16, 2004, at 6:47 PM, Luis Ibanez wrote:

>
> Hi Neilson,
>
> That's a good point. You are right:
> in this case, since you have scattered slices
> from ultrasound, it is not possible to make sure
> that every point will be inside an image.
>
> One option could be to associate a "thickness"
> with every US slice; you can probably figure out
> one that makes sense from the point of view of
> the physical acquisition process.
>
> That thickness could be used for defining
> regions of space where a point will be considered
> to be "inside" one of the US images.
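The "inside" test described here amounts to a point-to-slab check: a point counts as inside a US slice if its distance from the slice plane is at most half the assigned thickness. A minimal sketch (the function name and arguments are assumptions, not ITK API):

```python
import numpy as np

def inside_us_slice(point, slice_origin, slice_normal, thickness):
    """True if `point` lies within thickness/2 of the slice plane.

    The plane is given by a point on it (`slice_origin`) and its normal;
    the slab is the region swept by pushing the plane half the thickness
    along +/- the (normalised) normal.
    """
    n = np.asarray(slice_normal, float)
    n /= np.linalg.norm(n)
    signed_dist = np.dot(np.asarray(point, float) - slice_origin, n)
    return abs(signed_dist) <= thickness / 2.0
```

A metric would then accumulate a sample only if the transformed point falls inside at least one slab (and within the in-plane bounds of that slice), which is why having more slices raises the odds of a useful overlap.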
>
> The more US images you have, the better the
> chance that this could lead to a reasonable
> registration.
>
> Note that this requires further modifications
> to the ImageMetric class.
>
>    Regards,
>
>
>       Luis
>
>
>
> --------------------------------
> N.E. Mackenzie Mackay wrote:
>
>> I was thinking the same thing.
>> The only thing I was worried about is using the method in 3D.  If 
>> some of the points don't map onto the US image, will the 
>> registration method ignore those points or will it throw an error?
>> On Sep 14, 2004, at 10:33 PM, Luis Ibanez wrote:
>>>
>>> Hi Neilson,
>>>
>>> MutualInformation is mostly a region-based image metric.
>>> This means that its value gets better when the overlap
>>> between matching regions of the image is large. Mutual
>>> Information is not particularly well suited for matching
>>> thin structures, since its random sampling is unlikely
>>> to select many pixels belonging to those structures.
>>>
>>> In that sense you probably shouldn't expect much from Mutual
>>> Information for registering Bone, since bone structures are
>>> mostly shell-like and they don't fill large regions of space.
>>> E.g. large bones have layers of highly calcified
>>> cortical bone, but their width usually covers just a couple of
>>> pixels in a CT scan.
>>>
>>> Given that you seem to have segmented the bone from the CT scan,
>>> it is probably worth trying a Model-to-Image registration approach.
>>> This can be done by taking points on the surface of your bone
>>> segmentation, and/or from a band around that surface, and using
>>> them to match the intensities (and structure) of the same bone as
>>> seen in the UltraSound images.
>>>
>>> Could you post a couple of the US images ?
>>>
>>> (e.g. you could put them in www.mypacs.net and let us know their
>>>  image ID).
>>>
>>>
>>> Depending on how the bone structures look in the US images,
>>> there may be different possible metrics to try in a
>>> PointSet-to-Image registration.
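As a sketch of what a PointSet-to-Image metric evaluates: transform each model point into the image, sample the image intensity there, and combine the samples into a score. The toy below uses nearest-neighbour lookup and a plain mean; ITK's PointSet-to-Image metrics are more elaborate, and all names here are illustrative.

```python
import numpy as np

def point_set_metric(points, image, transform):
    """Toy PointSet-to-Image metric: mean image intensity sampled at the
    transformed points, skipping points that fall outside the image."""
    total, count = 0.0, 0
    for p in points:
        q = transform(np.asarray(p, float))
        idx = tuple(np.round(q).astype(int))   # nearest-neighbour lookup
        if all(0 <= i < s for i, s in zip(idx, image.shape)):
            total += image[idx]
            count += 1
    return total / count if count else 0.0     # 0.0 when nothing overlaps
```

An optimizer then searches for transform parameters that improve this score; points that miss the image are simply skipped, which is the behaviour Neilson asked about earlier in the thread.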
>>>
>>>
>>> BTW, when you start working in 3D, don't attempt to use rigid
>>> transforms until you have managed to tune all the other parameters
>>> of the registration to work with simple translation transforms.
>>> It is more effective to deal with a single issue at a time.
>>>
>>>
>>>
>>> Regards,
>>>
>>>
>>>     Luis
>>>
>>>
>>>
>>> -------------------------------
>>> N.E. Mackenzie Mackay wrote:
>>>
>>>> Hi,
>>>>     I have tried for the last while to get a single ultrasound 
>>>> image to register to a CT volume.  Specifically try to get the bone 
>>>> of an ultrasound image and bone of the CT to register together.  Up 
>>>> to now I am having quite some trouble.
>>>> I have been able to segment the bone from CT and give an estimate 
>>>> on where the bone is in the ultrasound.  I am now trying to 
>>>> register those two images.
>>>> This is what I am using:
>>>> MattesMutualInformationImageToImageMetric - chosen because the 
>>>> registration involves two different modalities.  Couldn't use 
>>>> feature registration because it is too hard to segment the 
>>>> ultrasound correctly.
>>>>     - using 50 bins and 20%-100% of the samples still doesn't give 
>>>> adequate results.
>>>> LinearInterpolateImageFunction
>>>> RegularStepGradientDescentOptimizer
>>>> Euler3DTransform
>>>> Both images ( 3D CT and US ) are normalized before the registration.
>>>> I have used a maximum step ranging from 0.5-6, and a min step of 
>>>> 0.005-1.
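For context on what those step bounds do: ITK's RegularStepGradientDescentOptimizer takes a fixed-length step along the gradient and shrinks the step (by a relaxation factor, 0.5 by default) whenever the gradient direction reverses, stopping once the step falls below the minimum. A 1D toy version of that rule (a sketch, not the real implementation):

```python
def regular_step_descent(grad, x0, max_step=4.0, min_step=0.01, relax=0.5,
                         max_iters=200):
    """1D sketch of the regular-step rule: fixed-length steps downhill,
    shrinking the step each time the gradient changes sign (overshoot)."""
    x, step, prev_g = x0, max_step, 0.0
    for _ in range(max_iters):
        g = grad(x)
        if g == 0.0 or step < min_step:
            break                      # converged or step exhausted
        if g * prev_g < 0:             # sign flip: stepped past the minimum
            step *= relax
        x -= step * (1.0 if g > 0 else -1.0)
        prev_g = g
    return x
```

A max step that is too large can throw the first 3D iterations far outside the capture range, and a min step that is too large stops the search before it refines, which is one common reason settings that work in 2D fail in 3D.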
>>>> I have a good initial guess ( at most 2 cm away from the correct 
>>>> position, with about 0-30 degrees of rotation ).  I tested the 
>>>> registration method in 2D and have had success.  When I use the 
>>>> exact same variables in 3D, the registration is poor.
>>>> Does anyone have any suggestions?  I would be happy to provide a 
>>>> couple of images or actual code to show you what I am dealing with.
>>>> Neilson
>>>> _______________________________________________
>>>> Insight-users mailing list
>>>> Insight-users at itk.org
>>>> http://www.itk.org/mailman/listinfo/insight-users
>>>
>>>
>


