# TubeTK/Documentation/Sliding Organ Registration

From KitwarePublic

Revision as of 22:23, 11 November 2010 by Danielle.pace (talk | contribs)

# Overview

# Use case

# Related classes

- Base/Registration/itkImageToImageDiffusiveDeformableRegistrationFilter.h
- Base/Registration/itkImageToImageDiffusiveDeformableRegistrationFilter.txx
- Base/Registration/itkImageToImageDiffusiveDeformableRegistrationFunction.h
- Base/Registration/itkImageToImageDiffusiveDeformableRegistrationFunction.txx

# Related Works

- Slipping objects in image registration: Improved motion field estimation with direction-dependent regularization
- Alexander Schmidt-Richberg, Jan Ehrhardt, Rene Werner and Heinz Handels
- MICCAI 2009, Lecture Notes in Computer Science, Volume 5761, pp. 755-762, 2009
- http://www.springerlink.com/content/j9675524406844p5/
- Abstract:
- The computation of accurate motion fields is a crucial aspect in 4D medical imaging. It is usually done using a non-linear registration without further modeling of physiological motion properties. However, a globally homogeneous smoothing (regularization) of the motion field during the registration process can contradict the characteristics of motion dynamics. This is particularly the case when two organs slip along each other which leads to discontinuities in the motion field. In this paper, we present a diffusion-based model for incorporating physiological knowledge in image registration. By decoupling normal- and tangential-directed smoothing, we are able to estimate slipping motion at the organ borders while ensuring smooth motion fields in the inside and preventing gaps to arise in the field. We evaluate our model focusing on the estimation of respiratory lung motion. By accounting for the discontinuous motion of visceral and parietal pleurae, we are able to show a significant increase of registration accuracy with respect to the target registration error (TRE).

- An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences
- Hans-Hellmut Nagel and Wilfried Enkelmann
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(5), pp. 565-593, 1986
- http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4767833
- Abstract:
- A mapping between one frame from an image sequence and the preceding or following frame can be represented as a displacement vector field. In most situations, the mere gray value variations do not provide sufficient information in order to estimate such a displacement vector field. Supplementary constraints are necessary, for example the postulate that a displacement vector field varies smoothly as a function of the image position. Taken as a general requirement, this creates difficulties at gray value transitions which correspond to occluding contours. Nagel therefore introduced the oriented smoothness requirement which restricts variations of the displacement vector field only in directions with small or no variation of gray values. This contribution reports results of an investigation about how such an "oriented smoothness" constraint may be formulated and evaluated.


- A review of nonlinear diffusion filtering
- Joachim Weickert
- Scale-Space Theory in Computer Vision, Lecture Notes in Computer Science, Volume 1252, pp. 1-28, 1997
- http://www.springerlink.com/content/ywu8306108123080/
- Abstract:
- This paper gives an overview of scale-space and image enhancement techniques which are based on parabolic partial differential equations in divergence form. In the nonlinear setting this filter class allows to integrate a-priori knowledge into the evolution. We sketch basic ideas behind the different filter models, discuss their theoretical foundations and scale-space properties, discrete aspects, suitable algorithms, generalizations, and applications.

- Data assimilation using a gradient descent method for estimation of intraoperative brain deformation
- Songbai Ji, Alex Hartov, David Roberts and Keith Paulsen
- Medical Image Analysis, 13(5), pp. 744-756, 2009
- http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2749709/
- Abstract:
- Biomechanical models that simulate brain deformation are gaining attention as alternatives for brain shift compensation. One approach, known as the “forced-displacement method”, constrains the model to exactly match the measured data through boundary condition (BC) assignment. Although it improves model estimates and is computationally attractive, the method generates fictitious forces and may be ill-advised due to measurement uncertainty. Previously, we have shown that by assimilating intraoperatively acquired brain displacements in an inversion scheme, the Representer algorithm (REP) is able to maintain stress-free BCs and improve model estimates by 33% over those without data guidance in a controlled environment. However, REP is computationally efficient only when a few data points are used for model guidance because its costs scale linearly in the number of data points assimilated, thereby limiting its utility (and accuracy) in clinical settings. In this paper, we present a steepest gradient descent algorithm (SGD) whose computational complexity scales nearly invariantly with the number of measurements assimilated by iteratively adjusting the forcing conditions to minimize the difference between measured and model-estimated displacements (model-data misfit). Solutions of full linear systems of equations are achieved with a parallelized direct solver on a shared-memory, eight-processor Linux cluster. We summarize the error contributions from the entire process of model-updated image registration compensation and we show that SGD is able to attain model estimates comparable to or better than those obtained with REP, capturing about 74% to 82% of tumor displacement, but with a computational effort that is significantly less (a factor of 4-fold or more reduction relative to REP) and nearly invariant to the amount of sparse data involved when the number of points assimilated is large. Based on five patient cases, an average computational cost of approximately 2 minutes for estimating whole-brain deformation has been achieved with SGD using 100 sparse data points, suggesting the new algorithm is sufficiently fast with adequate accuracy for routine use in the operating room (OR).