[Insight-developers] Tensors - my thoughts so far

Torsten Rohlfing torsten at synapse.sri.com
Tue Feb 1 19:20:43 EST 2005


Hi everyone.

Okay, it seems that there is quite an interest out there in tensor 
representation, but also that there is not yet a consensus on how best 
to implement one that is both general and efficient (in terms of both 
memory and computational performance). I have played with some tensor 
data here and experimented with different representations and designs. 
I would like to share some thoughts I've come up with and ask for 
comments.

I looked at the implementation proposal at

http://www.na-mic.org/Wiki/index.php/NAMIC_Wiki:DTI:ITK-DiffusionTensorPixelType 


that Luis mentioned earlier, and it goes roughly in the direction that 
I think makes sense. I would, however, favor the following changes:

a) The tensor dimension should be templated to support 2x2 and 4x4 
tensors, etc. As the parent class for the compact symmetric tensor, I 
suggest a FixedArray of dimension n(n+1)/2, where n is the size of the 
square tensor matrix (for non-symmetric tensors, read below).

b) Algorithms that benefit from knowing the dimension (e.g., eigenvalue 
computation) should be implemented as traits, with specialized 
implementations where available and a generic implementation for all 
other dimensions.

c) For the class name, I would suggest something like "SymmetricTensor", 
since it would not be specific to diffusion imaging anymore. It's also 
not "just" a pixel type for the same reason that "Vector" is not merely 
considered a pixel type.
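
To make points a) and b) concrete, here is a minimal sketch. All names 
are illustrative, not actual ITK API; in ITK the class would derive 
from itk::FixedArray<T, N*(N+1)/2>, for which a std::array stands in 
here:

```cpp
#include <array>
#include <cstddef>
#include <utility>

// Point (a): dimension-templated symmetric tensor, stored compactly
// as the n(n+1)/2 upper-triangle elements (sketch only).
template <typename T, std::size_t N>
class SymmetricTensor
{
public:
  static constexpr std::size_t InternalDimension = N * (N + 1) / 2;

  // (i,j) and (j,i) map to the same stored element.
  T & operator()(std::size_t i, std::size_t j)
  {
    if (i > j)
      std::swap(i, j);
    // row-major upper triangle: row i starts at offset i*N - i*(i-1)/2
    return m_Data[i * N - i * (i - 1) / 2 + (j - i)];
  }

  const T & operator()(std::size_t i, std::size_t j) const
  {
    return const_cast<SymmetricTensor &>(*this)(i, j);
  }

private:
  std::array<T, InternalDimension> m_Data{};
};

// Point (b): traits with specialized implementations where available.
// The primary template would carry a generic iterative eigensolver,
// the 3x3 specialization an analytic one (solver bodies omitted).
template <std::size_t N>
struct SymmetricEigenTraits { static constexpr bool HasClosedForm = false; };

template <>
struct SymmetricEigenTraits<3> { static constexpr bool HasClosedForm = true; };
```

With this layout a 3x3 tensor occupies six elements instead of nine, 
and algorithms pick the specialized path at compile time.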

Regarding non-symmetric tensors, I suggest a more general class 
"Tensor", which would hold an actual tensor in matrix form. To avoid 
code duplication (of implementations), the "SymmetricTensor" class could 
use the general tensor class for all computations that require 
construction of the matrix representation. For that, the "Tensor" class 
would be given a conversion constructor from a "SymmetricTensor".
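
A sketch of that conversion, with hypothetical class names and fixed 
to 3x3 for brevity: the general tensor mirrors the six compact values 
into a full matrix, so matrix-based algorithms need only one 
implementation.

```cpp
#include <array>
#include <cstddef>

// Minimal stand-ins; not actual ITK classes.
struct SymmetricTensor3
{
  // compact upper triangle, row-major: (0,0)(0,1)(0,2)(1,1)(1,2)(2,2)
  std::array<double, 6> Data{};
};

struct Tensor3
{
  double M[3][3]{};

  Tensor3() = default;

  // Conversion constructor: expand the compact symmetric form into
  // the full matrix by mirroring the upper triangle.
  Tensor3(const SymmetricTensor3 & s)
  {
    std::size_t k = 0;
    for (std::size_t i = 0; i < 3; ++i)
      for (std::size_t j = i; j < 3; ++j)
        M[i][j] = M[j][i] = s.Data[k++];
  }
};
```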

The computational overhead should be negligible, assuming, as noted, 
that the given operation needs the full tensor matrix representation 
anyway. If an operation can be performed directly on the compact 
symmetric representation, it probably should be. Of course, this 
approach requires replication of interface functions, unless the 
burden is put on the class user by requiring that all generic tensor 
operations be performed by first constructing a general tensor from 
the symmetric one and then performing the operation on the general 
tensor. I have a feeling that interface replication would be the 
better choice, since it allows symmetric and general tensors to be 
used interchangeably as template parameters.
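
The interchangeability point could look like this (illustrative names, 
3x3 for brevity): if both classes expose the same (i,j) accessor, a 
generic algorithm written once as a function template accepts either.

```cpp
#include <cstddef>

// Hypothetical stand-ins for the two proposed classes.
struct SymmetricTensor3
{
  double Data[6]{};  // compact upper triangle: (0,0)(0,1)(0,2)(1,1)(1,2)(2,2)
  double operator()(std::size_t i, std::size_t j) const
  {
    if (i > j) { const std::size_t t = i; i = j; j = t; }
    return Data[i * 3 - i * (i - 1) / 2 + (j - i)];
  }
};

struct Tensor3
{
  double M[3][3]{};  // full matrix
  double operator()(std::size_t i, std::size_t j) const { return M[i][j]; }
};

// Written once, usable with either representation.
template <typename TTensor>
double Trace(const TTensor & t)
{
  return t(0, 0) + t(1, 1) + t(2, 2);
}
```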

Regarding tensors of rank greater than 2, my gut feeling is to leave 
those for later and approach them independently of rank-2 tensors. The 
interface to arbitrary-rank tensors will have to be quite different 
from the rank-2 case, since, for example, the number of index 
variables for the tensor elements would vary, so we could not use the 
convenient (i,j) indexing. Also, while I have to admit that I really 
don't know much about tensors, I don't quite see how concepts like 
eigensystems would generalize to higher-order tensors.

Anyway, I have coded a prototype design that implements a symmetric 
tensor class with specialized 3D eigenvalue computation, and an image 
filter that computes the fractional anisotropy for a tensor image. I'd 
be happy to post this code (or put it somewhere for download) for 
further discussion.
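
For reference, the per-voxel value such a fractional-anisotropy filter 
computes reduces to the standard formula on the three eigenvalues; a 
minimal standalone sketch (function name illustrative):

```cpp
#include <cmath>

// FA = sqrt(3/2) * sqrt(sum_i (l_i - l_mean)^2) / sqrt(sum_i l_i^2),
// where l_1..l_3 are the eigenvalues of the 3x3 diffusion tensor.
double FractionalAnisotropy(double l1, double l2, double l3)
{
  const double mean = (l1 + l2 + l3) / 3.0;
  const double num = (l1 - mean) * (l1 - mean)
                   + (l2 - mean) * (l2 - mean)
                   + (l3 - mean) * (l3 - mean);
  const double den = l1 * l1 + l2 * l2 + l3 * l3;
  if (den == 0.0)
    return 0.0;  // degenerate zero tensor
  return std::sqrt(1.5 * num / den);
}
```

The value is 0 for an isotropic tensor (all eigenvalues equal) and 
approaches 1 as diffusion becomes confined to a single direction.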

Best,
  Torsten

-- 
Torsten Rohlfing, PhD           SRI International, Neuroscience Program
 Research Scientist              333 Ravenswood Ave
  torsten at synapse.sri.com         Menlo Park, CA 94025
   Phone: ++1 (650) 859-3379       Fax: ++1 (650) 859-2743

     "Though this be madness, yet there is a method in't"


