# Refactoring Level-Set framework - V4


## Latest revision as of 21:59, 12 August 2011

This page outlines the proposed changes to the current Level-Set framework. A number of these changes will break backwards compatibility.

## Goals

### Beta Release

- Ease the development, integration, and combination of new terms in level-set evolution PDE(s):
  - Propagation term
  - Advection term
  - Curvature term
  - ChanAndVese internal energy term
  - ChanAndVese external energy term
  - Overlap penalty term
  - *etc.*

- Provide an infrastructure that allows evolving 1 to N level-sets,
- Provide an infrastructure to limit the domain in which level-sets can propagate,
- Provide an infrastructure to control when to stop the evolution of the level-sets,
- Provide an infrastructure for IterationEvent to query and modify during iterations,
- Use space-efficient label maps for representing level-sets and sparse fields,
- Ease the conversion from BinaryMask / LabelMap to level-sets (and *vice versa*),
- Provide generic implementations:
  - *"Dense"* case
  - *"Sparse"* case
    - Whitaker representation, where values are in [-3,3], made of 5 layers: L_{-2} (-2), L_{-1} (-1), L_{0} (0), L_{1} (1), L_{2} (2)
    - Shi representation, where values are in {-3,-1,+1,+3}, made of 2 layers: L_{in} (-1) and L_{out} (+1)
    - Malcolm representation, where values are in {-1,0,+1}, made of 1 layer: L_{0} (0).
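The three sparse representations above can be summarized as small layer-id-to-value maps. The sketch below is illustrative only: the function names are hypothetical (not ITK API), and the saturation values outside the band are inferred from the value ranges listed above.

```cpp
#include <cassert>
#include <map>

// Hypothetical sketch: layer id -> level-set value for each sparse
// representation. Names are illustrative assumptions, not ITK classes.
std::map<int, int> WhitakerLayers() {
  // 5 layers L_{-2}..L_{2}; values outside the band saturate at +/-3.
  return { {-2, -2}, {-1, -1}, {0, 0}, {1, 1}, {2, 2} };
}

std::map<int, int> ShiLayers() {
  // 2 layers: L_in (-1) and L_out (+1); interior/exterior are -3/+3.
  return { {-1, -1}, {+1, +1} };
}

std::map<int, int> MalcolmLayers() {
  // single layer L_0 (0); interior/exterior are -1/+1.
  return { {0, 0} };
}
```

The layer count shrinks from Whitaker (5) to Shi (2) to Malcolm (1), trading accuracy of the distance approximation for memory and update speed.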

### Won't be part of the beta

- Parametric case
- Extension to work on Images and Meshes

## Authors

These changes to the Level-Set framework are proposed by the Megason Lab, Systems Biology Department, Harvard Medical School.

Questions and comments regarding these changes should be sent to

- Arnaud Gelas (arnaud_gelas -at- hms.harvard.edu)
- Sean Megason (sean_megason -at- hms.harvard.edu)

Other team members include:

- Kishore Mosaliganti (kishore_mosaliganti -at- hms.harvard.edu)
- Nicolas Rannou (nicolas_rannou -at- hms.harvard.edu)

## Tcons

- 29 July. Attendees: Karthik Krishnan (Kitware), Lucas Antiga (Orobix), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- 11 August. Attendees: Luis Ibanez (Kitware), Ivo Sbalzarini (ETH), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- 18 August. Attendees: Gregory Paul (ETH), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- 1 September. Attendees: Olivier Bernard (Creatis), Joel Schaerer (Creatis), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- Tcon 2010-09-10
- Level Set GPU Local Meeting in Boston 2010-10-14
- ITKv4 Iowa meeting 2011 November 8-10

## Ideas

### Term

#### Term class

Represents one term of the PDE. A first, incomplete example implementation is located at http://github.com/lantiga/ModularLevelSets.

- One weight (scalar)
- One Level Set Function
- `virtual GetCFLContribution() = 0;`
- `virtual Value() = 0;`
- `virtual bool IsStreamable() = 0;`
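A minimal sketch of the interface above, assuming `double`-valued terms; the names `TermBase` and `PropagationTerm` and the signatures are illustrative assumptions, not the ITK API.

```cpp
#include <cassert>

// Sketch of the proposed term interface: one scalar weight per term,
// plus the three pure-virtual methods listed above.
class TermBase {
public:
  explicit TermBase(double weight = 1.0) : m_Weight(weight) {}
  virtual ~TermBase() = default;

  // Contribution of this term to the CFL time-step bound.
  virtual double GetCFLContribution() const = 0;
  // Weighted value of the term at the current location.
  virtual double Value() const = 0;
  // Whether the term can be evaluated on streamed sub-regions.
  virtual bool IsStreamable() const = 0;

protected:
  double m_Weight; // one weight (scalar)
};

// Example concrete term: a constant propagation speed.
class PropagationTerm : public TermBase {
public:
  using TermBase::TermBase;
  double GetCFLContribution() const override { return 1.0; }
  double Value() const override { return m_Weight * 1.0; }
  bool IsStreamable() const override { return true; }
};
```

New terms (advection, curvature, overlap penalty, ...) would then only override these three methods.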

#### Term Container class

- `std::map< id, Term >` to be able to dynamically add and remove terms at any time during the evolution
- `virtual Value() = 0;`
- `bool IsStreamable(); // iterates over all term elements as long as there is a streamable element.`

#### Term Sum Container

- Inherits from Term Container
- Implements Value() as the sum of all elements in the container

#### Term Product Container

(The application is not yet known, but as Joel Schaerer noted, it is theoretically possible to implement.)

- Inherits from Term Container
- Implements Value() as the product of all elements in the container
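The container idea can be sketched as below: terms are keyed by an integer id in a `std::map` so they can be added and removed during evolution, and the sum variant accumulates their values. `TermBase`, `ConstantTerm`, and `TermSumContainer` are hypothetical stand-ins, not ITK classes.

```cpp
#include <cassert>
#include <map>
#include <memory>

// Stand-in term interface for the sketch.
struct TermBase {
  virtual ~TermBase() = default;
  virtual double Value() const = 0;
  virtual bool IsStreamable() const = 0;
};

struct ConstantTerm : TermBase {
  explicit ConstantTerm(double v) : m_Value(v) {}
  double Value() const override { return m_Value; }
  bool IsStreamable() const override { return true; }
  double m_Value;
};

class TermSumContainer {
public:
  // Terms can be added or removed at any time during the evolution.
  void AddTerm(int id, std::unique_ptr<TermBase> term) {
    m_Terms[id] = std::move(term);
  }
  void RemoveTerm(int id) { m_Terms.erase(id); }

  // Value() as the sum of all elements in the container.
  double Value() const {
    double sum = 0.0;
    for (const auto& kv : m_Terms) sum += kv.second->Value();
    return sum;
  }

  // Streamable only if every contained term is streamable.
  bool IsStreamable() const {
    for (const auto& kv : m_Terms)
      if (!kv.second->IsStreamable()) return false;
    return true;
  }

private:
  std::map<int, std::unique_ptr<TermBase>> m_Terms; // std::map< id, Term >
};
```

A product container would differ only in `Value()`, multiplying instead of summing.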

### LevelSetFunctionBase

The level-set function representation could be discrete or parametric (continuous).

- `virtual Value() = 0;`
- `virtual Grad() = 0;`
- `virtual Hessian() = 0;`
- `virtual bool IsStreamable() = 0;`

### LocalStructure

This structure will be used to pass information at each step for a single location x.

- Location x
- `std::pair< bool, xxx > value` (the bool encodes whether this quantity has already been computed)
- `std::pair< bool, xxx > grad`
- *etc.*
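The pair-of-bool-and-value layout above amounts to a per-location cache, so that a quantity shared by several terms is computed at most once per step. A minimal sketch, assuming a 2-D image and `double` values (the concrete types here stand in for the unspecified `xxx`):

```cpp
#include <array>
#include <cassert>
#include <utility>

// Sketch of the LocalStructure cache: each quantity carries a bool
// flag recording whether it has already been computed at location x.
// Types and names are illustrative assumptions.
struct LocalStructure {
  std::array<int, 2> location{};                           // location x
  std::pair<bool, double> value{false, 0.0};               // phi(x)
  std::pair<bool, std::array<double, 2>> grad{false, {}};  // grad phi(x)
};

// A term asks for the value; it is evaluated at most once per location.
double GetValue(LocalStructure& s) {
  if (!s.value.first) {
    s.value = {true, 1.5};  // placeholder for the real evaluation
  }
  return s.value.second;
}
```

Subsequent terms that need the same quantity at the same location reuse the cached pair instead of recomputing it.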

### Reinitialization Filter

- Input: one level-set function
- Output: one level-set function
- `virtual void GenerateData() = 0; // processing`

### Stopping criterion

- Functor?
- Inputs: Phi^{n}, Phi^{n-1}, number of iterations
- Check which additional values can be passed (avoiding, as much as possible, an explosion of computational cost!)
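A hedged sketch of such a functor, taking Phi^{n}, Phi^{n-1}, and the iteration count as inputs; the RMS-change criterion, the thresholds, and the names are assumptions for illustration, not the framework's decided design.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative stopping-criterion functor: stop when the RMS change
// between Phi^{n} and Phi^{n-1} is small, or when the iteration
// budget is exhausted.
struct StoppingCriterion {
  double m_RMSThreshold = 1e-3;
  unsigned int m_MaxIterations = 100;

  bool operator()(const std::vector<double>& phiCurrent,
                  const std::vector<double>& phiPrevious,
                  unsigned int iteration) const {
    if (iteration >= m_MaxIterations) return true;
    double sum = 0.0;
    for (std::size_t i = 0; i < phiCurrent.size(); ++i) {
      const double d = phiCurrent[i] - phiPrevious[i];
      sum += d * d;
    }
    const double rms = std::sqrt(sum / phiCurrent.size());
    return rms < m_RMSThreshold;  // converged when the update is tiny
  }
};
```

Passing only the two fields and the iteration count keeps the criterion cheap; any extra quantities would have to be justified against their computational cost, as noted above.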

### SamplingDomain

- Represents the domain where the level-set PDE is sampled, and thus used to solve the PDE.
- `Iterator Begin();`
- `Iterator End();`
- `Iterator Iterator::operator++()` (and `operator--()`?)

### Thoughts on dependencies among terms

In order to make term implementations modular and reusable, terms should be able to reuse one another. For instance, a curvature term should depend on a "first derivatives" term and a "Hessian" term, and only implement the formula for curvature. How the first and second derivatives are implemented should itself be kept modular, so that one can adopt, for instance, a finite-differences or a Gaussian-convolution approach to compute them, and the curvature code could stay the same.

I see two ways of achieving this:

1. reuse terms in other terms by referring to their (expected) name in the term container (the global workspace approach);
2. specify dependencies by instantiating them and storing them in a dependency container in each term class (the dependency approach, adopted in [ModularLevelSets](http://github.com/lantiga/ModularLevelSets); see for instance itk::LevelSetCurvatureTerm and how it depends on two other terms, itk::LevelSetDerivativesTerm and itk::LevelSetHessianTerm).

I now recall the process that led me to having dependencies in [ModularLevelSets](http://github.com/lantiga/ModularLevelSets) rather than a global workspace.

If you have a global workspace, then you may end up having a couple of potential maintenance issues:

1. all terms have to know what names are used for each quantity, and have no way of knowing whether something has changed in other classes in the container. Say Term1 computes "Hessian" and puts it in the workspace. Term2 needs Hessian and grabs it from the workspace. If Term1 renames the term to FiniteDifferenceHessian, Term2 will still call it Hessian (unless we require the user to check that dependencies are built with correct names). There would really be no way of knowing whether things are OK until we assemble all the terms and run them.

2. you'd have a hard time detecting inconsistencies at compile time: say Term1 computes the Hessian using finite differences, while Term2 computes the Hessian using convolution, but they both call it Hessian. Term1 puts the "Hessian" in the workspace, and Term2 finds a "Hessian" in the workspace and uses it, but it's not the Hessian Term2 needs.

Of course, careful development will avoid all of these issues, but a newcomer would have to know about every quantity the other terms generate.

On the other hand, there are a few advantages to using dependencies:

1. if Term2 depends on the Hessian term, there's a way to have term compatibility checked at compile time (see the term constructor in [ModularLevelSets](http://github.com/lantiga/ModularLevelSets), in particular itk::LevelSetCurvatureTerm).

2. since Hessian is a term, i.e. a class, there's no chance of creating conflicting classes with the same name. If Term1 depends on a Hessian computed using finite differences, the developer of Term2 will be forced to implement a ConvolutionHessian term. In other words, the dependencies of the two terms will be local to the terms, with no chance of collisions in a global workspace, irrespective of how the terms are assembled.

Dependencies have to be analyzed only once, when it's time for the level sets to run. You can decide the order in which to evaluate terms, and cache the result of a term if it is a dependency of more than one term, so there's really no extra cost compared to the global workspace option; terms are computed the same number of times.
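The dependency approach can be sketched as follows: the curvature term receives its Hessian dependency explicitly (here via a template parameter), so an incompatible combination simply fails to compile. The class names echo the ModularLevelSets idea, but the bodies are illustrative placeholders, not the real implementation.

```cpp
#include <cassert>

// Illustrative dependency term: a Hessian computed by finite
// differences (the returned value is a placeholder).
struct FiniteDifferenceHessianTerm {
  double Value() const { return 0.25; }
};

// The curvature term declares its dependency explicitly instead of
// looking a "Hessian" up by name in a global workspace. Swapping in a
// ConvolutionHessianTerm would require no change to this class.
template <typename THessianTerm>
class CurvatureTerm {
public:
  explicit CurvatureTerm(const THessianTerm& hessian)
    : m_Hessian(hessian) {}

  // Placeholder curvature formula built on the dependency's value.
  double Value() const { return 2.0 * m_Hessian.Value(); }

private:
  const THessianTerm& m_Hessian;
};
```

Because the dependency is part of the type, "Term2 silently grabs the wrong Hessian" cannot happen; the wiring is checked when the terms are assembled.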

This is just my view on the topic, but in general I think this is a really important issue, as we expect that terms will grow in number.

Luca

## TODO

For the Beta release:

- Add `\ingroup ITK-LevelSetsv4` to the Doxygen description of every header file.
- Move all implementations from *.h to *.hxx
- UpdatePixel() methods need to be implemented and validated for Sparse, Shi, and Malcolm
- Implement EvaluateGradient and EvaluateHessian for all representations
- Implement Evaluate* methods when passing a LevelSetDataType as reference for all representations (and make use of this method instead)
  - Assign string names to each Value/Gradient/Hessian type in LevelSetDataType
  - Before beginning the iterations, iterate through all term containers to identify which Value/Gradient/Hessian are needed
    - The values needed by a given term container depend on its individual terms. Each term identifies the values it uses by its string name.
  - In each iteration, before going through the container, compute all the relevant Value/Gradient/Hessian quantities for the pixel

- Use image iterators instead of GetPixel() methods in the dense case
- DomainMapFilter should be made a LabelMap
- Write tests for curvature, propagation, and advection terms for all representations
- Rename files more consistently
- Condense code, especially for the adaptors and sparse level-set evolution classes
- Regularization method for the Shi and Malcolm methods
- Implement the overlap penalty term
- Implement the advection term
- Multithreading of level-sets and sparse layer updates (may not be for the beta release!).