# Refactoring Level-Set framework - V4

This page outlines the proposed changes to the current Level-Set framework. A number of these changes will break backwards compatibility.

## Authors

These changes are proposed by the Megason Lab, Department of Systems Biology, Harvard Medical School.

Questions and comments regarding these changes should be sent to

- Arnaud Gelas (arnaud_gelas -at- hms.harvard.edu)
- Sean Megason (sean_megason -at- hms.harvard.edu)

Other team members include:

- Kishore Mosaliganti (kishore_mosaliganti -at- hms.harvard.edu)
- Nicolas Rannou (nicolas_rannou -at- hms.harvard.edu)

## Tcons (teleconferences)

- 29 July 2010. Attendees: Karthik Krishnan (Kitware), Lucas Antiga (Orobix), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- 11 August 2010. Attendees: Luis Ibanez (Kitware), Ivo Sbalzarini (ETH), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- 18 August 2010. Attendees: Gregory Paul (ETH), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- 1 September 2010. Attendees: Olivier Bernard (Creatis), Joel Schaerer (Creatis), Kishore Mosaliganti (Harvard), Arnaud Gelas (Harvard)
- 10 September 2010. Tcon
- 14 October 2010. Level set GPU local meeting in Boston
- 8-10 November 2011. ITKv4 Iowa meeting

## Ideas

### Term

#### Term class

Represents one term of the PDE. A first (incomplete) example implementation is located at http://github.com/lantiga/ModularLevelSets.

- One weight (scalar)
- One Level Set Function
- virtual GetCFLContribution() = 0;
- virtual Value() = 0;
- virtual bool IsStreamable() = 0;
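The interface above could be sketched as follows; this is a minimal illustration, not the final ITK API, and all names (`TermBase`, `ConstantTerm`, the member names) are assumptions:

```cpp
#include <string>

// Hypothetical sketch of the Term interface described above: one scalar
// weight, and pure virtual hooks for the CFL contribution, the term
// value, and streamability. Names are illustrative only.
class TermBase
{
public:
  virtual ~TermBase() {}

  void SetWeight( double w ) { m_Weight = w; }
  double GetWeight() const { return m_Weight; }

  // Contribution of this term to the CFL time-step bound.
  virtual double GetCFLContribution() const = 0;

  // Value of the term at the current location.
  virtual double Value() const = 0;

  // Whether this term can be evaluated on a streamed (tiled) domain.
  virtual bool IsStreamable() const = 0;

protected:
  double m_Weight = 1.0;
};

// Minimal concrete term, used only to exercise the interface.
class ConstantTerm : public TermBase
{
public:
  explicit ConstantTerm( double v ) : m_Value( v ) {}
  double GetCFLContribution() const override { return 0.0; }
  double Value() const override { return m_Weight * m_Value; }
  bool IsStreamable() const override { return true; }

private:
  double m_Value;
};
```

A concrete term would also hold (a pointer to) its level set function; that is omitted here for brevity.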

#### Term Container class

- std::map< id, Term >, so that terms can be dynamically added/removed at any time during the evolution
- virtual Value() = 0;
- bool IsStreamable(); // iterates over all terms; the container is streamable only if every term is streamable
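A minimal sketch of the container, assuming a shared-pointer map keyed by an integer id (both assumptions); the pure virtual `Value()` of the design is left to the sum/product subclasses described below and omitted here so the class can be instantiated:

```cpp
#include <cstddef>
#include <map>
#include <memory>

// Hypothetical term interface (reduced to what the container needs).
class Term
{
public:
  virtual ~Term() {}
  virtual double Value() const = 0;
  virtual bool IsStreamable() const = 0;
};

// Hypothetical sketch of the term container: a std::map keyed by an id,
// so terms can be added or removed at any time during the evolution.
class TermContainer
{
public:
  using TermId = unsigned int;

  void AddTerm( TermId id, std::shared_ptr< Term > term )
    { m_Terms[ id ] = term; }

  void RemoveTerm( TermId id ) { m_Terms.erase( id ); }

  std::size_t GetNumberOfTerms() const { return m_Terms.size(); }

  // The container is streamable only if every contained term is.
  bool IsStreamable() const
  {
    for ( const auto & kv : m_Terms )
      {
      if ( !kv.second->IsStreamable() ) { return false; }
      }
    return true;
  }

protected:
  std::map< TermId, std::shared_ptr< Term > > m_Terms;
};

// Minimal concrete term for demonstration.
class DemoTerm : public Term
{
public:
  DemoTerm( double v, bool s ) : m_V( v ), m_S( s ) {}
  double Value() const override { return m_V; }
  bool IsStreamable() const override { return m_S; }
private:
  double m_V;
  bool m_S;
};
```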

#### Term Sum Container

- Inherits from Term Container
- Implements Value() as the sum of all elements in the container

#### Term Product Container

(The application is not yet known, but as Joel Schaerer pointed out, it is theoretically possible to implement.)

- Inherits from Term Container
- Implements Value() as the product of all elements in the container
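The sum and product variants differ only in how `Value()` combines the contained terms, as in this illustrative sketch (class and member names are assumptions, not the final API):

```cpp
#include <map>
#include <memory>

// Hypothetical term interface, reduced to Value().
class Term
{
public:
  virtual ~Term() {}
  virtual double Value() const = 0;
};

// Trivial term used only for the demonstration.
class ConstantTerm : public Term
{
public:
  explicit ConstantTerm( double v ) : m_V( v ) {}
  double Value() const override { return m_V; }
private:
  double m_V;
};

// Base container: Value() is pure virtual, implemented by subclasses.
class TermContainerBase
{
public:
  virtual ~TermContainerBase() {}
  void AddTerm( unsigned int id, std::shared_ptr< Term > t )
    { m_Terms[ id ] = t; }
  virtual double Value() const = 0;
protected:
  std::map< unsigned int, std::shared_ptr< Term > > m_Terms;
};

// Value() is the sum of all contained term values.
class TermSumContainer : public TermContainerBase
{
public:
  double Value() const override
  {
    double sum = 0.0;
    for ( const auto & kv : m_Terms ) { sum += kv.second->Value(); }
    return sum;
  }
};

// Value() is the product of all contained term values.
class TermProductContainer : public TermContainerBase
{
public:
  double Value() const override
  {
    double prod = 1.0;
    for ( const auto & kv : m_Terms ) { prod *= kv.second->Value(); }
    return prod;
  }
};
```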

### LevelSetFunctionBase

The level set function representation could be discrete or parametric (continuous).

- virtual Value() = 0;
- virtual Grad() = 0;
- virtual Hessian() = 0;
- virtual bool IsStreamable() = 0;
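A sketch of this interface for a 2-D level set, together with a parametric (continuous) example; the types, names, and the choice of `std::array` are assumptions made for illustration:

```cpp
#include <array>
#include <cmath>

// Hypothetical sketch of LevelSetFunctionBase in 2-D: the representation
// (discrete or parametric) is hidden behind Value(), Grad() and Hessian().
class LevelSetFunctionBase
{
public:
  using PointType = std::array< double, 2 >;
  using GradType = std::array< double, 2 >;
  using HessianType = std::array< std::array< double, 2 >, 2 >;

  virtual ~LevelSetFunctionBase() {}
  virtual double Value( const PointType & x ) const = 0;
  virtual GradType Grad( const PointType & x ) const = 0;
  virtual HessianType Hessian( const PointType & x ) const = 0;
  virtual bool IsStreamable() const = 0;
};

// Parametric example: signed distance to a circle of radius r centered
// at the origin, phi(x) = |x| - r.
class CircleLevelSet : public LevelSetFunctionBase
{
public:
  explicit CircleLevelSet( double r ) : m_Radius( r ) {}

  double Value( const PointType & x ) const override
  {
    return std::sqrt( x[0] * x[0] + x[1] * x[1] ) - m_Radius;
  }

  GradType Grad( const PointType & x ) const override
  {
    const double n = std::sqrt( x[0] * x[0] + x[1] * x[1] );
    return { x[0] / n, x[1] / n };
  }

  HessianType Hessian( const PointType & x ) const override
  {
    // Hessian of |x| is (I - n n^T) / |x|, with n the unit normal.
    const double n = std::sqrt( x[0] * x[0] + x[1] * x[1] );
    const double n3 = n * n * n;
    HessianType h;
    h[0][0] = x[1] * x[1] / n3;
    h[0][1] = -x[0] * x[1] / n3;
    h[1][0] = h[0][1];
    h[1][1] = x[0] * x[0] / n3;
    return h;
  }

  bool IsStreamable() const override { return true; }

private:
  double m_Radius;
};
```

A discrete representation would implement the same interface with values stored on a grid and derivatives obtained, for instance, by finite differences.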

### LocalStructure

This structure will be used to pass information at each step for a single location x.

- Location x
- std::pair< bool, xxx > value (the bool encodes whether this characteristic has already been computed)
- std::pair< bool, xxx > grad
- Etc…
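The caching idea can be sketched as follows; the field names, the 2-D point type, and the `GetValue` helper are all assumptions made for illustration:

```cpp
#include <array>
#include <utility>

// Hypothetical sketch of LocalStructure: for a single location x, each
// characteristic is stored as std::pair< bool, value >, where the bool
// records whether it has already been computed, so that several terms
// can share a single evaluation.
struct LocalStructure
{
  using PointType = std::array< double, 2 >;
  using GradType = std::array< double, 2 >;

  PointType Location{};

  std::pair< bool, double > Value{ false, 0.0 };
  std::pair< bool, GradType > Grad{ false, GradType{} };
  // ... one cached pair per characteristic (Hessian, curvature, ...)
};

// Example accessor: compute the value once, reuse it afterwards.
template < typename TFunction >
double GetValue( LocalStructure & ls, const TFunction & phi )
{
  if ( !ls.Value.first )
    {
    ls.Value = { true, phi( ls.Location ) };
    }
  return ls.Value.second;
}
```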

### Reinitialization Filter

- Input: one level set function
- Output: one level set function
- virtual void GenerateData() = 0; // processing
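A minimal sketch of this filter interface, representing the level set function as a dense 1-D grid of values; the class names and the trivial rescaling scheme are assumptions standing in for a real reinitialization method:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the reinitialization filter interface: one level
// set function in, one out, with GenerateData() doing the processing.
class ReinitializationFilterBase
{
public:
  using LevelSetType = std::vector< double >;

  virtual ~ReinitializationFilterBase() {}

  void SetInput( const LevelSetType & phi ) { m_Input = phi; }
  const LevelSetType & GetOutput() const { return m_Output; }

  virtual void GenerateData() = 0;

protected:
  LevelSetType m_Input;
  LevelSetType m_Output;
};

// Trivial 1-D example: rescale values so the maximum slope between
// neighbors is 1 (a stand-in for a real reinitialization scheme).
class SlopeRescaleFilter : public ReinitializationFilterBase
{
public:
  void GenerateData() override
  {
    m_Output = m_Input;
    double maxSlope = 0.0;
    for ( std::size_t i = 1; i < m_Output.size(); ++i )
      {
      const double s = std::abs( m_Output[i] - m_Output[i - 1] );
      if ( s > maxSlope ) { maxSlope = s; }
      }
    if ( maxSlope > 0.0 )
      {
      for ( double & v : m_Output ) { v /= maxSlope; }
      }
  }
};
```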

### Stopping criterion

- Functor?
- Inputs: Phi^{n}, Phi^{n-1}, Number of iterations
- Check for additional values that can be passed (to avoid, as far as possible, an explosion of computational cost)
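As a functor, the criterion could look like this sketch; the member names and the particular stopping rule (maximum iterations or small mean absolute update) are assumptions chosen for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical stopping-criterion functor: receives phi^{n}, phi^{n-1}
// and the iteration count, and returns true when evolution should stop.
struct StoppingCriterion
{
  unsigned int MaxIterations = 100;
  double Tolerance = 1e-3;

  bool operator()( const std::vector< double > & phiN,
                   const std::vector< double > & phiNm1,
                   unsigned int iteration ) const
  {
    if ( iteration >= MaxIterations ) { return true; }

    // Mean absolute update between consecutive iterates.
    double change = 0.0;
    for ( std::size_t i = 0; i < phiN.size(); ++i )
      {
      change += std::fabs( phiN[i] - phiNm1[i] );
      }
    change /= static_cast< double >( phiN.size() );
    return change < Tolerance;
  }
};
```

Passing both iterates plus the iteration count keeps the functor stateless; any extra quantities would be additional arguments, which is where the cost concern above comes in.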

### SamplingDomain

- Represents the domain where the level set PDE is sampled, and thus used to solve the PDE.
- Iterator Begin();
- Iterator End();
- Iterator & Iterator::operator++() (and operator--()?)
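A sketch of this iterator interface, using a dense 1-D index range to stand in for an arbitrary (possibly sparse) sampling domain; names and the index-based design are assumptions:

```cpp
#include <cstddef>

// Hypothetical SamplingDomain exposing Begin()/End() iterators over the
// locations where the PDE is sampled.
class SamplingDomain
{
public:
  class Iterator
  {
  public:
    explicit Iterator( std::size_t i ) : m_Index( i ) {}
    std::size_t operator*() const { return m_Index; }
    Iterator & operator++() { ++m_Index; return *this; }
    Iterator & operator--() { --m_Index; return *this; }
    bool operator!=( const Iterator & o ) const
      { return m_Index != o.m_Index; }

  private:
    std::size_t m_Index;
  };

  SamplingDomain( std::size_t begin, std::size_t end )
    : m_Begin( begin ), m_End( end ) {}

  Iterator Begin() const { return Iterator( m_Begin ); }
  Iterator End() const { return Iterator( m_End ); }

private:
  std::size_t m_Begin;
  std::size_t m_End;
};
```

A sparse-field or narrow-band domain would provide the same `Begin()`/`End()` interface over its active set, so solver code iterates identically over dense and sparse domains.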

### Thoughts on dependencies among terms

In order to make term implementation modular and reusable, terms should provide a way to reuse one another. For instance, a curvature term should depend on a "first derivatives" term and a "Hessian" term, and only implement the formula for curvature. How the first and second derivatives are implemented should itself be kept modular, so that one can adopt, for instance, a finite-difference or a Gaussian-convolution approach to compute them while the curvature code stays the same.

I see two ways of achieving this:

1. reuse terms in other terms by referring to their (expected) name in the term container (global workspace approach);
2. specify dependencies by instantiating them and storing them in a dependency container in each term class (dependency approach, adopted in [ModularLevelSets]; see for instance itk::LevelSetCurvatureTerm and how it depends on two other terms, itk::LevelSetDerivativesTerm and itk::LevelSetHessianTerm).
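The dependency approach (2) can be sketched as below. This is a heavily simplified illustration: the real classes in ModularLevelSets (itk::LevelSetCurvatureTerm and friends) are templated ITK classes, and the names, scalar values, and placeholder formula here are assumptions:

```cpp
#include <memory>

// Hypothetical term interface, reduced to Value().
class Term
{
public:
  virtual ~Term() {}
  virtual double Value() const = 0;
};

// Stand-ins for the first-derivatives and Hessian terms, each returning
// a fixed scalar for the demonstration.
class DerivativesTerm : public Term
{
public:
  explicit DerivativesTerm( double g ) : m_Grad( g ) {}
  double Value() const override { return m_Grad; }
private:
  double m_Grad;
};

class HessianTerm : public Term
{
public:
  explicit HessianTerm( double h ) : m_Hess( h ) {}
  double Value() const override { return m_Hess; }
private:
  double m_Hess;
};

// The curvature term stores its dependencies explicitly, so swapping a
// finite-difference Hessian for a convolution-based one is a local,
// type-checked change rather than a string lookup in a global workspace.
class CurvatureTerm : public Term
{
public:
  CurvatureTerm( std::shared_ptr< DerivativesTerm > d,
                 std::shared_ptr< HessianTerm > h )
    : m_Derivatives( d ), m_Hessian( h ) {}

  double Value() const override
  {
    // Placeholder combination standing in for the curvature formula.
    return m_Hessian->Value() / m_Derivatives->Value();
  }

private:
  std::shared_ptr< DerivativesTerm > m_Derivatives;
  std::shared_ptr< HessianTerm > m_Hessian;
};
```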

I now recall the process that led me to having dependencies in [ModularLevelSets] and not a global workspace.

If you have a global workspace, then you may end up having a couple of potential maintenance issues:

1. all terms have to know what names are used for each quantity, and have no way of knowing if something has changed in other classes in the container. Say Term1 computes "Hessian" and puts it in the workspace. Term2 needs the Hessian and grabs it from the workspace. If Term1 changes the name of the quantity to FiniteDifferenceHessian, Term2 will still call it Hessian (unless we require the user to check that dependencies are built with correct names). There would really be no way of knowing whether things are OK until we assemble all the terms and run them.

2. you'd have a hard time detecting inconsistencies at compile time: say Term1 computes the Hessian using finite differences, while Term2 computes the Hessian using convolution, but they both call it Hessian. Term1 puts its "Hessian" in the workspace, and Term2 finds a "Hessian" in the workspace and uses it, but it's not the Hessian Term2 needs.

Of course careful development will avoid all these sorts of issues, but a newcomer would have to know about every quantity the other terms generate.

On the other hand, there are a few advantages to using dependencies:

1. if Term2 depends on the Hessian term, there's a way to have term compatibility checked at compile-time (see term constructor in [ModularLevelSets], in particular itk::LevelSetCurvatureTerm).

2. since Hessian is a term, i.e. a class, there's no chance of creating conflicting classes with the same name. If Term1 depends on a Hessian computed using finite differences, the developer of Term2 will be forced to implement a ConvolutionHessian term. In other words, the dependencies of the two terms are local to the terms, with no chance of collisions in a global workspace, irrespective of how the terms are assembled.

Dependencies have to be analyzed only once, when it's time for the level sets to run. You can decide the order in which to evaluate terms and cache the results of a term if that term is a dependency of more than one term, so there's really no extra cost compared to the global workspace option; terms are computed the same number of times.

This is just my view on the topic, but in general I think this is a really important issue, as we expect that terms will grow in number.

Luca