[Insight-developers] Validation Directory

Jisung Kim bahrahm@yahoo.com
Wed, 18 Sep 2002 07:33:20 -0700 (PDT)


Hi Lydia.

--- Lydia Ng <lng@insightful.com> wrote:
> Hi Jisung,
>  
> I agree with you that it would be nice to have consistency between
> groups in terms of file formats and results - this was something I
> too was hoping for, and I tried to table it as an issue at the Feb
> meeting, but there wasn't much interest at that early stage. As I
> recall, there wasn't a consensus at that meeting on how to
> structure the validation, and it defaulted to each group doing
> their own thing for the validation study.
>
> However, at this stage we simply don't have the resources to rework
> things. We have enough budget to finish up the final validation
> study and for one of us to go to the final meeting. Also, the
> validation studies are due Sept 30th; restructuring things two
> weeks before this deadline seems like risky business.
>
> The consistency issue could be something we address within WA5.
> Between now and then, we will be happy to do simple changes (e.g.
> moving the code to a different directory), but unfortunately we
> will have to hold off on more involved tasks.

I agree with you that we don't have much time to restructure the
existing studies.

I simply want to know whether anyone already has ideas for how to
organize the validation studies.

At this stage, given the time limit, I have no problem with groups
using different image formats and results formats. However, I still
think we need a common data directory for the dataset descriptions.

Another thought about the directory structure (for the future): I
think it would be nice to have a common Code directory under the
Validation directory (Validation/Code) and to put each app in a
subdirectory of it. For example, KmeansClassifier-related code would
go under Validation/Code/KmeansClassifier,
MRFGaussianClassifier-related code under
Validation/Code/MRFGaussianClassifier, and common pieces like the
input parser under Validation/Code/Common. With this setup, readers
of our validation study can find where to look if they want to see
the implementation of a specific algorithm's validation app. Another
benefit is that if there is already a study comparing algorithms A
and B, and later another researcher wants to run a study comparing
algorithms A and C (C probably being a new addition to ITK), he/she
can just create a validation app in Validation/Code/C and use it
with the existing app in Validation/Code/A.
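
Concretely, the layout I have in mind would look like this (using
the example apps above):

    Validation/
      Code/
        Common/                  <- shared code, e.g. the input parser
        KmeansClassifier/        <- K-means validation app
        MRFGaussianClassifier/   <- MRF Gaussian validation app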

>  
> BTW - the IBSR data is a bit messy - each study starts at a
> different slice, each study has a different number of slices, and
> the mask starts at a different slice to the images, etc. Were you
> able to come up with a clean solution with MetaImage?
>  

The answer is no. I have worked mainly with the BrainWeb data; I
downloaded the IBSR data only yesterday.

I don't know whether MetaImage supports missing slices and a
configurable starting slice number. I suspect not. I will ask
Stephen to make sure that's the case.
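
For reference, my (possibly incomplete) understanding of a MetaImage
header that lists per-slice files is something like this (the slice
file names are made up):

    NDims = 3
    DimSize = 181 217 60
    ElementSpacing = 1 1 1
    ElementType = MET_UCHAR
    ElementDataFile = LIST
    study_slice_00.raw
    study_slice_01.raw
    ...

I don't see an obvious field in that for gaps or for a starting
slice number, which is why I guess not.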

If MetaImage doesn't support such features, I would create a helper
class that does what Aljaz suggested (filling in blank slices),
driven by an additional image info file holding the desired number
of slices and the starting slice number, and put it under
Validation/Code/Common (if we are going to have one).
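
To make the idea concrete, here is a rough sketch of the helper I
have in mind (the struct, functions, and info-file format are all
just made up for illustration, not existing ITK API):

    // Hypothetical image info file, e.g. "study_01.info":
    //   NumSlices   128
    //   StartSlice  60
    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    struct SliceInfo
    {
      int numSlices;   // desired number of slices in the padded volume
      int startSlice;  // index of the first slice actually present
    };

    // Read the two fields from the info file.
    bool ReadSliceInfo(const std::string& fileName, SliceInfo& info)
    {
      std::ifstream in(fileName.c_str());
      if (!in) { return false; }
      std::string key;
      while (in >> key)
        {
        if (key == "NumSlices")       { in >> info.numSlices; }
        else if (key == "StartSlice") { in >> info.startSlice; }
        }
      return true;
    }

    // Pad a volume (one buffer per slice) with blank slices so that
    // every study ends up with the same extent.
    void FillBlankSlices(std::vector< std::vector<unsigned char> >& slices,
                         const SliceInfo& info, std::size_t sliceSize)
    {
      std::vector<unsigned char> blank(sliceSize, 0);
      // Prepend blanks up to the starting slice number ...
      slices.insert(slices.begin(), info.startSlice, blank);
      // ... and append blanks until the desired count is reached.
      while (static_cast<int>(slices.size()) < info.numSlices)
        {
        slices.push_back(blank);
        }
    }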

Since we don't have enough time to restructure everything, I suggest
that we gather ideas for a consistent organization of the validation
studies, so that we can use them for a reorganization in the future
and apply only the simple and safe(?) ones at this stage.
 
> - Lydia

Thanks,

Jisung.


=====
Jisung Kim
bahrahm@yahoo.com
106 Mason Farm Rd.
129 Radiology Research Lab., CB# 7515
Univ. of North Carolina at Chapel Hill
Chapel Hill, NC 27599-7515
