ITK Release 4 / A2D2 Projects / SCORE++ / Tcon 2011-04-18
Attendees
- Sean Megason
- Arnaud Gelas
- Stephen Aylward
- Patrick Reynolds
- Julien Jomier
- Xiaoxiao Liu
- Luis Ibanez
Items
- Data licensing
- Data curation
- Associating publications with the data in the database
- Comparison of segmentations
- Which MIDAS instance to use?
- Data Collection MIDAS
- How much data?
  - About a dozen items
  - About 100 GB per item
  - Suggestion:
    - Give each dataset three tiers:
      1. Testing data (data freely available, annotations/truth sequestered)
      2. Training data (data and annotations freely available)
      3. Small sample for quick evaluation
    - Provide an online viewing tool (MIDAS/DJ Viewer)
- Data Transfer
  - Using MIDAS CPP?
  - 100 Gbit/s transfer rate at Kitware KHQ (about 12.5 GB/s)
  - Use the Amazon S3 store?
    - Rates and prices are prohibitive
- Scoring Algorithms
  - Client side and server side use the same algorithms
  - Fully open source
  - Provided as (see the sketch after this list):
    1. ITK classes
    2. GenerateCLP executables for use with BatchMake
       - How are these distributed? A new toolkit?
  - BatchMake scripts provided to show how to run and score methods on a collection of data
    - How are these distributed? A new toolkit?
- Representations for:
  - Tracks (3D + t)
  - Object IDs
  - Use ITK SpatialObjects?
    - 3D + time?
  - Use itk::LabelMap objects to represent segmentations and tracks
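To make the "provided as GenerateCLP executables" idea above concrete, here is a minimal sketch of what a segmentation-scoring module could look like. This is only an illustration: the module name ScoreSegmentation, the parameter names, and the choice of itk::LabelOverlapMeasuresImageFilter (available in ITKv4) as the example metric are assumptions, not decisions from this call.

  // Hypothetical scoring module. GenerateCLP turns ScoreSegmentation.xml into
  // ScoreSegmentationCLP.h, which supplies the PARSE_ARGS macro used below.
  #include "ScoreSegmentationCLP.h"

  #include "itkImage.h"
  #include "itkImageFileReader.h"
  #include "itkLabelOverlapMeasuresImageFilter.h"

  #include <cstdlib>
  #include <iostream>

  int main( int argc, char * argv[] )
  {
    // Assumed XML parameters: truthLabelImage and testLabelImage (std::string).
    PARSE_ARGS;

    typedef itk::Image< unsigned short, 3 >        LabelImageType;
    typedef itk::ImageFileReader< LabelImageType > ReaderType;

    ReaderType::Pointer truthReader = ReaderType::New();
    truthReader->SetFileName( truthLabelImage );

    ReaderType::Pointer testReader = ReaderType::New();
    testReader->SetFileName( testLabelImage );

    // Overlap measures between the ground-truth and submitted label images.
    typedef itk::LabelOverlapMeasuresImageFilter< LabelImageType > OverlapType;
    OverlapType::Pointer overlap = OverlapType::New();
    overlap->SetSourceImage( truthReader->GetOutput() );
    overlap->SetTargetImage( testReader->GetOutput() );
    overlap->Update();

    std::cout << "Dice    = " << overlap->GetDiceCoefficient() << std::endl;
    std::cout << "Jaccard = " << overlap->GetJaccardCoefficient() << std::endl;

    return EXIT_SUCCESS;
  }

If segmentations are exchanged as itk::LabelMap objects instead, itk::LabelMapToLabelImageFilter can convert them to plain label images before scoring. Wrapping the executable this way is also what would let BatchMake drive the same binary over a whole data collection.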
Tracking Evaluation
- ICCV paper: tracking performance evaluation
- The source data varies:
  - sometimes it starts from meshes
  - sometimes from images
  - sometimes from tracks
- If we adopt a file format for storing these data:
  - Then we must provide a library (an IO API) for reading and writing it.
  - Maybe use the MetaImage (MetaIO) library? (see the sketch after this list)
    - It already has the concepts of "polylines" and "tubes"
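As one concrete possibility for the track file format, below is a minimal sketch that stores a single track as an itk::TubeSpatialObject and writes it with itk::SpatialObjectWriter, which uses the MetaIO spatial-object format (where the "tube" concept lives) underneath. The convention of packing time into the third point coordinate, the track id 42, and the file name are assumptions for illustration; the calls follow the ITKv4-era SpatialObject API.

  // A track as a tube: one point per time step, id = object/track id.
  #include "itkTubeSpatialObject.h"
  #include "itkTubeSpatialObjectPoint.h"
  #include "itkSpatialObjectWriter.h"

  #include <cstdlib>

  int main()
  {
    typedef itk::TubeSpatialObject< 3 >      TrackType;
    typedef itk::TubeSpatialObjectPoint< 3 > TrackPointType;

    TrackType::Pointer track = TrackType::New();
    TrackType::PointListType points;

    // Assumed convention: (x, y, t) -- the cell drifts in x over ten frames.
    for ( unsigned int t = 0; t < 10; ++t )
      {
      TrackPointType p;
      p.SetPosition( 5.0 + 0.5 * t, 7.0, static_cast< double >( t ) );
      p.SetRadius( 1.0 ); // could carry the cell size at this time step
      points.push_back( p );
      }
    track->SetPoints( points );
    track->SetId( 42 ); // hypothetical object id shared with the segmentation

    // Writes the MetaIO spatial-object format (.tre/.meta).
    typedef itk::SpatialObjectWriter< 3 > WriterType;
    WriterType::Pointer writer = WriterType::New();
    writer->SetInput( track );
    writer->SetFileName( "track42.tre" );
    writer->Update();

    return EXIT_SUCCESS;
  }

The matching itk::SpatialObjectReader would give the IO API mentioned above essentially for free, at the cost of committing to MetaIO's tube/polyline semantics.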
Server Side Processing
WARNING: THIS IS BRAINSTORMING... but we need to do something like this!
- Rationale: the data are too large to download
- Users could upload:
  - Source code
  - Binaries
  - Virtual machines
- The uploaded code/binaries would be executed on the data, and the output would feed into the evaluation.
Action Items
- File formats for tracks
  - Luis, Arnaud, DJ
- Track evaluation metrics (comparing tracks, ICCV)
  - Marcel
- Segmentation metrics
  - Patrick, Xiaoxiao
- Web data visualization
  - Patrick, DJ, Arnaud, Luis
- IO / Database
  - Arnaud, Luis, DJ, Zack Mullen