HDF5 file format and library
HDF5 is both a file format and a library dedicated to reading and writing files in that format.
According to Wikipedia, "HDF5 includes only two major types of object:
- Datasets, which are multidimensional arrays of a homogeneous type
- Groups, which are container structures which can hold datasets and other groups
This results in a truly hierarchical, filesystem-like data format. In fact, resources in an HDF5 file are even accessed using the POSIX-like syntax /path/to/resource. Metadata is stored in the form of user-defined, named attributes attached to groups and datasets. More complex storage APIs representing images and tables can then be built up using datasets, groups and attributes. In addition to these advances in the file format, HDF5 includes an improved type system, and dataspace objects which represent selections over dataset regions. The API is also object-oriented with respect to datasets, groups, attributes, types, dataspaces and property lists. Because it uses B-trees to index table objects, HDF5 works well for time series data such as stock price series, network monitoring data, and 3D meteorological data. The bulk of the data goes into straightforward arrays (the table objects) that can be accessed much more quickly than the rows of an SQL database, but B-tree access is available for non-array data. The HDF5 data storage mechanism can be simpler and faster than an SQL star schema."
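The group/dataset/attribute model described above can be sketched with h5py, a Python binding for HDF5 (the file name, group names and attribute names here are purely illustrative):

```python
import h5py
import numpy as np

# Create an HDF5 file with a group hierarchy, a dataset, and attributes.
with h5py.File("example.h5", "w") as f:
    grp = f.create_group("experiment/run1")              # nested groups, created in one call
    dset = grp.create_dataset("image", data=np.zeros((4, 4), dtype=np.uint8))
    dset.attrs["voxel_spacing"] = [1.0, 1.0]             # user-defined metadata on the dataset
    grp.attrs["operator"] = "example"                    # metadata on the group

# Resources are addressed with a POSIX-like path syntax.
with h5py.File("example.h5", "r") as f:
    img = f["/experiment/run1/image"][...]               # read the whole dataset
    spacing = f["/experiment/run1/image"].attrs["voxel_spacing"]
```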
It is available under a BSD-like license.
- Chunking (streaming)
- Multi-Channel images
- Large datasets (size > 4 GB)
- Single experiment images of size 1024 x 1024 x 75 (XYZ), 2 channels, 1000 time-points
- 8-bit and 16-bit pixels
- Images stored as 2D PNGs with filenames giving location
- Need to support optimized reading (image streaming) of a sub-volume
- E.g., box filtering using a kernel of size 5x5x1x1x3
- Cyclic buffer optimization in the ITK reader that keeps overlapping data and only reads new data
- Multi-resolution images for hierarchical registration of multiple experimental sets
- Compression is not as important in the short term but will be needed in the long term
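To make the streaming requirement concrete, here is a small pure-Python sketch (no HDF5 calls; the chunk shape and region are illustrative) of the arithmetic a chunked reader performs: finding which chunks a requested sub-volume touches, so that only those chunks are read from disk.

```python
def chunks_for_region(start, size, chunk_shape):
    """Return, per dimension, the inclusive range of chunk indices touched
    by a sub-volume read.

    start, size and chunk_shape are per-dimension sequences of equal length.
    """
    first = [s // c for s, c in zip(start, chunk_shape)]
    last = [(s + n - 1) // c for s, n, c in zip(start, size, chunk_shape)]
    return list(zip(first, last))

# For a 1024 x 1024 x 75 volume chunked as 256 x 256 x 1 (an illustrative
# chunk shape), reading a 10 x 10 x 3 box starting at (500, 500, 10) only
# touches one chunk in x, one in y, and three single-slice chunks in z.
ranges = chunks_for_region(start=(500, 500, 10), size=(10, 10, 3),
                           chunk_shape=(256, 256, 1))
```

The same arithmetic is what makes the cyclic-buffer idea above workable: a reader that keeps the previously loaded chunks can fetch only the chunk ranges that are new relative to the last request.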
With HDF5, everything is either a group or a dataset.
ITK must be able to save many different types -- how do we store the actual ITK type in the HDF5 file? Attributes may be an option for that. How do we store the template parameters -- do we even need to store them? Glehmann 16:06, 18 April 2011 (EDT)
Atomic objects are unbreakable basic types. They are (generally?) stored as datasets in the HDF5 file.
Composite objects are made of one or more atomic or composite objects. Each object is named in the same way it is named in the ITK classes, without the leading "m_". They are (generally?) stored as groups in the HDF5 files.
We may need something simpler to store the version as an attribute. Glehmann 16:06, 18 April 2011 (EDT)
This is the storage of the class ImageRegion.
TODO: where do we store the Dimension of the ImageRegion?
TODO: Is it good enough to assume that it can be deduced from the dimension of the Index and of the Size?
TODO: What to do when the Index and the Size dimensions mismatch?
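One possible answer to the TODOs above (a sketch with h5py, not a settled design) is to store Index and Size as small integer datasets, deduce Dimension from their common length, and refuse to write mismatched lengths:

```python
import h5py

def write_image_region(group, index, size):
    # Deduce the dimension from Index and Size; refuse mismatched lengths.
    if len(index) != len(size):
        raise ValueError("Index and Size dimensions mismatch")
    group.create_dataset("Index", data=list(index))
    group.create_dataset("Size", data=list(size))

with h5py.File("region.h5", "w") as f:
    write_image_region(f.create_group("ITK/Region"),
                       index=(0, 0, 0), size=(100, 100, 75))

with h5py.File("region.h5", "r") as f:
    # Dimension is recovered from the length of the stored arrays.
    dim = f["/ITK/Region/Index"].shape[0]
```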
This is the storage of the class ImageBase.
|Region||ImageRegion||Only the largest possible region is stored, because the different regions in itk::Image don't really make sense in file storage.|
|Pixels||TODO||Which type should be used? A dataset directly? An atomic type?|
This is not a strict requirement, but images should be saved in chunks to allow them to be efficiently streamed (both read and write) and compressed.
I think the chunk size should be one on all the dimensions but x and y. Which chunk size to choose on x and y is tricky, and may depend on the use case -- should we choose a size?
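As a sketch of that proposal with h5py (the 64 x 64 chunk size on x and y is an arbitrary illustration, not a recommendation), chunking and compression are both requested at dataset creation time, and reading a tile then touches only the chunks it overlaps:

```python
import h5py
import numpy as np

# A 5-D image laid out as (t, channel, z, y, x): chunk size 1 on every
# dimension except y and x, as proposed above.
shape = (10, 2, 5, 256, 256)
with h5py.File("chunked.h5", "w") as f:
    dset = f.create_dataset("ITK/Pixels", shape=shape, dtype=np.uint16,
                            chunks=(1, 1, 1, 64, 64),   # per-slice tiles
                            compression="gzip")          # optional, for the long term
    dset[0, 0, 0] = np.arange(256 * 256, dtype=np.uint16).reshape(256, 256)

with h5py.File("chunked.h5", "r") as f:
    tile = f["ITK/Pixels"][0, 0, 0, :64, :64]            # reads a single chunk
```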
This is the storage of the class LabelObjectLine.
|Length||unsigned integer||How do we describe this type?|
This is the storage of the class LabelObject.
|Label||integer||how do we describe this type?|
|Lines||TODO||Which type should be used? A group directly? A composite type?|
|LabelObjects||TODO||Which type should be used? A group directly? A composite type?|
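A sketch of that mapping with h5py (the names follow the tables above; storing one numbered group per LabelObject and per LabelObjectLine is just one option among those listed in the TODOs):

```python
import h5py

with h5py.File("labels.h5", "w") as f:
    obj = f.create_group("ITK/LabelObjects/0")      # one group per LabelObject
    obj.create_dataset("Label", data=1)
    lines = obj.create_group("Lines")
    # Each LabelObjectLine stores its Index and Length.
    line = lines.create_group("0")
    line.create_dataset("Index", data=[5, 10])
    line.create_dataset("Length", data=7)

with h5py.File("labels.h5", "r") as f:
    label = int(f["/ITK/LabelObjects/0/Label"][()])
    length = int(f["/ITK/LabelObjects/0/Lines/0/Length"][()])
```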
By default, the object of interest is stored in /ITK, so it can be either an atomic object (HDF5 dataset) or a composite object (HDF5 group). Of course, it is possible to access the objects using another or a longer path.
How should we do that? The version should certainly be stored somewhere -- should it be:
- at the base of the file? in an /ITKVersion group for example?
- in each object, as an attribute? This would make it easy to copy an object from one file to another. I think I like this method much better. Glehmann 16:06, 18 April 2011 (EDT)
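The per-object attribute option can be sketched with h5py (the attribute name ITKVersion and the version string are illustrative):

```python
import h5py

with h5py.File("versioned.h5", "w") as f:
    obj = f.create_group("ITK")
    # The version travels with the object when the object is copied
    # to another file, which is the advantage discussed above.
    obj.attrs["ITKVersion"] = "4.0.0"

with h5py.File("versioned.h5", "r") as f:
    version = f["ITK"].attrs["ITKVersion"]
```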