jshalf at lbl.gov
Wed Jan 16 19:56:14 EST 2002
Here is another example from AMR-land. We have nested
hierarchical rectilinear grids where each layer of nesting
offers double the resolution of the previous layer (usually
about 8 or more levels of refinement). The deeper layers
are fully contained in their parent, but do not necessarily
cover the entire area of their parent grids. The areas of
refinement are often non-contiguous collections of
rectilinear grids. The refined grids *overlap* data in
their parent grids. The refined grids are all of different
sizes and shapes.
With the current vtk pipeline, the domain decomposition is
spatial, in fixed-size chunks. This is simply not possible
with AMR data. We'd really like to be able to pass
collections of objects down the pipeline. Furthermore, we
need the "GetUpdateExtent/SetUpdateExtent" pass through the
VTK pipeline to be able to cull items from the collection of
AMR grids based on spatial location or data range.
For example, to make a slice through the AMR hierarchy, you
only want to pass grids through the pipeline that intersect
the slice (a small fraction of them usually).
For an isosurface, you only want to pass grids through the
pipeline whose data range intersects the isovalue.
Matt Hall's vtkSMG modifications to VTK 2.2 essentially
implemented your 3-pass system for an AMR-specific pipeline
whereby it would
* send bounding-box/datarange metadata for the collection
of datasets down the pipeline (in vtk3.2 this is where you
propagate the full extents of the dataset down the pipeline)
* the filters would cycle through the metadata to determine
which AMR grids they would be interacting with and mark them
for transport as they back-propagate to the data source.
They identify the items they want sent to them using the
familiar borg notation (i.e. I want 1-of-8, 2-of-8), sort of
like where vtk3.2 does the SetUpdateExtents() operation to
choose the domain decomposition of the dataset)
* the data source then starts feeding the selected grids
one-at-a-time down the pipeline for processing using the
familiar borg notation. (plans were to support sending data
collections down the pipeline, but initially it was just
sequential like the out-of-core vtk implementation)
Perhaps in asking for a "datasetcollection", we are really
asking for the 3-pass update system to allow us to propagate
collections of metadata or some more arbitrary token down
the pipeline in order to select extents and setup the domain
decomposition for the data. So this wouldn't completely
mess up existing streaming... it would perhaps mean allowing
GetUpdateExtent()/SetUpdateExtent() to pass around handles
to complex data structures rather than block sizes, in order to
support pipelines that deal with datasetcollections. (i.e.
you would pass in a data structure containing int x1, int x2,
int y1, int y2, int z1, int z2 for your uniform grids, but
pass down linked lists or arrays of bounding boxes for AMR
grid collections for instance). This would mean
SetUpdateExtent()/GetUpdateExtent() might have void* or
class-specific arguments, but it would hopefully not require
changes to the args for existing pipelines that are not
managing collections of datasets.
Am I making any sense here?
John Biddiscombe wrote:
> I can only speak for myself here....
> The main reason I needed a datasetcollection that supported updates was for
> a vtkDXFReader class, which outputs 2 collections, one for layers in the DXF
> file and one for Blocks. DXFReader descends from ProcessObject and simply
> exposes two outputs which operate normally.
> Then I wanted to take a series of test points (probe points) and compute the
> field strength at the points from a transmitter and using a high resolution
> building database as the "scenery".
> Later on - multiple "Test points" sets were generated and the field strength
> computed for each and a collection of field strengths output.
> (There are lots of permutations involving different scenery files or
> different test points, or different transmitter locations/frequencies etc.)
> I do not want to do anything at all to the existing update mechanism or
> modify any of the existing filters. This is important. No messing with
> existing streaming.
> If one decides to develop a custom filter which is capable of iterating over
> lists of inputs, the existing streaming mechanism supports:
> Pieces of type : ImageData Extents
> Pieces of type : Polydata piece number (or cell ID a to b)
> all one needs to do is add:
> Pieces of type : Collection index start and end (List ID a to b)
> This would only be used when the developer specifically chooses to support
> lists as inputs and there is no need to modify the many hundreds of existing
> classes. I only mentioned it in the hope someone else would do it :). (I
> haven't really got deeply involved with multiple CPUs - yet)
> vtk-developers mailing list
> vtk-developers at public.kitware.com