[vtkusers] [EXTERNAL] Efficiently visualizing duplicate data in biological model
Paul Melis
paul.melis at surfsara.nl
Thu Apr 30 10:51:02 EDT 2015
Hi Gerrick,
I tried to make the distinction between the two different "cells"
apparent: VTK cells versus the biological cells in the model (the latter
each being represented by a piece of polydata), but apparently failed :)
The filters you suggest operate on VTK points and cells only, not at the
higher level of the biological cells in the model that I'm interested in.
I'll try to add more detail. I have several biological cells in my
polydata object that are each represented by a few hundred points and
triangles (basically a distorted sphere per biological cell). All cells
in the simulation are represented by a single VTK polydata object (so
not a polydata object per cell). There is a point-data array called
"cellId" that holds the biological cell ID, so polydata points that form
the same biological cell in the model will have the same cell ID value.
For each of the biological cells there is a single value for each of a
number of model outputs, e.g. the volume of the biological cell, the
total amount of distortion, etc. I'd like to visualize these values by
coloring the whole cell (i.e. that cell's subset of the polydata) with a
color-mapped value. It seems the only way to do that in VTK is to
propagate the value to be shown for a cell (say its volume) to all of
that cell's polydata points, for each of the biological cells, and then
do color-mapped rendering as usual.
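For concreteness, the rendering side I have in mind is just standard VTK
color mapping on a point-data array. A minimal sketch, assuming the
polydata already carries a propagated point-data array named "Volume"
(the variable and array names below are placeholders):

    import vtk

    # 'polydata' is assumed to be the single vtkPolyData that holds all
    # biological cells, with the propagated "Volume" point-data array.
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputData(polydata)
    mapper.SetScalarModeToUsePointFieldData()
    mapper.SelectColorArray('Volume')
    mapper.SetScalarRange(*polydata.GetPointData().GetArray('Volume').GetRange())

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)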
However, it seems this value propagation is not something VTK can do for
me, so at the moment I've written it manually in Python. The way this
works is that I add a point-data array to the polydata object for each
value, e.g. Volume. The script then takes the per-biological-cell values
from an HDF5 file and assigns them to the polydata points, based on the
biological cell ID of each point, thereby duplicating e.g. the volume
value for cell 123 on all points having cellId value 123.
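Roughly, the propagation step looks like this (a simplified sketch; the
file name, the 'volume' dataset name and the assumption that cell IDs
can be used directly as indices into the per-cell arrays stand in for
our actual layout):

    import h5py
    import numpy as np
    from vtk.util import numpy_support

    # Per-point biological cell IDs from the "cellId" point-data array
    cell_ids = numpy_support.vtk_to_numpy(
        polydata.GetPointData().GetArray('cellId')).astype(np.int64)

    # Per-biological-cell values, indexed by biological cell ID
    with h5py.File('timestep_0001.h5', 'r') as f:
        per_cell_volume = f['volume'][:]

    # Duplicate each biological cell's value onto all of its points
    per_point_volume = per_cell_volume[cell_ids]

    volume_array = numpy_support.numpy_to_vtk(per_point_volume, deep=1)
    volume_array.SetName('Volume')
    polydata.GetPointData().AddArray(volume_array)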
This value propagation takes a fair bit of processing, which I'd like to
minimize while interactively visualizing and reading in new timesteps.
Preprocessing all timestep files is possible, but would add a lot of
duplicate data, which I'd like to avoid.
Regards,
Paul
On 04/30/2015 04:27 PM, Gerrick Bivins wrote:
> Hi Paul,
> I'm a little confused about what your data layout is and how you'd actually like it to be,
> but have you tried using either of these:
> http://www.vtk.org/doc/nightly/html/classvtkCellDataToPointData.html
> http://www.vtk.org/doc/nightly/html/classvtkPointDataToCellData.html
>
> Gerrick
>
>
> -----Original Message-----
> From: vtkusers [mailto:vtkusers-bounces at vtk.org] On Behalf Of Paul Melis
> Sent: Thursday, April 30, 2015 6:29 AM
> To: vtkusers at vtk.org
> Subject: [EXTERNAL] [vtkusers] Efficiently visualizing duplicate data in biological model
>
> Hi,
>
> (Sorry for the lengthy introduction below, I feel some detail is needed before framing my question :))
>
> I'm working with output data from a simulation of certain biological cells, currently a few hundred of them. Each cell is modeled with a few hundred vertices and triangles and can deform and interact with other cells. There is both per-vertex data for each cell (like force) and per-biological-cell data for the whole cell (like volume, amount of deformation, etc.). The two different sets of data are stored in separate
> HDF5 files per timestep and we're using Xdmf to read the cell geometry + per-vertex data in as VTK polydata for visualization.
>
> The per-biological-cell data is stored as arrays of scalars indexed by biological cell ID. The (biological) cell polydata has a per-vertex "cellId" scalar value referencing this cell ID. For visualizing the per-biological-cell values (coloring a whole biological cell with one color based on the "cell-global" value) I'm currently using a bit of Python to add extra point-data arrays to the polydata for each per-biological-cell value, thereby massively duplicating the per-biological-cell values to each point/vertex. This works, but it's a bit slow already (I'm processing on-the-fly during reading) and the datasets will need to scale up to much larger numbers of cells.
> Duplicating the per-biological-cell values into point-data arrays and storing them in a preprocessing step is doable, but wastes a lot of disk space.
> And the current separation of files for per-vertex and per-biological-cell data seems natural (obviously they could be datasets in the same HDF5 file, but that wouldn't solve the duplication issue I'm pondering).
>
> I've tried to find good ways to handle this in VTK (or even at the level of Xdmf), but I don't see filters or operations that handle this case out of the box. Is there a better approach than the "manual" data duplication that I'm using at the moment?
>
> Thanks in advance for any reply,
> Paul
>