[vtk-developers] Optimization of vtkClipDataSet and vtkProbeFilter for large data?

Biddiscombe, John A. biddisco at cscs.ch
Wed Jun 3 04:43:25 EDT 2015


Mirek

I’m not sure I followed your explanation, but in case you’re doing more work than you need to...

It is possible to compute interpolation weights once and reuse them.
Here I compute the weights for an interpolation (stored in the weights array)
https://github.com/biddisco/pv-meshless/blob/master/vtkSPHProbeFilter.cxx#L944
and then pass them to the attribute arrays to compute the new values of each field at the given point Id.

(I use this on very large datasets, no problem)
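
In rough outline, the pattern looks like the sketch below. This is only an illustration with invented names (CachedWeights, CacheWeights, ApplyWeights are not from the linked file, which computes SPH kernel weights rather than plain cell weights): locate each probe point once, keep the cell point ids and the weights, then re-apply them to any number of attribute arrays with vtkDataSetAttributes::InterpolatePoint.

#include <vtkDataSet.h>
#include <vtkDataSetAttributes.h>
#include <vtkGenericCell.h>
#include <vtkIdList.h>
#include <vtkNew.h>
#include <vtkSmartPointer.h>
#include <vector>

struct CachedWeights
{
  vtkSmartPointer<vtkIdList> PtIds;  // points of the containing cell
  std::vector<double> Weights;       // matching interpolation weights
};

// Pass 1, done once per probe point: find the containing cell and keep
// the point ids and the weights that FindCell returns.
bool CacheWeights(vtkDataSet *source, double x[3], CachedWeights &cw)
{
  vtkNew<vtkGenericCell> cell;
  std::vector<double> w(source->GetMaxCellSize());
  double pcoords[3];
  int subId;
  vtkIdType cellId = source->FindCell(x, nullptr, cell.GetPointer(), -1,
                                      1e-6 /* squared tolerance, tune */,
                                      subId, pcoords, w.data());
  if (cellId < 0)
  {
    return false;  // point lies outside the source mesh
  }
  cw.PtIds = vtkSmartPointer<vtkIdList>::New();
  source->GetCellPoints(cellId, cw.PtIds);
  cw.Weights.assign(w.begin(), w.begin() + cw.PtIds->GetNumberOfIds());
  return true;
}

// Pass 2, repeated for every new quantity / time step: interpolate all
// arrays registered in outPD without any cell search. outPD must have
// been prepared with outPD->InterpolateAllocate(sourcePD, ...).
void ApplyWeights(vtkDataSetAttributes *sourcePD, vtkDataSetAttributes *outPD,
                  vtkIdType outPtId, CachedWeights &cw)
{
  outPD->InterpolatePoint(sourcePD, outPtId, cw.PtIds, cw.Weights.data());
}

Pass 2 can then run for every new quantity or time step without any cell location, which is where the time goes on large data.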

JB

From: vtk-developers [mailto:vtk-developers-bounces at vtk.org] On Behalf Of Dr. Miroslav Sejna
Sent: 03 June 2015 09:46
To: Will Schroeder
Cc: vtk-developers
Subject: Re: [vtk-developers] Optimization of vtkClipDataSet and vtkProbeFilter for large data?

Thank you, Will. I'll definitely be interested in parallelizing my code, which will be done as a next step.

Yesterday I fortunately found a simple solution to my problem. It is based on adding a custom array to PointData and overriding its virtual method "InterpolateTuple". That is the place where I can get all the interpolation factors and save them for later reuse. The rest was just routine programming. Now I'm able to display scalars and vectors (defined at the nodes of the original FE mesh) on modified VTK meshes (clipping/slicing/...) without recalculating these filters. It works exactly as I wanted. VTK is a great library - thanks again.
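
Concretely, the idea can be sketched like this (an illustrative, untested sketch; the class and member names here are invented, not my actual code). The array derives from vtkDoubleArray and overrides both InterpolateTuple overloads to record the source point ids and weights before delegating to the normal behavior:

#include <vtkAbstractArray.h>
#include <vtkDoubleArray.h>
#include <vtkIdList.h>
#include <vtkObjectFactory.h>
#include <vector>

class vtkWeightRecorderArray : public vtkDoubleArray
{
public:
  vtkTypeMacro(vtkWeightRecorderArray, vtkDoubleArray);
  static vtkWeightRecorderArray *New();

  // One record per interpolated output tuple.
  struct Record
  {
    vtkIdType DstId;                // id of the interpolated output point
    std::vector<vtkIdType> SrcIds;  // ids of the original source points
    std::vector<double> Weights;    // matching interpolation weights
  };
  std::vector<Record> Records;

  // Called via vtkDataSetAttributes::InterpolatePoint (cell interiors).
  void InterpolateTuple(vtkIdType i, vtkIdList *ptIndices,
                        vtkAbstractArray *source, double *weights) override
  {
    Record rec;
    rec.DstId = i;
    for (vtkIdType j = 0; j < ptIndices->GetNumberOfIds(); ++j)
    {
      rec.SrcIds.push_back(ptIndices->GetId(j));
      rec.Weights.push_back(weights[j]);
    }
    this->Records.push_back(rec);
    this->Superclass::InterpolateTuple(i, ptIndices, source, weights);
  }

  // Called via vtkDataSetAttributes::InterpolateEdge, e.g. from
  // vtkTetra::Clip when a cell edge is cut at parameter t.
  void InterpolateTuple(vtkIdType i, vtkIdType id1, vtkAbstractArray *source1,
                        vtkIdType id2, vtkAbstractArray *source2,
                        double t) override
  {
    Record rec;
    rec.DstId = i;
    rec.SrcIds.push_back(id1);  rec.Weights.push_back(1.0 - t);
    rec.SrcIds.push_back(id2);  rec.Weights.push_back(t);
    this->Records.push_back(rec);
    this->Superclass::InterpolateTuple(i, id1, source1, id2, source2, t);
  }
};

vtkStandardNewMacro(vtkWeightRecorderArray);

Since vtkDataSetAttributes::InterpolateAllocate builds its output arrays with NewInstance, adding a named instance of this array to the input PointData (one component, one tuple per input point; the values themselves are unused) makes the clip/slice output contain a fresh recorder array whose overrides were called once per interpolated point. The records live in that output copy, so fetch it from the output by name and SafeDownCast it to read them back.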

Mirek

From: Will Schroeder [mailto:will.schroeder at kitware.com]
Sent: Tuesday, June 02, 2015 3:59 PM
To: Dr. Miroslav Sejna
Cc: vtk-developers
Subject: Re: [vtk-developers] Optimization of vtkClipDataSet and vtkProbeFilter for large data?

If I understand you correctly, it seems that your solution is to save the interpolation factors in memory so that you can rapidly clip new attribute data. Here's a probably crazy alternative suggestion that may have merit in the long run.

You could take advantage of (emerging) parallel hardware and recalculate the interpolation factors anyway; i.e., do extra work, but with lots of processors it may be simpler and faster. There are currently several folks working on these sorts of algorithms (including clipping), but the results will not be available until later this year. If you can afford to be patient, or want to try your hand at writing some parallel algorithms, I'm sure we can point you in the right direction.

W

On Mon, Jun 1, 2015 at 4:34 PM, Mirek <m.sejna at pc-progress.com> wrote:
Dear VTK developers,

Filters like vtkClipDataSet and vtkProbeFilter, which interpolate values of
vtkDataSetAttributes, work perfectly for relatively small data, i.e. when the
clipping is fast and all vtkDataSetAttributes fit in memory. However, it
looks like there is no option to optimize these filters for large data. In
my case (see details (*) below), the data for each quantity is loaded on
demand from disk. The problem is that vtkClipDataSet cannot interpolate new
vtkDataSetAttributes onto the existing (clipped) unstructured grid without
recalculating everything, which is unnecessary and slow. In my old program,
every "interpolated point" carried the information needed to recalculate the
values of a new quantity (now vtkPointData scalars), which is a simple and
fast operation: in the case of a tetrahedral mesh, you just need the IDs of
the 4 original mesh nodes and their weights for the linear interpolation.
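
For example (illustrative sketch, names invented here):

#include <vtkType.h>  // for vtkIdType

// Given the 4 node ids and linear (barycentric) weights saved for one
// interpolated point, a value of any new nodal quantity follows from a
// single weighted sum - no clipping or cell location needed.
double Reinterpolate(const double *nodeValues,  // new quantity, one value per original node
                     const vtkIdType ids[4],    // ids of the 4 original tet nodes
                     const double w[4])         // saved interpolation weights
{
  return w[0] * nodeValues[ids[0]] + w[1] * nodeValues[ids[1]] +
         w[2] * nodeValues[ids[2]] + w[3] * nodeValues[ids[3]];
}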

I have spent several days debugging VTK and looking for a standard solution.
Unfortunately, I have not found any way to get and save the interpolation
factors so that I could reuse them. The interpolation of
vtkDataSetAttributes is implemented in vtkTetra::Clip, and most of the
important functions (vtkTetra::Clip, vtkDataSetAttributes::InterpolateEdge,
...) are not virtual, so it will not be easy to modify the filter. Before
investing time into the development of a new filter, I'd like to ask: Did I
miss something? Is there an existing solution to this problem?

Thank you
Mirek

(*) In my case the data can be really large: unstructured FE meshes with up
to 50 million nodes and time-varying results (10-200 different quantities
defined by values at mesh nodes, with 100-10000 time layers).







--
William J. Schroeder, PhD
Kitware, Inc.
28 Corporate Drive
Clifton Park, NY 12065
will.schroeder at kitware.com
http://www.kitware.com
(518) 881-4902

