[Paraview] Parallel Python Programmable Filters and Data Segmentation
Jesus Pulido
jpulido at ucdavis.edu
Fri Aug 26 13:07:23 EDT 2011
Actually, to answer my own question: structured data can be segmented by
adding

from paraview import util
self.GetExecutive().SetExtentTranslator(self.GetExecutive().GetOutputInformation(0),
vtk.vtkExtentTranslator())

to the RequestInformation box of the programmable filter, so I no
longer have a problem daisy-chaining different types of filters.
Although all data types can now be segmented, the bug that appears when the
filter is first applied is still present: the data is not segmented until the
dataset is reread with the filter already applied.
Another thing I ran into: if I use vtkThreshold on the segmented
data, the threshold filter somehow grabs the entire extent of the data
and outputs the threshold of the entire dataset (per processor), even if you
give it the segmented input. As a result you end up with a thresholded dataset
that is (thresholded data size) * (number of processors) large. Here is
some sample code with the output set to vtkUnstructuredGrid:
script
---------
import time
import paraview.vtk.parallel
from paraview import vtk  # vtk may already be in scope inside the Programmable Filter

myProcId = 0
numProcs = 1
controller = paraview.vtk.parallel.vtkMultiProcessController.GetGlobalController()
if controller:
    myProcId = controller.GetLocalProcessId()
    numProcs = controller.GetNumberOfProcesses()

input = self.GetInputDataObject(0, 0)  # safer than self.GetInput()
output = self.GetOutput()

indim = input.GetDimensions()
print(str(myProcId) + ">> input dimensions: " + str(indim[0]) + " " +
      str(indim[1]) + " " + str(indim[2]) + "\n")

# Threshold the data
t1 = time.clock()
thresh = vtk.vtkThreshold()
thresh.SetInput(input)
thresh.SetInputArrayToProcess(0, 0, 0,
                              vtk.vtkDataObject.FIELD_ASSOCIATION_POINTS,
                              "volume_scalars")
thresh.ThresholdBetween(2, 3)
thresh.Update()
output.ShallowCopy(thresh.GetOutput())
print("Time spent thresholding: " + str(time.clock() - t1) + "\n")
print(str(myProcId) + " is done!\n")
------------
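The size blow-up is easy to model without VTK: if every rank thresholds the
whole extent instead of only its own piece, the combined output holds
num_ranks copies of every passing value. A toy sketch (plain Python with
made-up data and names, not ParaView API):

```python
# Toy model of the duplication: each "rank" thresholds the full
# array instead of its own slice, so the combined output holds
# num_ranks copies of every passing value.
data = list(range(16))            # stand-in for the whole dataset
in_range = lambda v: 2 <= v <= 3  # same band as ThresholdBetween(2, 3)

num_ranks = 4
# Buggy behavior: every rank sees the whole extent
combined_buggy = [v for _ in range(num_ranks) for v in data if in_range(v)]
# Correct behavior: each rank thresholds only its own piece
slices = [data[r * 4:(r + 1) * 4] for r in range(num_ranks)]
combined_ok = [v for piece in slices for v in piece if in_range(v)]

print(len(combined_buggy), len(combined_ok))  # 8 2
```

The buggy path is exactly num_ranks times larger than the correct one, which
matches the (thresholded data size) * (number of processors) growth above.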
Requestinformation script (although for vtkUnstructuredGrid output the input
is already segmented)
---------------
from paraview import util
self.GetExecutive().SetExtentTranslator(self.GetExecutive().GetOutputInformation(0),
vtk.vtkExtentTranslator())
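To build intuition for what the translator's segmentation produces, here is a
small pure-Python illustration that slab-splits a whole structured extent
along the z axis among N pieces. This is only an approximation:
vtkExtentTranslator performs a more general block decomposition, and the
choice of split axis here is an assumption made for simplicity.

```python
def slab_extents(whole_extent, num_pieces):
    """Split a structured whole extent (x0,x1,y0,y1,z0,z1) into
    num_pieces contiguous z-axis slabs, one per rank. Illustrative
    only; vtkExtentTranslator uses a more general block split."""
    x0, x1, y0, y1, z0, z1 = whole_extent
    nz = z1 - z0 + 1
    extents = []
    start = z0
    for p in range(num_pieces):
        # Spread any remainder layers over the first few pieces
        count = nz // num_pieces + (1 if p < nz % num_pieces else 0)
        extents.append((x0, x1, y0, y1, start, start + count - 1))
        start += count
    return extents

# A 0..63 cube split among 4 ranks -> four 16-layer slabs
print(slab_extents((0, 63, 0, 63, 0, 63), 4))
```

Printing each rank's input extent inside the filter, as in the script above,
should show per-rank sub-extents of roughly this shape once the translator is
in place.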
Would this be considered correct?
Jesus
On Fri, Aug 26, 2011 at 9:33 AM, Jesus Pulido <jpulido at ucdavis.edu> wrote:
> Hello,
>
> I am working on various Python programmable filters and have gotten them
> working in parallel, but I've run into some problems and have questions
> about some of the functionality. I am working with (structured) vtkImageData
> loaded from a .pvti file (4 pieces) and ideally want to output
> vtkUnstructuredGrid for some of my filters and vtkImageData for others. I
> am not sure whether ParaView is supposed to segment the input/output data
> for you, but what I've found is that it will, depending on the output data
> type you set.
>
> If you choose vtkPolyData, vtkStructuredGrid, vtkRectilinearGrid,
> vtkImageData, or vtkUniformGrid, and print the input extents for each
> processor running the filter, the input will be the entire
> dataset (and therefore each pvserver will read the entire dataset into
> memory!).
>
> If you choose vtkUnstructuredGrid or vtkMultiBlockDataSet, then the input
> will be segmented into pieces, and printing the extents of the data for
> each processor running the filter will show the segmented input.
>
> Now, my question is whether this behavior is normal/expected. If it is, is
> there a filter that I may call within my filter to segment my data to only
> include piece X out of the N processors available?
>
> Regarding the output data, I'm assuming each processor is expected to
> produce a certain extent of the data and ParaView will combine the output
> of all processors into one object, but when I print the "output" extents
> (out = self.GetOutput()), the extents are set to the entire dataset size by
> default.
>
> I've also found a bug with the data types whose inputs are segmented.
> When a filter is first added and applied to a dataset, printing the extents
> of the inputs shows that each processor is loading the entire extent
> of the data. If you have a dataset with multiple timesteps, or are able to
> force ParaView to re-read the dataset with the filter already set, you will
> see that the "correct" segmented extents are then passed through to the
> filter on each core.
>
> Another possible bug: if I daisy-chain two Python programmable filters,
> the first being one of the two data types that segment the data and the
> second being one that doesn't, the inputs to the first filter are no
> longer segmented.
>
> Jesus
>