[vtkusers] PVTS, Multiblock or Both
Philip Sakievich
psakievich at gmail.com
Thu Dec 29 00:59:35 EST 2016
Okay, I found what I was looking for. Thanks for your help, Andy! Your
comments and suggestions helped me along in the process.
I've pasted the code below for anyone else who is interested. I hope it
will be useful to have this documented, since similar questions have been
asked in the past. (
http://vtk.1045678.n5.nabble.com/quot-manually-quot-specify-extents-for-vtkXMLPRectilinearGridWriter-td5714844.html)
The key was to specify the WholeExtent when defining the grid, and then to
attach a filter, invoked during the write process, that changes the extent
to the local extent on each processor.
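Distilled to its essence (the full, runnable script is below), the trick
looks like this:

sg=vtk.vtkStructuredGrid()
sg.SetExtent(extentGlobal)        #the grid itself carries the whole extent
pf=vtk.vtkProgrammableFilter()
def execute():
    output=pf.GetOutput()
    output.ShallowCopy(pf.GetInput())
    output.SetExtent(extentLocal) #the writer records only this rank's piece
pf.SetExecuteMethod(execute)
pf.SetInputData(sg)
writer.SetInputConnection(pf.GetOutputPort())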
I still have a lot more to learn about all of this. It would be nice if
you guys at Kitware would put out some more specific examples and
documentation for working in a parallel environment. I have been working
on this problem off and on for several months; I gave up on it for a while
and then came back to it when I started this email chain.
The code below is as bare bones as I could get it for specifying local and
whole extents. I also got it to work with vtkExtentTranslator (a sketch of
that variant follows), which might prove useful in the future. However, I
have my own partitioning scheme for this current problem.
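For reference, a minimal sketch of the vtkExtentTranslator variant, assuming
you let VTK pick the partitioning instead of supplying your own:

et=vtk.vtkExtentTranslator()
et.SetWholeExtent(0,15,0,32,0,31)  #same whole extent as in the script below
et.SetNumberOfPieces(nranks)
et.SetPiece(rank)
et.PieceToExtent()                 #compute this piece's sub-extent
extentLocal=et.GetExtent()         #6-tuple: (imin,imax,jmin,jmax,kmin,kmax)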
--------------------------CODE-------------------------------
'''
This is intended to write data pieces that are unique to each
processor and have them combined in the pvts file.
'''
import vtk
import numpy as np
from vtk.numpy_interface import dataset_adapter as dsa
from vtk.util import numpy_support
debug=True
contr=vtk.vtkMultiProcessController.GetGlobalController()
if not contr:
    nranks=1
    rank=0
else:
    nranks=contr.GetNumberOfProcesses()
    rank=contr.GetLocalProcessId()
if debug:
    print('Hello from rank {}'.format(rank))
pnts=vtk.vtkPoints()
#Global size/extent
sizeGlobal=np.array([16,33,32],dtype=int)
extentGlobal=np.array([0,15,0,32,0,31],dtype=int)
#cylindrical coordinates
r=np.linspace(0,1,sizeGlobal[0])
theta=np.linspace(0,2*np.pi,sizeGlobal[1])
z=np.linspace(-0.5,0.5,sizeGlobal[2])
#rank specific extents
if rank==0:
    extentLocal=np.array([0,7,0,32,0,15],dtype=int)
elif rank==1:
    extentLocal=np.array([7,15,0,32,0,15],dtype=int)
elif rank==2:
    extentLocal=np.array([0,7,0,32,15,31],dtype=int)
elif rank==3:
    extentLocal=np.array([7,15,0,32,15,31],dtype=int)
#define points specific to each rank
for i in range(extentLocal[4],extentLocal[5]+1):
    for j in range(extentLocal[2],extentLocal[3]+1):
        for k in range(extentLocal[0],extentLocal[1]+1):
            xl=r[k]*np.cos(theta[j])
            yl=r[k]*np.sin(theta[j])
            zl=z[i]
            pnts.InsertNextPoint(xl,yl,zl)
#dump data
pf=vtk.vtkProgrammableFilter()
def execute():
    #pass the input grid through, then override its extent with this
    #rank's local extent so the writer records the correct piece
    output=pf.GetOutput()
    output.ShallowCopy(pf.GetInput())
    output.SetExtent(extentLocal)
pf.SetExecuteMethod(execute)
sg=vtk.vtkStructuredGrid()
sg.SetExtent(extentGlobal)  #the grid advertises the whole extent
sg.SetPoints(pnts)
pf.SetInputData(sg)
writer=vtk.vtkXMLPStructuredGridWriter()
writer.SetInputConnection(pf.GetOutputPort())
writer.SetController(contr)
writer.SetFileName('testgrid.pvts')
writer.SetNumberOfPieces(nranks)
writer.SetStartPiece(rank)
writer.SetEndPiece(rank)
writer.Update()
writer.Write()
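Note that the script assumes GetGlobalController() returns a live MPI
controller, which it does when run under ParaView's pvbatch. With a plain
MPI-enabled build of VTK you would set up the global controller yourself
before the code above runs; a minimal sketch, assuming VTK was compiled
with MPI support:

contr=vtk.vtkMPIController()
contr.Initialize()
vtk.vtkMultiProcessController.SetGlobalController(contr)
#...run the script body, then...
contr.Finalize()

and then launch with something like 'mpiexec -n 4 python script.py'.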
On Wed, Dec 28, 2016 at 4:44 PM, Philip Sakievich <psakievich at gmail.com>
wrote:
> I'm not really looking to do in situ at this point. I'm probably not
> making sense, so I've put together a test problem in Python that illustrates
> what I'm trying to do (it should be run with <=4 processors). I've
> attached the code, and I'll copy it at the bottom for anyone on the mailing
> list.
>
> Basically, each rank declares a portion of the grid in memory and then
> calls the writer. They all have access to the WholeExtent, but I don't
> know how to get it working in the pipeline. All I need to do to get it
> working in ParaView and VisIt is manually change the WholeExtent in the
> *pvts file to "0, 15, 0, 32, 0, 15".
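> That manual edit is easy to script, for what it's worth; a minimal sketch
> using only the standard library, assuming the pvts is named as below:
>
> import xml.etree.ElementTree as ET
> tree=ET.parse('testgrid.pvts')
> grid=tree.getroot().find('PStructuredGrid')  #root element is <VTKFile>
> grid.set('WholeExtent','0 15 0 32 0 15')     #the true global extent
> tree.write('testgrid.pvts')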
>
> -----------CODE--------------------
> '''
> This is intended to write data pieces that are unique to each
> processor and have them combined in the pvts file.
> '''
> import vtk
> import numpy as np
> from vtk.numpy_interface import dataset_adapter as dsa
> from vtk.util import numpy_support
> debug=True
>
> contr=vtk.vtkMultiProcessController.GetGlobalController()
> if not contr:
>     nranks=1
>     rank=0
> else:
>     nranks=contr.GetNumberOfProcesses()
>     rank=contr.GetLocalProcessId()
>
> if debug:
>     print('Hello from rank {}'.format(rank))
>
> pnts=vtk.vtkPoints()
> #Global size/extent
> sizeGlobal=np.array([16,33,16],dtype=int)
> extentGlobal=np.array([0,15,0,32,0,15],dtype=int)
> #cylindrical coordinates
> r=np.linspace(0,1,sizeGlobal[0])
> theta=np.linspace(0,2*np.pi,sizeGlobal[1])
> z=np.linspace(-0.5,0.5,sizeGlobal[2])
> #rank specific extents
> if rank==0:
>     extentLocal=np.array([0,7,0,32,0,7],dtype=int)
> if rank==1:
>     extentLocal=np.array([7,15,0,32,0,7],dtype=int)
> if rank==2:
>     extentLocal=np.array([0,7,0,32,7,15],dtype=int)
> if rank==3:
>     extentLocal=np.array([7,15,0,32,7,15],dtype=int)
> #define points specific to each rank
> for i in range(extentLocal[4],extentLocal[5]+1):
>     for j in range(extentLocal[2],extentLocal[3]+1):
>         for k in range(extentLocal[0],extentLocal[1]+1):
>             xl=r[k]*np.cos(theta[j])
>             yl=r[k]*np.sin(theta[j])
>             zl=z[i]
>             pnts.InsertNextPoint(xl,yl,zl)
> #dump data
> sg=vtk.vtkStructuredGrid()
> sg.SetExtent(extentLocal)
> sg.SetPoints(pnts)
> writer=vtk.vtkXMLPStructuredGridWriter()
> writer.SetInputData(sg)
> writer.SetController(contr)
> writer.SetFileName('testgrid.pvts')
> writer.SetNumberOfPieces(nranks)
> writer.SetStartPiece(rank)
> writer.SetEndPiece(rank)
> '''
> This is what I wish I had. I need some way to set the WholeExtent
> that only affects the *pvts file. Everything else works.
> |
> |
> \|/
> V
> writer.SetWholeExtent(extentGlobal)
> '''
> writer.Update()
> writer.Write()
>
>
> On Thu, Dec 22, 2016 at 3:35 PM, Andy Bauer <andy.bauer at kitware.com>
> wrote:
>
>> If you want to use VTK as an in situ library instead of reading data in
>> from disk, I strongly recommend looking at ParaView Catalyst (
>> http://www.paraview.org/in-situ/). You can use a vtkTrivialProducer to
>> get the data layout you want, but if I understand what you're trying to do,
>> you'll probably want to use a VTK composite data set with each process
>> having its own data set.
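>>
>> For the structured-grid case here, a vtkTrivialProducer can also report a
>> whole extent to the pipeline; a minimal sketch, assuming sg holds the
>> rank-local grid and the writer is set up as elsewhere in this thread:
>>
>> tp=vtk.vtkTrivialProducer()
>> tp.SetOutput(sg)                   #sg carries this rank's local extent
>> tp.SetWholeExtent(0,15,0,32,0,15)  #the global extent, reported upstream
>> writer.SetInputConnection(tp.GetOutputPort())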
>>
>> On Thu, Dec 22, 2016 at 12:07 AM, Philip Sakievich <psakievich at gmail.com>
>> wrote:
>>
>>> I'm unclear on how to work with this if I don't specify the local
>>> extent. In my code I am subdividing my computational domain so the data is
>>> unique on each processor, doing work and then dumping the results. It is
>>> completely distributed and the processors don't share any data except on
>>> the boundaries. I already know the global and local extents and now I need
>>> the local extent in the pvts to line up with the other portions of my
>>> code. I don't want the pipeline to give me the local extent unless I can
>>> get it to match the extent I've already specified.
>>>
>>> At this point, it almost seems easier to write a script that
>>> generates the *pvts for me instead of using the pipeline, but it frustrates me
>>> that I can't get the pipeline to do what I want. I'm trying to learn VTK in
>>> a broad sense in my spare time. However, for this specific problem the only
>>> thing I need is for the PVTS file's local extents to match my
>>> datasets on each of the processors. If I can just solve this, all of my
>>> problems will be over. I am still wrapping my head around the pipeline
>>> concept, but it has been a struggle since there aren't many examples for
>>> the problem I'm trying to work with. I'm actually not doing any rendering;
>>> I'm just managing IO.
>>>
>>> I looked at the source you mentioned, but I'm not sure where I am
>>> supposed to interact with the RequestInformation request. I'm assuming it
>>> will be in the filter that is the input to vtkXMLPStructuredGridWriter, but
>>> I haven't really worked with any filters since I've only been doing IO.
>>> With the programmable filter example
>>> <http://www.vtk.org/gitweb?p=VTK.git;a=blob;f=IO/ParallelXML/Testing/Python/testParallelXMLWriters.py>,
>>> I can get it to write local extents that differ from the WholeExtent but
>>> they do not match my datasets. (In this case I just copied the execute
>>> function).
>>>
>>> Also, I don't really see why it is necessary to specify the WholeExtent
>>> in RequestInformation. I can specify the WholeExtent via
>>> vtkStructuredGrid.SetDimensions() or SetExtent(), and it works fine;
>>> it's just that the local extents aren't correct. I think I'm really hitting
>>> a roadblock because I don't understand how the parallel writer and the
>>> partitioning work with the pipeline.
>>>
>>> Can you provide some specifics on how the partitioning is supposed to
>>> work and/or help me modify the code in my previous example to do what I'm
>>> looking for? Either that, or point me to an open-source code that has
>>> implemented vtkXMLPStructuredGridWriter that I can review. My
>>> application is a CFD solver and its post-processing routines.
>>>
>>>
>>>
>>> On Mon, Dec 19, 2016 at 7:18 AM, Andy Bauer <andy.bauer at kitware.com>
>>> wrote:
>>>
>>>> You need to specify the WholeExtent in the RequestInformation
>>>> request. You actually don't specify the local extent; the pipeline will give
>>>> you that for a source. I'd recommend looking at the
>>>> Imaging/Core/vtkRTAnalyticSource.cxx class in VTK to see how it's done
>>>> as a source.
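>>>>
>>>> In Python, the same pattern can be written with VTKPythonAlgorithmBase
>>>> (from vtk.util.vtkAlgorithm); a minimal sketch, with this thread's
>>>> extents assumed:
>>>>
>>>> import vtk
>>>> from vtk.util.vtkAlgorithm import VTKPythonAlgorithmBase
>>>>
>>>> class GridSource(VTKPythonAlgorithmBase):
>>>>     def __init__(self):
>>>>         VTKPythonAlgorithmBase.__init__(self, nInputPorts=0,
>>>>             nOutputPorts=1, outputType='vtkStructuredGrid')
>>>>     def RequestInformation(self, request, inInfo, outInfo):
>>>>         #advertise the global extent and allow sub-extent requests
>>>>         info=outInfo.GetInformationObject(0)
>>>>         info.Set(vtk.vtkStreamingDemandDrivenPipeline.WHOLE_EXTENT(),
>>>>                  (0,15,0,32,0,15), 6)
>>>>         info.Set(vtk.vtkAlgorithm.CAN_PRODUCE_SUB_EXTENT(), 1)
>>>>         return 1
>>>>     def RequestData(self, request, inInfo, outInfo):
>>>>         #the pipeline hands each piece its own update extent
>>>>         info=outInfo.GetInformationObject(0)
>>>>         ext=info.Get(vtk.vtkStreamingDemandDrivenPipeline.UPDATE_EXTENT())
>>>>         output=vtk.vtkStructuredGrid.GetData(outInfo)
>>>>         output.SetExtent(ext)
>>>>         #...insert the points for just this extent...
>>>>         return 1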
>>>>
>>>>
>>>> On Sun, Dec 18, 2016 at 1:16 AM, Philip Sakievich <psakievich at gmail.com
>>>> > wrote:
>>>>
>>>>> Andy and community,
>>>>>
>>>>> I have read about the concept of whole extent vs. extent, but the one
>>>>> thing I can't seem to determine is how to set the whole extent
>>>>> to be different. I tried using vtkStructuredGrid.Crop(), but nothing
>>>>> happened.
>>>>>
>>>>> In the code snippet below, I have previously declared points local to
>>>>> each processor. I now want to populate a structured grid and write a *pvts
>>>>> file that ties them all together. When I run this code, the pvts file and
>>>>> the piece files are written, but they all have the same whole extent and
>>>>> extent; nothing is cropped to the desired local extent. What am I doing
>>>>> wrong? How do I specify a whole extent that is uniform across all
>>>>> processors, and a local extent that is specific to each one?
>>>>>
>>>>> #create grid and filter for processing to writer
>>>>> pf=vtk.vtkProgrammableFilter()
>>>>> sg=vtk.vtkStructuredGrid()
>>>>>
>>>>> #set extent global
>>>>> sg.SetExtent(0,15,0,32,0,15)
>>>>>
>>>>> #set extent local
>>>>> if rank==0:
>>>>>     lE=np.array([0,8,0,32,0,8],dtype=int)
>>>>>     sg.Crop(lE)
>>>>> if rank==1:
>>>>>     lE=np.array([0,8,0,32,7,15],dtype=int)
>>>>>     sg.Crop(lE)
>>>>> if rank==2:
>>>>>     lE=np.array([7,15,0,32,0,8],dtype=int)
>>>>>     sg.Crop(lE)
>>>>> if rank==3:
>>>>>     lE=np.array([7,15,0,32,7,15],dtype=int)
>>>>>     sg.Crop(lE)
>>>>>
>>>>> sg.SetPoints(pnts)
>>>>> pf.SetInputData(sg)
>>>>>
>>>>> writer=vtk.vtkXMLPStructuredGridWriter()
>>>>> writer.SetInputConnection(pf.GetOutputPort())
>>>>> writer.SetController(contr)
>>>>> writer.SetDataModeToAscii()
>>>>> writer.SetFileName('testgrid.pvts')
>>>>> writer.SetNumberOfPieces(nranks)
>>>>> writer.SetStartPiece(rank)
>>>>> writer.SetEndPiece(rank)
>>>>> writer.Write()
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Phil
>>>>>
>>>>> On Tue, Dec 13, 2016 at 7:31 AM, Andy Bauer <andy.bauer at kitware.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I would recommend using the pvts format, since that is the best format
>>>>>> for storing structured grids. If you read the data back in, VTK will know
>>>>>> how to properly partition the data set for different numbers of processes,
>>>>>> as well as do things like ghost cells, extract surfaces, etc.
>>>>>>
>>>>>> For topologically structured grids like vtkStructuredGrid there are
>>>>>> two types of extents: the "whole extent" describes the beginning and ending
>>>>>> node (inclusive) in each direction for the entire grid, while the "extent"
>>>>>> refers to each process's (or each piece's, if you're serial but streaming)
>>>>>> partition of the grid. I believe this is explained in the VTK User's
>>>>>> Guide, which is now available for free as a PDF download.
>>>>>>
>>>>>> Cheers,
>>>>>> Andy
>>>>>>
>>>>>> On Tue, Dec 13, 2016 at 10:21 AM, Philip Sakievich <
>>>>>> psakievich at gmail.com> wrote:
>>>>>>
>>>>>>> Greetings,
>>>>>>>
>>>>>>> I am reasonably new to VTK, and I am mainly using it to manage
>>>>>>> datasets on structured grids.
>>>>>>>
>>>>>>> I am trying to write data for a structured grid in parallel Python
>>>>>>> via MPI. Each process has a separate portion of the grid, and I'm trying
>>>>>>> to figure out how to set up the write process. I was following this example:
>>>>>>>
>>>>>>> http://www.vtk.org/gitweb?p=VTK.git;a=blob;f=IO/ParallelXML/Testing/Python/testParallelXMLWriters.py
>>>>>>>
>>>>>>> But then I realized that in this case each process has the entire
>>>>>>> grid, and each processor is just writing a portion of the data it
>>>>>>> contains. So do I need to use a multiblock data set? Can someone please
>>>>>>> provide a simple example of how to write a structured grid in parallel,
>>>>>>> provided each process has the local extent correctly specified?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
--
Phil Sakievich
PhD Candidate - Mechanical Engineering
Arizona State University - Ira A. Fulton School for Engineering of Matter
Transport and Energy
Tempe, Arizona