[Paraview] ImageData extent overlapping within MPI Catalyst pipeline

Luigi Calori l.calori at cineca.it
Mon Aug 22 14:55:50 EDT 2016


Thanks a lot Utkarsh.

I'd already tried to pass cell data before asking on the list, but
likely did not set up the extents correctly.
The obtained slices:
1) had "holes", as with point data
2) were "scrambled"
3) were not smooth (no interpolation inside cells)

With your extent suggestion, 1) and 2) were fixed.
To get proper interpolation, I tried adding a CellDataToPointData
filter.
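
For reference, this is the filter I mean (a minimal VTK sketch in C++;
"data" is a placeholder for the per-rank vtkImageData our adaptor
builds):

#include <vtkNew.h>
#include <vtkImageData.h>
#include <vtkCellDataToPointData.h>

// "data" is hypothetical: the per-rank vtkImageData built by the adaptor
vtkNew<vtkCellDataToPointData> cell2point;
cell2point->SetInputData(data);
cell2point->Update(); // each point gets the average of its adjacent cells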

This did not work properly: seams between the different processors'
subdomains are still visible.
I then tried adding a Clip filter with a value out of range (to turn
the structured data into unstructured data), and the interpolation
seems to work much better.
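
What I added is roughly this (untested sketch; the clip value just has
to be below the data range, -1.0e30 here is an assumption):

#include <vtkNew.h>
#include <vtkClipDataSet.h>

// Clipping the point scalars at a value below their range keeps the
// whole domain but the output is a vtkUnstructuredGrid.
vtkNew<vtkClipDataSet> clip;
clip->SetInputConnection(cell2point->GetOutputPort()); // cell2point as above
clip->SetValue(-1.0e30); // assumed to be below any value in our field
clip->Update();
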
We have only tried this on a small case; I'm a bit worried that
converting to unstructured data would cause the data size to explode
and would not scale well.

Another option would be to add some MPI calls to the C++ or Fortran
code to retrieve the missing border cells. Is there any example of
that?
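
Something along these lines is what I have in mind (untested sketch,
assuming a 1-D slab decomposition along z; "field", "nx", "ny" and
"nz_local" are placeholders for our actual arrays and sizes, and the
array is assumed to hold nx*ny*(nz_local+2) doubles, with one ghost
plane below and one above the interior planes):

#include <mpi.h>
#include <vector>

// Exchange one plane of border cells with the neighbouring ranks so
// each rank also owns a ghost layer on both sides of its slab.
void exchange_ghost_planes(std::vector<double>& field,
                           int nx, int ny, int nz_local, MPI_Comm comm)
{
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  const int plane = nx * ny;
  const int below = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
  const int above = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

  // send the top interior plane up, receive the bottom ghost from below
  MPI_Sendrecv(&field[plane * nz_local],       plane, MPI_DOUBLE, above, 0,
               &field[0],                      plane, MPI_DOUBLE, below, 0,
               comm, MPI_STATUS_IGNORE);
  // send the bottom interior plane down, receive the top ghost from above
  MPI_Sendrecv(&field[plane],                  plane, MPI_DOUBLE, below, 1,
               &field[plane * (nz_local + 1)], plane, MPI_DOUBLE, above, 1,
               comm, MPI_STATUS_IGNORE);
}

MPI_PROC_NULL makes the boundary ranks skip the missing neighbour, so
no special-casing is needed at the ends of the domain.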


Thanks again

Luigi


On 20/08/2016 17:59, Utkarsh Ayachit wrote:
> Luigi,
>
> For image data distributed across multiple ranks with no ghost
> cells, there must still be a single-point overlap: the points along
> the shared boundary are expected to be duplicated among the ranks. I
> am thinking your 3D regular grid should be mapped as cell data
> rather than point data. So when creating the image, try setting the
> point extents to the cell extents + 1, and then pass your data to
> the cell data arrays rather than the point data.
>
> e.g.
>
> int cell_exts[6] = { /* local domain cell extents */ };
> int point_exts[6] = { cell_exts[0], cell_exts[1] + 1,
>                       cell_exts[2], cell_exts[3] + 1,
>                       cell_exts[4], cell_exts[5] + 1 };
> vtkNew<vtkImageData> data;
> data->SetExtent(point_exts);
> data->GetCellData()->SetScalars(/* vtkDataArray subclass with 3D grid data */);
>
> Utkarsh
>
> On Fri, Aug 19, 2016 at 1:42 PM, Luigi Calori <l.calori at cineca.it> wrote:
>> We are newbies with Catalyst, trying to instrument two CFD MPI simulation
>> codes that both have a 3D regular grid as their data structure. We have
>> followed the Fortran example and used ImageData for both codes.
>> The two codes differ in how memory is distributed among the processors:
>> one is a differential code that keeps overlapping extents among the
>> processors; this gave no problems.
>> The other has no overlap, as it uses a 2D FFT decomposition: the
>> ImageData extents of the processors do not overlap, since there is no
>> need for "ghost" cells.
>> This results in a strange bug when slicing:
>> if the slicing is done in the ParaView client (the whole field is passed
>> to the client), the slice is OK;
>> if the slicing is done in the Catalyst pipeline, the slice has gaps.
>>
>> Since one difference is the overlap of each processor's extent, is it
>> REQUIRED that the PointData extents have some overlap?
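>>
>> For concreteness (hypothetical 64x64x64 domain split in two along i),
>> the per-rank point extents we set look like this:
>>
>> rank 0:  data->SetExtent(0, 31, 0, 63, 0, 63);
>> rank 1:  data->SetExtent(32, 63, 0, 63, 0, 63);
>>
>> i.e. no point is shared across the boundary between i = 31 and i = 32.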
>>
>> Thanks in advance for any help
>>
>> Best
>>
>> Luigi
>>
>


-- 
Luigi Calori
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171509  Fax: +39 051 6132198
hpc.cineca.it


