[Paraview] MPI-aware reader plugin only has rank 0

Biddiscombe, John A. biddisco at cscs.ch
Tue Apr 14 04:42:07 EDT 2015


aha, then your problem is that the way readers handle parallel execution has changed and you didn't set CAN_HANDLE_PIECEs, or whatever the new name is

[pause]

outInfo->Set(CAN_HANDLE_PIECE_REQUEST(), 1);

is the new way. If you don't set this, your reader only gets created on rank 0.
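For anyone finding this thread later, a minimal sketch of where the key goes (the reader class name `vtkMyReader` is hypothetical; in recent VTK the key is, I believe, defined on vtkAlgorithm, and piece information can be read back in RequestData via the streaming pipeline keys):

```cpp
// Sketch only - assumes a reader derived from a vtkAlgorithm subclass.
#include "vtkAlgorithm.h"
#include "vtkInformation.h"
#include "vtkInformationVector.h"
#include "vtkStreamingDemandDrivenPipeline.h"

int vtkMyReader::RequestInformation(
  vtkInformation*, vtkInformationVector**, vtkInformationVector* outputVector)
{
  vtkInformation* outInfo = outputVector->GetInformationObject(0);
  // Tell the pipeline this reader can satisfy piece requests itself,
  // so it is instantiated and executed on every pvserver rank, not just rank 0.
  outInfo->Set(vtkAlgorithm::CAN_HANDLE_PIECE_REQUEST(), 1);
  return 1;
}

int vtkMyReader::RequestData(
  vtkInformation*, vtkInformationVector**, vtkInformationVector* outputVector)
{
  vtkInformation* outInfo = outputVector->GetInformationObject(0);
  // Each rank is asked for a different piece of the whole dataset.
  int piece = outInfo->Get(
    vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER());
  int numPieces = outInfo->Get(
    vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES());
  // ... read only the part of the file belonging to `piece` of `numPieces` ...
  (void)piece; (void)numPieces;
  return 1;
}
```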


JB

-----Original Message-----
From: Schlottke, Michael [mailto:M.Schlottke at aia.rwth-aachen.de] 
Sent: 14 April 2015 09:09
To: Biddiscombe, John A.
Cc: Utkarsh Ayachit; ParaView
Subject: Re: [Paraview] MPI-aware reader plugin only has rank 0

> are you sure you don't mean that only printf/std::cout output from rank 0 is visible?
I also thought that it might be a visibility issue, so I opened a file with std::ofstream on each rank, with the rank id encoded in the filename. Only one file ever gets created, though, and it is the one with “0” in the name.

> but in actual fact the other pvservers are fine. Create a sphere and check if it has N pieces.
I did that and colored it by vtkProcessId. The number of ids indeed matches the number of ranks, so I guess nothing fundamental is wrong with the MPI use within ParaView. I just can’t fathom why the reader plugin does not run in parallel. Just to be sure, I added the call

MPI_Barrier(MPI_COMM_WORLD);

in RequestData, and indeed ParaView gets stuck there: apparently the collective call is never issued on any rank != 0.

Michael

