[Paraview-developers] ParaView plugins with MPI only on client

Burlen Loring burlen.loring at gmail.com
Sun Jun 2 14:54:38 EDT 2013


> @Burlen: Starting paraview with mpiexec doesn't work either (I think 
> the client is inherently a serial application, so mpiexec -n 2 will 
> just start the client twice).
my comments about portability, potentially needing to start the paraview 
client with mpiexec, etc. only apply if you go with John's patch...

> And I double checked: "Use Multi-Core" is enabled and the number of 
> cores is set to 2, but doing an MPI_Initialized(isInit); in my plugin 
> still yields the result that MPI was in fact NOT initialized yet.
in that case I think that something's wrong with your build/install, 
because that's not at all the behavior I see... you saw the snippet of 
code that my reader uses, and it works with the multicore option. Which 
version of ParaView are you using? Which OS? I tested multicore with my 
MPI-only reader with 3.98.1 and git master from a couple days ago on Linux.

> I tried starting MPI in the constructor of the plugin, but that is not 
> a good idea since the constructor is called multiple times. Also, if 
> ParaView should (legitimately) try to start MPI afterwards, it will 
> crash since MPI was already started. 
Exactly. So don't go that route.
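[To make the hazard concrete, here is a minimal standalone sketch. The function below is a stand-in for MPI_Init, so the sketch needs no MPI installation; the real call behaves the same way in that a second invocation in one process is an error, which is exactly why a constructor that can run multiple times must not call it.]

```cpp
// Illustration of why a plugin constructor must not call MPI_Init:
// constructors can run more than once, and (like the real MPI_Init)
// the stand-in below must only ever succeed once per process.
static bool g_mpiStarted = false;

// Stand-in for MPI_Init; returns 0 on success, nonzero on error.
int fakeMpiInit()
{
  if (g_mpiStarted)
    {
    // A real second MPI_Init typically aborts the whole process.
    return 1;
    }
  g_mpiStarted = true;
  return 0;
}
```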

> Argh, it seems like I will have to implement both I/O libraries to get 
> around this…
...the multicore option should work...you may want to figure out why 
it's not working for you before you rewrite...


On 6/2/2013 11:09 AM, Michael Schlottke wrote:
> @John: Indeed, portability is an issue, so there's no way I can 
> compile ParaView for all users with a custom source code patch :-/ 
> Thanks for your idea, though, if it was just for me, I'd probably go 
> this way and be done with it.
>
> @Burlen: Starting paraview with mpiexec doesn't work either (I think 
> the client is inherently a serial application, so mpiexec -n 2 will 
> just start the client twice). And I double checked: "Use Multi-Core" 
> is enabled and the number of cores is set to 2, but doing an 
> MPI_Initialized(isInit); in my plugin still yields the result that MPI 
> was in fact NOT initialized yet.
>
> I tried starting MPI in the constructor of the plugin, but that is not 
> a good idea since the constructor is called multiple times. Also, if 
> ParaView should (legitimately) try to start MPI afterwards, it will 
> crash since MPI was already started. Argh, it seems like I will have 
> to implement both I/O libraries to get around this…
>
> Michael
>
> On Jun 2, 2013, at 09:30 , burlen wrote:
>
>> that could be a fine solution if you're not overly concerned about 
>> portability. if you did that then you might have to use mpiexec (or 
>> whatever launcher) to start the paraview client. (but mpich and 
>> openmpi seem to be fine without mpiexec for 1-process runs... ymmv) 
>> you'd also be prevented from running the client on login nodes at 
>> certain hpc sites.
>>
>> but wait a second...
>>> Just enabling the Multi-Core option in the ParaView settings does 
>>> not seem to do the trick.
>> I think that this should work. at least my reader, which can't run 
>> without mpi, works with it. Is this a recent version of ParaView? Are 
>> you sure you tried an MPI-enabled build? Once you check the 
>> multicore setting you need to restart paraview.
>>
>>
>> On 06/01/2013 03:20 PM, Biddiscombe, John A. wrote:
>>>
>>> Michael,
>>>
>>> I had the same problem with a plugin of mine, so I added some code 
>>> to call MPI init in the client.
>>>
>>> Have a look at this patch. I can’t remember why I put in an #ifdef 
>>> win32 now, (maybe because I usually run the gui on windows and the 
>>> servers on the cray and probably it needed a tweak on linux…)
>>>
>>> https://github.com/biddisco/ParaView/commit/10e4affe2d7a4736d05d1d14cfaffc996c227649
>>>
>>> note also I needed thread multiple, so you might be able to simplify 
>>> it slightly.
>>>
>>> This patch might be obsolete, because I think I found a way of doing 
>>> it inside the plugin…[pause] … no in the plugin I swap the global 
>>> communicator from a dummycontroller to a true mpi controller. Try 
>>> this patch and if it doesn’t work I’ll point you to my plugin tweaks.
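[For reference, the in-plugin controller swap John describes might look roughly like the following. This is an untested sketch assuming a ParaView/VTK build with MPI support; vtkMPIController, vtkMultiProcessController::SetGlobalController, and the Initialize signature are real VTK APIs, but the surrounding logic is illustrative only, not John's actual code.]

```cpp
#include <vtkMPIController.h>
#include <vtkMultiProcessController.h>
#include <mpi.h>

// Sketch: replace the client's dummy global controller with a real
// MPI controller. Assumes an MPI-enabled VTK build; illustrative only.
void SwapInMpiController()
{
  int mpiOk = 0;
  MPI_Initialized(&mpiOk);
  if (!mpiOk)
    {
    // Risky on the client: ParaView may try to initialize MPI later.
    MPI_Init(nullptr, nullptr);
    }

  vtkMPIController *controller = vtkMPIController::New();
  // Third argument = 1 tells VTK that MPI was initialized externally.
  controller->Initialize(nullptr, nullptr, 1);
  vtkMultiProcessController::SetGlobalController(controller);
}
```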
>>>
>>> JB
>>>
>>> *From:*paraview-developers-bounces at paraview.org 
>>> [mailto:paraview-developers-bounces at paraview.org] *On Behalf Of *burlen
>>> *Sent:* 01 June 2013 19:19
>>> *To:* Michael Schlottke
>>> *Cc:* ParaView Developers
>>> *Subject:* Re: [Paraview-developers] ParaView plugins with MPI only 
>>> on client
>>>
>>> Hi Michael,
>>>
>>> I think you'd better let ParaView start MPI. There is a method that 
>>> every reader should implement called CanReadFile. If your reader 
>>> cannot run without MPI, then in CanReadFile you should check whether 
>>> MPI is initialized, and if not, return false. Then ParaView will 
>>> not attempt to use your reader. This will avoid crashes when you 
>>> are not running in client-server mode. Something like this...
>>>
>>> //-----------------------------------------------------------------------------
>>> int vtkSQBOVReaderBase::CanReadFile(const char *file)
>>> {
>>>   #if defined SQTK_DEBUG
>>>   pCerr() << "=====vtkSQBOVReaderBase::CanReadFile" << endl;
>>>   pCerr() << "Check " << safeio(file) << "." << endl;
>>>   #endif
>>>
>>>   int status=0;
>>>
>>>   #ifdef SQTK_WITHOUT_MPI
>>>   (void)file;
>>>   #else
>>>   // first check that MPI is initialized. in builtin mode MPI will
>>>   // never be initialized and this reader will be unable to read files
>>>   // so we always return false in this case
>>>   int mpiOk=0;
>>>   MPI_Initialized(&mpiOk);
>>>   if (!mpiOk)
>>>     {
>>>     return 0;
>>>     }
>>>
>>>   // only rank 0 opens the file, this results in metadata
>>>   // being parsed. If the parsing of md is successful then
>>>   // the file is ours.
>>>   this->Reader->SetCommunicator(MPI_COMM_SELF);
>>>   status=this->Reader->Open(file);
>>>   this->Reader->Close();
>>>   #endif
>>>
>>>   return status;
>>> }
>>>
>>> Of course, if ParaView is built without MPI then you should always 
>>> return false. An even better solution is to structure your reader to 
>>> work both with and without MPI. I know it's doable if you're using 
>>> Unidata NetCDF version 4; not so sure about pnetcdf...
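[To make the with/without-MPI structure concrete, here is a stripped-down standalone version of the gate in Burlen's snippet. It is compiled with the no-MPI guard defined so it needs no MPI installation; in an MPI-enabled build the #else branch would run instead. READER_WITHOUT_MPI is a made-up name standing in for a real build flag like SQTK_WITHOUT_MPI.]

```cpp
// Minimal shape of the CanReadFile gate. READER_WITHOUT_MPI is defined
// here so the sketch compiles without MPI; in an MPI build the #else
// branch would check MPI_Initialized and open the file on rank 0.
#define READER_WITHOUT_MPI

int CanReadFile(const char *file)
{
#ifdef READER_WITHOUT_MPI
  (void)file;      // serial/builtin build: reader can never run
  return 0;
#else
  int mpiOk = 0;
  MPI_Initialized(&mpiOk);
  if (!mpiOk)
    {
    return 0;      // builtin mode: MPI never starts, refuse the file
    }
  // ... rank 0 opens the file and parses metadata here ...
  return 1;
#endif
}
```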
>>>
>>> Burlen
>>>
>>> On 06/01/2013 08:15 AM, Michael Schlottke wrote:
>>>
>>>     Hi,
>>>
>>>     for one of our own ParaView reader plugins we rely on Parallel
>>>     NetCDF (pnetcdf) to read in data in parallel, which in turn
>>>     uses MPI to do the parallel I/O. In principle, our reading
>>>     algorithm works with any number of processes, including one.
>>>
>>>     At the moment, we always have to start a pvserver instance with
>>>     MPI (i.e. mpiexec -n NN pvserver), start a normal client, and
>>>     connect to the pvserver instance if we want to use the plugin -
>>>     this also works for NN=1. However, when I start only the ParaView
>>>     client, the plugin crashes because MPI was not loaded/started.
>>>     Thus we always have to go through the extra steps of starting a
>>>     pvserver if we want to use the plugin.
>>>
>>>     Thus my question is whether there is a way to either start/load
>>>     MPI manually from the plugin, or if it is possible to configure
>>>     the client to automatically load and start the MPI library? Just
>>>     enabling the Multi-Core option in the ParaView settings does not
>>>     seem to do the trick.
>>>
>>>     Regards,
>>>
>>>     Michael
>>>
>>>     --
>>>
>>>     Michael Schlottke
>>>
>>>     SimLab Highly Scalable Fluids & Solids Engineering
>>>
>>>     Jülich Aachen Research Alliance (JARA-HPC)
>>>
>>>     RWTH Aachen University
>>>
>>>     Wüllnerstraße 5a
>>>     52062 Aachen
>>>     Germany
>>>
>>>     Phone: +49 (241) 80 95188
>>>
>>>     Fax: +49 (241) 80 92257
>>>
>>>     Mail: m.schlottke at aia.rwth-aachen.de
>>>     <mailto:m.schlottke at aia.rwth-aachen.de>
>>>
>>>     Web: http://www.jara.org/jara-hpc
>>>
>>>
>>>
>>>
>>>
>>>     _______________________________________________
>>>
>>>     Paraview-developers mailing list
>>>
>>>     Paraview-developers at paraview.org  <mailto:Paraview-developers at paraview.org>
>>>
>>>     http://public.kitware.com/mailman/listinfo/paraview-developers
>>>
>>
>
