[Paraview-developers] ParaView plugins with MPI only on client

burlen burlen.loring at gmail.com
Tue Oct 1 03:19:50 EDT 2013


Sounds like a good compromise.

Another way to improve the multicore option might be to put up a
pvsc-like dialog requesting the number of cores as PV starts.

A command line option for the multicore mode would be great too. :)
You could even make use of the one you have, using the client-only
initialization when only 1 core is requested.
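
For instance, something along these lines (purely a sketch of the
idea; both helper names are made up):

    // hypothetical helpers, named for illustration only
    void InitializeBuiltinClient();           // plain client, no MPI
    void LaunchMulticoreServers(int ncores);  // existing multicore path

    void StartClient(int requestedCores)
    {
      // a request for 1 core never needs to touch MPI
      if (requestedCores <= 1)
      {
        InitializeBuiltinClient();
      }
      else
      {
        LaunchMulticoreServers(requestedCores);
      }
    }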

On 09/30/2013 08:46 AM, Utkarsh Ayachit wrote:
> Burlen et al.
>
> As a compromise, this is what I've done: http://review.source.kitware.com/#/t/2044/
>
> + The ParaView client processes (paraview and pvpython) now provide
> two extra command line options, "--mpi" and "--no-mpi". Specifying one
> or the other explicitly controls whether the client initializes MPI
> at startup (see the sketch after this list).
> + The default to use when neither of the command line arguments is
> specified is controlled by the CMake variable
> PARAVIEW_INITIALIZE_MPI_ON_CLIENT. When this CMake variable is ON, and
> no command line arguments are specified, ParaView clients will init
> MPI at startup. One can override using the two new command line
> options ("--mpi", "--no-mpi").
> + PARAVIEW_INITIALIZE_MPI_ON_CLIENT is OFF by default, so for people
> building ParaView who are not aware of this, there will be no change.
> + We will enable PARAVIEW_INITIALIZE_MPI_ON_CLIENT for the official
> ParaView binaries, so we can distribute readers and filters that rely
> on MPI without too much hassle.
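>
> The precedence, roughly: an explicit flag wins, otherwise the
> compile-time default applies. A minimal sketch of that decision (not
> the actual patch; ShouldInitializeMPI is a hypothetical helper, and
> it assumes the CMake setting is exported as a compile definition):
>
>     #include <cstring>
>
>     // Sketch only: decide whether the client inits MPI at startup.
>     bool ShouldInitializeMPI(int argc, char* argv[])
>     {
>       for (int i = 1; i < argc; ++i)
>       {
>         if (std::strcmp(argv[i], "--mpi") == 0)    { return true;  }
>         if (std::strcmp(argv[i], "--no-mpi") == 0) { return false; }
>       }
>     #ifdef PARAVIEW_INITIALIZE_MPI_ON_CLIENT
>       return true;   // build configured with the option ON
>     #else
>       return false;  // the out-of-the-box default
>     #endif
>     }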
>
> What do you think?
>
> Utkarsh
>
>
> On Mon, Jun 3, 2013 at 2:42 PM, burlen <burlen.loring at gmail.com> wrote:
>> Hi Utkarsh,
>>
>> For my $0.02, I don't find the idea of using MPI in the client very
>> appealing. I think it's going to add complexity without delivering
>> anything more than the current multicore option already does.
>>
>> Given what the MPI 3 spec says about startup, there will not be a
>> portable solution any time soon (see below).
>>
>> I assume you'll still support the client without MPI? This will be
>> useful for running the client on login nodes at HPC sites, many of
>> which detect MPI programs and shut them down. For the MPI-less
>> client, code like Michael's will crash it. Therefore, we developers
>> will need to do as we do now: either use VTK's
>> controllers/communicators or avoid MPI calls when MPI is not
>> available. Assuming that the client on login nodes is a use case you
>> intend to support, MPI in the client is not really going to make a
>> developer's life easier... would the HPC site then need two builds?
>> One with MPI for the server running on compute nodes and one without
>> MPI for the client running on login nodes?
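>>
>> To illustrate, a minimal sketch of that guard, going through VTK's
>> abstraction instead of raw MPI (vtkMultiProcessController is real
>> VTK API; the Execute function is just an example of mine):
>>
>>     #include "vtkMultiProcessController.h"
>>
>>     // run parallel work through the controller so the same code
>>     // degrades gracefully when there is no MPI underneath
>>     void Execute()
>>     {
>>       vtkMultiProcessController* ctrl =
>>         vtkMultiProcessController::GetGlobalController();
>>       int rank   = ctrl ? ctrl->GetLocalProcessId()    : 0;
>>       int nprocs = ctrl ? ctrl->GetNumberOfProcesses() : 1;
>>       // ... partition work by rank/nprocs and skip communication
>>       // entirely when nprocs == 1 (e.g. a client-only run)
>>     }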
>>
>> I'm curious as to what's wrong with the multicore solution as it
>> stands? With the multicore option, things just work for the user.
>> Readers/sources/filters don't execute in the client, so the client
>> doesn't need MPI at all. Are there any use cases where MPI is needed
>> outside of a reader/source/filter?
>>
>> But say you wanted to do it; what could you do?
>> For startup, you could do as you do now for server startups -- force
>> the site/user to provide a pvsc (pvcc?), or perhaps force the
>> site/user to describe the MPI startup command at build time, similar
>> to how parallel ctests currently work. To support running the server
>> with MPI and the client without, it would be nice if you refactored
>> the build so that the client and server have independent MPI CMake
>> configurations.
>>
>> Burlen
>>
>> From the MPI 3.0 spec:
>>
>> """
>> While a standardized startup mechanism improves the usability of MPI,
>> the range of environments is so diverse (e.g., there may not even be
>> a command line interface) that MPI cannot mandate such a mechanism.
>> Instead, MPI specifies an mpiexec startup command and recommends but
>> does not require it, as advice to implementors.
>> """
>>
>> On 06/03/2013 07:55 AM, Utkarsh Ayachit wrote:
>>> Just FYI, we have plans to init MPI even for the client to simplify
>>> the use-case that Michael has. At the same time, the issues with
>>> "mpiexec" that John talks about for Windows (they also exist on
>>> Linux with certain implementations) make it tricky, and hence it
>>> hasn't happened yet. If anyone has suggestions for a portable
>>> implementation, let's start a new thread to discuss how that could
>>> be done.
>>>
>>> Utkarsh
>>


