<div dir="ltr"><div>This is fine for me. Then we can add #IFDEFs to the server-manager XML to enable or disable MPI readers instead of using the multiprocess_support option.<br><br></div>I did look at the code though and added a comment in gerrit.<br>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Sep 30, 2013 at 11:46 AM, Utkarsh Ayachit <span dir="ltr"><<a href="mailto:utkarsh.ayachit@kitware.com" target="_blank">utkarsh.ayachit@kitware.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Burlen et. al.<br>
<br>
As a compromise, this what I've done: <a href="http://review.source.kitware.com/#/t/2044/" target="_blank">http://review.source.kitware.com/#/t/2044/</a><br>
<br>
+ The ParaView client processes (paraview and pvpython) now provide
two extra command-line options, "--mpi" and "--no-mpi", that force the
client to initialize MPI at startup, or to skip it, respectively.
+ The default used when neither option is specified is controlled by
the CMake variable PARAVIEW_INITIALIZE_MPI_ON_CLIENT. When this
variable is ON and neither option is given, ParaView clients will init
MPI at startup; "--mpi" and "--no-mpi" override the default in either
direction (see the example after this list).
+ PARAVIEW_INITIALIZE_MPI_ON_CLIENT is OFF by default, so nothing
changes for people building ParaView who are unaware of it.
+ We will enable PARAVIEW_INITIALIZE_MPI_ON_CLIENT for the official
ParaView binaries, so we can distribute readers and filters that rely
on MPI without too much hassle.
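
To make the precedence concrete, here is a hypothetical
configure-and-run session (the option and variable names are taken
from the list above; the paths are illustrative):

  cmake -DPARAVIEW_INITIALIZE_MPI_ON_CLIENT:BOOL=ON ../paraview
  ./bin/paraview            # no option given: MPI is initialized
  ./bin/paraview --no-mpi   # overrides the CMake default; no MPI init
  ./bin/pvpython --mpi      # forces MPI init even if the variable is OFF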
<br>
What do you think?<br>
<br>
Utkarsh<br>
<br>
<br>
On Mon, Jun 3, 2013 at 2:42 PM, burlen <burlen.loring@gmail.com> wrote:
> Hi Utkarsh,
>
> For my $0.02, I don't find the idea of using MPI in the client very
> appealing. I think it's going to add complexity without delivering
> anything more than the current multicore option does.
><br>
> given what the mpi 3 spec says about startup, there will not be a portable<br>
> solution any time soon (see below).<br>
><br>
> I assume you'll still support client with out mpi? this will be useful for<br>
> running the client on login nodes at hpc sites, many of which detect mpi<br>
> programs and shut them down. For the mpi-less client codes like Michael's<br>
> will crash it. therefor, we developers will need to do as we do now, either<br>
> use VTK's controllers/communicators or avoid MPI calls when it's not<br>
> available. assuming that client on login nodes is a use case that you intend<br>
> to support, mpi in the client is not really going to make a developers life<br>
> easier...would the hpc site then need two builds? one with mpi for server<br>
> running on compute nodes and one without mpi for client running on login<br>
> nodes?<br>
><br>
> I'm curious as to what's wrong with the multicore solution as it stands?<br>
> with muticore option, things just work for the user. readers/sources/filters<br>
> don't execute in the client so the client doesn't need mpi at all. Are there<br>
> any use cases where MPI is needed outside of a reader/soruce/filter?<br>
><br>
> but say you wanted to do it what could you do?<br>
> for startup you could do as you do now for server startups -- force the<br>
> site/user to provide a pvsc (pvcc?), or perhaps force the site/user to<br>
> describe the mpi startup command at build time similar to how parallel<br>
> ctests currently work. to support running server with MPI and client<br>
> without, it would be nice if you refactored the build so that client and<br>
> server have independent MPI cmake configurations.<br>
><br>
> Burlen<br>
><br>
> from mpi 3.0 spec:<br>
><br>
> """<br>
> While a standardized startup mechanism improves the usability of MPI, the<br>
> range of<br>
> environments is so diverse (e.g., there may not even be a command line<br>
> interface) that MPI<br>
> cannot mandate such a mechanism. Instead, MPI speci es an mpiexec startup<br>
> command<br>
> and recommends but does not require it, as advice to implementors.<br>
><br>
> """<br>
><br>
> On 06/03/2013 07:55 AM, Utkarsh Ayachit wrote:<br>
>><br>
>> Just FYI, we have plans to init MPI even for the client to simplify<br>
>> the use-case that Michael has. At the same time, the issues with<br>
>> "mpiexec" that John talks about for Windows (it also exists on Linux<br>
>> with certain implementations) makes it tricky and hence is hasn't<br>
>> happened yet. If any one has suggestions for a portable<br>
>> implementation, let's start a new thread to discuss how that could be<br>
>> done.<br>
>><br>
>> Utkarsh<br>
><br>
><br>