[Paraview] programmable filter, OK in serial, FAILS in mpi

A andrealphus at gmail.com
Tue Nov 14 19:56:58 EST 2017


Hi again!

So I gave that a shot, Berk (starting the server and client separately), but
it still fails (although now I can see the output). It seems like each MPI
rank is trying to operate on the data, but some of the ranks have no data.

E.g., if I do:

from paraview.numpy_support import vtk_to_numpy
import vtkCommonDataModelPython
import numpy as np
from scipy.optimize import curve_fit

if type(self.GetInputDataObject(0,0)) is vtkCommonDataModelPython.vtkUnstructuredGrid and type(self.GetInputDataObject(0,1)) is vtkCommonDataModelPython.vtkPolyData:
    g = 0
    p = 1
elif type(self.GetInputDataObject(0,1)) is vtkCommonDataModelPython.vtkUnstructuredGrid and type(self.GetInputDataObject(0,0)) is vtkCommonDataModelPython.vtkPolyData:
    g = 1
    p = 0
else:
    print('ERROR')
    return

# import the grid
Vs = inputs[g].PointData['Vs']
depth = inputs[g].PointData['depth']
z = inputs[0].PointData['z']

# setup output
output.PointData.append(Vs, 'Vs')
output.PointData.append(depth, 'depth')
output.PointData.append(z, 'z')

# import the profile
Vs_profile = inputs[p].PointData['Vs']
depth_profile = inputs[p].PointData['depth']

print(depth_profile)



I get:

<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7fd02b9ea910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7f8971cd5910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7f010353c910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7fc5e2d60910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7f0370faa910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7fd571637910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7fc5165c1910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7fba0f9a2910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7f37ce82a910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7f11e08e9910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7efd695c2910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7f34e17c5910>
<vtk.numpy_interface.dataset_adapter.VTKNoneArray object at 0x7faed557c910>
[            nan             nan             nan             nan
             nan             nan             nan   -236.76765442
   -674.57861328  -1112.38952637  -1550.20043945  -1988.04382324
  -2425.89160156  -2863.73901367  -3301.58666992  -3739.43408203
  -4177.28173828  -4615.12939453  -5053.00439453  -5491.04589844
  -5929.25048828  -6367.58935547  -6806.04101562  -7244.6015625
  -7683.24755859  -8121.96826172  -8560.76660156  -8999.62597656
  -9438.54199219  -9877.5078125  -10316.52050781 -10755.57617188
 -11194.66992188 -11633.79199219 -12072.91308594 -12512.03515625
 -12951.15722656 -13390.27832031 -13829.40039062 -14268.52246094
 -14707.64355469 -15146.765625   -15585.88769531 -16025.00878906
 -16464.13085938 -16903.25195312 -17342.375      -17781.49609375
 -18220.6171875  -18659.74023438 -19098.86132812 -19537.98242188
 -19977.10546875 -20416.2265625  -20855.34765625 -21294.47070312
 -21733.59179688 -22172.71289062 -22611.8359375  -23050.95703125
 -23490.078125   -23929.20117188 -24368.32226562 -24807.44335938
 -25246.56640625 -25685.6875     -26124.80859375 -26563.93164062
 -27003.05273438 -27442.17382812 -27881.296875   -28320.41796875
 -28759.5390625  -29198.66210938 -29637.78320312 -30076.90429688
 -30516.02734375 -30955.1484375  -31394.26953125 -31833.39257812
 -32272.51367188 -32711.63476562 -33150.7578125  -33589.87890625 -34029.
 -34468.12109375 -34907.24609375 -35346.3671875  -35785.48828125
 -36224.609375   -36663.73046875 -37102.8515625  -37541.9765625
 -37981.09765625 -38420.21875    -38859.33984375 -39298.4609375
 -39737.58203125 -40176.70703125 -40615.828125               nan]


When I try to run subsequent operations on the variable (like
np.isnan(depth_profile)), it returns errors for the first 13 empty ranks
and valid results for the 14th rank, but then pseudo-crashes.

E.g., the same code as above, but now with:

nanx = np.argwhere(np.isnan(depth_profile))


output:

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''



So, any idea how to explicitly make it consider only the valid rank? Or any
other idea how to fix this? Once again, it works perfectly in serial, so it's
just an issue with how ParaView, myself, and the programmable filter are
interacting with MPI.





On Wed, Oct 11, 2017 at 12:37 PM, Berk Geveci <berk.geveci at kitware.com>
wrote:

> So instead of using the auto MPI stuff, try running the server manually.
> Turn off auto MPI and then run the server on the command line with
> something like:
>
> >  /usr/bin/mpiexec -np 14 /home/ashton/Treetop/CHANGELINGS/installs/ParaView-05.08.2017/bin/pvserver
>
> Then start the client using something like:
>
> > /home/ashton/Treetop/CHANGELINGS/installs/ParaView-05.08.2017/bin/paraview -url=cs://localhost:11111
>
> Now the output on the server side should show up in the terminal where you
> ran mpiexec.
>
> Best,
> -berk
>
>
>
> On Wed, Oct 11, 2017 at 2:53 PM, A <andrealphus at gmail.com> wrote:
>
>> Yup, compiled it myself. MPI version is Open MPI 1.10.2:
>>
>> ParaView-05.08.2017/bin/paraview
>> AutoMPI: SUCCESS: command is:
>>  "/usr/bin/mpiexec" "-np" "14" "/home/ashton/Treetop/CHANGELINGS/installs/ParaView-05.08.2017/bin/pvserver" "--server-port=35817"
>> AutoMPI: starting process server
>> -------------- server output --------------
>> Waiting for client...
>> AutoMPI: server successfully started.
>> AutoMPI: SUCCESS: command is:
>>  "/usr/bin/mpiexec" "-np" "14" "/home/ashton/Treetop/CHANGELINGS/installs/ParaView-05.08.2017/bin/pvserver" "--server-port=43284"
>> AutoMPI: starting process server
>> Waiting for client...
>> AutoMPI: server successfully started.
>>
>> /usr/bin/mpiexec --version
>> mpiexec (OpenRTE) 1.10.2
>>
>>
>> I figured the print statements are getting lost somewhere along the way.
>> I do a decent bit of multiprocessing in Python, but I'm unsure how the
>> backend works in ParaView when you set it up to use MPI and are using a
>> programmable filter. I assume ParaView takes care of it itself, but I'm not
>> sure if this is correct.
>>
>>
>>
>> On Wed, Oct 11, 2017 at 10:56 AM, Berk Geveci <berk.geveci at kitware.com>
>> wrote:
>>
>>> Hmmm, that's weird. Did you compile it yourself? Which MPI are you using?
>>> Most MPI versions print standard output to the terminal... Also, you
>>> should probably print only from a single MPI rank to avoid clashes. You can
>>> do that by getting the rank using mpi4py...
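>>> e.g. something roughly like this in the script (untested):
>>>
>>> from mpi4py import MPI
>>> rank = MPI.COMM_WORLD.Get_rank()
>>> if rank == 0:
>>>     print('hello from rank 0')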
>>>
>>> Best,
>>> -berk
>>>
>>> On Wed, Oct 11, 2017 at 1:34 PM, A <andrealphus at gmail.com> wrote:
>>>
>>>> Hi Berk!
>>>>
>>>> Yup, when in MPI it's running from pvserver in a terminal. Sometimes
>>>> errors print to that terminal window, sometimes not. Print output never
>>>> prints to the terminal.
>>>>
>>>> The data sources are some home-rolled NetCDF files, and the filter is a
>>>> middle step in a long processing chain. These files are several GB.
>>>>
>>>>
>>>> On Oct 11, 2017 10:23 AM, "Berk Geveci" <berk.geveci at kitware.com>
>>>> wrote:
>>>>
>>>> Hi Andre,
>>>>
>>>> Are you running pvserver explicitly? If you run it explicitly and
>>>> connect to it with the GUI, the output of print statements should show up
>>>> in the terminal where you ran mpiexec/mpirun. Once you do that and we know
>>>> what the error is, I should be able to help more.
>>>>
>>>> PS: What is your data source? (file format?)
>>>>
>>>> Best,
>>>> -berk
>>>>
>>>> On Tue, Oct 10, 2017 at 6:59 PM, A <andrealphus at gmail.com> wrote:
>>>>
>>>>> I normally run ParaView on my workstation with MPI support (14 cores).
>>>>> It's been working fine like this for a year.
>>>>>
>>>>> For some reason, however, the debug/output message windows don't work
>>>>> when running in MPI (e.g. print "hello" returns nothing). But they do work
>>>>> when I turn MPI off.
>>>>>
>>>>> I recently wrote a few new programmable filters, and while they work
>>>>> perfectly with MPI off, they hang and don't do anything with MPI on.
>>>>>
>>>>> Any idea?
>>>>>
>>>>> -ashton
>>>>>
>>>>> P.S. Here's one of the filters, for example:
>>>>>
>>>>> from paraview.numpy_support import vtk_to_numpy
>>>>>
>>>>> import vtkCommonDataModelPython
>>>>>
>>>>> import numpy as np
>>>>>
>>>>> from scipy.optimize import curve_fit
>>>>>
>>>>>
>>>>> if type(self.GetInputDataObject(0,0)) is vtkCommonDataModelPython.vtkUnstructuredGrid and type(self.GetInputDataObject(0,1)) is vtkCommonDataModelPython.vtkPolyData:
>>>>>
>>>>>     g = 0
>>>>>
>>>>>     p = 1
>>>>>
>>>>> elif type(self.GetInputDataObject(0,1)) is vtkCommonDataModelPython.vtkUnstructuredGrid and type(self.GetInputDataObject(0,0)) is vtkCommonDataModelPython.vtkPolyData:
>>>>>
>>>>>     g = 1
>>>>>
>>>>>     p = 0
>>>>>
>>>>> else:
>>>>>
>>>>>     print('ERROR')
>>>>>
>>>>>     return
>>>>>
>>>>>
>>>>> # import the grid
>>>>>
>>>>> Vs = inputs[g].PointData['Vs']
>>>>>
>>>>> depth = inputs[g].PointData['depth']
>>>>>
>>>>> z = inputs[0].PointData['z']
>>>>>
>>>>>
>>>>> # setup output
>>>>>
>>>>> output.PointData.append(Vs, 'Vs')
>>>>>
>>>>> output.PointData.append(depth, 'depth')
>>>>>
>>>>> output.PointData.append(z, 'z')
>>>>>
>>>>>
>>>>> # import the profile
>>>>>
>>>>> Vs_profile = inputs[p].PointData['Vs']
>>>>>
>>>>> depth_profile = inputs[p].PointData['depth']
>>>>>
>>>>>
>>>>> def func(x, a, b, c, d,e):
>>>>>
>>>>>     return a + b*x + c*x**2 + d*x**3 + e*x**4
>>>>>
>>>>>
>>>>> nanx = np.argwhere(np.isnan(depth_profile))
>>>>>
>>>>> nany = np.argwhere(np.isnan(Vs_profile))
>>>>>
>>>>> nani = np.unique(np.append(nanx,nany))
>>>>>
>>>>> xdata = np.delete(depth_profile, nani)
>>>>>
>>>>> ydata = np.delete(Vs_profile, nani)
>>>>>
>>>>>
>>>>> popt, pcov1 = curve_fit(func, xdata, ydata)
>>>>>
>>>>>
>>>>>
>>>>> Vs_theory = popt[0] + popt[1]*depth + popt[2]*depth**2 + popt[3]*depth**3 + popt[4]*depth**4
>>>>>
>>>>>
>>>>> diff = Vs - Vs_theory
>>>>>
>>>>> per_diff=100*diff/Vs_theory
>>>>>
>>>>> output.PointData.append(per_diff, 'perturbation')
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>

