[Paraview] CoProcessing
pat marion
pat.marion at kitware.com
Wed Aug 4 13:30:33 EDT 2010
Below is a simple C++ program I have used for testing. It creates one
sphere per process, positions each sphere as a function of the process id,
and grows/shrinks the spheres over time.
Pat
#include "vtkCPProcessor.h"
#include "vtkCPPythonScriptPipeline.h"
#include "vtkMultiProcessController.h"
#include "vtkXMLUnstructuredGridReader.h"
#include "vtkUnstructuredGrid.h"
#include "vtkCPDataDescription.h"
#include "vtkCPInputDataDescription.h"
#include "vtkSmartPointer.h"
#include "vtkPolyData.h"
#include "vtkSphereSource.h"
#include <math.h>   // fabs(), sin()
#include <stdio.h>
#include <stdlib.h> // atoi()
#include <string>
#include <sstream>
static unsigned int procId;
void myprint(const std::string& str)
{
  printf("driver (%u): %s\n", procId, str.c_str());
}
class DataGenerator {
public:

  DataGenerator()
  {
    this->Sphere = vtkSmartPointer<vtkSphereSource>::New();
    this->Sphere->SetThetaResolution(30);
    this->Sphere->SetPhiResolution(30);
    this->Sphere->SetCenter(procId*4.0, 0, 0);
    this->Index = 0;
  }

  vtkSmartPointer<vtkPolyData> GetNext()
  {
    double radius = fabs(sin(0.1 * this->Index));
    this->Index++;
    this->Sphere->SetRadius(1.0 + radius);
    this->Sphere->Update();
    vtkSmartPointer<vtkPolyData> ret = vtkSmartPointer<vtkPolyData>::New();
    ret->DeepCopy(this->Sphere->GetOutput());
    return ret;
  }

protected:
  int Index;
  vtkSmartPointer<vtkSphereSource> Sphere;
};
int main(int argc, char* argv[])
{
  if (argc < 3)
  {
    printf("Usage: %s <cp python file> <number of steps>\n", argv[0]);
    return 1;
  }

  std::string cpPythonFile = argv[1];
  int nSteps = atoi(argv[2]);

  myprint("starting coprocessor");

  vtkCPProcessor* processor = vtkCPProcessor::New();
  processor->Initialize();
  vtkCPPythonScriptPipeline* pipeline = vtkCPPythonScriptPipeline::New();

  // mpi was initialized when we called vtkCPPythonScriptPipeline::New()
  procId =
    vtkMultiProcessController::GetGlobalController()->GetLocalProcessId();

  // read the coprocessing python file
  myprint("loading pipeline python file: " + cpPythonFile);
  int success = pipeline->Initialize(cpPythonFile.c_str());
  if (!success)
  {
    myprint("aborting");
    return 1;
  }

  processor->AddPipeline(pipeline);
  pipeline->Delete();

  if (nSteps == 0)
  {
    return 0;
  }

  // create a data source
  DataGenerator generator;

  // do coprocessing
  double tStart = 0.0;
  double tEnd = 1.0;
  double stepSize = (tEnd - tStart)/nSteps;

  vtkCPDataDescription* dataDesc = vtkCPDataDescription::New();
  dataDesc->AddInput("input");

  for (int i = 0; i < nSteps; ++i)
  {
    double currentTime = tStart + stepSize*i;
    std::stringstream timeStr;
    timeStr << "time(" << i << ", " << currentTime << ")";
    dataDesc->SetTimeData(currentTime, i);

    myprint("call RequestDataDescription, " + timeStr.str());
    int do_coprocessing = processor->RequestDataDescription(dataDesc);
    if (do_coprocessing)
    {
      myprint("calling CoProcess, " + timeStr.str());
      vtkSmartPointer<vtkDataObject> dataObject = generator.GetNext();
      dataDesc->GetInputDescriptionByName("input")->SetGrid(dataObject);
      processor->CoProcess(dataDesc);
    }
  }

  myprint("finalizing");
  dataDesc->Delete();
  processor->Finalize();
  processor->Delete();
  return 0;
}
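
For reference, the driver's first argument is a coprocessing Python script.
Normally you would generate one with the CoProcessingScriptGenerator plugin,
but a minimal hand-written sketch is below. Treat it as an illustration rather
than a tested script: RequestDataDescription() and DoCoProcessing() are the
two module-level functions vtkCPPythonScriptPipeline invokes, while the file
name, the writer-based output, and the exact vtkCPInputDataDescription calls
are my assumptions.

# sphere_pipeline.py -- hypothetical coprocessing script for the driver above.
# vtkCPPythonScriptPipeline calls RequestDataDescription() and DoCoProcessing().

def RequestDataDescription(datadescription):
    # Ask the adaptor for the mesh and every field at this time step.
    inputDesc = datadescription.GetInputDescriptionByName("input")
    inputDesc.AllFieldsOn()
    inputDesc.GenerateMeshOn()

def DoCoProcessing(datadescription):
    # Take the grid the driver handed over and write it to a VTK XML file.
    from paraview import vtk
    grid = datadescription.GetInputDescriptionByName("input").GetGrid()
    writer = vtk.vtkXMLPolyDataWriter()
    writer.SetInput(grid)
    writer.SetFileName("sphere_%d.vtp" % datadescription.GetTimeStep())
    writer.Write()

You would then run the driver with that file and a step count, e.g.
"driver sphere_pipeline.py 20" (under mpirun if you want more than one
sphere), and the script's DoCoProcessing is executed on every step for which
data was requested.
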
On Wed, Aug 4, 2010 at 12:14 PM, Andy Bauer <andy.bauer at kitware.com> wrote:
> Hi Jacques,
>
> There is a polyhedron cell type in VTK now --
> http://www.vtk.org/doc/nightly/html/classvtkPolyhedron.html
> As far as I know it works with all of the proper filters, but since I
> haven't tried it yet I won't promise that. The good news is that Will
> Schroeder had a high interest in it and probably worked on some of it, so I'd
> assume that it's working quite well right now.
>
> As for Phasta, it does run in parallel (props to their developers: it was
> a finalist for the 2009 Gordon Bell prize). The grid is already partitioned,
> and each process runs the adaptor and creates an unstructured grid from its
> portion of the partitioned mesh. Thus, there isn't any need for MPI calls
> in the adaptor code. If you had ghost cell information in your partitioned
> mesh and wanted to get fancy, you should be able to add that to your
> partitioned grid to make some of the filters faster, but I haven't tried
> that.
>
> Andy
>
>
> On Wed, Aug 4, 2010 at 11:58 AM, Jacques Papper <jacques.papper at gmail.com> wrote:
>
>> Thanks a lot Andy, Takuya,
>>
>> I'm using the PhastaAdaptor, and the FortranAdaptorAPI as a guide for the
>> moment.
>> I know there has been talk of adding POLYHEDRAL cell support to VTK. Do you
>> know if it is there yet?
>> My dataset is a multi-region unstructured polyhedral mesh, domain-decomposed
>> across the processors.
>>
>> Is the Phasta code parallelized? If so, why do I not see any MPI statements
>> in the adaptor code?
>>
>> Jacques
>>
>> 2010/8/4 Andy Bauer <andy.bauer at kitware.com>
>>
>>> Hi Jacques,
>>>
>>> What type of data set do you have? Even though the PHASTA adaptor
>>> (ParaView/CoProcessing/Adaptors/FortranAdaptors/PhastaAdaptor) is for Fortran
>>> code, it may give you an idea. Also, stepping through the example in
>>> ParaView/CoProcessing/CoProcessor/Testing/Cxx/PythonScriptCoProcessingExample.cxx
>>> may help as well.
>>>
>>> I'll spend some time this week putting up a skeleton of a simulation code
>>> on the coprocessing wiki that should hopefully be easier to follow. I'll
>>> let you know when it's done.
>>>
>>> Andy
>>>
>>> On Wed, Aug 4, 2010 at 8:02 AM, Jacques Papper <jacques.papper at gmail.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> Sorry for my last post, I figured out that I had set my PYTHONPATH
>>>> incorrectly.
>>>> All the tests work OK now. Still interested in CoProcessing adaptor
>>>> examples, though :)
>>>>
>>>> Thanks
>>>> Jacques
>>>>
>>>> 2010/8/4 Jacques Papper <jacques.papper at gmail.com>
>>>>
>>>>
>>>>> Hi All,
>>>>>
>>>>> I'm starting to look into the ParaView CoProcessing libraries.
>>>>> I just pulled from git today and compiled it all following the
>>>>> guidelines in:
>>>>> http://www.paraview.org/Wiki/CoProcessing
>>>>> I didn't find:
>>>>> BUILD_PYTHON_COPROCESSING_ADAPTOR
>>>>> but instead:
>>>>> PARAVIEW_BUILD_PLUGIN_CoProcessingScriptGenerator
>>>>> Anyway, the compilation went through without any issues.
>>>>> I then tried:
>>>>>
>>>>> ctest -R CoProcessing
>>>>> Test project /users/boreas01/jacques/PARAVIEW/ParaView-bin
>>>>> Start 491: CoProcessingTestPythonScript
>>>>> 1/3 Test #491: CoProcessingTestPythonScript ...........   Passed    0.45 sec
>>>>> Start 492: CoProcessingPythonScriptGridPlot
>>>>> 2/3 Test #492: CoProcessingPythonScriptGridPlot .......***Failed    0.09 sec
>>>>> Start 493: CoProcessingPythonScriptPressurePlot
>>>>> 3/3 Test #493: CoProcessingPythonScriptPressurePlot ...***Failed    0.09 sec
>>>>>
>>>>> 33% tests passed, 2 tests failed out of 3
>>>>>
>>>>> Total Test time (real) = 0.68 sec
>>>>>
>>>>> The following tests FAILED:
>>>>> 492 - CoProcessingPythonScriptGridPlot (Failed)
>>>>> 493 - CoProcessingPythonScriptPressurePlot (Failed)
>>>>> Errors while running CTest
>>>>>
>>>>> Is this a problem in my current installation or on the master branch?
>>>>>
>>>>> Finally, I would like to start writing an adaptor for a parallelised C++
>>>>> code. Can you tell me which existing code would be the closest to take
>>>>> inspiration from?
>>>>>
>>>>>
>>>>> Thank you,
>>>>> Jacques
>>>>>
>>>>>
>>>>> PS: sorry, I mixed up the subjects.
>>>>>
>>>>
>>>>