[Paraview] Slow with just 1M cells

Armin Wehrfritz dkxls23 at gmail.com
Tue Jun 14 08:06:41 EDT 2016


Hi Michele and Ken,

I'm also dealing with datasets that contain polyhedral cells. More
precisely, my grids are generated with a "cutcell" approach in which an
initially fully hexahedral mesh is refined in the main region of
interest. While most cells remain hexahedral after the refinement, the
cells on the coarse side of the refinement interface are of general
polyhedral shape. I assume Michele uses a very similar approach, though
my data stem from CFD simulations using OpenFOAM.

The OpenFOAM reader in ParaView has an option to decompose polyhedral
cells into standard shapes (mostly pyramids in my case). For instance,
my original dataset has about 10.9M hexahedral and 147k polyhedral
cells. Reading the dataset in ParaView with polyhedron decomposition
enabled yields 12.7M cells in total, i.e. approximately 1.7M more than
in the original dataset. Applying the Tetrahedralize filter to the
original dataset instead leads to 67.4M cells, and the memory usage
more than doubles. (The numbers are listed below.)
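
For reference, the decomposition option can also be toggled from
pvpython. Below is a minimal sketch; the case file name is made up, and
the property name 'Decomposepolyhedra' is my assumption based on the
reader's "Decompose polyhedra" checkbox, so please check help(reader)
in your ParaView version:

    # Load an OpenFOAM case with and without polyhedron decomposition
    # and compare the resulting cell counts.
    from paraview.simple import OpenFOAMReader

    reader = OpenFOAMReader(FileName='case.foam')  # hypothetical file name
    reader.Decomposepolyhedra = 1                  # assumed property name
    reader.UpdatePipeline()
    print('decomposed: %d' % reader.GetDataInformation().GetNumberOfCells())

    reader.Decomposepolyhedra = 0
    reader.UpdatePipeline()
    print('original:   %d' % reader.GetDataInformation().GetNumberOfCells())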

I also experience a slowdown in the usual post-processing/visualization
tasks when I do not decompose the polyhedral cells.
However, I should note that, for instance, volume rendering of my
dataset with decomposed polyhedral cells is still painfully slow using
ParaView 5.0.1 on my laptop (Intel i7-4800MQ 2.70GHz / NVIDIA Quadro
K2100M). ParaView appears to spend most of its time in something called
"OpenGLProjectedTetrahedraMapper".

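One workaround I have seen suggested for slow unstructured volume
rendering is to resample the grid onto a regular image first, so that
the fast image-data volume mapper is used instead of the
projected-tetrahedra one. A sketch, assuming a ParaView version that
ships the ResampleToImage filter (newer than the 5.0.1 I am running, if
I remember correctly):

    # Resample the active unstructured grid to image data and volume
    # render the result; the resolution trades accuracy for speed.
    from paraview.simple import GetActiveSource, ResampleToImage, Show, Render

    resampled = ResampleToImage(Input=GetActiveSource())
    resampled.SamplingDimensions = [256, 256, 256]

    display = Show(resampled)
    display.SetRepresentationType('Volume')
    Render()

This of course introduces resampling error, so it only makes sense when
an approximate volume rendering is acceptable.
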
I'm not sure which readers, other than the OpenFOAM reader, support the
decomposition of polyhedral cells, but the approach should work for any
unstructured dataset. So it might be worth implementing this at a more
general level, rather than in a specific reader, as sketched below.
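
To illustrate, here is a hypothetical, reader-independent sketch in VTK
Python: each VTK_POLYHEDRON cell is split into tetrahedra by fanning a
triangulation of every face to the cell centroid, while standard cells
pass through unchanged. It only mirrors the idea behind the OpenFOAM
reader option (it is not its actual implementation), it assumes convex
faces, and it does not copy point or cell data:

    import vtk

    def decompose_polyhedra(grid):
        """Return a copy of 'grid' with polyhedral cells tetrahedralized."""
        out = vtk.vtkUnstructuredGrid()
        points = vtk.vtkPoints()
        points.DeepCopy(grid.GetPoints())
        out.SetPoints(points)
        out.Allocate(grid.GetNumberOfCells())

        for i in range(grid.GetNumberOfCells()):
            cell = grid.GetCell(i)
            if cell.GetCellType() != vtk.VTK_POLYHEDRON:
                # Standard shapes pass through unchanged.
                out.InsertNextCell(cell.GetCellType(), cell.GetPointIds())
                continue

            # Insert the cell centroid as an extra point.
            n = cell.GetNumberOfPoints()
            cx = cy = cz = 0.0
            for j in range(n):
                x, y, z = cell.GetPoints().GetPoint(j)
                cx += x / n; cy += y / n; cz += z / n
            cid = points.InsertNextPoint(cx, cy, cz)

            # One tetrahedron per face triangle, apex at the centroid.
            for f in range(cell.GetNumberOfFaces()):
                ids = cell.GetFace(f).GetPointIds()
                for t in range(1, ids.GetNumberOfIds() - 1):
                    tet = vtk.vtkIdList()
                    for pid in (ids.GetId(0), ids.GetId(t),
                                ids.GetId(t + 1), cid):
                        tet.InsertNextId(pid)
                    out.InsertNextCell(vtk.VTK_TETRA, tet)
        return out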

Best regards,
Armin



Statistics
==========

OpenFOAM reader
Decompose polyhedra: On
------------------------
Cells: 12.7M
Points: 11.4M
Memory: 1100 MB

OpenFOAM reader
Decompose polyhedra: Off
------------------------
Cells: 11M
Points: 11.3M
Memory: 1800 MB

OpenFOAM reader
Decompose polyhedra: Off
Tetrahedralize filter
------------------------
Cells: 67.4M
Points: 11.3M
Memory: 3400 MB
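
These numbers come from ParaView's Information panel, but they can also
be collected from pvpython. A sketch, assuming 'reader' is the reader
proxy from the snippet further up and that GetMemorySize() reports
kilobytes (which I believe matches the Information panel):

    info = reader.GetDataInformation()
    print('Cells:  %d' % info.GetNumberOfCells())
    print('Points: %d' % info.GetNumberOfPoints())
    print('Memory: %.0f MB' % (info.GetMemorySize() / 1024.0))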

On 05/25/2016 09:05 PM, Moreland, Kenneth wrote:
> Michele,
>
> I took a look at the data you sent me. I experienced many of the
> issues you brought up.
>
> After taking a closer look at the data, I realized that many of the
> cells in your data are of the general polyhedral type. Unlike
> standard cell shapes such as tetrahedra and hexahedra, polyhedral
> cells are general polyhedra formed by specifying their face polygons.
> They let you represent any flat-faceted shape, but basic operations
> on them, such as interpolation, derivatives, and point location, are
> very expensive to compute. This is why operations like streamlines
> are going so slowly.
>
> If the cells are represented as standard shapes, things go much
> faster. For example, if you tetrahedralize the data, streamlines
> take well under 10 seconds. That gets the operations to about the
> range where your nameless commercial product is running. I suspect,
> but cannot verify, that this other visualization package is probably
> downgrading the cells to something like hexahedra, which makes it
> run faster.
>
> I don’t recommend running the tetrahedralization filter all the time
> on your data. It is also slow and really bloats the data. If you
> could write out an alternate form of the data that wrote hexahedra
> instead of polyhedra, I suspect things would run much faster. You
> would probably have a problem with faces not being aligned, though.
>
> One final note: although the clip filter takes a long time, I found
> the slice filter to be much faster. Generally, when dealing with
> large data, you should favor slice over clip. It’s much faster,
> uses much less memory, and usually gives you the same information.
>
> -Ken
>
> On 5/21/16, 9:47 AM, "Moreland, Kenneth" <kmorel at sandia.gov> wrote:
>
>> Michele,
>>
>> Taking over a minute to process a data set with 1 million cells
>> does seem like an unreasonably long time, even for a moderately
>> powered PC. Perhaps something odd is happening here. Can you
>> describe in more detail what your data look like and what you are
>> doing with them?
>>
>> -Ken
>>
>> On 5/20/16, 11:55 AM, "ParaView on behalf of Michele Battistoni"
>> <paraview-bounces at paraview.org on behalf of
>> michele.battistoni at unipg.it> wrote:
>>
>>> ParaView is awesome for lots of functionality; however, I find
>>> it extremely slow in processing data with any filter type, or in
>>> changing timesteps, as soon as the model size is around one
>>> million cells or above. I have experience with a commercial tool
>>> which, on the same model and PC, is 100x faster. Let's say a
>>> second vs. a minute!
>>>
>>> Are there any specific settings for RAM or parallelization among
>>> cores?
>>>
>>> Thanks,
>>> Michele

