[Insight-users] RE: Speed of Connected Components Filter

Peter Cech pcech at vision.ee.ethz.ch
Wed May 25 11:59:03 EDT 2005


Hi,

after looking at the code, I have an idea to share.

Scenario 1 is quite costly: it always flattens the whole table, even though
a significant fraction of the table is already flattened. What we can do
instead, every time a new equivalency is added, is flatten only the newly
added equivalency (see the add_flattened.patch attachment). This keeps the
equivalency tree shallow (though not flat) and should speed up scenarios
2 to 4. What do you think?
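For illustration, here is a minimal sketch of the idea. This is a simplified stand-in, not the actual itkEquivalencyTable code: std::unordered_map plays the role of ITK's hash map, Lookup() chases chains the way RecursiveLookup() does, and AddFlattened is the separate method name suggested below.

```cpp
#include <unordered_map>

// Simplified model of an equivalency table (not the real
// itk::EquivalencyTable API).
class EquivalencyTable
{
public:
  // Follow the chain of equivalences until a label with no entry is reached
  // (analogous to RecursiveLookup).
  unsigned long Lookup(unsigned long a) const
  {
    auto it = m_Table.find(a);
    while (it != m_Table.end())
    {
      a = it->second;
      it = m_Table.find(a);
    }
    return a;
  }

  // Record "a is equivalent to b", but store Lookup(b) rather than b itself:
  // only the newly added equivalency is flattened, so chains stay short
  // without re-flattening the whole table.
  void AddFlattened(unsigned long a, unsigned long b)
  {
    if (a == b)
    {
      return;
    }
    m_Table[a] = Lookup(b);
  }

private:
  std::unordered_map<unsigned long, unsigned long> m_Table;
};
```

Note that chains can still grow to depth greater than one (e.g. adding 2==1 and later 1==3), which is why the tree stays shallow rather than perfectly flat.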

I have attached two patches for itkEquivalencyTable:

add_flattened.patch
A test of the idea described above. A proper implementation should provide
a separate method (say AddFlattened) instead of changing Add, but this
should suffice for testing and benchmarking.

flatten_optimize.patch
A minor optimization to the Flatten method; it saves one hash table lookup
per element of the table.
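The kind of saving meant here can be sketched as follows. Again this is a simplified model rather than the patch itself: iterating the hash table yields each entry's value directly, so no separate find() per key is needed before resolving it.

```cpp
#include <unordered_map>

using HashMap = std::unordered_map<unsigned long, unsigned long>;

// Follow the chain of equivalences until a label with no entry is reached.
unsigned long Lookup(const HashMap & table, unsigned long a)
{
  auto it = table.find(a);
  while (it != table.end())
  {
    a = it->second;
    it = table.find(a);
  }
  return a;
}

// Flatten every chain to its final representative.  Walking the table with
// an iterator gives each entry in place, so the per-element lookup of the
// key itself is avoided; only the chain behind the value is resolved.
void Flatten(HashMap & table)
{
  for (auto & entry : table)
  {
    entry.second = Lookup(table, entry.second);
  }
}
```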

Please test the patches; I'm curious how much they help.

Peter Cech

On Wed, May 25, 2005 at 08:23:23 -0400, Miller, James V (Research) wrote:
> Sayan, 
> 
> Would you mind running a few more tests?
> 
> Keep the code as you now have it, using RecursiveLookup() instead of 
> Lookup().  But add a call to Flatten() every now and again.
> 
> Just set a counter to, say, 8000, 16000, or 80000, and decrement
> the counter for each visited pixel.  When the counter reaches zero, call
> Flatten() and reset the counter to its original value. (We may change the
> code around later to use a different iterator to help.)
> 
> I want to see if on these large images, it would make sense to flatten
> the equivalency table periodically.  If you could provide timings for 
> different strategies (different periods of flattening), that would be 
> great.
> 
> The choices for the algorithm may be:
> 
> 1. Flatten every time an equivalence is generated (this is the current code)
> 2. Never flatten (this is what you just tested)
> 3. Flatten every N pixels
> 4. Flatten every N new equivalences
> 
> I originally had the code written as you now have it.  I ran timing tests at
> the time and found most of the algorithm time was in RecursiveLookup.  So I 
> changed the code to Flatten after every new equivalence which yielded a 30%
> speedup.  But I was working on small images with a small number of components
> (512x512 images with a few hundred components).
> 
> Jim
> 
> 
> 
> -----Original Message-----
> From: Sayan Pathak [mailto:SayanP at alleninstitute.org]
> Sent: Tuesday, May 24, 2005 9:01 PM
> To: Miller, James V (Research)
> Cc: insight-users at itk.org; ken.urish at gmail.com
> Subject: RE: Speed of Connected Components Filter
> 
> 
> Hi Jim,
> Thanks for looking into the connected component analysis filter speed
> issue. Looks like the replacement code significantly reduces execution
> time of the connected component analysis filter for large data sets with
> lots of components. 
> 
> At the Allen Institute, our team is looking into segmenting regions of
> gene expression in high resolution tissue cross-section images of the
> mouse brain (www.brain-map.org). I used 3 different image sets for this
> test. Each image set comprises a stack of about twenty 2D
> cross-sectional images, each approximately 14k x 8k pixels. I
> have an algorithm that segments regions of expression in these images.
> In my 3 test sets, the number of distinct connected components was about
> 800k, 1250k and 2500k. Using the recursive lookup, the computation time
> was reduced by 37%, 42% and 54% for the three test sets respectively. The
> larger the number of objects in an image, the greater is the impact of
> the recursive lookup.
> 
> Sayan
