[Insight-developers] Question regarding itkColorTable
James Ross
jross at bwh.harvard.edu
Wed Dec 23 11:00:40 EST 2009
Greetings,
I have recently been working with itkColorTable and have a question
about the UseRandomColors method. When I attempt the following:
typedef itk::ColorTable< float > ColorType;
ColorType::Pointer colorTable = ColorType::New();
colorTable->UseRandomColors( 5000 );
my program crashes.
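In case it helps, here is a sketch of a minimal standalone program along the lines of what I'm running (the exact include and build setup are my assumption):

#include "itkColorTable.h"

int main()
{
  typedef itk::ColorTable< float > ColorType;
  ColorType::Pointer colorTable = ColorType::New();
  colorTable->UseRandomColors( 5000 );  // the crash occurs here for me
  return 0;
}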
However, if I alter the body of ::UseRandomColors from:
r = static_cast<TPixel>( vnl_sample_uniform( NumericTraits<TPixel>::NonpositiveMin(),
                                             NumericTraits<TPixel>::max() ) );
m_Color[i].SetRed( r );
g = static_cast<TPixel>( vnl_sample_uniform( NumericTraits<TPixel>::NonpositiveMin(),
                                             NumericTraits<TPixel>::max() ) );
m_Color[i].SetGreen( g );
b = static_cast<TPixel>( vnl_sample_uniform( NumericTraits<TPixel>::NonpositiveMin(),
                                             NumericTraits<TPixel>::max() ) );
m_Color[i].SetBlue( b );
to:
r = static_cast<TPixel>( vnl_sample_uniform( 0.0, 1.0 ) );
m_Color[i].SetRed( r );
g = static_cast<TPixel>( vnl_sample_uniform( 0.0, 1.0 ) );
m_Color[i].SetGreen( g );
b = static_cast<TPixel>( vnl_sample_uniform( 0.0, 1.0 ) );
m_Color[i].SetBlue( b );
my program runs and I get the expected behavior. Why are the color
components drawn uniformly from NumericTraits<TPixel>::NonpositiveMin()
to NumericTraits<TPixel>::max() in the original? Isn't the convention for
floating-point color components that they lie in the interval [0.0, 1.0]?
If the original is indeed a bug, I'd be happy to fix it and commit. If I'm
doing something incorrectly, please let me know.
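For what it's worth, here is a rough standalone sketch of the check I used to see what magnitudes the original range produces for TPixel = float (header paths are my assumption):

#include "vnl/vnl_sample.h"
#include "itkNumericTraits.h"
#include <iostream>

int main()
{
  typedef float TPixel;
  vnl_sample_reseed();  // optional: seed from the clock
  for ( unsigned int i = 0; i < 5; ++i )
    {
    TPixel r = static_cast<TPixel>(
      vnl_sample_uniform( itk::NumericTraits<TPixel>::NonpositiveMin(),
                          itk::NumericTraits<TPixel>::max() ) );
    // the samples typically have magnitudes on the order of 1e38,
    // nowhere near the [0.0, 1.0] range I would expect for a color component
    std::cout << r << std::endl;
    }
  return 0;
}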
Many thanks,
-James