Eric,
I would suggest converting the TIFF images into a structured point set where the x and y dimensions are the image pixel dimensions and the z dimension is the 207-slice depth of the stack of images.
From there you can sample the dataset with marching cubes to create a polydata set. I haven't played with imaging pipelines myself, but I know it must work much the way I described.
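In case a concrete starting point helps, here is a rough, untested Tcl sketch of that pipeline in the old-style VTK Tcl bindings. The "bee" file prefix, the %s%03d.tif naming pattern, the 512x512 slice size, the unit spacing, and the 127.5 isovalue are all placeholders you would swap for your own values, and I'm assuming your VTK build includes vtkTIFFReader (if not, a reader such as vtkPNMReader is driven the same way after converting the slices):

catch {load vtktcl}              ;# run with the vtk Tcl shell shipped with the toolkit

# Read the 207-slice TIFF stack into one structured-points volume.
vtkTIFFReader reader
    reader SetFilePrefix "bee"
    reader SetFilePattern "%s%03d.tif"       ;# bee001.tif ... bee207.tif -- adjust to your names
    reader SetDataExtent 0 511 0 511 1 207   ;# x/y extent = your pixel dimensions
    reader SetDataSpacing 1.0 1.0 1.0        ;# set z spacing to your section thickness

# Extract an isosurface from the binary volume.
vtkMarchingCubes mc
    mc SetInput [reader GetOutput]
    mc SetValue 0 127.5                      ;# halfway between black (0) and white (255)

# Map and render the resulting polydata.
vtkPolyDataMapper mapper
    mapper SetInput [mc GetOutput]
    mapper ScalarVisibilityOff
vtkActor actor
    actor SetMapper mapper

vtkRenderer ren
vtkRenderWindow renWin
    renWin AddRenderer ren
vtkRenderWindowInteractor iren
    iren SetRenderWindow renWin
ren AddActor actor

iren Initialize
renWin Render
wm withdraw .

Since the data is binary, marching cubes will produce a lot of triangles; running the output through something like vtkDecimate before the mapper should help your rendering speed.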
A look at the frog example will probably help you a lot.   It does what you want.   The "segemented8.tcl" file does all the work.
Good Luck

Eric Engelhard wrote:

Hello,

I am in the process of creating a three-dimensional model of a honey bee
exoskeleton. I have already wax-embedded and sectioned my first sample,
and each of the 207 TIFF images has been processed (thresholded so that
tanned cuticle is black and everything else is white). I was wondering
whether there is a particular visualization method that would take
advantage of this binary data to improve rendering speed. I have the
second edition of the Visualization Toolkit, but am a bit overwhelmed by
the number of objects. Any pointers to online tutorials are also welcome.
Thanks -Eric Engelhard


-- 
James C Moore, Vice President
URS Technologies, LLC
ph: 614-540-8041, jmoore@qn.net