[vtk-developers] parallel camera: how to get image to fill viewport?

Steve M. Robbins steven.robbins at videotron.ca
Wed Nov 16 13:40:26 EST 2005


Howdy David,

On Tue, Nov 15, 2005 at 04:11:38PM -0500, David Gobbi wrote:

> I suspect that the answer you seek lies not in the minds of the 
> developers, nor in the documentation, but in the source.

I take your point and I have read the source a bit.  However, I can
never be sure whether (a) I'm misunderstanding it, or (b) there's a
bug.  So I truly appreciate the time you took to verify these things.



> So no guarantees here, but I've gone through exactly the set of steps 
> that you are going through now, and here is what I remember:
> 
> 1) In vtkImageData, world coordinates give the positions of the centers 
> of the pixels.  This is something I'm 100% certain about.

Good.  That agrees with my observations.
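
For the record, that convention in code form (not VTK itself, just the
arithmetic it implies):

    origin  = (0.0, 0.0, 0.0)
    spacing = (1.5, 1.5, 1.0)
    i = 10
    world_x = origin[0] + i*spacing[0]   # centre of pixel i, not its corner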


> 3) The corners of the Viewport correspond to positions at the corners of 
> the screen pixels (note: screen pixels, not image data pixels).  Display 
> coordinates correspond to positions at the center of screen pixels.  
> That is why Viewport to Display conversion involves an offset of half a 
> screen pixel.

You're sure about that?  The comment in vtkViewport.cxx method
ViewportToNormalizedDisplay() says the opposite:

    // the 0.5 offset is here because the viewport uses pixel centers
    // while the display uses pixel edges. 

My test program, modified to dump the Viewport-to-Display transform,
outputs:

Viewport-to-Display transform
1.0   0.0   0.5
0.0   1.0   0.5

which is the correct transformation from pixel centre-based coordinates
to pixel corner-based coordinates.
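
To spell that out (a sketch in my own notation, not any VTK API):

    # viewport: pixel i sits at coordinate i       (centre-based)
    # display:  pixel i's centre sits at i + 0.5   (corner-based)
    def viewport_to_display(u, v):
        return u + 0.5, v + 0.5

    # first pixel:  viewport 0.0    -> display 0.5
    # last pixel:   viewport N - 1  -> display N - 0.5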


> What you want to do, if I understand, is put the edges of your image 
> data at the edges of the Viewport.

Yes, assuming that the image and viewport have the same aspect ratio.

Once I get that figured out, here's what I really want to do.  When
the aspect ratios are different, I want to compute a "best fit"
transformation such that the image fills the viewport in at least one
direction (horizontal or vertical).
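
Something like the following sketch is what I have in mind (untested,
my own helper name, and it assumes the plain "height/2" parallel-scale
convention from your code below):

    def best_fit_parallel_scale(image_width, image_height,
                                vp_width, vp_height):
        # Parallel scale is half the world height visible in the view.
        # If the image is proportionally wider than the viewport, its
        # width is the constraint, so the visible height must grow
        # until the full width fits.
        if image_width/image_height > vp_width/vp_height:
            return 0.5*image_width*vp_height/vp_width   # width-limited
        return 0.5*image_height                         # height-limited

    # e.g. a 512 x 256 image in a 400 x 400 viewport is width-limited:
    # best_fit_parallel_scale(512.0, 256.0, 400.0, 400.0) == 256.0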


> The code that I use to fit an image to the view is as follows, where 
> "origin", "extent" and "spacing" are direct from the ImageData:
> 
>        xc = origin[0] + 0.5*(extent[0] + extent[1])*spacing[0]
>        yc = origin[1] + 0.5*(extent[2] + extent[3])*spacing[1]
>        xd = (extent[1] - extent[0] + 1)*spacing[0]
>        yd = (extent[3] - extent[2] + 1)*spacing[1]
>        d = camera.GetDistance()
>        camera.SetParallelScale(0.5*yd)
>        camera.SetFocalPoint(xc,yc,0.0)
>        camera.SetPosition(xc,yc,+d)
> 
> This is equivalent to using "height/2.0", where "height" is the number 
> of rows in the image multiplied by the spacing. To me, "(height-1)/2.0" 
> doesn't seem to make sense, and if it works, I'm curious about why it works.

Except for the difference in parallel scale, your code is equivalent
to mine.

Setting the scale to "(height-1)/2" was determined empirically.  I don't
quite understand it.  If I change my test program to use "height/2", the
world-to-viewport transform is no longer the identity.
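
For what it's worth, here is my back-of-the-envelope attempt to
rationalize the "-1" (my own reasoning, assuming unit spacing and a
renderer that fills a window of N rows):

    N = 256                    # viewport height in pixels
    s = (N - 1)/2.0            # the empirically determined parallel scale

    # SetParallelScale(s) makes the camera map a world span of 2*s onto
    # the view range [-1, +1], which is one normalized-viewport unit;
    # the "(size[1] - 1.0)" in NormalizedViewportToViewport() then
    # scales that by N - 1.
    world_to_viewport = (N - 1)/(2.0*s)
    assert world_to_viewport == 1.0   # identity exactly when s = (N-1)/2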


> In vtkViewport::NormalizedViewportToViewport() I see this code:
> 
>    u = u * (size[0] - 1.0);
>    v = v * (size[1] - 1.0);   
> 
> To me, the above two lines of Viewport code don't make sense.  The width 
> of the viewport should correspond to the number of pixels across the 
> viewport, since we're measuring between pixel corners, not between 
> pixels centers.  I'm having a very hard time convincing myself that the 
> "-1.0" belongs there.

But it does make sense if the viewport is centre-based, which it appears to be.

And if you look at NormalizedDisplayToDisplay(), you'll see:

    u = u*size[0];
    v = v*size[1];

which is correct for corner-based coordinates.
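
Putting the two conventions side by side (illustration only):

    size = (640, 480)    # window size in pixels
    u = 0.25             # some normalized coordinate

    # corner-based (display):  N pixels span N units;
    # pixel centres land at 0.5, 1.5, ..., N - 0.5
    u_display  = u*size[0]

    # centre-based (viewport): N pixels span N - 1 units;
    # pixel centres land at 0, 1, ..., N - 1
    u_viewport = u*(size[0] - 1.0)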


-Steve


