[vtk-developers] Widget picking interfaces only for mouse

Andrew Dolgert ajd27 at cornell.edu
Sun Dec 7 22:37:42 EST 2003


Oh boy, good ideas and a lot of questions.

> do you have your own classes for specific devices or use 
> something like VRPN (http://www.cs.unc.edu/Research/vrpn/)?
I am using something like VRPN.  I've started with CAVELib support and
will finish with VR Juggler.  All three of those libraries read user
input as tracker (position and orientation) and controller (buttons,
joysticks) data in a fairly uniform way.  CAVELib, VR Juggler, and VRPN
would require separate Interactors, but I think that's the only class
that would need to change among the three of them.
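
For concreteness, the device-reading half of a VRPN-based interactor
could look roughly like this.  It's an untested sketch: the device
names ("Tracker0@localhost", "Wand0@localhost"), the WandState struct,
and the hand-off to the widgets at the bottom are placeholders, not
part of VTK or VRPN.

    #include <vrpn_Tracker.h>
    #include <vrpn_Button.h>

    // Latest wand pose and button state, filled in by the VRPN callbacks.
    struct WandState {
      double pos[3];
      double quat[4];
      bool   buttonDown;
    };

    static void HandleTracker(void *userdata, const vrpn_TRACKERCB t)
    {
      WandState *w = static_cast<WandState *>(userdata);
      for (int i = 0; i < 3; ++i) w->pos[i]  = t.pos[i];
      for (int i = 0; i < 4; ++i) w->quat[i] = t.quat[i];
    }

    static void HandleButton(void *userdata, const vrpn_BUTTONCB b)
    {
      WandState *w = static_cast<WandState *>(userdata);
      if (b.button == 0) w->buttonDown = (b.state != 0);
    }

    int main()
    {
      WandState wand = { {0, 0, 0}, {0, 0, 0, 1}, false };

      vrpn_Tracker_Remote tracker("Tracker0@localhost");
      vrpn_Button_Remote  button("Wand0@localhost");
      tracker.register_change_handler(&wand, HandleTracker);
      button.register_change_handler(&wand, HandleButton);

      // The interactor's event loop: poll the devices, then translate
      // the wand pose and button state into widget events (elided).
      for (;;) {
        tracker.mainloop();
        button.mainloop();
        // ... hand 'wand' to the widgets and render ...
      }
      return 0;
    }

CAVELib and VR Juggler would swap in their own polling calls, but the
loop keeps the same shape, which is why only the Interactor changes.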

> Finally, what about devices that might be used in
> multiple contexts?
You are correct that different devices would have more natural ways to
interact with objects in VTK, but I know of no general solution to that
in any toolkit.  I was happy to find that the 2D mousing method of
picking control points also works in immersive worlds.  Someone would
just have to write their own widget for another style of interaction,
which wouldn't be terrible.
  Actually, DirectX allows you to assign _logical_ meanings to
controllers instead of just reading joystick position, so you can
change, outside the application, whether pulling back on a joystick
makes you fly up or down.  It works well for driving games and flight
simulators, but I wouldn't guess how to apply such a thing to protein
folding or structural analysis.

You mentioned meta-interactors and multiple interactors for some cool
applications.  The way these three immersive graphics device libraries
(VRPN, CAVELib, and VR Juggler, again) work is that they present all
devices at once, in an array, with a uniform interface.  You miss out
on the advantages of interrupt-driven acquisition and the DirectX-like
ability to remap a controller's button and wand logical meanings, but
you can easily present all of the devices from a single interactor.
Collaboration in front of a powerwall could be as simple as making
widgets notice which device in the list picked them.
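
Making the widgets device-aware could be as little as carrying a
device index in the pick event.  A sketch of the dispatch, where
Device, PickEvent, and Widget are stand-ins for illustration rather
than VTK or CAVELib classes:

    #include <vector>

    struct Device {        // one wand: tracker pose plus a button
      double pos[3];
      double quat[4];
      bool   buttonDown;
    };

    struct PickEvent {
      int    deviceId;     // which wand in the list made the pick
      double pos[3];
    };

    class Widget {
    public:
      virtual ~Widget() {}
      virtual void OnPick(const PickEvent &e) = 0;
    };

    // One interactor walks the uniform device list; a widget that
    // remembers e.deviceId knows which collaborator grabbed it.
    void ProcessDevices(const std::vector<Device> &devices,
                        std::vector<Widget *> &widgets)
    {
      for (size_t i = 0; i < devices.size(); ++i) {
        if (!devices[i].buttonDown) continue;
        PickEvent e;
        e.deviceId = static_cast<int>(i);
        for (int k = 0; k < 3; ++k) e.pos[k] = devices[i].pos[k];
        for (size_t w = 0; w < widgets.size(); ++w)
          widgets[w]->OnPick(e);
      }
    }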

> have you thought about how 3D
> devices might be used for interaction in general (camera
> and actor transforms) as opposed to just picking?
Yes, I've had to.  This is a big question.  For an immersive
environment, the location and orientation of the user's eyes in the
room control the perspective and view transforms.  Both of those
transforms live in vtkCamera, so the Interactor has to set them there.
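
As a rough sketch (these are real vtkCamera calls, but the head-pose
arguments and the off-axis frustum handling are simplified away), the
per-frame update from the interactor is something like:

    #include "vtkCamera.h"
    #include "vtkRenderer.h"

    // The interactor pushes the tracked head pose onto the camera each
    // frame.  headPos is the eye position in room coordinates; headDir
    // and headUp come from the tracker's orientation.
    void UpdateCameraFromHead(vtkRenderer *renderer,
                              const double headPos[3],
                              const double headDir[3],
                              const double headUp[3])
    {
      vtkCamera *camera = renderer->GetActiveCamera();
      camera->SetPosition(headPos[0], headPos[1], headPos[2]);
      camera->SetFocalPoint(headPos[0] + headDir[0],
                            headPos[1] + headDir[1],
                            headPos[2] + headDir[2]);
      camera->SetViewUp(headUp[0], headUp[1], headUp[2]);
      // A head-tracked projection wall also needs the frustum to follow
      // the head, which this sketch leaves out.
    }
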
  There is a problem using vtkCamera for a cluster of computers that
display the same model while the user navigates it.  The interactor
style sets a transform in the vtkCamera.  That transform is applied
incrementally, and if each machine's vtkCamera holds a different view
for its own viewport, the view and perspective transforms will drift
apart.  A possible solution is to give a subclass of vtkCamera the
idea of a navigated coordinate system, so that every cluster
computer's interactor style modifies an identical navigation matrix,
and the distinct view and perspective matrices are applied after that.
This method has worked so far in my CAVE, but there is more testing to
be done.
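
A sketch of that navigated coordinate system, with illustrative names
rather than an actual vtkCamera subclass (the vtkCamera and
vtkTransform calls themselves are real):

    #include "vtkCamera.h"
    #include "vtkMatrix4x4.h"
    #include "vtkTransform.h"

    // Every cluster node keeps the same absolute navigation matrix
    // (broadcast from the master) and rebuilds its camera each frame
    // from that matrix plus its own per-viewport eye.  Nothing
    // incremental accumulates, so the nodes cannot drift apart.
    void UpdateNodeCamera(vtkCamera *camera,
                          vtkMatrix4x4 *navigation,  // identical on all nodes
                          const double localEye[3],  // differs per viewport
                          const double localFocal[3],
                          const double localUp[3])
    {
      // Start from this node's un-navigated view ...
      camera->SetPosition(localEye[0], localEye[1], localEye[2]);
      camera->SetFocalPoint(localFocal[0], localFocal[1], localFocal[2]);
      camera->SetViewUp(localUp[0], localUp[1], localUp[2]);

      // ... then apply the shared navigation as one absolute transform.
      vtkTransform *nav = vtkTransform::New();
      nav->SetMatrix(navigation);
      camera->ApplyTransform(nav);
      nav->Delete();
    }
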
  For actor transforms, I figured we could follow current vtk3DWidget
tradition and change the actor transforms from a callback on a box
widget, as done in the cone tutorial.  From your question, you seem to
be thinking of a widget which does nothing but move an actor or
assembly.  That seems like a very good idea.  It would let you turn
picking on or off and limit what is pickable.
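
For reference, that callback from the cone tutorial is essentially:

    #include "vtkBoxWidget.h"
    #include "vtkCommand.h"
    #include "vtkProp3D.h"
    #include "vtkTransform.h"

    // When the box widget moves, copy its transform onto the prop it
    // was placed around.
    class vtkMyCallback : public vtkCommand
    {
    public:
      static vtkMyCallback *New() { return new vtkMyCallback; }
      virtual void Execute(vtkObject *caller, unsigned long, void *)
      {
        vtkTransform *t = vtkTransform::New();
        vtkBoxWidget *widget = reinterpret_cast<vtkBoxWidget *>(caller);
        widget->GetTransform(t);
        widget->GetProp3D()->SetUserTransform(t);
        t->Delete();
      }
    };

    // Hooked up like so (interactor and actor created elsewhere):
    //   vtkBoxWidget *boxWidget = vtkBoxWidget::New();
    //   boxWidget->SetInteractor(iren);
    //   boxWidget->SetPlaceFactor(1.25);
    //   boxWidget->SetProp3D(coneActor);
    //   boxWidget->PlaceWidget();
    //   vtkMyCallback *callback = vtkMyCallback::New();
    //   boxWidget->AddObserver(vtkCommand::InteractionEvent, callback);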

- Drew Dolgert, Cornell Theory Center



