[vtk-developers] vtkGenericRenderWindowInteractor

Michael Halle halazar at media.mit.edu
Mon Apr 15 18:59:57 EDT 2002


What if the interactor's events were named after what they do, not
what triggers them?  For instance, vtkBoxWidget has methods called
OnMiddleButtonDown() and OnLeftButtonUp().  That means that the
toolkit knows about mouse clicks, the interactor knows about mouse
clicks, and the widget knows about mouse clicks.  At some layer, it
seems we should not be talking about which button of which physical
device went down or up, but rather about what we want to happen in the
scene (at either a high or low level of abstraction).

For analogy, xterm in the X window system doesn't hard-code
right-click to some action; instead, it provides a layer of
indirection (the X resources) that maps device events to "actions"
that perform them.  This keeps device differences confined to a
single layer.  In the current toolkit/interactor/generic
interactor/widget layering, there are multiple levels but no
indirection: the event mapping is one-to-one at each level.  Seems to
me that at the very least, by the time you get to the widget level,
you should be talking about actions that could be bound to event
names, not event names themselves.  Perhaps that transition should
happen even higher, at the interactor level; not sure.

The point raised, that all the key events were being sent to the
widget and therefore left no flexibility for other user key bindings,
is a warning that something's too rigidly tied together....

Does that make slightly more sense?

							--Mike
Michael Halle
mhalle at bwh.harvard.edu




