greenhouse
a creative coding toolkit for spatial interfaces

Handling User Input

In g-speak and Greenhouse, we provide a common language for describing the actions users take across all forms of input, from mouse and keyboard to mobile devices and gestural inputs like the Kinect.

We classify user events into three main forms: blurt events, pointing events, and displacement events, which we’ll cover below.

All events in the API come tagged with a provenance and a wordstamp. The provenance is the source of the event: it’s a unique name for each device, hand, or other event-generating thing.

The wordstamp describes the event, pose, or action. For instance, pressing the “w” key on a keyboard might result in a blurt event with a provenance of “keyboard/osx” and a wordstamp of “w”. A user holding up a fist in front of a kinect might generate a pointing event with a provenance of “gspk:120” and a wordstamp of “fist”.
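
Here’s a rough sketch of how those tags might be inspected in a handler. It assumes the Thing-subclass callback style used throughout the Greenhouse samples; the Provenance () accessor spelling (and its printability) is an assumption, so check the reference pages for the authoritative signature.

    #include "Greenhouse.h"
    #include <iostream>

    // Prints the source of every blurt event it hears. The Provenance ()
    // accessor here is an assumption -- see the reference pages.
    class TagLogger  :  public Thing
    { public:
      void Blurt (BlurtEvent *e)
        { std::cout << "provenance: " << e -> Provenance () << "\n"; }
    };

    void Setup ()
    { new TagLogger (); }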

Blurt Events

Blurt Events are singular, one-shot actions. For instance, a single press on the keyboard, or showing a “victory” sign. There are three stages, or states, of blurt:

  1. Blurt: the blurt happens (this is the common case)
  2. BlurtRepeat: the event recurs (not yet supported in Greenhouse)
  3. BlurtVanish: the event is over (not yet supported in Greenhouse)

In a traditional application with a keyboard attached, these events would map to KeyDown, KeyRepeat, and KeyUp. You can learn the syntax for handling blurt events in the reference pages.

Blurt events have a field called Utterance, which is the same as their wordstamp: a description of the gesture or ASCII key that generated the event.
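
For instance, a handler that responds to the “w” key might look roughly like this; a minimal sketch, again assuming the Thing-subclass style, with Utters as the utterance-comparison helper described in the reference pages:

    #include "Greenhouse.h"

    class KeyResponder  :  public Thing
    { public:
      bool wireframe;            // some bit of app state the key toggles

      KeyResponder ()  :  wireframe (false)
        { }

      void Blurt (BlurtEvent *e)
        { if (Utters (e, "w"))   // compare the event's Utterance to "w"
            wireframe = ! wireframe;
        }
    };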

Pointing Events

Pointing Events are continuous, location-based events. For instance, moving a mouse, or using a finger or a Wiimote to point in space. Pointing events are about directing attention along some line in space. In the case of a mouse, that line is always perpendicular (that is, normal) to the screen plane. In the case of a hand, it might be in any direction.

Pointing events always contain the physical location of the device or hand and usually (but not always) an “aim” vector indicating where the user is pointing.

There are five stages, or states, of pointing:

  1. PointingAppear: the device or hand pose is found. You can configure which hand poses are of interest.
  2. PointingMove: the device or pose moved in space (maps to conventional mouse-move events)
  3. PointingHarden: an action indicating selection (maps to conventional mouse-down events). For hand poses, this is usually a secondary pose that you define to be different from the initial pose. A typical pair might be open hand/closed fist.
  4. PointingSoften: a return from the selection state back to the default state
  5. PointingVanish: the device or hand pose is gone

You can learn more about the syntax for handling pointing events in the reference pages. Below, we discuss how some of the various input types generate pointing events differently.
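
As a minimal sketch of all five stages, assuming the Thing-subclass callback style used in the Greenhouse samples (the exact signatures live in the reference pages):

    #include "Greenhouse.h"

    class Pointable  :  public Thing
    { public:
      bool selected;

      Pointable ()  :  selected (false)
        { }

      void PointingAppear (PointingEvent *e)
        { }  // a device or configured hand pose was found

      void PointingMove (PointingEvent *e)
        { }  // location (and usually aim) travel with the event

      void PointingHarden (PointingEvent *e)
        { selected = true; }   // selection: mouse-down or secondary pose

      void PointingSoften (PointingEvent *e)
        { selected = false; }  // back to the default state

      void PointingVanish (PointingEvent *e)
        { selected = false; }  // the device or pose is gone
    };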

Displacement Events

Displacement Events are another kind of continuous, location-based event. But where pointing events are about directing attention along some line in space, displacement events are about how far some interesting object or pose has moved.

Displacement events are useful for creating scrolling or zooming behaviors. For example, in the case of a map, you might have the user make some hand pose, then move the hand back and forth to pan the map correspondingly.

There are three stages, or states, of displacement:

  1. DisplacementAppear: the device or hand pose is found. You can configure which hand poses are of interest.
  2. DisplacementMove: the device or pose moved in space. The event reports both the distance the hand has moved since the last DisplacementMove (LocOffset) as well as how much it has moved since the DisplacementAppear event occurred (LinearOffset).
  3. DisplacementVanish: the device or hand pose is gone

Currently, displacement events are generated only by 3D sensors and the hantenna pipeline. You can learn more about the syntax for handling displacement events in the reference pages.
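
Here’s a minimal sketch of a displacement handler. The LocOffset () accessor form and the IncTranslation () call are assumptions; see the reference pages for the real API.

    #include "Greenhouse.h"

    // Pans itself by the per-move delta carried on each displacement event.
    class Pannable  :  public Thing
    { public:
      void DisplacementAppear (DisplacementEvent *e)
        { }  // a configured pose was found

      void DisplacementMove (DisplacementEvent *e)
        { IncTranslation (e -> LocOffset ()); }  // pan by the per-move delta

      void DisplacementVanish (DisplacementEvent *e)
        { }  // the pose is gone
    };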

Keyboard

Interaction with a keyboard produces Blurt events. Each time a user presses a key on the keyboard, a Blurt (KeyDown) and a BlurtVanish (KeyUp) event are generated. These events are given a provenance that is specific to the operating system running the application, e.g. “keyboard/osx” or “keyboard/linux”.
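
If your app listens to several event sources at once, the provenance string is how you single out the keyboard. A sketch (the Provenance () accessor and the string comparison are assumptions):

    void Blurt (BlurtEvent *e)
    { Str prov = e -> Provenance ();
      // keyboard provenances are OS-specific
      if (prov == "keyboard/osx"  ||  prov == "keyboard/linux")
        { /* this blurt came from a keyboard */ }
    }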

Whenever a Greenhouse application is running and there’s a keyboard present, it re-publishes all the keyboard activity into a pool called “keyboard”. (You can see this pool by listing the pools with the “p-list” command at a terminal.) Having keyboard events shared to the keyboard pool can be handy; for example, other applications, even on other machines, could respond to events from the same keyboard: one keyboard driving multiple applications on multiple machines.

To see the kind of messages generated in the keyboard pool, enter “peek keyboard” at a terminal, run a Greenhouse program, and use the keyboard. For more on pools, see the reference pages.

Mouse

Interaction with a mouse produces pointing events. The mouse is treated as a persistent, always-present device and therefore does not produce PointingAppear or PointingVanish events. The PointingMove, PointingHarden, and PointingSoften events map to the traditional MouseMove, MouseDown, and MouseUp events.
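
That mapping means ordinary click-and-drag logic lives in the harden/move/soften callbacks. A sketch, with the same caveats about exact signatures as above:

    class Draggable  :  public Thing
    { public:
      bool dragging;

      Draggable ()  :  dragging (false)
        { }

      void PointingHarden (PointingEvent *e)
        { dragging = true; }    // MouseDown

      void PointingMove (PointingEvent *e)
        { if (dragging)
            { }                 // MouseMove: drag logic goes here
        }

      void PointingSoften (PointingEvent *e)
        { dragging = false; }   // MouseUp
    };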

As with the keyboard, the mouse provenance string includes the OS, e.g. “osx-mouse” or “linux-mouse”.

Whenever a Greenhouse application is running and there’s a mouse present, it re-publishes all the mouse activity into a pool called “mouse”. (You can see this pool by listing the pools with the “p-list” command at a terminal.) Having mouse events shared to the mouse pool can be handy; for example, other applications, even on other machines, could respond to events from the same mouse: one mouse driving multiple applications on multiple machines.

To see the kind of messages generated in the mouse pool, enter “peek mouse” at a terminal, run a Greenhouse program, and use the mouse. For more on pools, see the reference pages.

Mobile Device

By default, Greenhouse apps can take input from mobile devices running the free g-speak app (iOS and Android). The app generates blurt and pointing events.

The blurt events are fairly straightforward; a set of touch gestures recognized by the device OS is forwarded along to your application: swiping the screen left, right, up, or down, and so on. You can read about the syntax of these Swipe Events here. You can learn more about connecting your mobile device to your applications here.

Pointing events are an approximation. When the user presses and holds one finger on the app and then waves it around in the air, the app guesses, based on gyroscope and accelerometer measurements, how its orientation has changed, and sends along a stream of pointing events. When the user lifts the finger, the pointing event stream stops. It’s not a perfect pointer by any means, but for some use cases it can be handy. Note that mobile pointing is highly sensitive to Wi-Fi interference.

When a new mobile device connects, a PointingAppear event is generated. When the user presses and holds a finger to the screen, PointingMove events get generated with a location and aim. PointingHarden and PointingSoften events are generated when the user double-taps the screen. PointingVanish is called when the connection to the mobile device is lost or times out.
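
One simple use of that lifecycle is keeping a count of connected devices; a sketch using only the callbacks named above:

    class DeviceCounter  :  public Thing
    { public:
      int live_devices;

      DeviceCounter ()  :  live_devices (0)
        { }

      void PointingAppear (PointingEvent *e)
        { live_devices++; }   // a new device connected

      void PointingVanish (PointingEvent *e)
        { live_devices--; }   // connection lost or timed out
    };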

Each mobile device has its own unique id, which gets used as the provenance associated with all events generated from that device. For iOS devices, the provenance will look something like “iOSc7g9e5”, and for Android devices it will look something like “android-b28339bb-116f-4b77-b1cd-b07f463af6cb”.

All data from the mobile device goes into a pool called “remote”. To see the kind of messages generated in the remote pool, enter “peek remote” at a terminal, run a Greenhouse program, run the g-speak mobile app, connect it to your machine, and start using it. For more on pools, see the reference pages.

Hantenna

Hantenna is the input pipeline we provide with Greenhouse for pulling events from 3D depth sensors like the Microsoft Kinect for Xbox. The pipeline does image processing on the data from the Kinect’s depth camera to detect hand poses. A Greenhouse app works with the pipeline to register:

  1. Which of these poses it’s interested in
  2. Whether it wants each pose reported as a blurt, a pointing, or a displacement event

For quick and easy development, Greenhouse apps register for a default set of poses, but you can override this and set up hantenna as you like.

Below is a diagram describing all the hand poses hantenna can detect. You can learn more about connecting your 3D sensor to your applications here.

Hantenna hand poses

Default Registered Gestures

The default gestures that Greenhouse apps register for are simplified Fist, Finger, and Victory events. These events have their own custom callbacks in the API (such as FistMove or Victory).

The fist gesture is defined as the sequence of a user holding the palm open and facing the screen, closing the hand into a fist as if grabbing something in the air, and then moving the fist around (figures d & b). Under the covers, it’s a pointing event, but for the sake of simplicity a number of the stages are discarded. When the open palm is found, Greenhouse calls FistAppear. As the hand moves in the palm-open state, Greenhouse discards the PointingMove events and waits for the hand to close. When the hand closes, Greenhouse begins passing the PointingMove events along by calling FistMove. When the user drops out of the two poses, FistVanish is called.

The finger gesture is defined as the gesture one makes by holding up the index finger (figure c) and moving the hand around. This gesture results in pointing events, but unlike the mouse or the fist action, it has no concept of hardening. Greenhouse therefore hides the Appear and Vanish events, so you simply get move events in the form of calls to FingerMove.

Finally, the victory gesture is defined as the gesture one makes by holding up the index and middle fingers in a “v” formation and moving the hand towards and away from the screen (figure a). This gesture results in displacement events, which the Greenhouse SDK abstracts into Victory events, hiding the Appear and Vanish events as with the finger gesture. The Victory event comes with a displacement vector containing the displacement of the hand since the previous Victory event.
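
Pulled together, the default-gesture callbacks look roughly like this. This is a sketch: the parameter types, and the accessor for the Victory event’s displacement vector, are assumptions to check against the reference pages.

    #include "Greenhouse.h"

    class GestureDemo  :  public Thing
    { public:
      void FistAppear (PointingEvent *e)
        { }  // open palm found; waiting for the hand to close

      void FistMove (PointingEvent *e)
        { }  // closed fist moving: grab/drag behavior goes here

      void FistVanish (PointingEvent *e)
        { }  // the hand dropped out of both poses

      void FingerMove (PointingEvent *e)
        { }  // index finger moving; no harden/soften for this gesture

      void Victory (DisplacementEvent *e)
        { // hand displacement since the previous Victory event; the
          // LocOffset () accessor here is an assumption
          IncTranslation (e -> LocOffset ());
        }
    };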

Just to be clear, none of these default events results in a call to PointingAppear, PointingVanish, etc. None of those callbacks are invoked until you register for your own gestures of interest, described in the next section.

The seismo sample, in /opt/oblong/greenhouse/samples (and also in the Greenhouse samples repo on GitHub), uses the default events alongside some other application-defined gestures.

Application Defined Gestures

Each of the hand poses seen in the diagram above can be used by itself or in combination with others to register for blurt, pointing, or displacement events. In the installed sample seismo, we register for the default events along with the LShapePose (figure e) as a one-handed blurt event that toggles the view of the globe. You can learn more about the syntax of registering your own non-default gestures in the reference pages.
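
Once a pose is registered as a blurt, it arrives through the same Blurt callback as any other one-shot event; only the wordstamp differs. A sketch of the seismo-style toggle just described (the "lshape" utterance string and the globe_visible flag are ours, for illustration; the registration call itself is covered in the reference pages):

    void Blurt (BlurtEvent *e)
    { // after registering LShapePose as a one-handed blurt, toggle the
      // globe whenever it shows up
      if (Utters (e, "lshape"))
        globe_visible = ! globe_visible;
    }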

Leap

We’ve written some listener code that transforms the points and rays the Leap SDK gives us into (pseudo-)pointing and other events. This is how Greenhouse Leap support works by default, but the behavior can be changed by editing LeapListener.h and LeapPointing.h. You can learn how to get your Leap set up here.

Pointing Events

Each detected finger has its own provenance and generates its own pointing events: PointingAppear(), PointingMove(), and PointingVanish(), but not PointingHarden() or PointingSoften(), because the Leap can’t distinguish different hand poses.
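
Because each finger points independently, a handler may hear from several provenances at once. Here’s a sketch that follows only the first finger it sees (the Provenance () accessor and Str comparison are assumptions):

    class FirstFinger  :  public Thing
    { public:
      Str leader;         // provenance of the first finger we saw
      bool have_leader;

      FirstFinger ()  :  have_leader (false)
        { }

      void PointingAppear (PointingEvent *e)
        { if (! have_leader)
            { leader = e -> Provenance ();
              have_leader = true;
            }
        }

      void PointingMove (PointingEvent *e)
        { if (have_leader  &&  leader == e -> Provenance ())
            { }  // follow this finger; ignore the others
        }

      void PointingVanish (PointingEvent *e)
        { if (have_leader  &&  leader == e -> Provenance ())
            have_leader = false;
        }
    };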

Leap-Specific Events

For information about Leap-specific events, see the Leap Input API.