A New Model for Tabletop Interaction

Part III

In the first two parts of this thesis we explored interaction styles for tabletop computing, based on a literature review (cf. Chapter 2) followed by our own explorations and the analysis thereof (Part II). As discussed in Chapter 5, we have seen that both gesture-based and tangible (or hybrid) interaction are viable options for tabletop computing applications. We have also uncovered limitations of both paradigms. Above all, aspects of physicality in our prototypes have proven to be valuable: the real-world resemblance of on-screen elements, such as the pseudo-physical gestures and behavior in the case of BrainStorm (cf. Chapter 3), and the physical affordances of tangible artifacts in the case of Photohelix (cf. Chapter 4). However, both approaches were limited by a lack of flexibility in the interface, caused by the pre-programmed or scripted behavior of virtual interface elements.

In the following part we discuss a new model for tabletop interaction that aims to combine the important aspects of physicality and flexibility in a coherent way: a model that increases the interaction fidelity of direct-touch interfaces beyond the "finger-as-cursor" model while maintaining a maximum of flexibility and ridding the interface of scripted and static behavior. This model is based on two fundamental concepts:

First, allowing users to make use of their full manual dexterity when interacting with virtual worlds, by enabling rich interactions with multiple fingers and whole hands, with both hands at the same time, and also through physical objects.

Second, making virtual objects feel more real, so that users can apply real-world knowledge to manipulate virtual objects and choose among various manipulation strategies as they see fit. Realistic behavior of virtual objects may also make interfaces easier to explore and their functionality easier to discover. In our interaction model the rules that apply to virtual objects are similar to those of the real world.

One important precondition for this new model is hardware capable of sensing these rich forms of direct-touch interaction. Our own early explorations were carried out on tabletop hardware with sensing capabilities limited to two contact points. Furthermore, the SMART Boards [SMA03] we used only provide a single x,y coordinate for each touching object, thus poorly approximating the shape and size of contacts – they do not differentiate between a flat hand and a fingertip, for example. These limitations surfaced in various ways during our explorations.

For example, they effectively limited Photohelix to a single-user application, and they prevented users from assuming a more natural posture for handwriting in BrainStorm (i.e., holding down the sheet of paper with the flat of one hand while writing with the other).

Emerging tabletop hardware promises to address these limitations by simultaneously sensing multiple points of contact (such as fingertips), whole hands, and even the outlines of objects – both hands and other physical objects – in contact with the display surface. This enables multiple users to interact with the system simultaneously, and it enables individual users to interact in more natural and flexible ways, applying different manipulation strategies similar to interactions with the real world. To provide an overview of available hardware platforms and approaches to rich interactive-surface sensing, we discuss solutions proposed and demonstrated by the research community, and products now being offered by various companies, in Chapter 6.
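To make the contrast with single-point sensing concrete, the sketch below shows one plausible per-frame contact descriptor that such richer hardware might report. The field names, the ellipse approximation and the classification thresholds are illustrative assumptions, not the data format of any particular device.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Contact:
    """One sensed contact, as richer surface hardware might report it.

    Single-point hardware (such as our early SMART Boards) collapses all
    of this to just `centroid`; shape information is what lets software
    tell a fingertip from a flat hand or a tangible object.
    """
    contact_id: int                     # stable id for tracking across frames
    centroid: Tuple[float, float]       # position in surface coordinates
    area: float                         # contact area in mm^2
    major_axis: float                   # oriented bounding ellipse, mm
    minor_axis: float
    orientation: float                  # ellipse angle in radians
    outline: List[Tuple[float, float]]  # contour polygon of the contact

def classify(c: Contact) -> str:
    """Crude size-based classification; thresholds are made up for illustration."""
    if c.area < 150.0:
        return "fingertip"
    if c.area < 4000.0:
        return "hand"
    return "object"
```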

Following the related work, we discuss our own interactive surface prototype, capable of sensing multiple simultaneous contacts by fingers, hands and objects, in Section 6.2, followed by a comparison and assessment of various sensing approaches in Section 6.3.

In Chapter 7 we then discuss our interaction model, which combines rich input from interactive surface hardware with a 3D environment powered by a gaming physics simulation. We describe various iterations of the model and discuss trade-offs between implementation alternatives. We showcase some of the compelling interactions enabled by this new model. Finally, we discuss results and observations from a lab-based user study, which revealed interesting insights into users' reactions to our new model for tabletop interaction.
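As a rough illustration of the idea (not the implementation described in Chapter 7), the sketch below couples sensed contacts to a virtual object through friction-like rules instead of scripted gesture handlers; the constants and the drag model are assumptions chosen for brevity.

```python
from dataclasses import dataclass

@dataclass
class Body:
    """A virtual object governed by simple physical rules."""
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0
    radius: float = 40.0

def step(body: Body, contacts, dt: float = 1 / 60,
         friction: float = 12.0, damping: float = 2.0) -> None:
    """Advance one frame; contacts are (x, y, vx, vy) proxy objects."""
    # Friction: overlapping contacts drag the body toward their velocity.
    for (cx, cy, cvx, cvy) in contacts:
        if (cx - body.x) ** 2 + (cy - body.y) ** 2 <= body.radius ** 2:
            body.vx += friction * (cvx - body.vx) * dt
            body.vy += friction * (cvy - body.vy) * dt
    # Inertia with damping: the body keeps coasting once released.
    body.vx -= damping * body.vx * dt
    body.vy -= damping * body.vy * dt
    body.x += body.vx * dt
    body.y += body.vy * dt

box = Body(x=100.0, y=100.0)
for _ in range(60):                             # one second of simulation
    step(box, [(100.0, 100.0, 50.0, 0.0)])      # one finger sliding right
print(round(box.x), round(box.vx))              # the body has been dragged along
```

The same rules produce dragging, flicking and coasting without per-gesture code, which is precisely the flexibility argument made above.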

During our evaluations of this model we discovered one major limitation. Because sensing in most surface hardware is optimized toward detecting on-surface contact and discriminating it from fingers and other objects above the surface, the sensed data is usually 2D. In contrast, our proposed model is based on a gaming physics engine which is inherently 3D. Here the mismatch of input and output dimensionality is a fundamental limitation in manipulating objects using the third dimension. When interacting in the real world we routinely manipulate objects in 3D – even on tables – such as leafing through the pages of a book, stacking objects on top of each other or holding sheets of paper in various ways to reveal or hide content from others. The mismatch of sensed input and displayed graphics makes some of the simplest real-world actions, such as stacking objects or placing them in containers, difficult if not impossible using these interfaces.
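One common way to bridge this gap (a workaround, not a resolution of the dimensionality mismatch) is to ray-cast each 2D contact into the 3D scene and place its physics proxy on the first surface hit. The sketch below intersects such a ray with the table plane; the specific camera setup and the hypothetical `ray_hits_plane` helper are simplifying assumptions.

```python
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

def ray_hits_plane(origin: Vec3, direction: Vec3,
                   plane_point: Vec3, plane_normal: Vec3) -> Optional[Vec3]:
    """Intersect a ray (from the virtual camera through a 2D contact)
    with a plane in the 3D scene; returns the hit point or None."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(dot) < 1e-9:                   # ray parallel to the plane
        return None
    diff = tuple(p - o for p, o in zip(plane_point, origin))
    t = sum(d * n for d, n in zip(diff, plane_normal)) / dot
    if t < 0:                             # plane lies behind the camera
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

# A contact at a given screen position becomes a ray; the proxy is placed
# where that ray first meets scene geometry -- here, the table plane z = 0.
hit = ray_hits_plane(origin=(0.0, 0.0, 10.0), direction=(0.1, 0.2, -1.0),
                     plane_point=(0.0, 0.0, 0.0), plane_normal=(0.0, 0.0, 1.0))
print(hit)
```

Because the proxy always lands on a surface, actions that require leaving the surface, such as lifting an object onto a stack, remain out of reach; this is exactly the limitation motivating Chapters 8 and 9.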

In Chapter 8 we discuss several emerging display and sensing technologies that make it possible to simultaneously project a stable image onto the surface and image through the display, capturing the users' hands at a greater distance than is possible in traditional setups. This allows us to develop techniques that include the space above the tabletop in the interaction with the digital realm.

This part of the thesis is then concluded by a chapter on a technique, based on these emerging display technologies, developed to enable interactions on the surface and above the surface (Chapter 9). Our goal has been to define a technique that feels as natural and as direct as possible, giving users the sense that they are actually lifting objects off the surface. We chart the evolution of this technique, implemented on two rear projection-vision prototypes. Both use special projection-screen materials to allow sensing at significant depths beyond the display surface. Existing and new computer vision techniques are employed to detect hand gestures and poses, allowing object manipulation in 3D.
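As one example of the kind of vision technique involved, a pinch gesture can be detected as a hole that appears in the hand's silhouette when thumb and forefinger close. The OpenCV sketch below is a minimal illustration of that idea, with an arbitrary area threshold; it is not the exact pipeline used in Chapter 9.

```python
import cv2
import numpy as np

def find_pinches(hand_mask: np.ndarray, min_hole_area: float = 200.0):
    """Detect pinch gestures as holes in a binary hand silhouette.

    When thumb and forefinger touch, the enclosed background region
    becomes an inner contour (a hole) of the hand blob.
    """
    contours, hierarchy = cv2.findContours(
        hand_mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    pinches = []
    if hierarchy is None:
        return pinches
    for i, contour in enumerate(contours):
        is_hole = hierarchy[0][i][3] != -1        # inner contours have a parent
        if is_hole and cv2.contourArea(contour) >= min_hole_area:
            m = cv2.moments(contour)
            if m["m00"] > 0:                      # hole centroid = pinch position
                pinches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return pinches
```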

Chapter 6
