Chapter 5 Discussion

In Chapters 3 and 4 we discussed two interaction paradigms typical of many tabletop applications: gesture-based [Min84, ZM98] and tangible interaction [IU97, UI00]. Tabletop settings often lack keyboards and mice; performing complex commands can therefore be challenging [SRF+06, MCP+06]. Using gestures for command invocation seems promising [WB03], and many interesting interaction techniques and applications based on this principle have been demonstrated [MHPW06, WB03, WSR+06, FS05, WMW09]. The gesture-based interaction paradigm is a promising candidate to address the special design constraints imposed by tabletop characteristics, such as orientation (of text and other content), coordination of (multi-user) interaction [KCSG03], and the lack of standard input devices. Gesture-based input allows for immediate interaction, the possibility to directly touch information, and a spatial coupling of input and output.

The possibility to design applications without abstract menu or command interfaces also seems promising in terms of learnability and general ease of use. However, many gesture-based applications have been designed ad hoc, and little knowledge is available about how to design such interfaces. Furthermore, studies have revealed that overly complex gesture sets can be challenging for the user and hard to remember, especially for interactions that are not performed on a regular basis [HTB+07].

Our own studies of the gesture-based interaction style, using a prototypical application for collaborative problem solving, confirmed some of the assumptions often claimed in favor of gesture-based interaction, in particular the low learning threshold for novice users. Another positive aspect of this interaction style is the possibility to design more organic or pseudo-physical interfaces and thereby create a tighter coupling between the digital and physical domains. However, our experiences with the BrainStorm system (cf. Section 3.1, [HTB+07]) and other gesture-based prototypes revealed several issues with this approach. Two aspects seem especially problematic.

First, due to the lack of feed-forward (the visualization of possible commands before gesture registration), many participants had difficulty memorizing certain gestures, a problem that is likely to become even more severe in systems with higher complexity and more commands.

Second, and based on our observations the most severe issue, is the lack of flexibility. The need to recognize a pre-defined set of gestures and to map these gestures to scripted system responses seems fundamentally problematic. One might argue that our gestures were simply not designed well enough, and some of the problems we encountered in terms of user frustration could indeed be mitigated through additional feedback or better interface design. Yet although recent work by Wobbrock et al. [WMW09] has yielded interesting results on user-defined gestures for tabletop computers, it remains debatable whether statically defined gestures and scripted gesture-command mappings are a good fit as an interaction style, especially when compared to the rich, flexible, and often dynamically adaptive ways in which we engage with real-world objects and use them to solve problems.
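To make these two issues concrete, the following is a minimal sketch in Python. All names (COMMANDS, dispatch, feed_forward) are hypothetical illustrations, not the BrainStorm implementation; the sketch merely shows the shape of a statically scripted gesture-to-command table, together with the kind of feed-forward query our prototypes lacked:

    from typing import Callable

    # The recognizer can only emit labels from this closed set; both the
    # vocabulary and the responses are fixed a priori by the designer.
    COMMANDS: dict[str, Callable[[dict], None]] = {
        "pigtail":    lambda item: item.update(deleted=True),
        "lasso":      lambda item: item.update(selected=True),
        "double_tap": lambda item: item.update(open=True),
    }

    def dispatch(gesture_label: str, item: dict) -> None:
        """Scripted mapping: anything outside the predefined set is lost."""
        handler = COMMANDS.get(gesture_label)
        if handler is not None:
            handler(item)
        # else: a perfectly sensible manipulation the designer did not
        # anticipate produces no response at all.

    def feed_forward() -> list[str]:
        """Surface the possible commands *before* gesture registration,
        turning recall into recognition."""
        return sorted(COMMANDS)

    note = {"deleted": False, "selected": False, "open": False}
    print(feed_forward())         # ['double_tap', 'lasso', 'pigtail']
    dispatch("pigtail", note)     # scripted: works as designed
    dispatch("tear_apart", note)  # natural but unscripted: silently ignored

The closed dictionary is precisely the rigidity discussed above: the set of recognizable inputs and their meanings is frozen at design time, whereas real-world object use is open-ended.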

Tangible interaction [IU97, UI00] promises richer interactions that make use of manual dexterity and motor skills. The three-dimensional nature of many physical objects affords specific ways of use and as such helps to mitigate learnability and long-term memorization issues [Cla00].

Many researchers have speculated that motor memory yields efficiency benefits over other interaction styles (e.g., [RS00]), although experimental evidence for this is lacking as of now.

Finally, many positive aspects of embodied and social interaction (e.g., access allocation, shareability) have been attributed to tangible interaction [HB06, BSK+05]. One drawback of tangible interaction is the need for special-purpose devices as well as sensing hardware and software. In order to optimally exploit physical affordances, one would have to custom-build many different devices for every conceivable application. Furthermore, the flexibility and richness of interaction, undoubtedly a quality of physical objects, can be severely limited by the sensing scheme utilized. Interaction with real-world objects is often not restricted to simple functional modifications but consists of a myriad of nuances. In some situations, for example, it is not enough to sense that an object has been touched; the system must also sense how it has been touched in order to correctly deduce the user's intention. Even if sensing these interactions were not a problem, the recorded data still needs to be interpreted and mapped to some kind of response on the system's side. These mappings are usually designed and implemented a priori and might not appropriately capture the user's intention in every situation.
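As an illustration of this sensing and interpretation bottleneck, consider the following minimal sketch, again with hypothetical names (TangibleEvent and interpret are illustrative and not part of Photohelix or any toolkit). Because the assumed sensing scheme reports only that an object was touched, every nuance of how it was touched collapses onto the same scripted response:

    from dataclasses import dataclass

    @dataclass
    class TangibleEvent:
        object_id: str
        touched: bool  # all this sensing scheme can observe;
        # grip force, hand posture, hesitation, bimanual use: not captured

    def interpret(event: TangibleEvent) -> str:
        """A-priori mapping from sensed state to system response. Since only
        *that* the object was touched is sensed, not *how*, a firm grab, a
        light brush, and an accidental nudge all trigger the same reaction."""
        return "select" if event.touched else "idle"

    print(interpret(TangibleEvent("lens-1", touched=True)))  # -> select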

Our experiences with Photohelix [HBB07, HK09] and other hybrid prototypes [TKR+08, Hil07, HWTB07] revealed interesting but not entirely unambiguous results. Our designs were received well by user study participants, and the tangible aspects were commented on as being “natural” to use, easy to learn, and also “more fun” than purely digital interfaces. We could also observe how the use of tangible objects enables richer manipulations as well as re-purposing or re-designing of interaction elements, although only to a limited degree, restricted by the limitations of the virtual interface. It is noteworthy, however, that across several user studies we could not unearth any experimental evidence for or against some of the claimed benefits of tangible input devices on digital tabletops (listed in Section 4.1) over pure direct-touch interfaces. Nevertheless, in Section 4.5 we discussed and showcased several interesting aspects of physicality in hybrid interfaces such as the Photohelix that warrant further investigation of this interaction paradigm.

The tangible interaction paradigm suffers from limitations similar to those of the gesture-based interaction style discussed in Chapter 3 with respect to flexibility. Although we have seen how the physical handle enabled richer manipulations and various strategies for how exactly an object can be manipulated, in the end these manipulations have to be sensed and mapped to events in the graphical user interface. In other words, the recognition and interpretation bottleneck persists.

Mechanical constraints and possibly actuation can mitigate this problem somewhat, but they cannot do away with the more fundamental problem of scripted and predefined system behavior. This problem stems from the asymmetry in knowledge about the system's innards: while the developer knows (and defines) the rules of system behavior, the user has to rely on system feedback to make sense of these rules. In the real world, we have all developed an understanding of the laws of physics, through schooling and through first-hand experience; it is therefore much easier to predict the behavior of individual objects and of more complex mechanisms.

