
4. Interfacing Humans with Computers

4.1. Observation and Analysis of Human Action in Real-life Situations


Figure 4.1.: Chatting people at the Grand Opening of the CITEC Graduate School in July 2009. An example of human-human interaction.

Graphical user interfaces commonly offer interface parts like sliders or buttons. These are popular and effective control elements for machines and probably originate from their physical counterparts. But reality has more to offer regarding control, interaction, and manipulation: while information is ubiquitous in Western society and operating on and with this information is an essential part of our day-to-day work, only few of these techniques have found their way into interface design.

In the following, I will describe two scenarios in which people operate in and on their natural environment. These observations, through their richness, are intended to argue for the importance of research in human-computer interfaces and, in particular, for a richer and broader approach to human-data interfacing technology. Such interfaces should be designed to actually make use of the insights gained by observing people's actions, reactions, and interactions with reality.

4.1.1. Human-Human Interaction

Inspired by the image of chatting people shown in Figure 4.1, we can identify two types of building blocks of social interactions: a sensing and an acting part. While sensing is established by the five human senses – hearing, sight, taste, smell, and touch – acting can be differentiated into speech, gesture, facial expression, and action.

Figure 4.2.: Finger naming. [Schematic omitted: the fingers of the left hand (L) are labelled L1 to L5, those of the right hand (R) R1 to R5, from thumb to little finger.]

This complex system of interaction between humans is currently only understood in parts [Par37] [GDA91]. However, it is well known that there is a direct coupling between interacting partners, established by common interaction building blocks such as coordinated actions and reactions [GDA91]. The observation of such social behaviour can lead to deep insights for complex interfacing systems with algorithmic functionality. On a symbolic level, the artificial intelligence program Eliza by Weizenbaum [Wei66] managed to mirror social behaviour such that it was difficult to determine whether one was chatting with a human or a computer. This is the intended behaviour that should also be possible on a subsymbolic level.
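To make this mirroring principle concrete, the following minimal sketch reflects a user's statement back as a question. The rules and reflection table are hypothetical stand-ins, not Weizenbaum's original script [Wei66]; they merely illustrate pattern matching with pronoun reflection.

```python
import re

# Pronoun reflection: mirror the user's statement back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; {0} receives the reflected fragment.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Answer with the first matching rule, reflecting captured fragments."""
    for pattern, template in RULES:
        match = re.fullmatch(pattern, utterance.lower().strip(".!?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "I see."

print(respond("I feel my work is ignored."))
# -> Why do you feel your work is ignored?
```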


Figure 4.3.: Video stills in which Andy Goldsworthy explores leaves while crafting an art piece [riv]. He sits under the tree from which the leaves originate, assembles the artwork, and places it back at the tree. The second row shows stills from the sequence that is analysed in the main text.

4.1.2. Manipulating Objects

What can we take for human-computer interface design from our day-to-day actions on objects in reality, especially in natural environments? To investigate this question, I analysed a short video sequence in which the British sculptor, photographer, and environmentalist Andy Goldsworthy is shown working on an art piece that is entirely made of leaves and thorns [riv] (see also Figure 4.3). Goldsworthy's leaves are a good example for the large potential of coming tangible data exploration systems for three reasons: First, Goldsworthy is a skilled person exploring material he is generally familiar with. Second, details of the material, i.e. the particular leaves in his hand and their actual configuration, are unknown to him before he starts to explore them. Third, while the activity of sorting and selecting leaves is unfamiliar to most observers (including me), the material itself is sufficiently well known to make the exploration process comprehensible. This aspect supports an observation that is not too strongly bound to high-level symbolic meaning, as would be the case when analysing a worker painting a wall.

In the examined sequence, Goldsworthy works his way through a pile of leaves next to him. During this process he masters very complex operations with his fingers; for example, he feels their quality and selects some to use in his sculpture. I claim that the analysis of his rich interaction patterns is beneficial for interface design considerations concerning future data exploration systems. I therefore subjected the video clip to a qualitative visual analysis. The schema shown in Figure 4.2 depicts the finger coding used below.
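To make this finger coding explicit, the following sketch encodes the schema of Figure 4.2 as a small data structure. The class and constant names are my own illustrative choices, not part of an established notation.

```python
from dataclasses import dataclass
from enum import Enum

class Hand(Enum):
    L = "left"
    R = "right"

# Index 1 is the thumb, 5 the little finger (cf. Figure 4.2).
FINGER_NAMES = ("thumb", "index", "middle", "ring", "little")

@dataclass(frozen=True)
class Finger:
    hand: Hand
    number: int  # 1..5, thumb to little finger

    @property
    def code(self) -> str:
        return f"{self.hand.name}{self.number}"  # e.g. "L1", "R2"

    @property
    def name(self) -> str:
        return f"{self.hand.value} {FINGER_NAMES[self.number - 1]}"

L1 = Finger(Hand.L, 1)  # the left thumb, which holds leaf a below
print(L1.code, "=", L1.name)  # -> L1 = left thumb
```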

Flip gesture: At 49.5s, the flip gesture is a fast scan through the pile of leaves. The gesture starts with leaves in both hands: one leaf a is in the left hand (L), held with the thumb (L1), while two other leaves (b and c) are stacked and held with R1 in the right hand (R).

1. Let go of leaf b with R1.


2. Move L1 between leaves b and c.

3. Align L1 with the index finger such that leaf b now lies aligned with leaf a.

4. Move L2 such that leaves a and b lie together and are held between L2 and L3.

The complete gesture takes about 2 seconds and is often repeated to scan through the pile, sometimes in the following variants.
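The four steps above can also be transcribed as annotation records, as in the following sketch. The operation names (RELEASE, INSERT, ALIGN, GRASP) are hypothetical labels for the observed manipulation primitives, not an established gesture vocabulary.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Op(Enum):
    RELEASE = auto()  # let go of a leaf
    INSERT = auto()   # move a finger between leaves
    ALIGN = auto()    # bring leaves into alignment
    GRASP = auto()    # hold leaves between two fingers

@dataclass(frozen=True)
class Step:
    op: Op
    fingers: tuple[str, ...]  # finger codes from Figure 4.2
    leaves: tuple[str, ...]   # leaf labels from the description

FLIP_GESTURE = (
    Step(Op.RELEASE, ("R1",), ("b",)),         # 1. let go of leaf b with R1
    Step(Op.INSERT, ("L1",), ("b", "c")),      # 2. move L1 between b and c
    Step(Op.ALIGN, ("L1", "L2"), ("a", "b")),  # 3. align b with leaf a
    Step(Op.GRASP, ("L2", "L3"), ("a", "b")),  # 4. hold a and b with L2, L3
)
```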

Variants: At 63.5s, Goldsworthy performs the flip gesture with a stored leaf: he picks a leaf and stores it between L2 and L3. This alters the gesture such that the action previously performed by L2 is now executed by the combination of L2 and L3, whereas L4 is now responsible for the movement previously performed by L3.
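This variant can be read as a remapping of finger roles over the base gesture. The following sketch, reusing the Step record from above, derives it with a hypothetical substitution table; the hand-swap variant at 65.0s below could be expressed analogously with a table exchanging L and R codes.

```python
# Hypothetical substitution table: L2's role passes to L2+L3, L3's to L4.
ROLE_REMAP = {"L2": ("L2", "L3"), "L3": ("L4",)}

def remap(step: Step) -> Step:
    """Derive a variant step by substituting finger roles."""
    fingers = tuple(
        code
        for finger in step.fingers
        for code in ROLE_REMAP.get(finger, (finger,))
    )
    return Step(step.op, fingers, step.leaves)

VARIANT_63_5S = tuple(remap(step) for step in FLIP_GESTURE)
```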

At 65.0s: Flip gesture with changed hand roles. Goldsworthy alters his flip gesture such that the right hand (R) now takes the role of the left (L) and vice versa.

Index finger flip: At 77.5s, Goldsworthy puts R2 between the leaves and flips it repeatedly from right to left, each time flipping one or more leaves from one side to the other. Each iteration takes about half a second. This gesture looks easy to perform; it is supported by the relatively large gaps between the single leaves caused by their waviness.

Selection and texture test: At 73.5s, Goldsworthy takes up a leaf with his right hand (R) while slightly rubbing it between R1 and R2. He may feel its structure and texture and then decides to keep it.

At 79.5s: He takes and selects the leaf next to L2. The leaf is not visible (not even to Goldsworthy); he therefore relies solely on his tactile sense for the selection process.

Remarks: Based on this analysis, I argue that Goldsworthy used his experience of the specific structure of leaves not only for his sculptures, but also for their formation and building process.

Observations: Action and interaction in reality, possibly incorporating other people or objects, is often a complex endeavour. Goldsworthy's use of his fingers to sort, identify, and select leaves gives us a hint of these circumstances. The complexity and variety of his movements, though, are fundamentally different from the common manipulations we physically apply to interfaces designed for data processing. Already in this short example, four different manipulation gestures can be identified. Sorting the leaves involves many subconsciously performed tasks and analyses of peripheral information: not only their size and colour are of interest, but also their texture, material, quality, and stiffness. It is difficult to capture this highly direct coupling of sensing and understanding in one word; the German word begreifen, a polyseme meaning both to touch and to understand, may fit best.
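As a sketch of how such peripheral information might be captured in a tangible data exploration system, the following record bundles the analysed properties into a multimodal feature vector. Field names, value ranges, and the selection criterion are illustrative assumptions, not measurements from the video.

```python
from dataclasses import dataclass

@dataclass
class ItemFeatures:
    size_mm: float    # visually judged extent
    colour: str       # coarse colour category
    texture: float    # tactile roughness, 0 (smooth) to 1 (rough)
    stiffness: float  # resistance to bending, 0 (limp) to 1 (stiff)

def keep(item: ItemFeatures) -> bool:
    """Toy selection criterion mimicking a sort-and-select decision."""
    return item.stiffness < 0.5 and item.texture < 0.7

leaf = ItemFeatures(size_mm=62.0, colour="yellow", texture=0.3, stiffness=0.2)
print(keep(leaf))  # -> True
```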

In contrast to the observed behaviours and actions of Goldsworthy, today's typical environments for data processing and exploration use only a rather limited part of our interaction and manipulation skills. Unlike Goldsworthy's leaves, they often feature a symbolic interface to mediate between the user's intents and the data representation, which, in addition, is unimodal most of the time. It is an aim of this thesis to apply interface design techniques learned from the leaves example.