

3.4.7 Conceptual Design for a Researcher Interface to Control

We agree with Froehlich et al. (Froehlich, Chen, Consolvo, Harrison, & Landay, 2007), who stated that there are two distinct audiences for diary/ESM tools: the participants, who will use an interface on the device, and the researcher, who will have to design the study. In both cases, the respective UIs need to resemble the underlying architecture in order to support the entire range of study designs. Our efforts presented so far have covered only the client user interface, which has been implemented and evaluated in multiple studies. In this section, we will present the conceptual design for the PocketBee researcher UI, the PocketBee Designer. It incorporates a visual language based on a pipe/filter and zoomable UI (ZUI) concept. The interface concept was inspired by Squidy (König, Rädle, & Reiterer, 2010), a toolkit for modeling multi-modal interaction techniques. Based on their discussion of the benefits of a zoomable user interface, the idea for the PocketBee Designer is to have a flexible interface that allows the researcher to visually model the event architecture as presented in 3.4.3. All screens in this section are visual mockups of the interface.

                                          Wheelchair study   Well-being study
    Note entries/Person                   17.4 (2.5/day)     14 (2/day)
    Audio                                 3.7%               10%
    Video                                 5.5%               2.4%
    Photograph                            45.4%              14.6%
    Drawing                               3.2%               3%
    Text                                  42.2%              70%
    Media (Audio, Video, Photograph,     57% of all media   75% of all media
    Drawing) with additional text input  entries            entries

3.4.7.1 The PocketBee Designer UI

The task of the researcher is to set up a study, maintain it over the course of the study, and also to do the analysis. This conceptual design focuses on the set-up and maintenance parts. The set-up process usually includes the configuration of devices and the preparation of any data-gathering or response events. The web interface presented earlier allows the researcher to easily set up and manage a study remotely. However, this did not yet include the complexity of condition-based designs in general, as we have discussed here, focusing instead on simple time-based questionnaires and tasks. Handling condition-based designs (either alone or in combination with human recognition-based designs) makes the design of a study much more complex. To date, the MyExperience tool provides the most convenient approach by allowing the researcher to define a sensor-trigger-action logic in XML. However, as noted by Khan, Markopoulos, and Eggen (2009), this still requires the researcher to be tech-savvy, especially if initial configurations turn out to be faulty. We counter this problem by presenting a visual language approach that allows abstraction from detail if necessary, without compromising on the ceiling (Myers, Hudson, & Pausch, 2000) of the tool. Basically, we rely on a pipe-and-filter metaphor in combination with a ZUI.

The pipe-and-filter metaphor visually resembles the condition chain, allowing researchers to easily combine conditions and link them to the actions. The ZUI allows smooth access to a detail view for the configuration of each condition or action.
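To make the metaphor concrete, the following minimal sketch models a condition chain in the spirit of pipe-and-filter; it is an illustration only, not PocketBee's actual implementation, and all identifiers (Condition, Action, ConditionChain) are hypothetical.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of the pipe-and-filter idea: true/false values flow
    // along the pipes, and an action fires only if all linked conditions hold.
    public class ConditionChain {
        interface Condition { boolean holds(); }
        interface Action { void fire(); }

        private final List<Condition> conditions = new ArrayList<>();
        private final Action action;

        ConditionChain(Action action) { this.action = action; }

        // Drawing a pipe between two nodes corresponds to one connect() call.
        ConditionChain connect(Condition condition) {
            conditions.add(condition);
            return this;
        }

        // Re-evaluated whenever a sensor delivers a new value; the links act
        // as logical AND connections between the conditions.
        void evaluate() {
            if (conditions.stream().allMatch(Condition::holds)) {
                action.fire();
            }
        }

        public static void main(String[] args) {
            new ConditionChain(() -> System.out.println("action triggered"))
                    .connect(() -> true)   // stand-in for a real sensor condition
                    .evaluate();
        }
    }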

3.4.7.2 The Pipe/Filter Metaphor and Semantic Zooming on the Canvas

Figure 32: Pipe & Filter concept (left) and zoomable canvas (right)

A zoomable canvas serves as the interaction space for the researcher. Conditions and actions can be placed on the canvas via drag and drop and are represented as nodes. Visual links can be established between conditions and represent logical AND connections, resembling a data flow of true and false values. Connecting one or multiple conditions with an action completes the condition chain (see Figure 32, top). By default, every action needs to be connected with at least one condition, which must trigger the activation of the action. For static actions in human recognition-based designs, this could be the time schedule of the study. For condition-based designs, more complex condition chains might be useful. Upon zooming into the canvas, more information and functionality is provided to the user. Each condition and action provides specific methods of configuration, which we will discuss in the respective sections below.
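As a usage illustration, completing such a chain could look as follows; the schedule and location conditions are hard-coded stand-ins, not real sensor bindings.

    import java.util.List;
    import java.util.function.BooleanSupplier;

    // Hypothetical wiring of a complete condition chain: a time-schedule
    // condition AND a location condition are linked to one action.
    public class ChainExample {
        public static void main(String[] args) {
            BooleanSupplier withinSchedule = () -> true;  // stand-in time condition
            BooleanSupplier atHome         = () -> true;  // stand-in GPS condition
            Runnable showQuestionnaire =
                    () -> System.out.println("questionnaire triggered");

            // An action needs at least one connected condition; here both
            // conditions must hold before the action is activated.
            List<BooleanSupplier> chain = List.of(withinSchedule, atHome);
            if (chain.stream().allMatch(BooleanSupplier::getAsBoolean)) {
                showQuestionnaire.run();
            }
        }
    }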

3.4.7.3 Toolbar

Figure 33: The toolbar with condition-objects on the left and action objects on the right

As every condition and action is implemented in a modular way, we aim to provide these modules as individual elements on the user interface, which can be easily extended if new modules are created. They are collected in a toolbar, which is accessible to the user in the bottom part of the UI. The user can simply drag and drop an action or condition onto the canvas.
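The extensibility described here could, for instance, rest on a simple module registry; the following sketch is illustrative and uses invented names (Toolbar, register, dropOnCanvas) rather than the actual architecture.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.Supplier;

    // Illustrative module registry behind the toolbar: a new condition or
    // action module only needs to register a factory to appear in the UI.
    public class Toolbar {
        private final Map<String, Supplier<Object>> modules = new LinkedHashMap<>();

        public void register(String name, Supplier<Object> factory) {
            modules.put(name, factory);
        }

        // Dropping a toolbar icon on the canvas instantiates a fresh node.
        public Object dropOnCanvas(String name) {
            return modules.get(name).get();
        }

        public static void main(String[] args) {
            Toolbar toolbar = new Toolbar();
            toolbar.register("GPS condition", () -> "new GPS condition node");
            toolbar.register("Task action", () -> "new task action node");
            System.out.println(toolbar.dropOnCanvas("GPS condition"));
        }
    }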

3.4.7.4 The condition-object

Figure 34: The GPS condition-object dialog appears after zooming into the node

From a user perspective, we think it is sensible not to separate sensors and conditions but instead to name conditions after the sensor they control. Semantic zooming provides ways to configure the conditional handling of the sensor data. For example, a time-based sensor allows definition of the exact schedule. A location condition allows configuration of the exact position or range which should be used to trigger an action (see Figure 34, middle). Every condition module is integrated into the visual UI object. Upon zooming in, the user can choose between the various implemented modules available. For example, there might be two modules for time-based conditions, one for defining fixed schedules and one for random schedules. It is also possible to place the same condition-object multiple times on the canvas and to connect them in one condition chain.
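As an illustration of such a module, a location condition essentially reduces to a geofence test; the class below is a sketch rather than the actual module, and the coordinates and radius are made-up example values.

    // Hypothetical location condition module: the researcher configures a
    // position and a range; the condition holds while the device is inside.
    public class LocationCondition {
        private final double lat, lon;  // configured target position (degrees)
        private final double radiusM;   // configured trigger range (meters)

        public LocationCondition(double lat, double lon, double radiusM) {
            this.lat = lat;
            this.lon = lon;
            this.radiusM = radiusM;
        }

        // Great-circle (haversine) distance; sufficient for geofencing.
        public boolean holds(double deviceLat, double deviceLon) {
            double dLat = Math.toRadians(deviceLat - lat);
            double dLon = Math.toRadians(deviceLon - lon);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                    + Math.cos(Math.toRadians(lat)) * Math.cos(Math.toRadians(deviceLat))
                    * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            double distance = 2 * 6371000 * Math.asin(Math.sqrt(a));
            return distance <= radiusM;
        }

        public static void main(String[] args) {
            LocationCondition home = new LocationCondition(47.6900, 9.1900, 200);
            System.out.println(home.holds(47.6905, 9.1902));  // true: inside range
        }
    }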

3.4.7.5 The action-object

Action-objects behave as condition-objects do. Each action-object needs at least one connected condition. On the user interface, zooming into an action provides additional functionality. Independent of the specific action at hand, the researcher can define the action type and for which participants or devices the action will be activated. Currently, we provide the following set of actions:

Tasks

A task asks the participant to fulfill a certain action. Bound to this action is a data-gathering or response activity. For example, a task could ask the participant to try out a certain functionality of a product and afterwards comment on it by recording a voice message. This action object thus allows the following factors to be specified: 1) a task instruction for the participants, 2) the modality for data-gathering they may use, 3) whether the dialog appears immediately as a modal dialog or within the PocketBee client UI so that the user can start the task whenever he or she is ready, and 4) an explanatory help dialog. (A data-model sketch covering these configuration options follows the list of actions below.)

Core questions

For human recognition-based designs, we provide the notion of core questions. These are basically reminders of the pre-defined core situations in which participants should do the data gathering. Thus, instead of having a simple “create diary entry” button on the UI, the core question provides a visual reminder of what to look for. This action object allows the definition of 1) the core question itself as it appears on the UI, 2) a description of the core question, and 3) the modalities available to the participant for data-gathering. A core question always appears on the PocketBee client UI and not as a modal dialog.

Questionnaires

Questionnaires can be used to serialize several questions, requiring the researcher to design the questionnaire. Our user interface allows a smooth transition between designing the event conditions and defining the questionnaire by nesting an additional canvas within the PocketBee Designer (see Figure 35). Upon zooming into a questionnaire, a similar pipe-and-filter concept is applied, which allows the researcher to drag different questionnaire items onto a canvas and to link them together in the order of appearance on the user interface. Zooming into an item allows definition of this item in detail, such as the kind of rating scale used, whether an answer is forced, or whether participants can answer an open question by entering text or by recording a voice message. A branching object supports the branching of questionnaires. In this way, the user can simply define multiple output pipes.

Figure 35: Zoom into questionnaire action opens a new zoomable canvas that allows the placement of questionnaire items in the same style
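The branching object can be read as an item with one output pipe per expected answer. The following console sketch illustrates this routing with invented item types (Question, Branch); it is not the PocketBee questionnaire engine.

    import java.util.Map;
    import java.util.Scanner;

    // Illustrative questionnaire branching: a Branch item routes to one of
    // several output pipes depending on the participant's answer.
    public class BranchingExample {
        interface Item { Item ask(Scanner in); }  // returns the next item, or null

        record Question(String text, Item next) implements Item {
            public Item ask(Scanner in) {
                System.out.println(text);
                in.nextLine();               // free-text answer (discarded here)
                return next;
            }
        }

        record Branch(String text, Map<String, Item> pipes) implements Item {
            public Item ask(Scanner in) {
                System.out.println(text);
                return pipes.get(in.nextLine().trim().toLowerCase());
            }
        }

        public static void main(String[] args) {
            Item thanks  = new Question("Thank you! (press return)", null);
            Item details = new Question("Please describe the problem:", thanks);
            Item start   = new Branch("Did you encounter any problems today? (yes/no)",
                    Map.of("yes", details, "no", thanks));
            Scanner in = new Scanner(System.in);
            for (Item item = start; item != null; item = item.ask(in)) { }
        }
    }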

Further actions

We also provide a log system-action, which allows logging a specific sensor if conditions are met. More complex actions could include the possibility to establish a direct (phone) communication channel to the researcher, or further system-actions such as those included in Xensor or MyExperience, for example automatically triggering the recording of an external sensor device such as a camera.
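Taken together, the action configurations described above could be captured in a small data model; the records and field names below are invented for illustration and do not reflect PocketBee's internal format.

    import java.util.List;

    // Illustrative data model for the action types described above.
    public class Actions {
        enum Modality { AUDIO, VIDEO, PHOTOGRAPH, DRAWING, TEXT }

        // The four configurable factors of a task action.
        record Task(String instruction, List<Modality> modalities,
                    boolean modalDialog, String helpText) { }

        // A core question stays visible on the client UI as a reminder.
        record CoreQuestion(String question, String description,
                            List<Modality> modalities) { }

        // The log system-action records a named sensor while conditions hold.
        record LogSensor(String sensorName) { }

        public static void main(String[] args) {
            Task task = new Task(
                    "Try the new zoom function, then comment on it.",
                    List.of(Modality.AUDIO), true,
                    "Tap the microphone symbol to record your comment.");
            System.out.println(task);
        }
    }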

3.4.7.6 The Linking

Figure 36: Drop Targets (top) and Boolean connectors (bottom)

Links can be easily established by making use of automatic drop targets that appear as soon as the researcher drags a new object out of the bottom toolbar. Thereby, the researcher can connect conditions by simply dragging and dropping them into a potential target location. This visually resembles the logical layer and provides the primary benefit of the pipe-and-filter metaphor. In this way, it is possible to create complex condition chains without losing the overview. It allows the simple and effective reuse of conditions with different actions, or the use of multiple condition chains that end up with the same action. Each condition-object has an input connector and an output connector. A Boolean node allows the definition of the output connector as a true or false output, thereby integrating a Boolean-like AND (true) and NOT (false). Each connector can be used to connect multiple objects, not just one. An action-object also has an input connector and an output connector in case a conditional output value is to be defined. In such a case, a link can be established from the action to a further condition. An additional output selection-object is automatically created between the two, as the researcher needs to define the conditional event for the action (e.g., whenever the participant takes a photograph).
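The Boolean connectors can be understood as wrappers around a condition's output. The sketch below, with invented helper names (truePipe, falsePipe), shows how a false output introduces a NOT into the chain.

    import java.util.function.BooleanSupplier;

    // Illustrative Boolean connectors: an output pipe carries either the
    // condition's value (true/AND) or its negation (false/NOT).
    public class Connectors {
        static BooleanSupplier truePipe(BooleanSupplier c) {
            return c;
        }

        static BooleanSupplier falsePipe(BooleanSupplier c) {
            return () -> !c.getAsBoolean();
        }

        public static void main(String[] args) {
            BooleanSupplier scheduled = () -> true;   // stand-in time condition
            BooleanSupplier atWork    = () -> false;  // stand-in GPS condition

            // "scheduled AND NOT atWork": both pipes feed the same action input.
            boolean trigger = truePipe(scheduled).getAsBoolean()
                    && falsePipe(atWork).getAsBoolean();
            System.out.println("action fires: " + trigger);  // prints true
        }
    }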

Figure 37: A first running prototype of the PocketBee Designer. Top: Overview of condition-action chain; middle: condition details; bottom: questionnaire configuration.