
3.4 PocketBee – A Multimodal Diary and ESM Tool for Longitudinal

3.4.4 User Interface Design

In the following section, we will illustrate how PocketBee works and how the user interface is designed to address the research questions.7 To this end, we will present a scenario of use.

7 A short video demonstrating the PocketBee tool can be accessed online: http://www.vimeo.com/13397614.

Figure 28: Left: home-screen widget with two core questions and a questionnaire (lower part); right: diary entry form (empty)

The PocketBee client diary application is written in Java for Android. It directly integrates the drawing and text note modalities and additionally embeds Android’s internal camera, video, and voice recording applications seamlessly into the user interface. The user interface consists of a home-screen widget and the data-gathering application (see Figure 28). The widget allows the participant to use the mobile device while having constant access to the diary application. It provides entry points for the user and acts as a constant reminder of any pending tasks. Essentially, the widget supports both human recognition-based diary designs and condition-based designs. The upper part is reserved for core questions, such as those applied in a human recognition-based design. These core questions serve as visual and cognitive triggers; the user can simply wait for these events to happen and is constantly reminded to “get triggered” by them.

Upon selection of a core question, the interface allows the participant to compose a diary entry out of several notes and multiple modalities, including text, photograph, video, voice, and drawing. Upon saving, a diary entry is immediately sent to the server in the background. By providing different modalities for data gathering, PocketBee increases flexibility for the participants, as they can simply select the most convenient one. The bottom part of the UI is reserved for condition-based designs, which are reflected in PocketBee by task and questionnaire actions. Alternatively, the researcher can define these actions to be displayed as a modal pop-up dialog that appears immediately after the conditions are met. For questionnaires, we support different item types, including rating scales, ranking, open-ended questions, sliders, and the integration of additional modalities (e.g., voice). Participants are always notified of new actions by the internal Android notification system, which then shows the action icon in the top status bar.
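To make the entry-composition concept concrete, the following is a minimal, hypothetical sketch of how a diary entry might aggregate notes of different modalities. The class, field, and method names (`DiaryEntry`, `Note`, `Modality`) are illustrative assumptions and are not taken from the actual PocketBee implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: one diary entry groups several notes, each in one
// of the modalities PocketBee offers. All names here are illustrative.
public class DiaryEntrySketch {

    enum Modality { TEXT, PHOTO, VIDEO, VOICE, DRAWING }

    static class Note {
        final Modality modality;
        final String payload; // text content, or a path to the media file
        Note(Modality modality, String payload) {
            this.modality = modality;
            this.payload = payload;
        }
    }

    static class DiaryEntry {
        final String coreQuestion; // the trigger the participant tapped
        final List<Note> notes = new ArrayList<>();
        DiaryEntry(String coreQuestion) { this.coreQuestion = coreQuestion; }
        void add(Modality m, String payload) { notes.add(new Note(m, payload)); }
        int noteCount() { return notes.size(); }
    }

    public static void main(String[] args) {
        // One entry may freely mix modalities: a text note plus a photo.
        DiaryEntry entry = new DiaryEntry("charging car");
        entry.add(Modality.TEXT, "Charging at a friend's place");
        entry.add(Modality.PHOTO, "/sdcard/DCIM/display.jpg");
        System.out.println(entry.coreQuestion + ": " + entry.noteCount() + " notes");
    }
}
```

The point of the composition is that a single entry, not a single note, is the unit sent to the server, so mixed-modality observations stay together.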

The interface concept supports all of the types of research designs that we have presented in Chapter 3.1.4. Furthermore, the widget concept allows the participant to be aware of actions even in a non-modal dialog setting, which might be most appropriate in human recognition-based designs.

3.4.4.1 Scenario of Use

Electrically powered cars are not only environmentally friendly but also add to the customers’ mobility and flexibility. Instead of having to rely on fixed gas stations, any power outlet can become a source for recharging. While little is known about how practical this might be, investigating it is difficult, to say the least. Direct observation is scarcely possible, since the car is a very private environment. In interviews, retrospective effects might hide the little hurdles one must master during the charging process. In the following, we outline how PocketBee can support such a study. As described above, the PocketBee client’s user interface consists of a home-screen widget (see Figure 28, left) and the diary application itself (see Figure 28, right). The upper part of the widget is reserved for what we call core questions. As it can be a mental burden for participants to constantly think about whether they should record a diary entry during any given situation, these core questions serve as visual and cognitive triggers to reduce this burden. In our scenario of use, these are a “charging car” event and a “needing to charge” event. The user can simply wait for these events to happen and is constantly reminded to “get triggered” by them. Such a human recognition-based diary design also allows researchers to couple the diary entries more closely to the events that need to be reported, as it motivates instant capturing. By tapping on a core question, a diary entry is created that can then be enriched with data (see Figure 29, left). Let us assume that our participant Sarah is about to charge her car. The interface allows her to compose a diary entry out of several notes. To begin with, she might simply write a text note that she is about to charge the car at a friend’s place.

During charging, the display in the car tells her how long it will take to fully charge the car. She takes a picture of the display and adds a textual note, letting the researchers know that she would like to be able to enter the distance she wants to drive, so that she knows how long to charge for a specific ride. She then saves the diary entry; it is immediately sent to the server in the background, together with her current geo-location (if she has agreed to this prior to the study).
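The consent-dependent geo-location could be handled as in the following sketch, which packages a saved entry for the background upload and attaches coordinates only when the participant opted in before the study. The method and field names are illustrative assumptions, not the actual PocketBee code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: build the upload payload for a saved diary entry.
// The geo-location is strictly opt-in and is attached only with consent.
public class EntryUploadSketch {

    static Map<String, String> buildPayload(String participantId,
                                            String entryText,
                                            boolean locationConsent,
                                            double lat, double lon) {
        Map<String, String> payload = new LinkedHashMap<>();
        payload.put("participant", participantId);
        payload.put("text", entryText);
        payload.put("timestamp", String.valueOf(System.currentTimeMillis()));
        if (locationConsent) { // without consent, no coordinates ever leave the device
            payload.put("lat", String.valueOf(lat));
            payload.put("lon", String.valueOf(lon));
        }
        return payload;
    }

    public static void main(String[] args) {
        Map<String, String> p = buildPayload("sarah", "Charging the car", true, 47.66, 9.17);
        System.out.println(p.keySet()); // lat/lon appear only with consent
    }
}
```

Checking consent at payload-construction time (rather than filtering on the server) keeps non-consented data from ever being transmitted.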

Figure 29: Left: diary form with two entries (voice and drawing); middle: temporarily postponed entry; right: questionnaire item

Later on, she gets another idea: the car should send her a text message as soon as the charging is complete. She quickly records an audio note while walking to the car to check the status by herself. By providing these different modalities for data gathering, PocketBee reduces the burden on the participants, as they can just select the most convenient one. By allowing the composition of several modalities in one entry, we seek to provide rich and in-depth data.

Moreover, the GPS location can help during an additional retrospective interview to remind the participant of this particular situation for further discussion.

The researcher, on the other hand, has immediate access to the diary entry via the control center (see Figure 30) as soon as the device has a network connection (WiFi or GSM/3G). This allows the researcher to 1) start the data analysis right away, 2) prepare the data for an interview session, and 3) react to the data. We currently provide a basic list-like view of the entries that can be sorted and filtered by several criteria (e.g., core question, participant) as well as exported for data analysis (with MS Excel, for example). In order to react to the data, the researcher can modify existing core questions or create new ones, as well as create additional tasks and questionnaires individually for each participant.
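The sorting, filtering, and export functions of the control center's list view could look roughly like the following sketch, which filters entries by core question, orders them by time, and emits a simple CSV that spreadsheet tools such as MS Excel can open. All names are illustrative assumptions; the sketch also omits CSV escaping for fields containing commas.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the control center's list view: filter by core
// question, sort chronologically, and export as CSV for data analysis.
public class ControlCenterSketch {

    static class Entry {
        final String participant;
        final String coreQuestion;
        final long timestamp;
        final String text;
        Entry(String participant, String coreQuestion, long timestamp, String text) {
            this.participant = participant;
            this.coreQuestion = coreQuestion;
            this.timestamp = timestamp;
            this.text = text;
        }
    }

    static List<Entry> filterByCoreQuestion(List<Entry> entries, String coreQuestion) {
        return entries.stream()
                .filter(e -> e.coreQuestion.equals(coreQuestion))
                .sorted(Comparator.comparingLong(e -> e.timestamp))
                .collect(Collectors.toList());
    }

    static String toCsv(List<Entry> entries) {
        // Note: no quoting/escaping; a real export would need it.
        StringBuilder csv = new StringBuilder("participant,coreQuestion,timestamp,text\n");
        for (Entry e : entries) {
            csv.append(e.participant).append(',').append(e.coreQuestion).append(',')
               .append(e.timestamp).append(',').append(e.text).append('\n');
        }
        return csv.toString();
    }

    public static void main(String[] args) {
        List<Entry> all = Arrays.asList(
                new Entry("sarah", "charging car", 2L, "at a friend's place"),
                new Entry("sarah", "needing to charge", 1L, "battery low"));
        System.out.println(toCsv(filterByCoreQuestion(all, "charging car")));
    }
}
```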

The last two reside on the lower part of the home-screen widget. Tasks are meant to provide specific instructions, such as “please take a picture of the power cable,” allowing the researcher to interact more closely with the participant, tightening the bond between the two as the latter receives direct feedback on his or her actions. This will also help to increase the motivation for continuous use of the diary. Questionnaires can be designed in an XML template that provides several different question types for most necessities, such as multiple selections, rating scales, or open-ended questions (see Figure 29, right). This last option also allows the participant to record voice instead of typing text. The template allows branching questions as well as forced or optional questions.
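Such an XML template might be structured as in the following fragment. The element and attribute names shown here are illustrative assumptions, not the actual PocketBee schema; the fragment merely shows how item types, forced/optional flags, voice input, and branching could be expressed.

```xml
<!-- Hypothetical sketch of a questionnaire template; element and
     attribute names are illustrative, not the actual PocketBee schema. -->
<questionnaire id="daily">
  <item type="rating" required="true" min="1" max="5">
    How easy was charging today?
  </item>
  <item type="open" voiceInput="true" required="false">
    Any additional feedback?
  </item>
  <item type="choice" required="true" branchTarget="chargingDetails">
    Did you charge the car today?
  </item>
</questionnaire>
```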

Figure 30: Web-based Control Center

Our participant Sarah comes home again. As it has every evening since the study started, the device notifies her with two short beeps that the daily questionnaire is now available, asking her about the mileage she drove today, how she rates the ease of use of the charging device, and for additional feedback.
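The daily questionnaire is an example of a condition-based (signal) trigger. A minimal sketch of such a time-based condition check follows; the method name, the once-per-day rule, and the 19:00 trigger time are illustrative assumptions rather than PocketBee's actual scheduling logic.

```java
import java.time.LocalTime;

// Hypothetical sketch of a condition-based trigger: the daily
// questionnaire becomes due once a configured time of day is reached,
// and fires at most once per day.
public class DailyTriggerSketch {

    static boolean questionnaireDue(LocalTime now, LocalTime triggerTime,
                                    boolean answeredToday) {
        // Due at or after the trigger time, unless already answered today.
        return !answeredToday && !now.isBefore(triggerTime);
    }

    public static void main(String[] args) {
        LocalTime trigger = LocalTime.of(19, 0); // e.g., every evening at 19:00
        System.out.println(questionnaireDue(LocalTime.of(20, 15), trigger, false));
    }
}
```

When the condition becomes true, the client would post a notification (the "two short beeps") and show the questionnaire action on the widget.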