

6.1.2 Design and Evaluation

When we talk about design, we have to differentiate between two types: conceptual and physical design. While conceptual design deals with creating a conceptual model to describe how the interactive product will work and what features are included, the physical design is concerned with design details like icons, graphics, menu or screen structures. The iterative process visualized in the lifecycle models involves users in design-evaluation-redesign loops, following the UCD approach.

Evaluation plays a very important role in this context. To evaluate interactive products, it is essential to create an interactive version - a prototype. This can be done in different ways, from early paper mockups via clickable screenshots up to a working system. Depending on the situation, i.e. ”Is a completely new product being created or is an existing one modified?”, the effort for prototyping can vary. Small changes can sometimes be made to an existing product to add, adapt, or remove features without developing a completely new product. But it is still necessary to test the new features. Diverse prototyping procedures are known and used in practice. We have to distinguish between two different kinds of prototypes: low-fidelity and high-fidelity prototypes. They differ in appearance and in the materials used. While low-fidelity versions are often paper-based, high-fidelity ones look more like the final product, e.g. a Visual Basic prototype of a software system is of a higher fidelity than a paper-based mockup. Building a high-fidelity software prototype requires a software tool; examples are Visual Basic, Smalltalk, or Macromedia Flash. A low-fidelity prototype is often built using simpler artefacts.

Thus, different techniques can be used, e.g. storyboarding, sketching, prototyping with index cards, or the Wizard of Oz technique.
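The Wizard of Oz technique in particular can be illustrated with a minimal sketch: the user believes they are interacting with working software, while every response actually comes from a hidden human operator. The following Python fragment is a hypothetical illustration (all names and the scripted wizard are invented for demonstration), not an implementation from the text:

```python
# Hypothetical Wizard-of-Oz sketch: the "system" merely forwards each user
# utterance to a human operator (the wizard), who supplies the response.

def wizard_of_oz_session(user_inputs, wizard):
    """Run a fake dialogue: the user believes they talk to software,
    but every reply actually comes from the wizard."""
    transcript = []
    for utterance in user_inputs:
        reply = wizard(utterance)  # in a real study: a hidden human typing
        transcript.append((utterance, reply))
    return transcript

# Scripted stand-in for the human wizard, for demonstration only:
def demo_wizard(utterance):
    return "Here is the result." if "?" in utterance else "I did not understand."

log = wizard_of_oz_session(["Show my balance?", "Thanks"], demo_wizard)
```

Because no real functionality has to exist behind the interface, such a session can be staged very early, at paper-mockup fidelity.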

Based on this comparison, decisions have to be made about which kind of prototype to use at which development stage. The idea of prototyping is to test a specific aspect of the product being developed. Naturally, compromises have to be made, e.g. functionality versus development costs: a fully working prototype requires more time to create, but a paper-based one does not provide all the necessary features. Another compromise that has to be addressed is breadth of functionality versus depth. The corresponding methods are called horizontal prototyping (providing a wide range of functions, but with little detail) and vertical prototyping (providing a lot of detail for only a few functions). Yet another decision can influence the process of prototyping: will the prototype be included in the final product (evolutionary prototyping) or will it be thrown away after the evaluation (throwaway prototyping)? This has a direct effect on the quality of the prototype.
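The breadth-versus-depth trade-off can be made concrete with a small sketch. The two classes below are purely illustrative (the banking feature names are invented for this example): the horizontal prototype exposes every feature as a non-functional screen, while the vertical one implements a single feature end to end.

```python
# Hypothetical sketch contrasting horizontal and vertical prototyping,
# using an invented banking application as the example domain.

class HorizontalPrototype:
    """Wide but shallow: every feature is present, none is functional."""
    def show_balance(self):
        return "screen: balance page (static mockup)"
    def transfer_money(self):
        return "screen: transfer form (non-functional)"
    def order_statement(self):
        return "screen: statement dialog (non-functional)"

class VerticalPrototype:
    """Narrow but deep: only one feature, implemented end to end."""
    def __init__(self):
        self.accounts = {"checking": 100.0, "savings": 50.0}
    def transfer_money(self, src, dst, amount):
        if self.accounts[src] < amount:
            raise ValueError("insufficient funds")
        self.accounts[src] -= amount
        self.accounts[dst] += amount
        return self.accounts
```

A horizontal prototype of this kind lets users evaluate the overall structure and navigation, whereas the vertical one lets them complete one realistic task in full.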

To implement the requirements detected in the first phase of the lifecycle model, we have to establish a conceptual model in the conceptual design step. A conceptual model can be defined as:

Definition 6.7 A conceptual model is a description of the proposed system in terms of a set of integrated ideas and concepts about what it should do, behave, and look like, that will be understandable by the users in the manner intended. [PRS02]

Therefore, some main principles of conceptual design are:

• Keep the mind open to new ideas but do not lose the focus on users and their context

• Include stakeholders as often as possible in design discussions

• To get rapid feedback on ideas and specific design aspects, use low-fidelity prototyping

• Iterate, iterate, and iterate!!!

Fudd’s first law of creativity supports the decision for low-fidelity prototypes: ”To get a good idea, get lots of ideas” [Ret94].

During the development process of a conceptual model, different design aspects arise. You have to think about:

1. Which interaction mode should be used?

2. Does an appropriate interface metaphor exist?

3. Which interaction paradigm should be used?

For a better differentiation of conceptual models, we can split them into two main categories: activity-based and object-based models. Activity-based conceptual models can be divided into different actions, like instructing, conversing, manipulating and navigating, exploring and browsing, or a mixture of these.

Object-based models orient themselves on objects or artifacts, like a calendar or a book. Because of their tight connection to a particular object, these models are often more specialized. A prominent example of this kind of conceptual model is the spreadsheet [Win96].

Interface metaphors are another way to delineate a conceptual model. The idea is to take something known from the physical world and transfer it to the virtual world, while adding specific features and characteristics. The previously mentioned desktop or spreadsheet, or the recycle bin, are typical examples. Although there are benefits to using interface metaphors, such as the fact that users are familiar with the (physical) device and are therefore supported in understanding and learning how to use a system, there are also drawbacks. Some rules familiar from the real world are broken (e.g. no one would place a recycle bin on the desktop instead of under the desk), which can possibly - but not necessarily - lead to confusion. Interface metaphors can be too constraining, so useful tasks might be left out although they would improve the interface. Conflicts with design principles can arise that lead to bad design solutions, or to the system's functionality beyond the metaphor not being understood. Sometimes, existing bad designs are translated literally, which is another trap designers can fall into; or the reliance on ideas based on well-known technologies limits the imagination in creating new paradigms and models.

Interaction paradigms are positioned on a more general level of development. A paradigm determines the general way of interacting with a system. For a long time, the main paradigm in interaction design was restricted to the fixed artefacts computer, monitor, mouse, and keyboard, creating WIMP interfaces (Windows, Icons, Menus, and Pointers; or Windows, Icons, Mouse, Pull-down menus). Nowadays, we think in larger connections, which can lead to completely different paradigms, for example ubiquitous computing, pervasive computing, wearable computing, tangible bits, augmented reality and physical/virtual integration, or attentive environments.

So far, a lot of information about lifecycle models, requirements analysis, design of systems, conceptual models, and so on has been introduced. However, one important task is still to be discussed - evaluation. As we saw before, iteration is indispensable. To get feedback about ideas that we try to realize in prototypes, we have to test these implementations. This can be done in many different ways. General evaluation paradigms are, e.g., ”quick and dirty” evaluation, usability testing, field studies, or predictive evaluation.

These paradigms give a good overview of the ways evaluations can be done. However, it is still important to consider the techniques that can be used. Depending on the respective paradigm, some techniques can be leveraged in different ways. Observing users, asking users, asking experts, user testing, and modeling users' task performance belong to these techniques.
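The dependency between paradigm and technique can be sketched as a simple lookup. The pairings below are illustrative assumptions chosen for this sketch, not a definitive mapping:

```python
# Illustrative (hypothetical) pairings of evaluation paradigms with
# techniques that can typically be leveraged under them.
PARADIGM_TECHNIQUES = {
    "quick and dirty": ["observing users", "asking users", "asking experts"],
    "usability testing": ["user testing", "observing users", "asking users"],
    "field studies": ["observing users", "asking users"],
    "predictive evaluation": ["asking experts",
                              "modeling users' task performance"],
}

def techniques_for(paradigm):
    """Look up which techniques a given paradigm typically draws on."""
    return PARADIGM_TECHNIQUES.get(paradigm.lower(), [])
```

Such a table makes explicit that choosing a paradigm narrows, but does not fully determine, the set of applicable techniques.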

A lot of different paradigms and techniques have been presented so far. To conduct an evaluation, however, we still need a simple guide - an explanation that describes the procedure step by step. This can be provided by the DECIDE framework, for instance. It offers a checklist to help inexperienced evaluators perform an evaluation. Again, DECIDE is an acronym, which will now be explained letter by letter.

The DECIDE Framework:

1. Determine the goals: first of all, you have to investigate the diverse goals that are pursued. The main questions in this context are: who is the user? Why do they want the evaluation?

When the goals are decided, they can serve as a guide through the whole evaluation.

2. Explore the questions: as soon as the high-level goals are determined, questions have to be generated. Every goal can be investigated by a great variety of questions. Let us assume that one goal is to find out why people still visit their local bank branch instead of using online banking. Possible questions could be: do users trust the system or are they sceptical about security? Is it hard to fulfill a task because of a badly designed interface? Does every user have the possibility of using online banking, i.e. is the infrastructure in place?

Questions can be broken down into sub-questions, and these sub-questions can again be broken down, and so on. Typical sub-questions concerning a badly designed interface could be: is enough feedback provided? Is it easy to navigate? Is the wording consistent or is it confusing?

3. Choose the evaluation paradigm and techniques: the next step deals with the choice of paradigm and techniques. Depending on the evaluation paradigm, diverse techniques can be used. However, there are still other issues to consider, like time, money, or equipment. These can lead to a completely different selection than would have been made under other circumstances. The next item will consider this aspect in more detail.

4. Identify the practical issues: before the evaluation is conducted, you have to identify some practical issues. These include finding appropriate users, assessing the equipment, considering time and money constraints, and the know-how of the evaluators. Test users have to represent the complete target group (if possible). Equipment and facilities have to be in place, e.g. camera, batteries, or empty recording tapes. Schedule and budget are very important and have a strong influence on, e.g., the number of users and the kind of technique. Evaluators have to be prepared for the specific kind of evaluation that is to be undertaken. It makes no sense to use, e.g., videotaping if there is no expertise and equipment to analyze the results.

5. Decide how to deal with the ethical issues: the privacy of test participants is a point that must not be neglected. Personal information about life circumstances (like education, age, or illnesses) should be confidential. No name should be associated with a specific questionnaire or collected data, unless users agree. People who participate in an evaluation should be treated with respect. An ”Informed Consent Form” is therefore indispensable to inform the users about their rights, and also their responsibilities. They should know what the goal of the study is, how the process will run, and approximately how long it will take. It should be made clear that the users can stop the evaluation at any time they wish. A good motto for treating test persons in the right way is: ”Do unto others only what you would not mind being done to you!”

6. Evaluate, interpret, and present the data: still open are questions about what kind of data is to be collected, and how, and about the methods for analyzing and presenting it. Keywords for this step are reliability (is an experiment repeatable on different occasions and does it yield the same result under the same conditions?), validity (does the experiment really measure what it is supposed to measure?), biases (are there any distortions that influence the result, e.g. the interviewer's tone of voice?), scope (is it possible to generalize the results?), and ecological validity (does the environment have an influence on the results?).

To reduce the number of possible inconsistencies in the evaluation, a pilot study should be conducted before starting the main study. Ambiguous questions can set the user on the wrong track, and inoperative equipment can lead to unintentional delays. If problems arise, they can be solved in advance. This enables an undisturbed and trouble-free execution of the evaluation.
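The six DECIDE steps above lend themselves to being tracked as a literal checklist. The sketch below is a hypothetical illustration (the helper function and the sample plan entries are invented here); it only encodes the step names taken from the framework:

```python
# Hypothetical sketch: the DECIDE checklist as a simple data structure
# that an evaluation plan should fill in before the study starts.

DECIDE_STEPS = [
    "Determine the goals",
    "Explore the questions",
    "Choose the evaluation paradigm and techniques",
    "Identify the practical issues",
    "Decide how to deal with the ethical issues",
    "Evaluate, interpret, and present the data",
]

def missing_steps(plan):
    """Return the DECIDE steps for which the plan has no entry yet."""
    return [step for step in DECIDE_STEPS if not plan.get(step)]

# Invented sample plan for the online-banking example from the text:
plan = {
    "Determine the goals": "Why do users avoid online banking?",
    "Explore the questions": "Trust? Interface design? Infrastructure?",
}
# missing_steps(plan) then lists the four steps still to be planned.
```

Walking such a checklist before the pilot study makes it obvious which decisions an inexperienced evaluator has skipped.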

During the development process of the VisMeB system, a variety of evaluations, ranging from predictive to laboratory evaluations (see also [RLK+03]), were made. The next section will describe the different stages in detail.