5.4.4. Interrelations between Interaction Elements

to Focused, resulting e.g. in a highlighted button. This behavior is realized by mappings between the state attributes and the setState execution elements. Similarly, the selection of a button triggers the execution of the related command via a mapping targeting the execute method. In combination with multiple models, this creates a multi-level event propagation that allows the incorporation of information from multiple models to interpret a received input event or to create the related output.
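To make this mapping mechanism more tangible, the following TypeScript sketch shows how a mapping could observe a state attribute and forward matching changes to a setState or execute execution element. The interfaces, class, and identifiers are illustrative assumptions and not the actual metamodel described in this work.

```typescript
// Hypothetical shapes; the actual mapping metamodel of this work may differ.
type InteractorState = "Default" | "Focused" | "Selected";

interface StateAttribute {
  value: InteractorState;
  observers: Array<(next: InteractorState) => void>;
}

interface ExecutionElement {
  // e.g. setState on a concrete button, or execute on a command
  invoke(payload?: unknown): void;
}

/** Observes a source attribute and forwards matching changes to a target execution element. */
class Mapping {
  constructor(
    source: StateAttribute,
    private target: ExecutionElement,
    private trigger: InteractorState
  ) {
    source.observers.push((next) => {
      if (next === this.trigger) this.target.invoke(next);
    });
  }
}

// Abstract interactor state drives the concrete presentation and the command execution.
const abstractState: StateAttribute = { value: "Default", observers: [] };
const highlightButton: ExecutionElement = {
  invoke: () => console.log("setState(Focused): render the button highlighted"),
};
const executeCommand: ExecutionElement = {
  invoke: () => console.log("execute(): run the command related to the button"),
};

new Mapping(abstractState, highlightButton, "Focused");
new Mapping(abstractState, executeCommand, "Selected");

// Simulated multi-level propagation: a state change is pushed to all observers.
function setState(attr: StateAttribute, next: InteractorState) {
  attr.value = next;
  attr.observers.forEach((observer) => observer(next));
}

setState(abstractState, "Focused");  // -> highlighted button
setState(abstractState, "Selected"); // -> command execution
```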

In contrast to the abstract-concrete relations that span two levels of abstraction, the concrete-concrete relations address the need to integrate input and output means that are not directly relevant to the interaction with the application.

Concrete-Concrete Relation

While interaction at the abstract level is application-driven and focuses on the goal of the interaction, interaction at the concrete level is driven by the capabilities of the modalities.

A long list on a screen, e.g., requires scrolling, even if the anticipated abstract interaction is only the selection of a list element. Thus, there is another level of interaction that is not directly required for the completion of the task, but important for comfort and clarity. This interaction level is expressed as a relation between the input and the output elements on the concrete level. Using the list involves the selection of a viewport, i.e. the number of items to display in parallel, as well as the change of that viewport, i.e. scrolling through the list. Such interaction is not directly related to the underlying user task, but crucial for the graphical modality. Additionally, scrolling through the list can also be realized via multiple modalities, making it a multimodal interaction that is directly related to a specific output modality. This example is illustrated in figure 5.9.
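As a minimal illustration of the viewport concept, the following sketch separates the concrete scrolling state from the abstract selection task; the type and field names are illustrative assumptions only.

```typescript
// Hypothetical viewport state of a concrete list presentation.
interface ListViewport {
  firstVisibleIndex: number; // start of the visible window
  visibleCount: number;      // number of items displayed in parallel
}

/** Scrolling changes only the viewport, not the abstract selection task. */
function scroll(viewport: ListViewport, delta: number, totalItems: number): ListViewport {
  const maxStart = Math.max(0, totalItems - viewport.visibleCount);
  const firstVisibleIndex = Math.min(maxStart, Math.max(0, viewport.firstVisibleIndex + delta));
  return { ...viewport, firstVisibleIndex };
}

// The abstract interaction stays "select one of 50 items";
// the concrete graphical modality adds a 5-item window that can be moved.
let viewport: ListViewport = { firstVisibleIndex: 0, visibleCount: 5 };
viewport = scroll(viewport, +5, 50); // e.g. triggered by touch, scroll wheel, or voice
```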

The list example shows that input capabilities vary depending on the current form of the presentation and thus also depend on the currently active presentation objects. The direct relation of input elements to output elements is therefore supported to facilitate controlling the modality-specific representation. This way, the activation of a concrete output object can also activate additional concrete input elements that are not related to the abstract interaction.

Interaction with these input interactors is then directly mapped to the concrete output to influence its presentation. Figure 5.9 also shows that input elements can require specific output elements on which they rely. In case of an active voice command, there may, e.g., be an additional hint explaining the available voice commands.
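The following sketch illustrates how such concrete-concrete relations could be represented: activating the graphical list also activates the related voice commands, and an active voice command in turn activates the hint that explains it. The element identifiers and the relation structure are illustrative assumptions, not part of the described models.

```typescript
// Hypothetical concrete-concrete relations between concrete interaction elements.
interface ConcreteElement {
  id: string;
  active: boolean;
}

interface ConcreteRelation {
  from: string; // activating element
  to: string;   // element activated alongside it
}

const elements: ConcreteElement[] = [
  { id: "gfx.list", active: false },
  { id: "voice.cmd.scrollDown", active: false },
  { id: "voice.cmd.scrollUp", active: false },
  { id: "gfx.hint.voiceCommands", active: false },
];

const relations: ConcreteRelation[] = [
  { from: "gfx.list", to: "voice.cmd.scrollDown" },
  { from: "gfx.list", to: "voice.cmd.scrollUp" },
  { from: "voice.cmd.scrollDown", to: "gfx.hint.voiceCommands" },
];

function activate(id: string): void {
  const element = elements.find((e) => e.id === id);
  if (!element || element.active) return;
  element.active = true;
  // Propagate activation along the concrete-concrete relations.
  relations.filter((r) => r.from === id).forEach((r) => activate(r.to));
}

activate("gfx.list"); // activates the voice scroll commands and the explanatory hint as well
```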

The two different types of relation between interaction elements show that different levels of interaction means exist that have to be addressed when modeling multimodal user interfaces. While this is often addressed by widgets that are smart enough to announce and provide their required interaction means, full control of this behavior within the development process can be achieved by making it explicit.

Figure 5.9.: Abstract example of a graphical list that can be controlled via voice.

Content-Dependent Interaction

Besides the interconnection of the different interaction models, another issue that comes with the utilization of interaction models at runtime is the influence of dynamic (domain) data on the user interface and interaction. It arises from the need to reflect dynamic information in the user interface. Three cases can be distinguished that have to be addressed:

1. The most obvious case is probably a list showing dynamic content (a list of available services, a list of values from a database). In this case, list elements have to be created dynamically. While this is a standard case in graphical user interfaces, it becomes more difficult with multimodal interaction. Examples are list items displayed on a screen that can also be addressed directly via voice; in this case, the required voice commands have to be created as well (a sketch follows after this list).

2. The dynamic creation of elements that are not part of a list. In this case, the type and number of needed interaction elements vary with dynamically acquired information (e.g. three vs. five buttons with dynamic captions taken from a complex object stored in the domain model).

3. Dynamic content can also play a role if multiple situations known at design time should be considered (e.g. different numbers of buttons or an image instead of text). This way, multiple variants of the user interface can be created and the most appropriate variant is selected at runtime.
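For the first case, the following sketch indicates how a list interactor could derive both the graphical items and the corresponding voice commands from dynamically acquired content. The function and field names are illustrative and not part of the described models.

```typescript
// Hypothetical derivation of presentation items and voice commands from dynamic list content.
interface ListItemPresentation {
  graphicalLabel: string; // rendered on screen
  voiceCommand: string;   // utterance that directly addresses the item
}

function buildListItems(services: string[]): ListItemPresentation[] {
  return services.map((name, index) => ({
    graphicalLabel: `${index + 1}. ${name}`,
    // e.g. "select weather"; an index-based grammar entry would be a possible fallback
    voiceCommand: `select ${name.toLowerCase()}`,
  }));
}

// Content only known at runtime, e.g. the currently available services.
const items = buildListItems(["Weather", "News", "Navigation"]);
// items[1] -> { graphicalLabel: "2. News", voiceCommand: "select news" }
```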

While the first case is addressed by the list interactor itself, which takes care of the creation of the items to display, the second case is handled by the provisioning of templates within the DynamicInput and DynamicOutput interactors. These interactors handle the creation of interaction elements and take care of their presentation or input processing. In the third case, the dynamic interactors allow the definition of different variants that can be activated according to other model information. Variants and template utilization are handled via mappings to the execution elements, and the activated elements are stored as situation (child) elements. Storing these elements as situation elements is crucial for the approach, as they are created at runtime according to the model processing logic and are not provided by the designer.
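The following sketch outlines, under assumed names, how a DynamicOutput-like interactor could instantiate a template per domain object, select one of several design-time variants at runtime, and store the resulting elements as situation (child) elements; it is a simplification of the described mechanism, not its actual implementation.

```typescript
// Hypothetical dynamic output interactor with templates and runtime-selected variants.
interface DomainObject { caption: string; imageUrl?: string; }

interface Template<T> {
  variant: string;                       // e.g. "buttons" or "image"
  appliesTo: (data: T[]) => boolean;     // selection condition evaluated at runtime
  instantiate: (data: T[]) => string[];  // creates concrete element descriptions
}

class DynamicOutput {
  situationElements: string[] = []; // created at runtime, not provided by the designer

  constructor(private templates: Template<DomainObject>[]) {}

  update(data: DomainObject[]): void {
    // Pick the variant whose condition matches the current domain data.
    const template = this.templates.find((t) => t.appliesTo(data));
    this.situationElements = template ? template.instantiate(data) : [];
  }
}

const output = new DynamicOutput([
  {
    variant: "image",
    appliesTo: (d) => d.length === 1 && !!d[0].imageUrl,
    instantiate: (d) => [`image(${d[0].imageUrl})`],
  },
  {
    variant: "buttons",
    appliesTo: (d) => d.length <= 5,
    instantiate: (d) => d.map((o) => `button(${o.caption})`),
  },
]);

output.update([{ caption: "Start" }, { caption: "Stop" }, { caption: "Pause" }]);
// output.situationElements -> ["button(Start)", "button(Stop)", "button(Pause)"]
```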

Based on these cases and requirements for the mappings between interactors, a set of mapping types expressing the described means can be identified, as described in the next section.