

3.4.10. Discussion

In this section, eight selected architectures for the creation of multimodal, distributed, and adaptive user interfaces have been presented and common aspects have been identified.

The architectures have been developed with different foci and different applications in mind and cover various abstraction levels. Compared to the features identified in chapter 2, none of the architectures completely covers all of the features, but each approach addresses some of the features and each of the features has been covered by at least one of the architectures. However, if one were to set out to implement a Ubiquitous User Interface, there would be no framework, architecture, tool, or reference implementation covering all the needs.

The following tables provide a comparison of different aspects of the presented architectures with respect to the features of UUIs. Table 3.1 compares the aspects related to multimodality:

• Multimodality: Modalities covered by the approach.

• Fusion: Describes how the combination of user input from multiple modalities is supported.

• Fission: Identifies the means to separate output across modalities.

• Separation of input and output: Denotes the capability of the system to separate input and output on the UI level. A minimal code sketch of these aspects follows Table 3.1.

| Framework | Multimodality | Fusion | Fission | I/O Separation |
|---|---|---|---|---|
| W3C MIF | voice, handwriting, keyboard, graphic | multimodal integration component | output generation component that selects the used modalities | explicitly separates input processing and output generation |
| MMDS | voice, handwriting, gesture, face, gaze, lip reading, keyboard, graphic, haptic output | fusion component | response generator component | input and output interface are distinguished |
| ICARE | input only, e.g. voice, mouse, location/orientation tracker, graphics | composition components (implement the CARE properties) | fission is not addressed | considers only input |
| Cameleon-RT | not multimodal | fusion is not addressed | fission across modalities is not addressed | does not separate input and output |
| DynaMo-AID | input and output modalities depend on the available UIML renderers | fusion is not explicitly addressed | fission across modalities is not addressed | does not separate input and output |
| FAME | input is abstracted as observers, output depends on available/supported devices | directed by the Behavioral Matrix and controlled by the adaptation module | directed by the Behavioral Matrix and controlled by the adaptation module | distinguishes user input and presentation updates |
| DynAMITE | addresses e.g. avatars, speech, gesture, position recognition, haptics | fusion component aims at deriving user intentions | presentation planning and generation components are proposed | perception and rendition are distinguished |
| SmartKom | supports gesture, speech, graphics and a character agent | time-stamped hypotheses and unification grammar | presentation pipeline with presentation planner | separates intention analysis and presentation planning |

Table 3.1.: Comparison of the architectures, part 1.
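
To make the fusion, fission, and I/O separation aspects compared above more tangible, the following minimal sketch models them as separate components, as most of the surveyed architectures do on the conceptual level. All type and member names are illustrative assumptions and are not taken from any of the frameworks in Table 3.1.

```java
import java.util.List;

// All names below are hypothetical; they illustrate the concepts, not any
// concrete framework's API.

/** A time-stamped event on a single modality (speech, pen, keyboard, ...). */
record ModalityEvent(String modality, String payload, long timestamp) {}

/** The result of fusion: an abstract, modality-independent user intention. */
record UserIntention(String action, double confidence) {}

/** Modality-independent content that still has to be rendered for the user. */
record AbstractOutput(String content) {}

/** Fusion: combines input events from several modalities into one intention. */
interface FusionComponent {
    UserIntention fuse(List<ModalityEvent> inputEvents);
}

/** Fission: distributes abstract output across the available output modalities. */
interface FissionComponent {
    List<ModalityEvent> split(AbstractOutput output, List<String> availableModalities);
}
```

Modeling fusion and fission as distinct components in this way corresponds to the I/O separation column of Table 3.1.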

Table 3.2 compares distribution and adaptation as additional aspects directly related to the features of UUIs, as well as main aspects like the functional core and the underlying model:

• UI Distribution: Identifies the capabilities of the system to distribute a UI across multiple interaction resources and to dynamically change that distribution at runtime. A minimal code sketch of the distribution and adaptation aspects follows Table 3.2.

• Adaptation: Denotes the capabilities to adapt the UI to the context of use.

• Functional Core: Identifies the capabilities of the system to connect to external application functions and services.

• Modeling Approach: Lists the models supported by the approach.

| Framework | Distribution | Adaptation | Functional Core | Modeling Approach |
|---|---|---|---|---|
| W3C MIF | focus on multimodal interaction | does not focus on adaptation | provides a component representing the available application functions | not model-based |
| MMDS | focus on multimodal interaction | no focus on adaptation | integration of the functional core is not explicitly addressed, but tasks and a database are considered | not model-based |
| ICARE | considers only input | no focus on adaptation | based on Arch and integrates a functional core adapter | not model-based |
| Cameleon-RT | provides a distribution layer supporting the handling of distributed components | provides an open adaptation manager | integration of the functional core is not explicitly addressed | models can be considered, but are not explicitly addressed |
| DynaMo-AID | provides a distribution manager | context adaptation is considered (e.g. for task selection) | the functional core is integrated via a data controller making service calls based on the task model | considers a task-based application model with multiple variants for different contexts |
| FAME | does not explicitly focus on distribution, but provides multimodal fission | based on the Behavioral Matrix | integration of the functional core is not explicitly addressed | platform & devices, environment, user and interaction models are considered |
| DynAMITE | addresses interaction within distributed environments | context is considered, but dynamic adaptation is not explicitly discussed | considers the control of functions of the external world | domain, discourse, user, resources and environment models are considered at runtime |
| SmartKom | does not focus on distribution | dynamic action planning can consider context information | a function model connects external services | interaction, discourse-context and function models are considered |

Table 3.2.: Comparison of the architectures, part 2.
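
The distribution and adaptation columns can be summarized in a similar minimal sketch, loosely inspired by Cameleon-RT's distribution layer and open adaptation manager. Again, all identifiers are illustrative assumptions rather than the actual API of any of the compared frameworks.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical names for illustration only.

/** An interaction resource, e.g. a display, a speaker, or a touch surface. */
record InteractionResource(String id, String type) {}

/** A part of the UI that can be (re)assigned to interaction resources at runtime. */
record UiFragment(String id) {}

/** Distribution: tracks which UI fragment is rendered on which resource. */
class DistributionManager {
    private final Map<UiFragment, InteractionResource> placement = new HashMap<>();

    /** Moves a fragment to another resource, e.g. from a phone to a wall display. */
    void migrate(UiFragment fragment, InteractionResource target) {
        placement.put(fragment, target);
    }

    InteractionResource resourceOf(UiFragment fragment) {
        return placement.get(fragment);
    }
}

/** Adaptation: reacts to context changes, e.g. by triggering a redistribution. */
interface AdaptationManager {
    void onContextChange(Map<String, String> context, DistributionManager distribution);
}
```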

A major drawback of the presented approaches is the lack of distribution and adaptation support within the MIF, MMDS and ICARE architectures. ICARE is additionally limited to multimodal input and does not yet support the creation of multimodal output. While aiming at UI adaptation, Cameleon-RT and DynaMo-AID lack support for multimodal interaction. The DynAMITE system aims at the creation of multimodal systems with a focus on distributed interaction in smart environments. While the system is able to incorporate context information into the interaction, it does not focus on providing adaptive user interfaces. SmartKom provides a very interesting approach to create symmetric multimodal systems, with a focus on speech and gesture modalities, but does not address the dynamic combination and alteration of these modalities. Adaptation is considered only by the action planning component.

The most interesting approach from the perspective of this work is the FAME framework and architecture. It presents a model-based approach to generate adaptive multimodal user interfaces. The framework uses an interaction model comprising multiple templates for different modalities and modality combinations, and facilitates a Behavioral Matrix to select the most appropriate template for the current interaction context. Although this approach provides means to adapt to predefined contexts, it facilitates neither open adaptation nor a semantic understanding of the templates by the system. It is also unclear how the templates for the different modalities are defined and how they are synchronized at runtime. Additionally, the approach lacks any means to integrate a functional core or backend services within the developed application.
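
To illustrate why a matrix of predefined contexts cannot provide open adaptation, consider the following minimal sketch of such a template selection. The class and all identifiers are hypothetical and do not reflect FAME's actual implementation; the point is merely that every context/modality cell has to be enumerated at design time.

```java
import java.util.Map;

/** A FAME-inspired sketch: contexts and modality combinations map to templates. */
class BehavioralMatrixSketch {
    /** Rows: interaction contexts; columns: modality combinations; cells: templates. */
    private final Map<String, Map<String, String>> matrix;

    BehavioralMatrixSketch(Map<String, Map<String, String>> matrix) {
        this.matrix = matrix;
    }

    /** Returns the template registered for the given context and modality set. */
    String selectTemplate(String context, String modalities) {
        return matrix.getOrDefault(context, Map.of())
                     .getOrDefault(modalities, "fallback-template");
    }

    public static void main(String[] args) {
        var sketch = new BehavioralMatrixSketch(Map.of(
                "driving", Map.of("speech+audio", "hands-free-template"),
                "office", Map.of("graphic+keyboard", "desktop-template")));
        // Works for anticipated contexts, falls back for anything unforeseen.
        System.out.println(sketch.selectTemplate("driving", "speech+audio"));
        System.out.println(sketch.selectTemplate("kitchen", "speech+audio"));
    }
}
```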

In summary, the presented frameworks and architectures cover a broad range of topics and provide various capabilities to create innovative user interfaces for different purposes.

However, none of the approaches covers all identified features and thus none of the architectures, in its current state, is suitable for the creation of Ubiquitous User Interfaces for smart environments.