
6. Information Displays

The dimensions of coloured light are not perceived as equivalent, which prevents them from being used as three independent parameters. This property differentiates colour from position.

The two low-level visual qualities of texture and shape can be controlled via high-dimensional parameter sets. However, their exact dimensionality is difficult to determine and depends on the actual task. Perceptually, the parameters of texture and shape are strongly interdependent: one parameter influences the perception of the others. Although this interdependency is a promising subject for further research, it will not be elaborated here, since it does not contribute to the argumentation of this thesis.

For more information, please refer to G. Stiny’s book on Shape [Sti06].

6.2.1. Examples

There is a large number of widely used and well-known techniques for visual displays. However, a detailed overview is clearly beyond the focus of this work. I therefore restrict the following description to a few landmark techniques and refer to Berthold et al. for a more detailed overview [BH03].

Geometric-based displays are primarily used to represent multidimensional data sets. Representatives of this class of visualisations are e.g. scatter plot matrices [Cle93] [And72] or projection pursuit techniques [Hub85], by which users may define how data is geometrically shown to them, e.g. as a vector field or as parallel coordinates.

In a completely different approach, Iconic Displays (or Glyphs) map attribute values derived from multidimensional data to features of otherwise unrelated icons. Standard icon sets are e.g. Stars [War94], TileBars [Hea95] or Chernoff Faces [Che73].

In the Dense Pixel display type, each dimension of a data item is mapped to the colour of one pixel element. All pixels for one data item are then grouped together and rendered against the Dense Pixels of the other data items [Kei00].
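The Dense Pixel idea can be sketched in a few lines. The following is a minimal illustration and not the implementation behind the design study mentioned below; the function name, the row-per-item grouping, and the greyscale colour mapping are my own assumptions:

```python
import numpy as np

def dense_pixel_display(data, cmap=lambda v: (v, v, v)):
    """Dense Pixel sketch: each dimension of a data item becomes the
    colour of one pixel; the pixels of one item are grouped into a
    contiguous block (here: one image row per item)."""
    data = np.asarray(data, dtype=float)
    # normalise each dimension to [0, 1] so it maps onto a colour range
    lo, hi = data.min(axis=0), data.max(axis=0)
    norm = (data - lo) / np.where(hi > lo, hi - lo, 1.0)
    # one row of pixels per data item, one RGB pixel per dimension
    return np.array([[cmap(v) for v in item] for item in norm])

items = np.random.rand(100, 8)       # 100 data items, 8 dimensions
img = dense_pixel_display(items)     # shape: (100, 8, 3)
```

Grouping all pixels of one item into one row is only one possible layout; recursive or space-filling groupings are equally valid.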

Stacked display techniques represent data in a hierarchical concept. The Dimensional Stacking technique introduced by LeBlanc et al. [LWW90] embeds several visualisations into one bigger meta-visualisation. The display fields may be rendered according to one of the other visualisation techniques.

All above-described visualisation techniques can also be extended by introducing time-based dynamics. An example of a Dense Pixel Display, implemented by me as a design study, is shown in Figure 6.1.

6.3. Auditory Displays and Sonification

In contrast to human visual perception, the auditory senses are well developed concerning time-varying structures like rhythms or patterns. Other features of the auditory modality include a native multi-person involvement caused by its undirected nature, mass delivery of information, native support of time-based structures, and the flexibility to range from subconscious to alarming.

Auditory Displays utilise these features for information mediation by representing data and algorithmic structures via sound. As of today, six types of Auditory Display techniques are distinguished: Auditory Alarms, Auditory Icons, Earcons, Audification, Parameter Mapping and Model-Based Sonification [Her08]. While the first three types can be classified as (static) Auditory Displays, the latter three are usually grouped under Sonification.

The evolving discipline of Sonification is therefore a sub-field of Auditory Displays that uses the human auditory capabilities to (dynamically) represent data. It is defined by Hermann [Her08] as

A technique that uses data as input, and generates sound signals (eventually in response to optional additional excitation or triggering) may be called sonification, if and only if

(C1) The sound reflects objective properties or relations in the input data.

(C2) The transformation is systematic. This means that there is a precise definition provided of how the data (and optional interactions) cause the sound to change.

(C3) The Sonification is reproducible: given the same data and identical interactions (or triggers) the resulting sound has to be structurally identical.

(C4) The system can intentionally be used with different data, and also be used in repetition with the same data.
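Conditions (C2) and (C3) are operational and lend themselves to an automated check: render the same data repeatedly and compare the results. The following sketch illustrates this; the function names are my own, not part of Hermann's definition:

```python
import numpy as np

def is_reproducible(sonify, data, trials=2):
    """Check condition (C3): with identical data and no interaction,
    repeated renderings must be structurally identical."""
    renders = [np.asarray(sonify(data)) for _ in range(trials)]
    return all(np.array_equal(renders[0], r) for r in renders[1:])

# a deterministic toy sonification satisfies (C3) ...
deterministic = lambda d: np.sin(np.asarray(d))
# ... while one with un-seeded randomness does not
noisy = lambda d: np.asarray(d) + np.random.rand(len(d))
```

Structural identity is tested here as sample-wise equality; for synthesis engines with floating-point jitter, a tolerance-based comparison would be the appropriate relaxation.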

Auditory Alarms In complex working environments such as medical operating rooms or hospitals in general, alarm signals are often needed to indicate that a reaction to a certain incident is demanded. Research in Auditory Alarms specialises in the design and analysis of such alarms, focusing on their separation and differentiation into categories like urgency and type.

Earcons If discrete states of a system (e.g. a computer program) are given, each of them can be mapped onto a specific predefined sound to display the current state. This approach is called Earcon and was introduced in 1994 by Brewster et al. [BWE94].

Owing to this very limited and explicit mapping, there are only few possibilities to apply the method to data exploration. For example, the discrete results of a classification method could be mapped to specific sounds. In this case, the designer has to act according to psychological findings on the human perception system.
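The explicit, enumerable nature of such a mapping can be made concrete as a simple lookup table. The state names and sound files below are hypothetical placeholders, chosen only to illustrate the classification-result example:

```python
# Hypothetical earcon table: each discrete system state (here, the
# result of a classification) is bound to one predefined sound.
EARCONS = {
    "class_a": "earcon_rising_triad.wav",
    "class_b": "earcon_falling_triad.wav",
    "unknown": "earcon_neutral_beep.wav",
}

def earcon_for(state):
    # fall back to a neutral earcon for states without a mapping
    return EARCONS.get(state, EARCONS["unknown"])
```

The table makes the limitation visible: every state must be anticipated and designed for in advance, which is exactly why Earcons scale poorly to open-ended data exploration.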

Auditory Icons Sounds of everyday actions are mapped onto equivalent virtual events. For example, the click that can be heard when pressing the shutter release of an analogue camera is mapped onto the shutter button of its digital equivalent to let users know that they took a picture [Gav94].

Like Earcons, this approach is of limited use for data exploration, since the choice of sounds depends to a large extent on the specific data domain.

Audification Let (xα)α=0...m be an ordered list of multidimensional data records xα ∈ D^n, which can be represented as a discrete time series. Usually the number m of items is high, and an order is given by the measuring point in time. Examples of this data type are electroencephalograms (EEG) or seismographic data (see Section 2.1). Such data types are candidates for direct mapping onto loudspeakers as audio streams on several channels, one for each dimension of xt. This is achieved by setting each multichannel sample s[t] ∈ R^n to the appropriate data item xt. If there are variations in the data set, they are translated into variations of the samples, which leads to audible effects, provided that the variations lie in the human audible range. This method is commonly known as Audification. Audification of seismographic data was investigated by Hayward [Hay94] and Dombois [Dom01] [Dom02].

One prominent disadvantage of this approach, however, is that periodic patterns of the data have to lie in the audible frequency range of the human ear, ranging from approximately 40 Hz to about 4 kHz. This can be addressed by pitching the whole data set, i.e. by compressing or stretching the time axis. This possibly leads to another problem, based on the linkage of temporal and spectral components in audio streams: given a sampling rate of 44.1 kHz, a data set must consist of 44 100 data items to produce one second of sound. The data set therefore has to consist of many data items in order for the Audification to produce any perceivable sound. To avoid these problems, several strategies have been developed. Utilising Granular Synthesis (cf. Section 6.4.1) for audification, as described by de Campo [dCFH04], allows a completely independent control of temporal and spectral components. However, such techniques introduce additional algorithm-based artefacts that may occlude the actual data representation.
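The one-sample-per-item mapping and the time-axis compression described above can be sketched as follows. This is a minimal one-channel illustration under my own assumptions (normalisation scheme, linear-interpolation resampling), not a reference implementation:

```python
import numpy as np

def audify(data, rate_factor=1.0):
    """Audification sketch for a one-dimensional data series: every
    data item becomes one audio sample. rate_factor > 1 compresses
    the time axis, shifting periodic patterns up in frequency."""
    x = np.asarray(data, dtype=float)
    peak = np.abs(x).max()
    x = x / peak if peak > 0 else x      # normalise to [-1, 1]
    if rate_factor != 1.0:
        # naive resampling by linear interpolation
        idx = np.arange(0, len(x), rate_factor)
        x = np.interp(idx, np.arange(len(x)), x)
    return x

# a 10 Hz pattern in 44 100 measurements is inaudible when played back
# at 44.1 kHz; compressing the time axis by 100 moves it to 1 kHz
data = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 44100))
samples = audify(data, rate_factor=100)
```

The example also exposes the data-hunger mentioned above: the 44 100 measurements yield only 441 samples, i.e. a hundredth of a second of sound, after compression.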

Parameter Mapping Sonification Given an ordered multidimensional data set

x0, x1, . . . , xm−1 = X ∈ R^(m×n). (6.1)

Each data point xα can be mapped by a function R^n → R^p to a p-dimensional parameter vector of sound attributes that is used to feed a predefined sound rendering process. Take for example three-dimensional data points

xα = (xα0, xα1, xα2)^τ;  α = 0 . . . m−1 (6.2)

A Parameter Mapping Sonification may characterise m sinusoidal grains⁴ using the identity as mapping function. In this case, xα0 is mapped onto the α'th grain's frequency, xα1 onto its amplitude, and xα2 onto its duration. This way, a Parameter Mapping Sonification can use all available audio signal parameters to convey the given data values to the user.
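The identity-mapping example can be rendered in a few lines. The Hann envelope and the sequential concatenation of grains are my own simplifying assumptions; the sinusoidal grain proper is defined later in Equation 6.3:

```python
import numpy as np

SR = 44100  # sampling rate in Hz

def sinusoidal_grain(freq, amp, dur):
    """One enveloped sinusoidal grain of the given frequency (Hz),
    amplitude and duration (s); a Hann window avoids onset clicks."""
    t = np.arange(int(dur * SR)) / SR
    return amp * np.hanning(t.size) * np.sin(2 * np.pi * freq * t)

def parameter_mapping_sonification(X):
    """Identity mapping: x0 -> grain frequency, x1 -> amplitude,
    x2 -> duration; the grains are concatenated in data order."""
    return np.concatenate([sinusoidal_grain(*x) for x in X])

# three data points, already scaled to audible parameter ranges
X = [(440.0, 0.5, 0.1), (880.0, 0.3, 0.2), (660.0, 0.4, 0.1)]
sound = parameter_mapping_sonification(X)
```

Note that the data here is assumed to be pre-scaled into perceptually useful ranges; in practice that scaling step is itself part of the mapping methodology discussed next.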

To specify the mappings in a meaningful way, a specific mapping methodology has to evolve from the data domain. If no domain-specific information is available, it cannot be determined whether one mapping is better than another: since the different sound synthesis parameters change the resulting sound in differently well perceptible ways, every mapping methodology may introduce an unintended structure that can drown out the structural information inherent in the data.

⁴ A sinusoidal grain will be defined in Equation 6.3.

Model-Based Sonification Why should data produce sound? Normally, data can be perceived as passive elements of the world or of its description. Passive elements only act in response to another action; they are reactive. From this point of view, one has to excite these passive objects to obtain structure-borne sounds. This suggests designing an excitable model in which one can embed data to produce sound that is directly influenced by the data and the stimulus, e.g. the interaction given by the user. This approach is called Model-Based Sonification. Thus the data becomes, more or less directly, the sounding instrument on which the user can operate [HR99]. According to Hermann, the following elements have to be defined for a Sonification Model:

Setup a model of dynamic elements in a vector space (the model space);

Dynamics rules for how elements in the model space interact and react to external triggers, e.g. by equations of motion, and their initial state;

Excitation options and parameters of the model that users can manipulate;

Sound Link Variables variables linking the dynamics of the model to physical audio signals;

Listener sound wave transfer and receiver characteristics.
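These elements can be made concrete with a toy model. The sketch below is my own illustrative construction, not one of Hermann's published Sonification Models: each data point becomes a damped oscillator (Setup), its pitch derives from the point's norm (Dynamics), a single strike rings all oscillators (Excitation), and the summed displacements form the audio signal (Sound Link Variable):

```python
import numpy as np

SR = 44100  # sampling rate in Hz

def excite(data, dur=0.5):
    """Toy Sonification Model: strike a set of damped oscillators,
    one per data point, and sum their displacements into a signal.
    All mappings (norm -> pitch, fixed damping) are illustrative."""
    t = np.arange(int(dur * SR)) / SR
    signal = np.zeros_like(t)
    points = np.atleast_2d(np.asarray(data, dtype=float))
    for x in points:
        freq = 200.0 + 50.0 * np.linalg.norm(x)   # stiffness -> pitch
        signal += np.exp(-6.0 * t) * np.sin(2 * np.pi * freq * t)
    return signal / len(points)                   # keep amplitude bounded

response = excite([[0.0, 1.0], [3.0, 4.0], [1.0, 1.0]], dur=0.25)
```

Because the sound emerges from the model's reaction to excitation, repeated strikes on the same data always yield the same timbre, directly satisfying conditions (C2) and (C3) of the definition above.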

As the inventor of this technique, Hermann proposed several Sonification Models, such as Principal Curve Sonification [HMR00] or Growing Neural Gas Sonification [HR04], featuring the Model-Based Sonification approach. In Section 9.5, I will present a Sonification Model as part of the TDS Tangible Auditory Interface.