
Bad for Data, Good for the Brain:

Knowledge-First Axioms For Visualization Design

Michael Correll

University of Wisconsin-Madison Department of Computer Sciences

mcorrell@cs.wisc.edu

Michael Gleicher

University of Wisconsin-Madison Department of Computer Sciences

gleicher@cs.wisc.edu

ABSTRACT

Traditionally, visualization design assumes that the effectiveness of visualizations is based on how much, and how clearly, data are presented. We argue that visualization requires a more nuanced perspective. Data are not ends in themselves, but means to an end (such as generating knowledge or assisting in decision-making). Focusing on the presentation of data per se can result in situations where these higher goals are ignored. This is especially the case for situations where cognitive or perceptual biases make the presentation of “just” the data as misleading as willful distortion.

We argue that we need to de-sanctify data, and occasionally promote designs which distort or obscure data in service of understanding. We discuss examples of beneficial embellishment, distortion, and obfuscation in visualization, and argue that these examples are representative of a wider class of techniques for going beyond simplistic presentations of data.

1. INTRODUCTION

Information visualization is sometimes tacitly evaluated in terms of “data throughput” — i.e. we have created a successful visualization if we present the most data as quickly and as efficiently as possible. Many design mantras and guidelines originate from this interpretation, especially formative principles from Tufte such as calls for “clarity,” high “data-ink ratios,” and low “lie factors” — all heuristics for visualization design couched in terms of how well a particular visualization presents the data. While these heuristics sometimes have utility, they are insufficiently nuanced to assist designers of visualizations. Dogmatic adherence to the letter rather than the spirit of these laws can result in designs which might satisfy from the perspective of data-ink or lie factor but fail to support human beings in their actual uses of data.

Visualization, while data-driven, ought to be human-centric.

Human beings are the ultimate targets of our designs; it is not enough to simply show them the data, we must also make sure that human beings comprehend and responsibly use the data. There are many cases where designs that are “good for the data” may not be “good for people” — perceptual and cognitive biases, human methods of analysis and reasoning, and human priorities all contribute to situations where we might wish to distort, obfuscate, or scramble the data in our visualization in order to make things clearer or easier for humans.

In this work we present a number of examples of these sorts of beneficial distortions, and how we might modify or extend Tufte-inspired design principles to account for the complexity and richness in how humans use and comprehend data in information visualizations. We focus especially on conflicts between “data-first” designs (designs based on maximizing the data per pixel, and minimizing extraneous visual information) and “knowledge-first” designs (designs based on improving human use of the visually presented data, for knowledge or decision-making). These design conflicts arise for a number of reasons: cognitive and perceptual biases which may promote incorrect judgments from overly simplistic presentations of data, different epistemological perspectives requiring different uses of data, and the needs of persuasion. We conclude by offering up some rules of thumb which can serve as simple guides for designers that allow for consideration of these conflicts.

2. THE CASE FOR ADORNMENT

Tufte provides three definitions of the data-ink ratio, which he argues are equivalent [31]:

1. The data-ink ratio is an explicit numerical value, the ratio of data-ink to the total ink used to print the graphic:
\[ \text{data-ink ratio} = \frac{\text{data-ink}}{\text{total ink used to print the graphic}}, \]

2. The proportion of a graphic’s ink devoted to the non-redundant display of data-information,

3. 1.0 minus the proportion of a graphic that can be erased without loss of data-information.

All of these definitions share the idea that visualizations should maximize “data-throughput,” and that this is to some extent a quantifiable, a priori, and domain-agnostic property of a chart.
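Definitions 1 and 3 coincide under a simple accounting assumption (ours, added for exposition): if every unit of ink is either data-ink or ink that can be erased without loss of data-information, then
\[ \text{data-ink ratio} \;=\; \frac{\text{data-ink}}{\text{total ink used to print the graphic}} \;=\; 1.0 - \frac{\text{erasable non-data-ink}}{\text{total ink}}. \]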

Maximizing the data-ink ratio in the Tufte sense means avoiding elements of a design that are extraneous for showing the data per se. There has been a great deal of recent argumentation both for and against adornment and embellishment in visualizations. This debate has mostly centered around the concept of “chartjunk,” visual elements of charts which are seen as extraneous. Some authors have argued that these visual elements can improve engagement, comprehension, and memorability [4, 7, 17], while others have been more skeptical of these arguments [12]. Most recently, Haroz et al. [14] have performed extensive quantitative research showing that the careful use of domain-relevant glyphs can result in better performance compared to more standard glyph-less graphs.



Figure 1: An example of “cruel pies” from Sygnatur [29], as used in Dragga & Voss [11]. While in the abstract both pie charts are encoding identical data types, one pie shows employment records while the other shows fatalities. By excluding the human element and moral connection to the data at the expense of abstract similarities in encoding, Dragga & Voss argue that this information visualization is “cruel.” Adornment and embellishment beyond the minimalist pie chart could better emphasize the qualitative difference in information types.

(a) MS Sans Serif (b) MS Serif

Figure 2: Serifs are an example of “wasted” ink by a literal interpretation of Tufte’s data-ink ratio; serifs are, by definition, adornments that are not strictly necessary to form a letter. However, serifs impact other factors such as reading speed and legibility that might justify this “wastage.” The choice of typeface might even impact higher-level properties such as the believability of the contents [22]. There are many considerations when it comes to choosing typefaces, but the complexity of how typefaces are read and interpreted means that data-ink is not an informative measure.


Implicit in arguments against chartjunk is the assumption that there exists a “neutral” presentation of “just the data,” and that designers then place extraneous graphical elements on top of this presentation. This concept is to some extent self-defeating: even this idealized version of the chart (the bare-bones or minimalist or “Tufte-ist” version) is not neutral. The mere decision to present any type of chart at all, even an irrelevant one, can create measurable impacts in how data are perceived [30]. Data domains are also not created equal. For instance, if I present two identical styles of chart, one of sales figures and another of casualty figures, by treating these two data types identically I am giving them equal weight to the viewer even though from a moral or ethical standpoint the latter ought to have vastly more weight than the former. Dragga and Voss call this equal weighting in the service of minimalism “cruel,” and call for embellishment to lend additional emotional weight to datasets that are otherwise structurally identical to more “neutral” sources [11] (see Fig. 1). This is an example of where ink that is “wasted” in the data-presentation sense still has utility in grounding and contextualizing the data for the viewer.

Common objections to the data-ink ratio are concerned with exceptions to the general rule that a higher data-ink ratio is a better design than a lower one (e.g. when designing for memorability one might choose an otherwise non-optimal design in the data-ink sense). We go a step farther and claim that the data-ink ratio is not informative for the effectiveness of designs in general. If interpreted literally, the data-ink maxim is simply false. For instance, serifs in typefaces are by construction visual elements that can be erased without loss of information, but that does not mean that in all visualizations sans serif fonts are strictly superior — in some cases serifs are more legible than sans serif fonts, and the landmarks provided by serifs may increase reading speed [5, 3] (see Fig. 2). If interpreted figuratively, the maxim to some extent becomes tautological: “a visualization is more useful if all the parts that aren’t useful are removed.”

There are several counterexamples that fall between these two extreme interpretations. An example is in how designers might choose the size of glyphs — strictly speaking, once glyphs are large enough to be legible, additional size reduces the data-ink ratio of the resulting visualization, and yet related abilities (such as the ability to distinguish color [28]) increase with the size of glyphs. A glyph might be perfectly legible (and thus it would be a decrease in the data-ink ratio to increase its size) but still insufficiently large for the types of comparisons or judgments required for the relevant analysis tasks. Likewise, connections added between points in dot plots are, strictly speaking, superfluous marks, but encourage particular interpretations of the data (c.f. bar charts, which even when presenting identical data to line graphs still encourage comparison between individual data points more frequently than comparison of overall trends) [34]. In both cases the data-ink ratio does not lead us to an informed decision about design — the quantity and size of marks is less important than our existing design knowledge of what sort of information we wish the viewer to see, and how accurately we wish them to see it.
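To make the glyph-sizing example concrete, consider a minimal sketch (in Python) of choosing the smallest glyph that still supports a required color comparison. The threshold model below (a just-noticeable color difference of the form a + b/size, growing as glyphs shrink) and its coefficients are our illustrative assumptions, loosely in the spirit of the size-dependent model of [28], not that paper’s fitted model.

# Sketch: pick the smallest glyph size (in degrees of visual angle)
# at which a given color step remains discriminable. The threshold
# model and coefficients are illustrative assumptions.

def discriminability_threshold(size_deg, a=2.0, b=6.0):
    """Assumed just-noticeable color difference for a glyph of this size."""
    return a + b / size_deg

def minimum_glyph_size(required_delta_e, a=2.0, b=6.0):
    """Smallest size at which a color step of required_delta_e is visible.

    Solving required_delta_e = a + b / s for s gives s = b / (delta_e - a).
    """
    if required_delta_e <= a:
        raise ValueError("no size makes this color step discriminable")
    return b / (required_delta_e - a)

# A glyph may already be legible at 0.2 degrees, yet a palette whose
# smallest color step is delta_e = 10 only becomes reliable at:
print(minimum_glyph_size(10.0))  # 0.75 degrees: "wasted" ink, useful size

Under a data-ink reading, the extra size is pure waste; under this reading, it is the difference between a usable and an unusable encoding.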


A possible replacement that preserves much of the semantics of the original formulations is a modification of Occam’s razor: “do not needlessly multiply design complexity.” This formulation incentivizes simplicity, but allows for a more complete picture of what simplicity entails: perceptual or cognitive simplicity for the viewer may have little to do with the number of marks or data points in a visualization, and even if it did, it would be an overreach to quantify this value numerically for the purposes of comparison.

3. THE CASE FOR DISTORTION

Tufte [31] defines the lie factor as:
\[ \text{lie factor} = \frac{\text{size of effect shown in graphic}}{\text{size of effect in data}} \]

If the lie factor ≠ 1.0 then we are distorting the data by over- or under-weighting values. The visualization community, in many cases drawing from the existing work in the field of critical cartography [21, 33] and statistics [16], has extended this concept to other ways that visualizations can “lie” [25]. The Vis Lies annual event highlights examples of visualization that mislead in ways that are more subtle than exaggerating a data point. Common (but relatively minor) distortions such as altering the baseline and changing aspect ratios have measurable effects on how data are interpreted [24].
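A hypothetical worked example (the numbers are ours, not from [31] or [24]) shows how quickly a truncated baseline inflates the lie factor:

# Worked example: lie factor for a bar chart whose baseline is drawn
# at 45 rather than 0. All values are hypothetical.

def relative_effect(lo, hi):
    """Relative size of the change from lo to hi."""
    return (hi - lo) / lo

data_effect = relative_effect(50, 55)               # 0.10: a 10% increase
graphic_effect = relative_effect(50 - 45, 55 - 45)  # 1.00: bar height doubles

print(graphic_effect / data_effect)  # lie factor = 10.0

A modest 10% difference in the data is drawn as a 100% difference in ink, for a lie factor of 10.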

While we agree that a designer ought to present information honestly, we believe that this honesty is owed not to the data per se, but to the knowledge contained in the data. That is, we can design visualizations in which the lie factor is 1.0 based on how the data are presented, but systematic biases in cognition or perception would lead to falsehood and exaggeration in how the data are understood. To overcome cognitive biases we need to be willing to distort the fundamental placement and encoding of data: dishonest from the perspective of data, but honest from the perspective of knowledge.

An example of a beneficial distortion is Agrawala and Stolte’s “Route Maps” [1] for depicting suggested routes for travelers to follow. Rather than naïvely presenting the map with the route superimposed over it, Route Maps systematically distort the data to give greater weight to areas of the route that are more important or require more attention from the viewer. For instance, the first and last few steps of a route from one city to another might involve a large number of turns in relatively small geographic areas (maneuvering in city blocks), whereas the middle of the route might be a small number of turns in a large geographic area (the long stretch of highway between the two cities). Route Maps magnify and exaggerate these critical steps at the beginning and end, and simplify and compress the long stretch of highway driving (see Fig. 3).
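A minimal sketch of this style of distortion, assuming a simple logarithmic compression of segment lengths (the published Route Maps system [1] uses a considerably more sophisticated layout; treat this purely as an illustration of the principle):

import math

# Sketch: allocate display length to route segments so that short,
# turn-dense segments stay legible while long highway stretches are
# compressed. Log scaling plus a per-segment minimum is one plausible
# scheme, not the actual Route Maps algorithm.

def display_lengths(segment_km, total_px=600.0, floor_px=40.0):
    """Map geographic segment lengths (km) to display lengths (px)."""
    weights = [math.log1p(km) for km in segment_km]  # compress long segments
    scale = (total_px - floor_px * len(segment_km)) / sum(weights)
    return [floor_px + w * scale for w in weights]

# A route: short city maneuvers bracketing a 120 km highway leg.
route = [0.3, 0.2, 0.5, 120.0, 0.4, 0.3]
for km, px in zip(route, display_lengths(route)):
    print(f"{km:6.1f} km -> {px:6.1f} px")

Undistorted, the highway leg would claim about 99% of the display; here it claims roughly half, and every turn is guaranteed at least 40 pixels in which to be read.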

We can also distort data in the service of de-biasing, or to make certain aggregate properties of data easier to determine. In Correll et al. [10], we distort the apparent length of words in tagged text visualizations to counteract a perceptual bias in which numerosity is conflated with area. Likewise, by shuffling values within a time series (and thereby distorting the connection between position and temporal location), we can make certain aggregate statistics such as average value and variance easier to determine than if we had left the data alone [2].
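A minimal sketch of the shuffling manipulation, assuming per-window permutation of the series (illustrative of the kind of distortion studied in [2], not the exact experimental stimuli):

import random

# Sketch: shuffle values within fixed windows of a time series. The
# point-to-time mapping is deliberately destroyed inside each window,
# but every windowed aggregate (mean, variance, extremes) is untouched,
# and reads more easily as a visual texture.

def shuffle_within_windows(series, window=20, seed=0):
    rng = random.Random(seed)
    out = []
    for start in range(0, len(series), window):
        block = series[start:start + window]
        rng.shuffle(block)  # permute order inside this window only
        out.extend(block)
    return out

series = [float(i % 7) for i in range(100)]
shuffled = shuffle_within_windows(series)
assert sorted(series) == sorted(shuffled)  # same values, same aggregates

This is a lie factor of 1.0 with respect to values but not positions: the data are distorted, while the aggregate statistics of interest are not.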

(a) A non-distorted geospatial route. (b) Route Map from Agrawala and Stolte [1].

Figure 3: A beneficial distortion for the task of navigating using a map. In the non-distorted version the long (but, from a direction-finding sense, uninteresting) highway portion of the route dominates the display, whereas tight turns that take up very little geographic space are hard to make out. Distorting the map makes these turns easier to see.



Given the power of distortion over how data are perceived, it is not a step to be taken lightly. That being said, there is a distinction between being “truthful” and being “useful.” We can distort without necessarily misrepresenting the dataset to viewers.

4. THE CASE FOR OBFUSCATION

A common maxim is to strive for clarity in our visualizations. Kostelnick has attempted to place competing conceptions of clarity within the perspective of different rhetorical approaches to data [9]. These perspectives highlight that what counts as a “clear” visualization is highly contingent on not just the designer, but the audience, and even the critical framework within which we evaluate the purpose and extent of visualizations.

Even the choice of which visual variable to use to encode a data variable can create conflicts between utility and clarity. For instance, different visual variables have different levels of accuracy for comparison tasks [23], but different variables also have a different semiotic connection with quantities of interest [19]. Blur and saturation are, in the mental model of many viewers, tightly coupled with uncertainty. This is despite the fact that viewers are remarkably poor at comparing blur and saturation in actual displays, perhaps distinguishing only a few qualitative levels of difference [8]. Designers are therefore faced with two seemingly opposed definitions of clarity: task clarity (ease of interpretation for a specific comparison task) or semiotic clarity (ease of interpretation in terms of alignment with the mental model of the viewer). We can make an informed decision to support either type of clarity. An example is in the work of Borkin et al. [6], where the rainbow colormap was self-reported as being “clearer” for their users to interpret (since it was the ramp they were used to using in their existing tools). It was only after concrete measurement of the costs of such a colormap in terms of errors in analysis that users were willing to switch to superior non-spectral versions.

The project of de-biasing further complicates what can already be an under-defined and over-applied term. For instance, the “availability bias” is the tendency to overweight and overvalue recent or extreme data points at the expense of general trends. One remedy for this bias in visualizations is to make new data points in (for instance) streaming data visualization less visually salient; by intentionally making data less clear visually we make them cognitively clearer. Likewise, in the attempt by Micallef et al. [20] to address the “base rate fallacy” (in which humans do a remarkably poor job of assessing conditional probabilities as generated from processes such as Bayes’ rule), the most successful de-biasing occurred in situations where the exact numbers involved in the conditional probabilities were intentionally hidden from the viewers — by forcing the viewer to perform an additional consultation step using the visualization in order to retrieve values of interest, the viewers could no longer rely on their (in this case, flawed) intuitions to make judgments. The process of calculation was made more difficult, but performance was improved.
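The size of the gap between intuition and the correct conditional probability is easy to compute. The numbers below are the classic disease-screening textbook example (not the stimuli used in [20]): a 1% base rate, an 80% true positive rate, and a 9.6% false positive rate.

# Bayes' rule for the classic screening example. Intuition often says
# "the test catches 80% of cases, so a positive result means ~80%";
# the correct posterior is under 8%.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) by Bayes' rule."""
    true_pos = prior * sensitivity
    false_pos = (1.0 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

print(posterior(prior=0.01, sensitivity=0.80, false_positive_rate=0.096))
# 0.0776...: roughly 7.8%, an order of magnitude below typical intuition

A visualization that lets viewers read off the 80% directly invites exactly this error; one that forces them through the comparison of true and false positives does not.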


(a) 1665, from Hooke [15] (b) 1978, from Grumbling et al. [13]

Figure 4: Differing views on a “clear” diagram of a fly’s head over time. Both versions are simplifications and abstractions of a photographic view of the head, but different audiences have differing conceptions of clarity and different needs.

Conceptual “clarity” can often result in (sometimes beneficial) distortions to visualizations. Jack [18] argues that, in Robert Hooke’s Micrographia [15], the first visual presentation of images from microscopy, Hooke makes informed decisions about what complexity to show or hide, in the service both of making the images legible to an audience unfamiliar with such images and of advocating for the utility of microscopy in general. Hooke’s hand-drawn figures differ significantly from photographic views of microscopy, and also from the simplified diagrams used in today’s science literature (see Fig. 4).

Much (but not all) of the thinking in the visualization literature about different conceptions of clarity and utility is reflected in the work on task analysis and problem spaces. For instance, the Ecological Interface Design framework and its extensions [32] provide guidance for structuring design based on the goals and information requirements of users. At a lower level, there has been formative work on selecting between visualization designs based on specific tasks and vice versa [27, 35]. These design considerations are important, and noticeably absent from many data-first axioms, but they are not the only ones that matter: the epistemological and rhetorical environments in which data are collected and used may suggest different designs even for similar tasks and goals.

In general we should remember that “data” are inherently rhetorical — they are collected and curated for the purpose of argument [26]. How we present the data ought to be in service to our goals, which may or may not have much to do with the clarity of this presentation. This “clarity” itself may have a multitude of meanings: for instance, journalists, technical writers, and academics all strive to communicate clearly in writing, and yet it is stylistically very easy to distinguish a news article from a technical manual from a journal article. What it means for a visualization to clearly make its case to the viewer may have little to do with the literal visual clarity of data points.


5. CONCLUSION

We should be skeptical of formulations such as the lie factor and the data-ink ratio prima facie — they all purport to involve specific calculations and comparisons that occur before an intended viewer even sees the design. By overemphasizing data, and the conversion from data to screen, we lose sight of the great deal of activity that happens in the journey from screen to brain. If we wish to seriously tackle the problem of cognitive biases in the perception and use of visualizations, we need to be willing to sacrifice fidelity and data throughput to promote greater fidelity and bandwidth of knowledge.

In place of these data-first maxims we promote a set of perhaps less prescriptive but more general guidelines:

• Avoid Unnecessary Complexity: Complexity is not just visual complexity, but complexity in concept, readability, and interpretation. Even the ease with which conclusions from a chart are put into practice is a form of complexity. Something that is visually very simple can be quite complex in the viewer’s mind, especially if there is a clash in mental models between viewer and designer.

• Don’t Be Afraid of “Little White Lies”: Avoid harmful distortions and misrepresentations in your visualizations, but be aware of the distinction between misrepresenting the data and misrepresenting the facts. There is not a one-to-one relationship between pixels on the screen and knowledge in the head, so be willing to distort the former for benefits to the latter.

• Find Out What “Clarity” Means On A Case-By-Case Basis: There is no universal standard of what it means to clearly present data to an audience. Comparing designs in terms of clarity requires a consideration of the needs and rhetoric of viewers.

Note that these guidelines require discretion and judgment in their application: they are not dogmatic, and intentionally so. The process of concretizing these guidelines for a specific design — for instance, determining what complexity is necessary to present the information, determining mismatches between data and mental model, and investigating the rhetoric of one’s audience — is an important part of the design process. Careful application of these guidelines, if nothing else, encourages thinking that should already be part of how visualizations are designed. Overly simplistic, data-first guidelines, while possessing some of the characteristics associated with objectivity (quantification, invariance to context), encourage designers to do evaluations without proper consideration of the tasks and background of the intended audience.

There are two, somewhat contradictory, counter-arguments for preserving Tufte’s laws in their current form. The first is that these are guidelines, not rules — the assumption is that even Tufte would not literally interpret the guidelines he set out. However, if these design maxims can generate bad designs when interpreted literally, then why not replace them with guidelines that do not have this defect? The second objection is that Tufte’s laws represent a starting rather than an ending point for designers; that is, that they will either get you most of the way towards good designs, or that if you lack enough sophistication in the visualization field to consider nuanced design principles, they will at least improve your current thinking. The comparison here is to Newtonian physics — technically incorrect, but useful in many if not most real physics problems. Indeed, in many cases Tufte’s laws and the principles we lay out in this paper are largely in accord. Nevertheless, we believe that visualization as a field is mature enough, and situations where Tufte’s laws are dogmatically followed are common enough, that “good enough” is not good enough — part of our work as visualization designers (especially in academic contexts where there is a focus on both pedagogy and novel designs) should be to introduce nuance and mindfulness to our work.

6. ACKNOWLEDGMENTS

This work was funded by NSF award IIS-1162037 and a grant from the Mellon Foundation.

7. REFERENCES

[1] M. Agrawala and C. Stolte. Rendering effective route maps: improving usability through generalization. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 241–249. ACM, 2001.

[2] D. Albers, M. Correll, and M. Gleicher. Task-driven evaluation of aggregation in time series visualization. In Proceedings of the 2014 ACM Annual Conference on Human Factors in Computing Systems, pages 551–560. ACM, May 2014.

[3] A. Arditi and J. Cho. Serifs and font legibility. Vision Research, 45(23):2926–2933, 2005.

[4] S. Bateman, R. L. Mandryk, C. Gutwin, A. Genest, D. McDine, and C. Brooks. Useful junk?: The effects of visual embellishment on comprehension and memorability of charts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 2573–2582. ACM, 2010.

[5] M. L. Bernard, B. S. Chaparro, M. M. Mills, and C. G. Halcomb. Comparing the effects of text size and format on the readability of computer-displayed Times New Roman and Arial text. International Journal of Human-Computer Studies, 59(6):823–835, 2003.

[6] M. Borkin, K. Z. Gajos, A. Peters, D. Mitsouras, S. Melchionna, F. J. Rybicki, C. L. Feldman, H. Pfister, et al. Evaluation of artery visualizations for heart disease diagnosis. Visualization and Computer Graphics, IEEE Transactions on, 17(12):2479–2488, 2011.

[7] M. A. Borkin, A. A. Vo, Z. Bylinskii, P. Isola, S. Sunkavalli, A. Oliva, and H. Pfister. What makes a visualization memorable? Visualization and Computer Graphics, IEEE Transactions on, 19(12):2306–2315, 2013.

[8] N. Boukhelifa, A. Bezerianos, T. Isenberg, and J.-D. Fekete. Evaluating sketchiness as a visual variable for the depiction of qualitative uncertainty. Visualization and Computer Graphics, IEEE Transactions on, 18(12):2769–2778, 2012.

[9] S. K. Card, J. D. Mackinlay, and B. Shneiderman. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann, 1999.

[10] M. Correll, E. Alexander, and M. Gleicher. Quantity estimation in visualizations of tagged text. In Proceedings of the 2013 ACM Annual Conference on Human Factors in Computing Systems, CHI ’13, pages 2697–2706. ACM, May 2013.

[11] S. Dragga and D. Voss. Cruel pies: The inhumanity of technical illustrations. Technical Communication, 48(3):265–274, 2001.

[12] S. Few. The chartjunk debate: A close examination of recent findings. Visual Business Intelligence Newsletter, 2011.

[13] G. Grumbling, V. Strelets, et al. FlyBase: anatomical data, images and queries. Nucleic Acids Research, 34(suppl 1):D484–D488, 2006.

[14] S. Haroz, R. Kosara, and S. L. Franconeri. ISOTYPE visualization: Working memory, performance, and engagement with pictographs. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 1191–1200. ACM, 2015.

[15] R. Hooke. Micrographia, or, Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses, with Observations and Inquiries Thereupon; with a preface by R. T. Gunther. Dover Publications, 2003.

[16] D. Huff. How to Lie with Statistics. WW Norton & Company, 1954.

[17] J. Hullman, E. Adar, and P. Shah. Benefitting InfoVis with visual difficulties. Visualization and Computer Graphics, IEEE Transactions on, 17(12):2213–2222, 2011.

[18] J. Jack. A pedagogy of sight: Microscopic vision in Robert Hooke’s Micrographia. Quarterly Journal of Speech, 95(2):192–209, 2009.

[19] A. M. MacEachren, R. E. Roth, J. O’Brien, B. Li, D. Swingley, and M. Gahegan. Visual semiotics & uncertainty visualization: An empirical study. Visualization and Computer Graphics, IEEE Transactions on, 18(12):2496–2505, 2012.

[20] L. Micallef, P. Dragicevic, and J.-D. Fekete. Assessing the effect of visualizations on Bayesian reasoning through crowdsourcing. Visualization and Computer Graphics, IEEE Transactions on, 18(12):2536–2545, 2012.

[21] M. Monmonier. How to Lie with Maps. University of Chicago Press, 1991.

[22] E. Morris. Hear, all ye people; hearken, O Earth (part one). New York Times Opinionator, 2012.

[23] T. Munzner. Visualization Analysis and Design, chapter 5, pages 94–114. A K Peters/CRC Press, 2014.

[24] A. V. Pandey, K. Rall, M. L. Satterthwaite, O. Nov, and E. Bertini. How deceptive are deceptive visualizations?: An empirical analysis of common distortion techniques. In Proceedings of the ACM Conference on Human Factors in Computing Systems, 2015.

[25] B. E. Rogowitz, L. A. Treinish, S. Bryson, et al. How not to lie with visualization. Computers in Physics, 10(3):268–273, 1996.

[26] D. Rosenberg. Data before the fact. In L. Gitelman, editor, “Raw Data” Is an Oxymoron, pages 15–40. MIT Press, 2013.

[27] B. Shneiderman. The eyes have it: A task by data type taxonomy for information visualizations. In Visual Languages, 1996. Proceedings., IEEE Symposium on, pages 336–343. IEEE, 1996.

[28] M. Stone, D. Albers Szafir, and V. Setlur. An engineering model for color difference as a function of size. In Color Imaging Conference, 2014. To appear.

[29] E. Sygnatur. Logging is perilous work. Compensation and Working Conditions, 3(4):3–9, 1998.

[30] A. Tal and B. Wansink. Blinded with science: Trivial graphs and formulas increase ad persuasiveness and belief in product efficacy. Public Understanding of Science, 2014.

[31] E. R. Tufte. The Visual Display of Quantitative Information. Graphics Press, 2nd edition, 2001.

[32] C. Upton and G. Doherty. Extending ecological interface design principles: A manufacturing case study. International Journal of Human-Computer Studies, 66(4):271–286, 2008.

[33] D. Wood. Rethinking the Power of Maps. Guilford Press, 2012.

[34] J. Zacks and B. Tversky. Bars and lines: A study of graphic communication. Memory & Cognition, 27(6):1073–1079, 1999.

[35] J. Zhang. A representational analysis of relational information displays. International Journal of Human-Computer Studies, 45(1):59–74, 1996.
