
A Framework for Cognitive Process Models (Jarecki, Tan, & Jenny)

This chapter demonstrates that the term "process model" is ambiguously defined, by scholars and in publications alike, and proposes a framework specifying the properties of (general) cognitive process models.

6.1.1 Ambiguity of process model properties

The first question concerns whether process model properties are clearly specified — either implicitly by expert opinions, or explicitly in the literature.

H1a: Experts' judgments disagree about the properties of process models. As expected, experts who classified the decision-science models judged different models as process models, with low inter-rater agreement (Fleiss–Cuzick's κ = .27, with values above .60 regarded as 'good' agreement; Fleiss & Cuzick, 1979). Classifications by professors were slightly more consistent than those by researchers or students, but still low (κ = .33, .17, and .14, respectively). This suggests that meta-theories related to process models, most prominently Marr's (1982) three levels of analysis (computation, algorithm, implementation), have underspecified the properties of process models, despite their frequent use (e.g., Chater, 2009; Griffiths, Lieder, & Goodman, 2014; K. Huang, Sen, & Szidarovszky, 2012; Jones & Love, 2011; McClelland et al., 2010). When asked whether the algorithmic level clarifies what process models are, the 38 respondents familiar with Marr were divided equally between the extremes "does not clarify at all" and "clarifies completely" (χ²(2) = 14.105, p < .001, Cramér's V = 0.431). This indicates a lack of clarity about process model requirements among researchers.
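The chapter reports Fleiss–Cuzick's kappa, which handles unequal numbers of raters per item. As a minimal illustration of how such a chance-corrected agreement statistic is computed, the sketch below implements the closely related Fleiss (1971) kappa for a fixed number of raters per item; the function and the ratings matrix are illustrative, not the thesis's actual data or analysis.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' (1971) kappa for N items, each rated by the same number of raters.

    counts[i, j] = number of raters assigning item i to category j.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                      # raters per item (constant)
    p_j = counts.sum(axis=0) / counts.sum()        # overall category proportions
    # Per-item observed agreement: pairs of raters that agree, out of all pairs.
    p_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()  # observed vs. chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 5 models, 4 raters, categories (process, not-process).
ratings = np.array([[4, 0], [3, 1], [2, 2], [1, 3], [0, 4]])
print(round(fleiss_kappa(ratings), 3))  # prints 0.333
```

Values near 1 indicate near-perfect agreement, values near 0 indicate chance-level agreement, which is why κ ≈ .27 signals substantial disagreement among the experts.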

H1b: Definitions in the literature disagree about the properties of process models. The review uncovered that the process model as a method appears in different contexts. In part of the literature, process-type models are tested and compared against rational-type models. These models describe choices that either solve certain tasks optimally (rational models, J. R. Anderson, 1991a) or that reach optimality within a fixed margin of error (rational process models, Sanborn, Griffiths, & Navarro, 2010). In comparison, process models are seen as less flexible and rather mechanistic tools.

The second context sets process models in contrast to as-if-type models. These models either assume unrealistic mental operations (as-if models, cf. Berg & Gigerenzer, 2010) or leave open whether the computations are realizable, as long as the functional relationships capture behavior well (paramorphic models, P. J. Hoffman, 1960). In comparison to these models, process models are seen as more realistic and data-constrained tools, implementing, for example, cognitive capacity limits. Note that rational models also make no assumptions about underlying processes; thus, the first two connotations overlap.

Chapter 6 Results and discussion 21

Figure 6.1. A framework for cognitive process models. Grey boxes denote properties specific to process models; lines denote the interdependencies between the features of process models.

Thirdly, 'process model' refers to a class of dynamic, stochastic models of cognition (e.g., diffusion or accumulator models; Busemeyer & Townsend, 1993; Usher & McClelland, 2001). In this context, process models share one formal feature: a random-walk process.

These different contexts may amplify the lack of clarity regarding what constitutes a process model.

This suggests that the literature features a variety of characteristics required for process models.

6.1.2 A framework for process models

To address the lack of clarity, I propose a framework for cognitive process models. As shown in Figure 6.1, a process model (a) includes one or more intermediate processing stages, (b) is built such that the intermediate stages are compatible with current knowledge about human cognition, (c) constructs the intermediate stages such that they can vary separately from the output, and (d) allows testable claims to be derived about both the intermediate stages and the output, given the same input. In addition, a tri-modal conceptual scope is necessary.

Tri-modal conceptual scope. The scope describes the phenomena to which a model applies qualitatively. The tri-modal scope that process models require specifies the inputs, the outputs, and the mental operations to which the model applies (e.g., attention, neuronal activation, or causal beliefs). Gluth, Rieskamp, and Buchel (2012) model decisions to acquire goods. Their model takes the good's value as input, processes it at the neuronal level, and outputs rejection/acceptance decisions. This scope is tri-modal.

By contrast, Fehr and Schmidt (1999) model contributions in economic games based on fairness, but fairness is defined in terms of the output.1 This is a bi-modal scope. Were fairness defined as an emotional or neuronal phenomenon, the scope would be tri-modal. The tri-modality requirement is deliberately broad. Of course, a conceptual scope may change in subsequent applications of a model; what matters is a reference to cognitive processing phenomena.

At least one intermediate stage. In addition to a tri-modal scope, process models have at least one intermediate stage specified (in line with Svenson, 1979; Weber & E. J. Johnson, 2009). Intermediate stages are precise mental operations that, according to the model, transform input to output. A single intermediate stage may be an attention distribution, a neuronal activation pattern, or the structure of causal beliefs. Several intermediate stages may form sequences, such as a change in attention, neuronal firing, or network activation. A model without intermediate stages is cumulative prospect theory (Tversky & Kahneman, 1992). It formalizes the utility u of receiving x with probability p as u(x, p) = v(x)w(p), where v(x) and w(p) are valuation and weighting functions. It under-specifies the order of operations (weighting before, after, or concurrent with valuation?). Kahneman and Tversky (1979) initially hypothesized weighting before valuation, but this order is missing from the model. Models containing equations can, however, have intermediate stages if the equations explicate the temporal order, as in sequential sampling or drift-diffusion models (Busemeyer & Townsend, 1993). Importantly, intermediate stages require a tri-modal scope.
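Cumulative prospect theory's functional forms make this under-specification concrete. A minimal sketch for a single-outcome gamble, assuming Tversky and Kahneman's (1992) published functional forms and median parameter estimates (the variable names are illustrative):

```python
# Tversky & Kahneman's (1992) median parameter estimates:
# alpha (curvature), lambda (loss aversion), gamma (weighting, gains).
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def v(x: float) -> float:
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def w(p: float) -> float:
    """Probability-weighting function: overweights small probabilities."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def u(x: float, p: float) -> float:
    """Utility of a single-outcome gamble: u(x, p) = v(x) * w(p)."""
    return v(x) * w(p)
```

Nothing in `u` fixes whether weighting occurs before, after, or concurrently with valuation; the equation constrains only the product, which is exactly the missing intermediate stage described above.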

Compatibility. The transformations in the intermediate stages of a process model should be, by and large, compatible with our knowledge of cognitive capacities. This requires either relating the intermediate stages to supported cognitive theories or relating them to data about cognitive capabilities: for example, implementing an attention distribution that does not violate established bottom-up influences, or showing that a modeled belief network is not computationally intractable. Compatibility refines calls for the plausibility of process models (e.g., Winkel, Keuken, van Maanen, Wagenmakers, & Forstmann, 2014), where the meaning of plausibility has tended to be subjective (for a potential list of criteria, see Gigerenzer, Hoffrage, & Goldstein, 2008).

Separability. Process models yield at least two separate hypotheses given the same inputs: one hypothesis about outputs and at least one about intermediate stages. For example, a process model of medical choices could jointly model the state of a causal network and the resulting diagnosis given the symptoms. The Take-the-best model (Gigerenzer & Goldstein, 1996) derives both the order of information acquisition and the resulting decision from cue values and validities.
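The two separable hypotheses of Take-the-best can be made explicit in a few lines. A minimal sketch, assuming binary cue values; the cue names and validities are hypothetical (in the style of the city-size task), not taken from the original study.

```python
def take_the_best(cues_a, cues_b, validities):
    """Take-the-best (Gigerenzer & Goldstein, 1996), minimal sketch.

    cues_a / cues_b: dicts of binary cue values for options A and B.
    validities: dict of cue validities.
    Returns the predicted choice AND the predicted search order:
    two separable hypotheses derived from the same inputs.
    """
    inspected = []
    # Process prediction: inspect cues in order of descending validity.
    for cue in sorted(validities, key=validities.get, reverse=True):
        inspected.append(cue)
        if cues_a[cue] != cues_b[cue]:          # first discriminating cue decides
            choice = "A" if cues_a[cue] > cues_b[cue] else "B"
            return choice, inspected
    return "guess", inspected                   # no cue discriminates

# Hypothetical city-size cues (e.g., has a soccer team, is a capital).
validities = {"soccer_team": 0.87, "capital": 0.77, "university": 0.71}
a = {"soccer_team": 1, "capital": 1, "university": 0}
b = {"soccer_team": 1, "capital": 0, "university": 1}
print(take_the_best(a, b, validities))  # ('A', ['soccer_team', 'capital'])
```

The choice ("A") and the inspection order (which cues were searched, and in what sequence) can each be tested against data independently, which is the separability property at work.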

Separability allows output and process to be disentangled, while they remain connected through the model and theory, where outputs result from intermediate stages and intermediate stages result from inputs. Processes cannot be caused by outputs (except in models with feedback loops). Separability prevents reverse inference (concluding from supported behavioral predictions that the processes are true, i.e., affirming the consequent2), which is problematic because outputs under-determine processes.
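The invalid and valid inference patterns can be checked mechanically by enumerating all truth assignments; a minimal sketch (the helper names are illustrative):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff the conclusion holds in every
    truth assignment that satisfies all premises."""
    return all(conclusion(p, b)
               for p, b in product([False, True], repeat=2)
               if all(prem(p, b) for prem in premises))

def implies(a, b):
    return (not a) or b

# P = "the process is as modeled", B = "the behavior is as predicted".
# Affirming the consequent: from (P -> B) and B, infer P.
print(valid([lambda p, b: implies(p, b), lambda p, b: b],
            lambda p, b: p))          # False: invalid (reverse inference)
# Modus tollens: from (P -> B) and not B, infer not P.
print(valid([lambda p, b: implies(p, b), lambda p, b: not b],
            lambda p, b: not p))      # True: valid
```

The counterexample the enumeration finds is the case where the behavior matches but the process does not: exactly the situation that makes reverse inference unsafe.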

1 As the difference between one's own payoff and the average of everyone else's payoffs.

2 Let P = process and B = behavior. The fallacy draws the invalid inference: (1) If P, then B. (2) B is true. (3) Therefore, P. The valid inference (modus tollens) is: (1) If P, then B. (2) B is not true. (3) Therefore, not P (Geis & Zwicky, 2011, p. 562).

Testability. Testability means that the claims about outputs and intermediate stages are specific enough to be tested within the conceptual scope. While most formal cognitive models can be tested regarding their outputs (behavior), process models need to yield additional specific hypotheses for any intermediate stages encompassed by their scope: for example, the precise state of a causal network as well as the numerical probability of deciding for an option. The strength of evidence here is asymmetric: if a process model fails behaviorally, its computations are invalid, independent of the support for its process predictions; but if a model performs behaviorally, its computations may or may not be valid, depending on the support for its process predictions.

6.1.3 Discussion and limitations

Merging the functional and formal aspects of human decision making requires connecting cognitive process models with optimality models. To date, however, it has been unclear which properties cognitive process models possess. I proposed a general framework specifying the requirements for process models.

Importantly, the framework does not value process models over outcome models or optimality models; it aims at defining the term. In discussions with colleagues and at conferences, "process model" tends to be used like "good model". However, which model is good depends on its purpose and explanatory power, as well as on the research question.

Explicating the properties of process models goes beyond terminological issues; it facilitates teaching process modeling, guides model development, informs conceptual integration, and instructs supplementing behavioral models with a process level. It helps resolve debates on the status of models and aids model-based process tracing.

6.2 Robustness of categorization: Naive but robust computations in