
Development of the model construction process

4.2 Development of the construction model

4.2.6 Step 6 - Model evaluation

4.2.6.1 Model evaluation - Theoretical foundation

Before describing the evaluation approach of the construction model in detail, evaluation and validation are described as two distinct topics in information systems research and, in particular, in design science research. Based on this theoretical foundation, the evaluation approach used within the construction model will be described.

The aspect of validation has been discussed increasingly in recent decades in the field of information systems research [Straub Jr., 1989; Boudreau et al., 2001], as the discipline has been confronted with criticism regarding the lack of solid validation. Following Balci [1998], validation can be described as "building the right model". This contrasts with verification, often conflated with validation, which is understood as "building the model right".

Within the existing publications on validation from the field of maturity model research, different types of validity can be identified, primarily construct validity and instrument validity, whereby the former consists of content and face validity [De Bruin et al., 2005; Boudreau et al., 2001]. Although validation has been increasingly emphasized in recent information systems research, the analysis by Boudreau et al. [2001] reveals that only 23 % and 37 %, respectively, of 193 publications in the top-tier journals of the field test for construct and instrument validity.

Validation aims at the reproducibility of results based on a described procedure model [Balci, 1998]; in the case of maturity models, this is the underlying construction model.

12 The notion measurement is used synonymously with the notion item in this work, as the measurements are later translated into items in the questionnaire used for the data gathering.

From the author's point of view, in the context of the maturity model construction, a validation of the resulting model is confronted with three major obstacles, making a full validation difficult to achieve:

i) Several process steps of the model construction are carried out in collaboration with members of the focus group and industry experts. The results of the evaluation and fitting of the questionnaire during the interaction with the members of the focus group are based on the state of knowledge of the respective person, which may change over time due to growing experience.

ii) With regard to the relative character of maturity, the assignment of certain capabilities to a maturity level changes over time as well, leading to the need for a model maintenance phase in the construction process. Consequently, the model construction process could lead to different results at a later point in time, resulting from an increase in companies’ professionalism and from technological development.

iii) In case a quantitative bottom-up approach is selected, the data for the model calculation are gathered by surveying companies’ current status regarding certain processes. Again, the course of time has an influence on the companies’ capabilities, which in turn influences the data basis and therefore may change the resulting model.

Summing up, the change of maturity over time as well as the integration of individuals and companies into the model construction process influence the resulting maturity model. Therefore, the reproducibility of the maturity model can only be achieved under several limitations, which in turn reduce the applicability and practical relevance of the model. Consequently, the focus is on the evaluation of the maturity model instead of its validation.

Evaluation, in a general sense, is understood as the systematic process applied to the targeted and goal-oriented assessment of an object [Sanders, 2006, p. 25]. The execution of an evaluation is not only connected with an interest in gaining knowledge; an evaluation can furthermore serve as documentation of effects.

In the context of design-oriented information systems research, evaluation is understood as the assessment of the output of the Design Science Research process, which can be artefacts or IS Design Theories [Venable et al., 2012].

Existing design research processes contain phases focusing explicitly on evaluation instead of validation, e.g. the approaches by Peffers et al. [2007], March and Smith [1995], and Hevner et al. [2004], whose individual research steps can be grouped into the steps of Build, Evaluate, Theorize, and Justify.

Riege et al. [2009] support the relevance of evaluation in the context of information systems research by drawing a connection between evaluation and validation, stating that a constructed but not yet evaluated artefact does not represent a valid research result.

In order to achieve this "valid" research result, both the construction model and the results of the model application are evaluated.

Evaluation approaches in the field of design science research can be distinguished based on whether the research/artefact is evaluated against the research gap or against the real-world problem [Bucher et al., 2008; Cleven et al., 2009]:

i) The artefact is evaluated against the identified research gap; the focus is on the evaluation of the accurate construction of the artefact, based on requirements defined beforehand.

ii) The artefact is evaluated against (an expert of) the real world by applying the artefact to the real world problem in focus.

iii) The research gap is evaluated against the real world. This approach plays a subordinate role in the field of information systems research and therefore will not be pursued further.

The focus of the thesis at hand is both on

• the evaluation against the identified research gap (= evaluation of the construction model itself) and

• the evaluation against the real world (= evaluation of the Big Data maturity model as a step of the model construction).

This approach goes beyond a sole focus on the construction process as demanded by Winter [2008].

The evaluation of the construction model against the research gap will be carried out at the end of Chapter 4. The evaluation of the maturity model against the real world as a step in the model construction will be explained in the following section.13

4.2.6.2 Evaluation against the real world

For the construction model at hand, a two-step evaluation approach has been developed.

First, the initial model, which resulted from the model population phase, will be discussed with the members of the focus group. Subject of this discussion is the distribution of the items amongst the maturity levels in case a bottom-up approach has been selected. This step is named Evaluation of the initial model (Step 6.1). The goal is to identify how far the item difficulty, calculated in the preceding population step, and the resulting item order are congruent with the item difficulty perceived by the members of the focus group. In other words, the goal is to identify whether the focus group members would assign the items/measurements to the same maturity levels as was done during the population step.
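The congruence check of Step 6.1 can be sketched in code. This is a minimal illustration, not the author's procedure: the item names and level assignments are invented, and the comparison simply flags items whose calculated maturity level differs from the level the focus group would assign.

```python
# Step 6.1 sketch (hypothetical data): compare the maturity level each
# item received in the population step with the level perceived by the
# focus group members.

calculated = {"data_governance": 2, "realtime_analytics": 4, "reporting": 1}
perceived = {"data_governance": 3, "realtime_analytics": 4, "reporting": 1}

# Items whose calculated level differs from the perceived level are
# candidates for fitting the model based on the focus group input.
mismatches = {
    item: (calculated[item], perceived[item])
    for item in calculated
    if calculated[item] != perceived[item]
}

print(mismatches)  # -> {'data_governance': (2, 3)}
```

In this sketch, full congruence would yield an empty `mismatches` dictionary; any entry marks an item whose placement has to be discussed with the focus group.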

At this point, an additional member should complement the focus group, in order to integrate the opinion of a person who has not yet influenced the model construction process. Based on the input of the focus group, the model is adjusted accordingly.

Second, after the incorporation of the feedback, the resulting fitted model is applied and evaluated with the help of the focus group. This step is named Evaluation based on the deployment of the fitted model (Step 6.2). The focus group should again be expanded for this step, as some members of the focus group have already participated in the construction of the questionnaire as well as in the first model evaluation step (Step 6.1), and have therefore already developed an understanding of maturity in the overall model. This prior knowledge could distort the evaluation results.

For the second evaluation step, construction step 6.2, every member of the focus group designates at least one company he is familiar with from consulting projects. Thereby, it is expected that he is able to evaluate the companies’ capabilities and maturity in the field of Big Data. The goal is to have a selection of companies across the whole spectrum, ranging from immature to mature, in order to test the different levels of the maturity model.14 The fitted model is used to evaluate these companies’ Big Data maturity. Concurrently, each focus group member evaluates the same companies he designated upfront.

13 With regard to the focus on the evaluation of the model against the real world, an evaluation of the resulting initial model from a statistical point of view based on the item fit [Reise, 1990] values is not in focus.

Therefore, for each selected company in step 6.2, two maturity evaluations exist: the model-based and the expert-based one.
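The pairing of assessments in Step 6.2 can be sketched as a simple comparison. Company names and maturity values below are invented for illustration; the text only prescribes that divergent pairs are singled out for discussion with the experts.

```python
# Step 6.2 sketch (hypothetical data): each designated company has two
# maturity assessments, one from the fitted model and one from the
# focus group expert who designated it.

model_based = {"Company A": 3, "Company B": 1, "Company C": 5}
expert_based = {"Company A": 3, "Company B": 2, "Company C": 5}

# Companies where the two assessments diverge are discussed with the
# experts to locate the cause: missing/wrong measurements, or an
# error-prone assignment of measurements to maturity levels.
divergent = {
    company: {"model": model_based[company], "expert": expert_based[company]}
    for company in model_based
    if model_based[company] != expert_based[company]
}

print(divergent)  # -> {'Company B': {'model': 1, 'expert': 2}}
```

An empty `divergent` dictionary would indicate that model and expert agree for every designated company.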

Potential differences between the results from the model and the maturity evaluation of the industry expert are discussed in the next step in order to investigate whether they result from i) missing or wrong measurements in the model from the experts’ point of view or ii) an error-prone distribution of measurements across maturity levels.15

In the event that the model is rejected after these two evaluation steps, three potential starting points for the necessary model adjustments exist, depending on the criticism expressed during the initial model discussion and the model deployment.16 In case missing measurements are identified in the initial model or during the model deployment, the process step model population is carried out again. In case the degree of granularity of the model is criticized, the model construction starts again at step 4, Select Design Level and Methodology. If criticism on a higher level targets the dimensions of the model, the dimension identification process is carried out again.
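The three adjustment paths can be summarized in a small decision sketch. The function name and the criticism category labels are illustrative assumptions; the returned step names follow the text.

```python
# Sketch of the adjustment logic after a rejected model evaluation.
# The criticism categories are hypothetical labels for the three kinds
# of criticism named in the text.

def next_construction_step(criticism: str) -> str:
    """Map the kind of criticism raised in Steps 6.1/6.2 to the
    construction step that has to be repeated."""
    if criticism == "missing_measurements":
        # Missing measurements in the initial model or deployment.
        return "Model population"
    if criticism == "granularity":
        # The degree of granularity of the model is criticized.
        return "Step 4: Select Design Level and Methodology"
    if criticism == "dimensions":
        # Higher-level criticism targeting the model dimensions.
        return "Dimension identification"
    raise ValueError(f"unknown criticism category: {criticism!r}")

print(next_construction_step("granularity"))
```

The sketch makes the hierarchy explicit: criticism of dimensions triggers the most fundamental revision, while missing measurements only repeat the population step.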