
3.3 System Architecture

3.3.2 Related System Categories—State of Science

There are a number of different system categories described by state-of-the art research that could potentially support the requirements for an IgniteWorx implementation. In the following, the state of science regarding decision support systems, expert systems, recommender systems, semantic reasoning systems, and survey engines is described before analyzing their suitability for IgniteWorx.

3.3.2.1 Decision Support Systems

Decision support systems (DSS) support decision makers by aggregating and presenting relevant information from a variety of data sources. Sprague (1980) provided one of the first comprehensive descriptions of a DSS: “DSS tends to be aimed at the less well structured, underspecified problem that upper level managers typically face; DSS attempts to combine the use of models or analytic techniques with traditional data access and retrieval functions; DSS specifically focuses on features which make them easy to use by non-computer-proficient people in an interactive mode; and DSS emphasizes flexibility and adaptability to accommodate changes in the environment and the decision making approach of the user.”

DSS typically support different types of decision analysis (DA) methods. From an IgniteWorx point of view, one interesting branch of DA methods supported by some DSS is multi-criteria decision analysis (MCDA), which is described in 3.3.3.5.
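
To make the MCDA idea concrete, the following is a minimal sketch of one of the simplest MCDA methods, a weighted-sum scoring of alternatives. The criteria, weights, and platform names are hypothetical illustrations, not taken from IgniteWorx:

```python
def weighted_sum_score(scores, weights):
    """Rank alternatives by the weighted sum of their criterion scores."""
    total_weight = sum(weights.values())
    ranked = {}
    for alternative, criteria in scores.items():
        # Weighted sum across all criteria, normalized by total weight.
        ranked[alternative] = sum(
            criteria[c] * w for c, w in weights.items()
        ) / total_weight
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical IIoT platform alternatives, scored 0..10 per criterion.
weights = {"cost": 0.4, "scalability": 0.6}
scores = {
    "platform_a": {"cost": 8, "scalability": 5},
    "platform_b": {"cost": 4, "scalability": 9},
}
print(weighted_sum_score(scores, weights))
```

More sophisticated MCDA methods (e.g., outranking or pairwise-comparison approaches) refine how the weights and scores are obtained, but the basic aggregation step resembles this sketch.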

Holsapple (2000) defines the following five basic DSS approaches:

• Text-oriented DSS

• Database-oriented DSS

• Spreadsheet-oriented DSS

• Solver-oriented DSS

• Rule-oriented DSS

He also introduces the compound DSS, a hybrid system that combines the characteristics of two or more of the above basic DSS categories.

Burstein and Holsapple (2008) also introduce intelligent decision support systems (IDSS), which perform selected cognitive decision-making functions using artificial intelligence or intelligent agent technologies. Lasi (2012) introduces “Decision Support within Knowledge-based Engineering.”

Holsapple (2000) defines the main components of a DSS as follows:

• Inputs: Factors, numbers, and characteristics to analyze

• User knowledge and expertise: Inputs requiring manual analysis by the user

• Outputs: Transformed data from which DSS “decisions” are generated

• Decisions: Results generated by the DSS based on user criteria
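
The interplay of these four components can be sketched as a minimal data flow. The rule, the downtime figures, and the threshold below are hypothetical illustrations, not actual IgniteWorx artifacts:

```python
def dss_decide(inputs, user_expertise):
    """Transform raw inputs into outputs, then derive a decision."""
    # Outputs: transformed data from which "decisions" are generated.
    hours = inputs["downtime_hours"]
    outputs = {"avg_downtime": sum(hours) / len(hours)}
    # Decision: generated from the outputs plus a user-supplied criterion.
    threshold = user_expertise["acceptable_downtime"]
    decision = "investigate" if outputs["avg_downtime"] > threshold else "ok"
    return outputs, decision

outputs, decision = dss_decide(
    inputs={"downtime_hours": [2, 5, 11]},      # factors/numbers to analyze
    user_expertise={"acceptable_downtime": 4},  # user knowledge and expertise
)
print(outputs, decision)
```

The point of the sketch is the separation of concerns: the transformation of inputs into outputs is automated, while the decision criterion comes from the user.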

It should be noted that some of the research related to DSS seems slightly dated, as indicated, for example, by search results from Google Scholar. Arnott and Pervan (2005) state: “The analysis of the professional or practical contribution of DSS research shows a field that is facing a crisis of relevance.”

3.3.2.2 Expert Systems

In the 1970s, the availability of computers with larger memory modules enabled the evolution of artificial intelligence toward knowledge-based expert systems, pioneered by Edward Feigenbaum (Feigenbaum, 1977). An expert system typically combines an inference engine and a knowledge base. The knowledge base represents known facts and rules. The inference engine applies the rules to the facts to derive new facts.
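
This fact/rule/inference split can be illustrated with a minimal forward-chaining inference engine. The facts and rules are invented examples, not IgniteWorx rules:

```python
def forward_chain(facts, rules):
    """Apply rules to the fact base until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires if all its premises are known facts.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (set of premises, derived fact).
rules = [
    ({"has_sensors", "needs_analytics"}, "needs_data_pipeline"),
    ({"needs_data_pipeline"}, "needs_cloud_backend"),
]
derived = forward_chain({"has_sensors", "needs_analytics"}, rules)
print(derived)
```

Real inference engines add efficiency (e.g., the Rete algorithm), conflict resolution, and an explanation component that records which rules fired, but the derivation loop is essentially this.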

Ogu and Y.A. (2013) provide a good overview of typical expert system architectures, including the required components for user interaction. Figure 54 provides a generalized overview. The so-called expert system shell includes the interface for the end-user, as well as a knowledge base editor for the knowledge engineer, in addition to the inference engine and an explanation system. All these components access a shared repository with the knowledge base and rule base.

Figure 54: Architecture of an expert system

Expert system shells can also support the question/answer paradigm inherent to the Ignite project dimensions used for the assessment of individual IIoT projects. For example, Sosnin (2011) describes a question-answer shell for personal expert systems. Another example is Choi (2002), who presents a rule-based expert system using an interactive question-and-answer sequence.

3.3.2.3 Recommender Systems

Ricci et al. (2015b) define a recommender system (RS) as “software tools and techniques that provide suggestions for items that are most likely of interest to a particular user.”

Popular examples of the output of RS are Facebook’s “People You Might Know,” Netflix’s “Other Movies You Might Enjoy,” and Amazon’s “Customers Who Bought This Item Also Bought…”

Dhillon (2015) defines six classes of RS: collaborative recommender systems, content-based recommender systems, demographic-based recommender systems, utility-based recommender systems, knowledge-based recommender systems, and hybrid recommender systems:

• Collaborative recommender systems “aggregate ratings or recommendations of objects, recognize commonalities between the users on the basis of their ratings, and generate new recommendations based on inter-user comparisons.” Algorithms can include graph-based algorithms (e.g., based on neighborhood) or latent factor models (Liang, 2012).

• A content-based recommender system “learns a profile of the new user’s interests based on the features present in objects the user has rated.”

• Demographic-based recommender systems make recommendations based on demographic classes, but “the algorithms first need a proper market research in the specified region accompanied with a short survey to gather data for categorization.”

• Utility-based recommender systems make suggestions “based on computation of the utility of each object for the user.” For example, product availability can be factored into the computation.

• Knowledge-based recommender systems “suggest objects based on inferences about a user’s needs and preferences.”

• Hybrid recommender systems combine two or more of the above RS categories to improve recommendation results.
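
The collaborative idea, generating recommendations from inter-user comparisons, can be sketched in a few lines: recommend the items liked by the most similar user, with similarity measured over liked-item sets (Jaccard similarity is used here as one simple choice; the data is invented):

```python
def recommend(target, likes):
    """Recommend items liked by the user most similar to `target`."""
    def jaccard(a, b):
        # Overlap of two liked-item sets, 0.0 when both are empty.
        return len(a & b) / len(a | b) if a | b else 0.0

    others = {u: s for u, s in likes.items() if u != target}
    nearest = max(others, key=lambda u: jaccard(likes[target], likes[u]))
    # Recommend what the nearest neighbor likes but the target has not seen.
    return likes[nearest] - likes[target]

likes = {
    "alice": {"item1", "item2", "item3"},
    "bob":   {"item1", "item2", "item4"},
    "carol": {"item5"},
}
print(recommend("alice", likes))
```

Production systems replace the exhaustive neighbor search with precomputed similarity tables or latent factor models, but the inter-user comparison at the core is the same.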

Liang (2012) describes the main architectural elements of the RS developed by Hulu, the video streaming company. The system consists of an online and an offline part. The online part includes five main modules: user profile builder (includes the user’s historical behaviors), recommendation core (generates raw recommendations), filtering (filters raw recommendation results, e.g., based on past user behavior), ranking (e.g., to increase diversity), and explanation (why was a recommendation made?). The offline system includes data center (including all user behavior), related table generator (for collaborative filtering and content filtering), topic model (a group of shows that have similar content), feedback analyzer (users’ reactions to recommendation results), and report generator (e.g., click-through rates and conversion rates). This example shows the high level of complexity inherent to RS implementations.
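
The five online modules can be sketched as a chain of stages. This is a drastically simplified illustration of the described pipeline, not Hulu's actual implementation; all data and ranking logic are invented:

```python
def recommend_online(user_history, catalog):
    """Toy five-stage online recommendation pipeline."""
    # 1. User profile builder: topics derived from historical behavior.
    profile_topics = {catalog[i]["topic"] for i in user_history}
    # 2. Recommendation core: raw candidates sharing a profile topic.
    raw = [i for i in catalog if catalog[i]["topic"] in profile_topics]
    # 3. Filtering: drop items the user has already watched.
    filtered = [i for i in raw if i not in user_history]
    # 4. Ranking: here simply by popularity (diversity re-ranking omitted).
    ranked = sorted(filtered, key=lambda i: catalog[i]["popularity"],
                    reverse=True)
    # 5. Explanation: why each item was recommended.
    return [(i, f"similar topic: {catalog[i]['topic']}") for i in ranked]

catalog = {
    "show_a": {"topic": "crime",  "popularity": 9},
    "show_b": {"topic": "crime",  "popularity": 7},
    "show_c": {"topic": "comedy", "popularity": 8},
}
print(recommend_online({"show_a"}, catalog))
```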

During the research on this topic, two common problems in RS emerged that also have a high relevance for the design of IgniteWorx: the so-called cold start problem and the product complexity problem.

The term “cold start” was chosen as an analogy to a car engine, which only runs at the optimal level once the engine has warmed up. For RS, this means that the system has to acquire a sufficient amount of metadata before it can make effective recommendations.

There are two main issues described in the literature: product cold start and user (or visitor) cold start (see, for example, Volkovs et al. (2017) or Nadimi-Shahraki and Bahadorpour (2014)). The product cold start relates to new products that have no metadata available in the system, such as user reviews or any other kind of “likes” from a certain group of users. The user cold start relates to new users who have no history in the system that can be used to derive preferences.

Gope and Jain (2017) provide a survey on solving the cold start problem in recommender systems. They differentiate between explicit and implicit solutions. Techniques for explicit solutions include active learning and interviews. Nadimi-Shahraki and Bahadorpour (2014) describe an ask-to-rate technique, in which a new user is asked to rate selected items until a sufficient number of items has been rated. Implicit solutions include adapted filtering strategies and external data collections, e.g., through social networks.
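
A minimal sketch of the ask-to-rate idea: keep asking a new user to rate items until a minimum number of ratings has been collected. Here candidates are simply ordered by overall popularity, which is an assumption for illustration; the cited approach selects items more carefully:

```python
def items_to_ask(existing_ratings, rating_counts, min_ratings=3):
    """Return the items a new user should be asked to rate next."""
    needed = min_ratings - len(existing_ratings)
    if needed <= 0:
        return []  # enough ratings collected; cold start phase is over
    # Ask about the most widely rated items the user has not rated yet.
    candidates = [i for i in rating_counts if i not in existing_ratings]
    candidates.sort(key=lambda i: rating_counts[i], reverse=True)
    return candidates[:needed]

# Hypothetical per-item rating counts across all users.
rating_counts = {"item1": 120, "item2": 95, "item3": 40, "item4": 7}
print(items_to_ask({"item3": 4}, rating_counts))
```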

The cold start problem can also be applied to IgniteWorx, using the following mapping:

• User cold start: The “user” in Ignite is an IIoT project. Ignite dimensions are used to implement a kind of “ask-to-rate” strategy, as described by Nadimi-Shahraki and Bahadorpour (2014).

• Product cold start: The products in IgniteWorx are the elements of the result sets. IgniteWorx rules are used to explicitly map relevant “products” to “users” (or, more specifically, IIoT projects to result set elements).

The second RS problem area with high relevance for IgniteWorx is described by Ricci et al. (2015b): most RS today are designed to recommend items with a relatively simple structure, e.g., music, movies, or books. More complex item types, such as financial investments or travel, are considered atypical cases for current RS. According to Ricci et al. (2015a), “Complex products are typically configurable or offered in several variants. This feature still poses a challenge to recommender systems, which are instead designed to consider different configurations as different items. Identifying the more suitable configuration requires reasoning between the interactions of alternative configurations (classifying and grouping items) and calls for addressing the specificity of the human decision making task generated by the selection of a configuration.”

Felfernig et al. (2015) state, “Knowledge-based recommender technologies help to tackle these challenges by exploiting explicit user requirements and deep knowledge about the underlying product domain for the computation of recommendations.”

Assuming that the product domain in IgniteWorx is the recommended IIoT project setup, the solution proposed here is to decompose the “product” into multiple result sets, as described in section 3.1.2.5.

3.3.2.4 Semantic Reasoning Systems

Building on the success of the early World Wide Web, Berners-Lee et al. (2001) introduced the concept of the semantic web, describing how ontologies improve how knowledge can be captured and made more easily accessible. At the core of these ontologies are taxonomies and inference rules. Taxonomies define classes of objects and relations among them. Inference rules enable automated programs to deduce conclusions from the available data, enabling a truly intelligent Internet.

A key technology for adding intelligence to a knowledge base is the semantic reasoner (SR), which is designed to infer logical consequences from a set of asserted facts. At the core of an SR is an inference engine, which processes inference rules. These rules are usually specified using an ontology language, such as W3C’s Web Ontology Language (OWL); see also McGuinness et al. (2004).

OWL supports most of the key components of an ontology, including classes (as a way of abstraction) and individuals (concrete instances of classes), attributes and properties, and rules and axioms.

Applying an ontology language such as OWL to IgniteWorx, one could describe the different elements of IgniteWorx using the language-specific syntax. Take, for example, Ignite dimensions: each Ignite dimension provides multiple options, and dimensions are grouped into categories. Using OWL abstract syntax, this could look as follows:

Namespace(iwx = <http://enterprise-iot.org/IgniteWorx.owl#>)
Ontology( <http://enterprise-iot.org/IgniteWorx.owl#>
  Class(iwx:Category)
  Class(iwx:Dimension partial restriction(iwx:partOf someValuesFrom(iwx:Category)))
  Class(iwx:Option partial restriction(iwx:partOf someValuesFrom(iwx:Dimension)))
  DatatypeProperty(iwx:name)
  ObjectProperty(iwx:has)
  Individual(type(iwx:Category)
    value(iwx:has Individual(type(iwx:Dimension)
      value(iwx:name "Number of Assets"^^xsd:string))))
)

In this example, three classes are defined: category, dimension, and option. Dimensions are part of categories, and options are part of dimensions. Next, an instance of a category is created, with a child dimension named “Number of Assets”.

This is, of course, only a simple example. The next step would be to also model concepts such as IIoT project assessment and result sets using OWL. OWL would then also be used to model the relationships among Ignite dimensions, IIoT project assessments, and result sets, using rules.
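
As a plain-Python analogue of the kind of relationship such OWL rules would express, the following sketch maps selected dimension options to result-set elements. All dimension, option, and result names are hypothetical illustrations, not actual IgniteWorx rules:

```python
# Each rule: (dimension, selected option) -> recommended result-set element.
rules = [
    (("Number of Assets", "> 10,000"), "scalable device management platform"),
    (("Data Volume", "high"), "stream processing pipeline"),
]

def recommend_result_set(assessment):
    """Collect result-set elements whose rule matches the assessment."""
    return [result for (dim, opt), result in rules
            if assessment.get(dim) == opt]

# A hypothetical IIoT project assessment: one dimension option per dimension.
assessment = {"Number of Assets": "> 10,000", "Data Volume": "low"}
print(recommend_result_set(assessment))
```

In an OWL-based implementation, a semantic reasoner would derive the same conclusions from axioms and rules instead of the hard-coded lookup shown here, which also makes the derivations explainable.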

Potential advantages of using an established standard such as OWL include interoperability, as well as the availability of commercially available tools.

Finally, Blomqvist (2014) provides a survey on the use of semantic web technologies for decision support systems, concluding, “Semantic Web technologies can help to solve basic DSS needs such as information interoperability, integration and linking, while additionally potentially continuing to support the development of ‘Intelligent DSS’, but in a new and more open manner than what was traditionally possible with AI technologies.” This is relevant here, given that decision support is a concept related to many IgniteWorx requirements.

3.3.2.5 Survey Engines

Given that a key part of IgniteWorx is the user-friendly capturing of detailed IIoT project assessments, survey engines are also considered candidates for an IgniteWorx implementation.

Survey engines are embedded into many modern web services or are even available as standalone services, such as SurveyMonkey (Waclawski, 2012). A number of patents also deal with reusable online survey engines (Kirkpatrick et al., 2007) or web survey tool builders and result compilers (Fuerst, 2001).

Schlereth and Skiera (2012) describe a dynamic intelligent survey engine (DISE) for capturing user preferences. The requirements for DISE are as follows:

• Broad support for different web-based data collection methods, including number, text, radio buttons, or spectrums

• Definition of quotas for the sampling of respondents

• Multilingual user interface

• Ability to conditionally show questions depending on previous responses

• Ability to create different versions of a survey and to assign respondents randomly to one of the versions

• Ability to integrate survey panel providers
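
The “conditionally show questions” requirement above can be sketched as follows: each question may carry a condition on an earlier answer, and only questions whose condition is met are shown. The question texts and IDs are invented for illustration:

```python
# Each question: unique id, display text, and an optional condition of the
# form (earlier_question_id, required_answer); None means "always show".
questions = [
    {"id": "q1", "text": "Do you process sensor data?", "condition": None},
    {"id": "q2", "text": "Which protocol do you use?",
     "condition": ("q1", "yes")},
]

def next_questions(answers):
    """Return ids of questions still to show, given answers so far."""
    shown = []
    for q in questions:
        if q["id"] in answers:
            continue  # already answered
        cond = q["condition"]
        if cond is None or answers.get(cond[0]) == cond[1]:
            shown.append(q["id"])
    return shown

print(next_questions({}))             # initially only unconditional questions
print(next_questions({"q1": "yes"}))  # follow-up unlocked by the answer
```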

The DISE architecture, as shown in Figure 55, includes a communication layer, a process layer, an execution layer, an information layer, and a third-party integration layer. Vertical services include survey construction, data elicitation, data analysis, and conceptual representation.

Figure 55: DISE architecture (Schlereth and Skiera, 2012)

The IgniteWorx solution will at least have to support all vertical services described in DISE to enable the user-friendly capturing and analysis of IIoT project assessments.

3.3.2.6 Evaluation for IgniteWorx

In the following, a tabular evaluation of the different system categories that have been considered for IgniteWorx is provided.

Decision Support Systems: The main goal of IgniteWorx is to support IIoT project managers in making the right decisions about their individual project setup. However, IgniteWorx also aims to provide as much structure around this process as possible; this is what the main artifacts of IgniteWorx are about: dimensions, result sets, and rules. Since DSS tend to be aimed at less structured, underspecified problems (Sprague, 1980), they are unlikely to be suitable for an IgniteWorx implementation.

Expert Systems: The combination of an explicit knowledge base (=> IgniteWorx result sets), a rule base (=> IgniteWorx rules), an inference engine, and an explanation system makes expert systems a very interesting candidate as a blueprint for IgniteWorx. The described user interfaces (end-user UI and knowledge base editor) are also a good fit.

Recommender Systems: The web-centric roots of recommender systems make them attractive for IgniteWorx. Some of the problems that the RS community is dealing with can also be found in the IgniteWorx approach, especially the cold start problem (Gope and Jain, 2017), which seems directly applicable to the recommendations in the IgniteWorx result sets.

Semantic Reasoning Systems: The use of ontologies and open standards such as OWL could potentially be attractive for an IgniteWorx implementation, provided that they allow for easy integration with a web-centric and easy-to-use user interface.

Survey Engines: Most of the requirements described by the DISE example for a flexible survey engine can also be mapped to IgniteWorx, especially for the construction of the IIoT project assessment tool (Schlereth and Skiera, 2012). This means this tool category should also be considered, at least for the assessment module of IgniteWorx.

Table 23: Evaluation of system categories for IgniteWorx

In summary, it seems that IgniteWorx would be well served by a combination of an expert system and a semantic reasoning system, with additional features borrowed from recommender systems and survey engines.