
Figure 7.5: Selection dialog box sets the inputs for the application description

Discussion: The GVSM is the most complex model discussed in this thesis. It extends the already comprehensive attack tree approach with the vulnerabilities that are needed to execute the attacks. Thus it also inherits the problems and the complexity of the ATASM model.

Interestingly, in the INA example the description overhead in the components is reduced.

This is because vulnerabilities instead of attacks are described in the components. Typically one vulnerability can be exploited by more than one attack, so that the total number of required descriptions is reduced.

The expressiveness of the GVSM is matched by no other model. It allows tracing vulnerabilities that are not resolved and, from these, the resulting possible attacks. Thus, for a given configuration it allows identifying the weak spots of the system.
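The following minimal Python sketch illustrates this idea of tracing unresolved vulnerabilities to the attacks they enable. The vulnerability and attack names are illustrative assumptions and are not taken from the actual GVSM description.

```python
# Illustrative sketch (not the actual GVSM implementation): map each
# vulnerability to the attacks that exploit it, then derive the attacks
# that remain possible for a given configuration.

# Hypothetical vulnerability -> attacks mapping; one vulnerability is
# typically exploited by several attacks, which keeps the per-component
# description overhead small.
ATTACKS_BY_VULNERABILITY = {
    "plaintext_radio_link": {"eavesdropping", "message_injection"},
    "unauthenticated_aggregation": {"false_data_injection"},
    "single_aggregation_path": {"selective_forwarding"},
}

def open_attacks(unsolved_vulnerabilities):
    """Return the attacks enabled by vulnerabilities no component mitigates."""
    attacks = set()
    for vuln in unsolved_vulnerabilities:
        attacks |= ATTACKS_BY_VULNERABILITY.get(vuln, set())
    return attacks

# Example: a configuration that only fixes the plaintext link still
# leaves the aggregation-related weak spots open.
unsolved = {"unauthenticated_aggregation", "single_aggregation_path"}
print(open_attacks(unsolved))
# -> {'false_data_injection', 'selective_forwarding'}
```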

Figure 7.6: Initial configuration of the INA-application: simple network protocol and simple INA algorithm are small but insecure. The configuration comprises: INA TestApp sec: (0|0|0), INA w/o encr. sec: (0|0|0), ActiveMessage sec: (0|0|0).

sec: (0|0|0) indicates that no security features are provided. The predicted memory consumption is 11 kBytes and the predicted energy consumption is low.

2: The required concealment is increased while the environment stays trusted. As a result, HBH encryption is used.

3: Additionally, the environment is set to hostile, implying that we cannot trust the aggregator nodes.

In this case DF is the preferred INA approach. CMT and hCDA are also listed as alternatives, but they are larger and need more energy.

4: Additionally, the integrity requirement is set to medium.

This disqualifies DF as a proper solution because its integrity property is too weak.

CMT with a StreamCipher is chosen instead.

5: Additionally, the reliability feature is required.

Since CMT does not provide sufficient robustness, the only remaining choice is the hybrid CDA, which applies both CMT and DF (a minimal sketch of this selection logic follows below).
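The following Python fragment is purely illustrative: it mimics the five test cases above with a hand-written decision rule. The actual configKIT selection operates on the XML component descriptions and the security models; the parameter names and thresholds used here are assumptions.

```python
# Illustrative sketch only: a hand-coded decision rule reproducing the
# five test cases.  The real configKIT selection searches component
# descriptions and the Property-Relation-Graph instead of using such
# hard-coded rules.

def select_ina(concealment, integrity, reliability, hostile_environment):
    """Pick an INA approach from coarse requirement levels (0 = none ... 3 = high)."""
    if concealment == 0:
        return "INA w/o encryption"          # test case 1
    if not hostile_environment:
        return "HBH encryption"              # test case 2: aggregators trusted
    if reliability > 0:
        return "hybrid CDA (CMT + DF)"       # test case 5
    if integrity >= 2:
        return "CMT (with StreamCipher)"     # test case 4: DF integrity too weak
    return "DF"                              # test case 3

print(select_ina(concealment=2, integrity=0, reliability=0, hostile_environment=True))
# -> DF
print(select_ina(concealment=2, integrity=2, reliability=1, hostile_environment=True))
# -> hybrid CDA (CMT + DF)
```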

Table 7.4 shows the complete assessments and estimated footprints for the different configurations.

Table 7.4: Overview of exemplary inputs, the resulting configurations and their estimated properties. Int, Conc, Rel represent the security attributes Integrity, Concealment, and Reliability.

Test Case  Alg.                         ROM (kB)  Int  Conc  Rel  Energy
-          No INA                       10        0    1     0    0
1          INA w/o encr.                11        0    0     0    3
2          H2H Encr.                    13        1    1     0    2
3          DF                           13        1    2     0    2
4          CMT                          15        3    3     0    1
5          Hybrid (secure network)      25        3    3     2    0
5          Hybrid (no secure network)   16        3    3     0    0

Figure 7.7: Output of the configuration process if all security properties have to be good or better. The selected composition comprises: INA TestApp sec: (3|2|2), Hybrid CDA sec: (3|2|2), CaMyTs sec: (3|2|1), DF sec: (1|2|2), CTRMode sec: (0|2|0), SkipJack sec: (0|2|0), Random sec: (0|0|0), DTSN sec: (0|0|2), DSDV sec: (0|0|1), ActiveMessage sec: (0|0|0).

Results

As an example of the output of the selection algorithm, Figure 7.7 illustrates the result for test case 5. This case is particularly interesting since the reliability requirement (modeled with QCSM) also triggered the network layers to choose protocols with improved quality.

The need for a robust network protocol added the transport and network protocols DTSN/DSDV [MGN07]. This result was not initially anticipated, but it is reasonable for an actual implementation.

In this test case the results indicate that all security requirements are at least ’good’ (’2’). It can also be seen that the complexity of the system is already quite high. The total memory prediction is more than 25 kBytes of ROM (4 kB RAM), which is still acceptable for the micaz node.
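A minimal sketch of this final check, assuming the sec triple from Figure 7.7 and the commonly cited micaz resource limits (128 kB program flash, 4 kB RAM), could look as follows; it is not part of configKIT.

```python
# Minimal sketch, assuming the sec triple from Figure 7.7 and the micaz
# (ATmega128L) limits of 128 kB program flash and 4 kB RAM.

MICAZ_ROM_KB = 128
MICAZ_RAM_KB = 4

def meets_requirements(sec, rom_kb, ram_kb, required_level=2):
    """True if every security attribute is at least `required_level`
    and the predicted footprint fits the node."""
    return (min(sec) >= required_level
            and rom_kb <= MICAZ_ROM_KB
            and ram_kb <= MICAZ_RAM_KB)

# Application-level assessment of the test case 5 configuration
print(meets_requirements(sec=(3, 2, 2), rom_kb=25, ram_kb=4))   # -> True
```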

We executed the tests for each of the five security models. Table 7.5 shows the results. All models led to the same selection of components, which is reasonable since the five models are just different views on the same system, including its security features and vulnerabilities.

Table 7.5: INA Assessment results for different security models.

Test Case  manual    FCSM      QCSM      GASM      ATASM     GVSM
1          INA w/o   INA w/o   INA w/o   INA w/o   INA w/o   INA w/o
2          HBH       HBH       HBH       HBH       HBH       HBH
3          DF        DF        DF        DF        DF        DF
4          CMT       CMT       CMT       CMT       CMT       CMT
5          hCDA      hCDA      hCDA      hCDA      hCDA      hCDA

Finally, all models produced the expected results. This is no surprise, since in the context of this thesis we had full knowledge of and control over all models and components. What we could show, however, is that each model is able to take into account all knowledge required to assess INA correctly.

Also concerning the extended properties, the selections and proposed configurations based on the inputs are reasonable, and the predicted memory consumption corresponds to the actual memory consumption determined in Section 7.1. The security assessments are correct with respect to our inputs. The energy evaluation is also correct and allows direct comparisons. Considering that these decisions have been taken based on the fuzzy inputs shown in Figure 7.5, this is a clear step towards usable configurations of security-related applications and algorithms in the context of WSNs.

7.7 Conclusions

This chapter investigated how to describe the context and interrelations of the INA algorithms in the security models. All that was necessary was to define the XML files which describe the components, the requirement description, and the properties. This could be successfully accomplished for all considered security models, which in the first place demonstrates the practicability and flexibility of the configKIT framework. The fact that eventually all security models resulted in the correct assessment may be a result of the methodology we applied to develop the models. All five models originate in the security ontology introduced in Chapter 3. In this way, with the INA example we could successfully demonstrate all contributions of this thesis.

Indeed, filtering six INA approaches could have been done with significantly less overhead in a smaller, database-like approach. However, it has not been the goal of this chapter to find an algorithm for a problem, but to demonstrate the effectiveness of the configKIT methodology in general and of the security models in particular. For this purpose the example in this section was chosen to demonstrate how different aspects can be combined to solve a complex design problem. And ultimately we could solve the problem with all proposed security models.

Figure 7.8: Comparison of the complexity of modeling the security of the INA approaches with the five security models. QCSM has the smallest total overhead due to the minimalistic model. GVSM has the most complex model, but the overhead to model new components is small.

Besides the expressiveness, one of our major concerns has been the complexity of the models. Figure 7.8 compares the efforts for the five models, segmented into the required lines of XML description for the requirements description, the components description, and the model description. It is obvious that the total complexity is highest for ATASM and GVSM. However, in particular for GVSM this overhead is caused by the complex model, which is fixed. In fact, the overhead for adding new components is lowest for GVSM, rendering it appealing for developers of components. The model with the lowest total overhead is the quality-based QCSM, due to the straightforward requirement definition and the slim comparison model consisting of only three rules.

Subjectively, the integration of some models felt more natural than that of others. For instance, the integration of QCSM just worked and provided good results from the beginning, while the more complex models needed several tweaks to produce sound results. In particular, the attack tree approaches bear the risk that the knowledge of the properties of the protocols influenced the implementation of the attack trees. In practice and for other protocols it may be necessary to refine the trees.

Chapter 8

Conclusions and Future Work

This thesis presented a tool-supported development flow, named configKIT, that helps users to integrate secured applications in the domain of Wireless Sensor Networks. It consists of a component-based framework that automatically selects and composes configurations of components for WSN applications from user requirements. During this configuration process the framework employs various models to assess functional and non-functional system attributes, such as security. As a result of the design flow, the system can then be assembled and deployed in the network. After a thorough analysis of alternative techniques, this approach of compile-time synthesis promises to provide the best possible performance for the design of WSNs.

The first step of the design flow, which was actually implemented, is a novel requirement definition, which bridges the semantic gap between requirements given by users and technical requirements needed for the automatic selection process. In this requirement definition process, the user first defines the requirements on a domain-specific level by selecting and parameterizing attributes provided by a catalog. These high-level requirements are translated to detailed technical requirements utilizing a graph structure of interconnected requirement types. The resulting requirements constitute the direct input for the component-based configuration process.
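This translation step can be pictured as a traversal over such a requirement-type graph. The following Python sketch uses a made-up graph fragment; the requirement names are illustrative and not part of the actual configKIT catalog.

```python
# Illustrative sketch of the requirement refinement step: high-level,
# catalog-style requirements are expanded into the technical requirements
# they imply by following a graph of requirement types.  The graph below
# is a made-up fragment, not the one used in configKIT.

REFINEMENT_GRAPH = {
    "confidential_data":  ["payload_encryption"],
    "payload_encryption": ["block_cipher", "key_distribution"],
    "reliable_delivery":  ["robust_transport", "robust_routing"],
}

def refine(high_level_requirements):
    """Expand high-level requirements into the technical requirements they imply."""
    technical, stack = set(), list(high_level_requirements)
    while stack:
        req = stack.pop()
        children = REFINEMENT_GRAPH.get(req, [])
        if not children:            # leaf: a technical requirement
            technical.add(req)
        stack.extend(children)
    return technical

print(sorted(refine({"confidential_data", "reliable_delivery"})))
# -> ['block_cipher', 'key_distribution', 'robust_routing', 'robust_transport']
```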

The core of this component-based configuration process is a selection algorithm that searches the design space of available components for system compositions that are compatible, both structurally and semantically, and fulfill the given requirements. The structural properties of the composition are tracked by the component model, while the semantics of the system under development are maintained by the Property-Relation-Graph structure.

The component model provides a rather high-level abstraction of the software and hardware modules represented by the components. Interfaces control how the components can be assembled into a complete system. The component model utilizes state-of-the-art concepts such as abstract components and interfaces and a meta-description of the properties of the components, which allows the assessment of the behavior of the composed system.
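A minimal sketch of this abstraction, with illustrative component and interface names, is shown below; the real component model is defined in XML and is considerably richer.

```python
# Minimal sketch of the component abstraction: each component provides and
# requires interfaces and carries a property meta-description used for the
# assessment of the composed system.  Names and fields are illustrative.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provides: set = field(default_factory=set)
    requires: set = field(default_factory=set)
    properties: dict = field(default_factory=dict)   # e.g. {"rom_kb": 5, "sec": (1, 2, 2)}

def structurally_complete(components):
    """A composition is structurally compatible if every required interface
    is provided by some component in the composition."""
    provided = set().union(*(c.provides for c in components))
    required = set().union(*(c.requires for c in components))
    return required <= provided

app = Component("INA TestApp", provides={"Application"}, requires={"Aggregation"})
cda = Component("Hybrid CDA", provides={"Aggregation"}, requires={"Radio"},
                properties={"sec": (3, 2, 2)})
radio = Component("ActiveMessage", provides={"Radio"})

print(structurally_complete([app, cda, radio]))   # -> True
print(structurally_complete([app, cda]))          # -> False (no Radio provider)
```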

The Property-Relation-Graph structure, which is instantiated as the working model of the system, implements a powerful abstraction for the semantics of all aspects of the system. The structure consists of properties, which represent all forms of attributes of the system, and relations, which use existing properties either to define new properties or to implement constraints. Based on this structure, extensive models for system