2 State of the Art of Crash Compatibility

2.3 Methodology and Tools

The aim of this section is to discuss the general approach of previous works and to identify potentials for improving vehicle safety in this study.

The general approach in previous works for studying crash compatibility of vehicles in frontal impacts can be broken down into five steps using four main tools (Fig. 2.19).

The general approach begins with a statistical analysis of collisions to identify the crash compatibility problems and potentials to enhance vehicle safety. The identified problems are investigated using in-depth statistical analyses and some theories are developed for improving crash compatibility. The theories are validated and further developed in full-scale crash tests and simulation analyses, and the results are used to develop a solution for assessing crash compatibility. Finally, the developed assessment approaches are validated using full-scale crash tests and simulation analyses.

FIMCAR criticized [91, p. 10] the accident analyses in the earlier EEVC WG15 and VC-COMPAT projects, which were based on old datasets. The accident data in these works were collected between 2004 and 2005 and contained few vehicle models built after 2000. Since ECE R94 became mandatory for newly registered vehicles in 2003, many cases in the accident data did not meet the mandatory safety requirements and were therefore not representative of the modern European vehicle fleet.

2 The author has been informed through several conversations and discussions with experts at conferences and meetings that the MPDB test procedure would be announced as the new Euro NCAP frontal impact offset test in 2018. However, there is no published material on this decision that can be cited in this work.

FIMCAR analyzed two datasets from the United Kingdom and Germany [91], which covered a wide range of vehicles with masses from less than 750 kg to more than 2000 kg. The focus of FIMCAR was not on microcars, and about 80 % of the vehicles in the German dataset and 75 % of the vehicles in the United Kingdom dataset had a vehicle mass between 1000 kg and 1749 kg [91, pp. 27-81]. Thus, issues specific to microcars might be underrepresented in the results of FIMCAR's accident and structural analyses.

In this work, besides the results of the FIMCAR project, the results of the accident analysis in the Visio.M project are used to cover the entire fleet of passenger cars in Europe. Visio.M conducted a statistical analysis [68] of more than 22,000 accidents from the German In-Depth Accident Study (GIDAS) database with a focus on the safety requirements of electric microcars, which fills the gap in the statistical analyses of the previous works.

Expert knowledge has been used in different studies for interpreting the structural analyses and defining compatibility requirements that should be assessed in crash tests.

Although there is a common understanding of crash compatibility, the previous projects failed to provide a rigorous definition of crash compatibility and its objective. FIMCAR cited the disagreement on terminology and the existence of individual definitions of compatibility as a reason for the lack of progress in this field [98, p. 107]. Therefore, a research project that aims to advance crash compatibility must first discuss the definition and objectives of crash compatibility to avoid inconsistency among expert opinions.

For crash analysis, development of solutions, and validation of their efficiencies, two main tools have been used: full-scale crash tests and simulation analysis. Each of these tools has some advantages and disadvantages (Tab. 2.5), and it is common to combine both to study the crash compatibility of vehicles in frontal impacts.

EEVC WG15, VC-COMPAT, and FIMCAR conducted many real crash tests, the results of which are published in different papers and reports and are available for this study.

Furthermore, the NHTSA provides free access to two databases of crash tests and real-life accidents. The NHTSA crash test database [107] contains reports, videos, pictures, and measurement data of more than 7,800 crash tests of various types.

The National Automotive Sampling System (NASS) [108] contains more than 129,000 […]. Because of these available sources and the high costs of real crash testing, this work does not conduct any new crash tests and only reviews results from previous works and the NHTSA databases.

Figure 2.19: Approach and tools for studying frontal crash compatibility

Table 2.5: Advantages and disadvantages of full-scale crash testing and simulation analysis for studying frontal crash compatibility

Tool: Full-scale crash testing

Advantages:
- Similar to real-life accidents
- Considers all influential parameters
- Trustworthy results

Disadvantages:
- Expensive
- Difficult to fade out the influence of undesirable parameters
- Issues with reproducibility and repeatability
- Difficult to analyze the results
- Limited possibility for variation of the vehicle models

Tool: Simulation analysis

Advantages:
- Lower costs relative to crash testing
- Possibilities for studying the influence of individual parameters
- No problems with reproducibility and repeatability
- Simplicity of extensive analysis of the results
- Full control for varying the vehicle models

Disadvantages:
- Issues of trustworthiness of the results
- Need to perform full-scale crash tests to validate the models

One of the main tools for studying crash compatibility in this work is virtual testing with Finite Element (FE) simulations. Virtual testing and FE simulations have made great progress in recent years, and it is expected that virtual testing can replace real testing for most safety regulations in the near future [109, p. 33]. However, the FE simulations used in this study must be analyzed to ensure the trustworthiness of their results.

The objective of this analysis is to ensure that the simplifications and numerical errors in a simulation analysis do not affect the interpretation of the simulation results. The analysis consists of two parts: verification and validation.

The American Society of Mechanical Engineers (ASME) defined verification as “the process of determining that a computational model accurately represents the underlying mathematical model and its solution” [110, p. 11], which consists of code and model verification.

The code used in this work is LS-DYNA, a general-purpose FE software developed by the Livermore Software Technology Corporation (LSTC). It is a common code for automotive crash analysis and is regarded as an acceptable analysis code [111, p. 240]. For pre- and post-processing, LS-PrePost from LSTC and HyperGraph from the Altair HyperWorks software suite are used. To obtain comparable results, all simulations are run on the same computer cluster with 32 CPUs (Intel Xeon E5-2670, 8 cores, 2.60 GHz). Depending on the model's complexity and the simulation time, each simulation took between 5 hours (e.g., the microcar in the FWRB test) and 39 hours (e.g., Toyota Camry vs. Toyota Camry in the car-to-car test).

Verification of a simulation model is not standardized, and the existing methods are largely based on the experience of experts. Ray et al. [111, pp. 105-108] categorized the relevant issues for model verification into five groups: geometry generation, mesh sensitivity and quality, contact stability, energy balance, and time step issues. Cordero et al. [112, pp. 19-21] provided a list of requirements (Tab. A.1 in Appendix A) for evaluating a model with respect to these verification issues. In this study, simulation models are considered verified if they fulfill these requirements.
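To illustrate the kind of check such requirements imply, the following sketch evaluates the global energy balance and artificial mass of a finished run, as could be extracted from a solver's global statistics output. The threshold values are common rules of thumb, not the actual requirements of Tab. A.1, and the function names are hypothetical.

```python
# Illustrative verification checks on global energy and mass data of a
# crash simulation (e.g., as reported in LS-DYNA's glstat output).
# The tolerances below are assumed rules of thumb, not the requirements
# of Tab. A.1 in Appendix A.

def verify_energy_balance(total_energy, hourglass_energy, added_mass_ratio,
                          drift_tol=0.05, hg_tol=0.10, mass_tol=0.05):
    """Return pass/fail flags for basic verification criteria.

    total_energy      -- time series of total energy [J]
    hourglass_energy  -- time series of hourglass energy [J]
    added_mass_ratio  -- fraction of mass added by mass scaling
    """
    e0 = total_energy[0]
    # Total energy should stay nearly constant over the run.
    drift_ok = all(abs(e - e0) / e0 <= drift_tol for e in total_energy)
    # Hourglass energy should remain a small fraction of total energy.
    hg_ok = all(hg <= hg_tol * e
                for hg, e in zip(hourglass_energy, total_energy))
    # Mass scaling must not add significant artificial mass.
    mass_ok = added_mass_ratio <= mass_tol
    return {"energy_drift": drift_ok, "hourglass": hg_ok, "added_mass": mass_ok}
```

A model would only be considered verified if all flags pass, in addition to the mesh-quality, contact, and time-step checks named above.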

ASME defined validation as “the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model” [110, p. 11]. Therefore, the predictability of a simulation model might vary between applications. For example, a vehicle model that is developed and validated for side impacts is not necessarily validated for frontal impacts. Cordero et al. [112, pp. 21-22] provided a list of requirements (Tab. A.2 in Appendix A) for evaluating the predictability of a model. In this study, the simulation models are considered validated if they show a good correlation with the kinematics and the time-history signals of acceleration and deformation from a set of real test results within the application range. This ensures the trustworthiness of the simulation analyses for investigating the crash compatibility of vehicles in frontal impacts. Appendix B presents the simulation models used in this work and describes the results of their trustworthiness analyses.