
$$
D_\infty = \max_{z \in \Omega_{ds}} \big( \mathrm{Discrepancy}(z) \big) \qquad (5.2)
$$

The smaller $D_\infty$ is, the better the method is in terms of the defined requirements. This means the method with the smallest $D_\infty$ can sample the P-dimensional design space $\Omega_{ds}$ efficiently by covering most of its corners and edges. In this way, the data becomes more informative with the minimum possible number of samples and, accordingly, simulations. [72] investigates all possible sampling methods based on the introduced terms. The result is that the scrambled Sobol sequence and the scrambled Halton sequence are the best options in terms of $D_\infty$, figure 5.3.

Figure 5.3: Sample distribution over the unit square (both axes scaled from 0 to 1)

• Simulating the whole system model, i.e. vehicle, control system, and actuator model, with respect to the determined driving dynamics maneuvers and the sampled design space.

• Evaluating the simulation results to calculate the objective driving dynamics targets of each driving maneuver.

This procedure is depicted in figure 5.4. Consequently, the inputs to a meta-model are sample points of the vehicle genes, the control system, and the actuator design variables. The outputs are the evaluated objective driving dynamics of each driving maneuver.


Figure 5.4: The procedure of generating input-output data

5.2.2 Response surface model

After gathering enough input-output data, a meta-model has to be trained. In our case, we train an individual neural network for each output, i.e. a multiple-input, single-output model. It should be noted that a meta-model approximates a simulation model which is itself an approximation of the real system. As a result, it is desirable that these two approximations have errors as small as possible in order to avoid a big deviation in the final design result [35].

As discussed above, we use artificial neural networks (ANN) for training the data. ANNs are quasi-mathematical surrogate models, modeled on the interconnection of numerous biological neurons. With ANNs, one can effectively overcome non-linear problems with high complexity and many dimensions [69].

Currently, there exist different types of neural networks, e.g. the Feedforward Neural Network, Convolutional Neural Network, Modular Neural Network, Radial Basis Function Neural Network, Recurrent Neural Network, etc. [33]. In this dissertation, we use the Feedforward Neural Network with the Backpropagation algorithm. Figure 5.5 demonstrates a typical schematic of a simple feedforward neural network. The network is split into three layers: input layer, hidden layer, and output layer. The input layer receives all samples of the control system and actuator design variables, and the vehicle genes.

Figure 5.5: A typical schematic of a simple feedforward neural network

All design variables are scaled between 0 and 1 by an encoder before entering the network, since this helps the backpropagation algorithm find the optimal weights more quickly. As a result, decoding of the outputs has to be carried out at the end. The output layer includes the evaluated objective driving targets, i.e. CVs, of all maneuvers. Between these two layers, one or more hidden layers are situated. Each hidden layer can have an arbitrary number of neurons, whereby no more than 2 layers are usually needed for most of the problems in this dissertation.
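To make the encoding and decoding concrete, here is a minimal sketch of plain min-max scaling in Python (the function and variable names are illustrative; the dissertation does not prescribe an implementation):

```python
import numpy as np

def encode(X, lo, hi):
    """Scale each design-variable column of X from [lo, hi] to [0, 1]."""
    return (X - lo) / (hi - lo)

def decode(X_scaled, lo, hi):
    """Invert the scaling, e.g. for decoding the network outputs."""
    return X_scaled * (hi - lo) + lo

# Example: three design variables with individual lower/upper bounds.
lo = np.array([0.0, 10.0, -1.0])
hi = np.array([1.0, 50.0, 1.0])
X = np.array([[0.5, 30.0, 0.0]])
print(decode(encode(X, lo, hi), lo, hi))  # recovers X exactly
```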

As mentioned before, a separate neural network is defined for each CV (objective driving performance measure), so that the network structure, i.e. the number of hidden layers and the neurons of each layer, is constructed specifically with respect to each output. Each neuron of a hidden layer is connected to all neurons in the previous and the subsequent layer (fully connected neural networks).

Figure 5.6 shows how a neuron in the $(j+1)$-th layer gets its information from the activation of all neurons of the previous layer. If the $j$-th layer has $k$ neurons with weights $w_{ji}$ and the activation function is defined as $\varphi$, the activation of a neuron in the next layer proceeds as follows:

$$
Net_{j+1} = w_{j0} + \sum_{i=1}^{k} w_{ji} \cdot x_{ji} \qquad (5.3)
$$

Figure 5.6: Schematic of the activation of a neuron (layer $j$, summation, activation function, layer $j+1$)

where $w_{j0}$ is the so-called bias, a constant value, and $x_{ji}$ represents the neurons in the $j$-th layer. The activation function $\varphi$ can be, for instance, the logistic sigmoid or the hyperbolic tangent sigmoid. In this work, we use the logistic sigmoid as activation function:

$$
\varphi(Net_{j+1}) = \frac{1}{1 + e^{-Net_{j+1}}} \qquad (5.4)
$$
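As an illustration, a minimal Python sketch of this activation step under the above definitions (the array shapes and names are our assumptions, not from the dissertation):

```python
import numpy as np

def logistic(net):
    """Logistic sigmoid activation, equation (5.4)."""
    return 1.0 / (1.0 + np.exp(-net))

def layer_forward(x, W, w0):
    """Activation of the (j+1)-th layer from the k neuron outputs x of
    layer j, equation (5.3): Net = w0 + sum_i w_i * x_i per neuron."""
    net = w0 + W @ x  # W: (m, k) weight matrix, w0: (m,) bias vector
    return logistic(net)

# Example: k = 3 neurons of layer j feeding m = 2 neurons of layer j+1.
x = np.array([0.2, 0.7, 0.1])
W = np.array([[0.5, -0.3, 0.8],
              [0.1, 0.4, -0.6]])
w0 = np.array([0.05, -0.1])
print(layer_forward(x, W, w0))
```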

Model Quality

The model quality of a computed neural network is very important in terms of the prediction of the real model, since overfitting of an ANN leads drastically to failed predictions on unseen data. Figure 5.7 [69] represents an overfitting scenario. The network is well trained on the training data and can predict the output of the training data very well. But as soon as the input to the trained network is not the training data, it fails to predict the correct output. As a consequence, the data is divided into two groups: training data and test data. Usually, the test group is smaller than the training group.

Figure 5.7: Model quality, an overfitting scenario

The meta-model, the ANN, is trained with the training data. A desired ANN is one which has the smallest prediction error. The prediction error can be measured in different ways. The classic one is the coefficient of determination $R^2$, computed as follows:

$$
R^2 = 1 - \frac{\sum_{i}^{N} (y_i - \hat{y}_i)^2}{\sum_{i}^{N} (y_i - \bar{y})^2} \qquad (5.5)
$$

where $y_i$, $\hat{y}_i$ and $\bar{y}$ are the real output, the predicted output, and the mean of all real outputs, respectively. In order to avoid any overfitting, the quality of the meta-model ANN is compared with the samples from the test data. For this purpose, the cross-validation method is applied [69]. In this dissertation, a model quality equal to or larger than 0.9 is considered good.
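A minimal sketch of equation (5.5); the quality check then simply requires r_squared(y_test, y_pred) >= 0.9 (the names are ours):

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination R^2, equation (5.5)."""
    ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])       # real outputs
y_hat = np.array([1.1, 1.9, 3.2, 3.9])   # predicted outputs
print(r_squared(y, y_hat))  # close to 1 for a good meta-model
```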

Proposed Algorithm

As mentioned above, the number of hidden layers as well as neurons can vary with respect to the structure of the feedforward neural network. Increasing the number of neurons and hidden layers can enhance the prediction of more complex relationships between inputs and outputs, which can deliver a very good quality for the training data. However, it may lead to model overfitting. It should also be considered that there is no closed-form calculation to determine the best structure of an ANN with the best model quality. Subsequently, we introduce here a pragmatic procedure to find the best structure for our application with the best model quality without encountering any kind of overfitting.

The number of hidden layers and neurons is varied. For example, the number of hidden layers is set to one at the first attempt and the neurons are varied from 5 to 20, with the model quality of each ANN saved individually. Here, in order to avoid any overfitting, the quality of the meta-model ANN is compared with the one from the test data. At the second attempt, the number of hidden layers is increased to 2 and the neurons are varied again from 5 to 20. The model quality of each ANN is saved again individually. This procedure can also be done for more than 2 hidden layers and is repeated for all the outputs. In the end, all the model qualities are compared with each other, and the structure with the highest model quality is picked out.
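The loop below is a compact sketch of this search, using scikit-learn's MLPRegressor as a stand-in for the trainer (the dissertation does not prescribe a library; the data arrays and iteration limit are assumptions):

```python
from itertools import product
from sklearn.neural_network import MLPRegressor

def best_structure(X_train, y_train, X_test, y_test):
    """Vary 1 to 2 hidden layers with 5 to 20 neurons each and keep the
    structure whose test-data model quality (R^2) is highest."""
    best_sizes, best_quality = None, float("-inf")
    for n_layers in (1, 2):
        for sizes in product(range(5, 21), repeat=n_layers):
            net = MLPRegressor(hidden_layer_sizes=sizes,
                               activation="logistic", max_iter=2000)
            net.fit(X_train, y_train)
            quality = net.score(X_test, y_test)  # R^2, equation (5.5)
            if quality > best_quality:
                best_sizes, best_quality = sizes, quality
    return best_sizes, best_quality
```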

5.2.3 Classification

At the beginning of this section, we clarified why we need meta-models and which ones we use in this work. After the brief review of the ANN, the support vector machine (SVM) is now introduced as a classification operator. The first question is: why is an SVM required at all?

In the section on Design of Experiments, we declared that the outputs of the simulation are the results of each driving dynamics maneuver. In order to train ANNs, we need enough data. As a result, the number of simulations to be executed, which is around 4000, depends on the number of design variables. As all simulations are carried out automatically, there are simulations which have been aborted due to unstable configurations of design variables. Therefore, the results of the simulations are either numerical or empty outputs, the latter standing for the aborted simulations. The numerical outputs are assessed to compute the objective driving dynamics performance measures of each maneuver, figure 5.4. During the automatic assessment, simulation results which are not in the predefined range of the objective driving dynamics performance measures (CVs) will not be evaluated; they are dropped out of the evaluation procedure and represented as an empty evaluation. For example, the CVs associated with the CSST maneuver are defined in the range of [0, 2] Hz. If the evaluated CVs are out of this range, they are dropped out of the assessment procedure.

Accordingly, the outputs obtained from the evaluation procedure can be categorized in three groups:

1. Evaluated CVs (non-empty).

2. Empty CVs due to the fact that they are out of the predefined ranges.

3. Empty CVs due to the fact that simulations have been aborted because of unstable configurations of design variables.

ANNs are trained with all data, including all three above-mentioned categories. The problem appearing here is that the ANN interpolates over all empty outputs. Consequently, the solution space obtained after setting constraints on the design space is not really reliable. This will be clarified with an example.

Imagine there are two design variables and all the driving dynamics maneuvers have been simulated for numerous configurations of these two design variables. The outputs applicable for training ANNs are the evaluations of the simulation outputs, the CVs. They are either empty or evaluated. Figure 5.8 shows the available outputs after the evaluation. The green and black dots represent the evaluated and empty outputs, respectively, for all configurations of design variables one and two.

Figure 5.8: Available outputs

An ANN is trained on these outputs. A constraint is then set on the outputs of the trained network, which cuts the design space as in figure 5.9.

Figure 5.9: ANN prediction of a constraint on the meta-model output

As can be seen from figure 5.10, the design space trained by the ANN is wrongly interpolated, since the area of black dots must also be taken into account by the network. As a consequence, there is a need to apply a classification operator which separates the empty outputs from the evaluated ones.

Figure 5.10: Wrong interpolation of the ANN

The classification method applied in this work is the Support Vector Machine (SVM).

Support Vector Machine - SVM

A support vector machine is a classification operator which classifies data into two classes by finding the hyperplane which maximizes the margin between them. In our case, it classifies the assessed objective driving dynamics performance measures from the simulation results into the evaluated and empty classes, represented by 1 and -1, respectively.

Functionality

Looking at a two-dimensional design space, shown in figure 5.11, the red and green dots represent two different classes. The task of the support vector machine is to find the best line in a two-dimensional design space, or a hyperplane in a higher-dimensional design space, in order to separate the two classes. The SVM tries to find the closest two dots from the two classes. These two dots are called support vectors. A line is drawn between these two dots and the SVM finds the best separating line, which bisects and is perpendicular to the connecting line.

Figure 5.11: The schematic of SVM functionality for a two-dimensional design space (hyperplanes $w^T x + b = -1$, $w^T x + b = 0$, and $w^T x + b = 1$)

Mathematical Problem Formulation

A given dataset can be represented as:

$$
\{(x_i, y_i)\}_{i=1}^{n} \qquad (5.6)
$$

where $n$ is the number of training data points, $y_i$ is the class associated with the $i$-th dot $x_i$, and $x_i \in \Omega_{ds}$. $y_i$ is either 1 or -1.

A linear classifier has the following form:

$$
f(x) = w^T x + b \qquad (5.7)
$$

where $w$ is the normal vector to the hyperplane. Here, the support vectors are the dots closest to the hyperplanes, and the margin between these two hyperplanes is calculated by $\frac{2}{\|w\|_2}$. The goal is to find $w$ and $b$ in equation 5.7 such that the margin is maximized. That means:

$$
\text{maximize} \quad \frac{2}{\|w\|_2} \quad \text{subject to} \quad y_i (w^T x_i + b) \geq 1, \quad i = 1, \ldots, n \qquad (5.8)
$$

This problem corresponds to the optimization of a quadratic function subject to linear constraints [44], since maximizing $\frac{2}{\|w\|_2}$ is equivalent to minimizing $\frac{1}{2}\|w\|_2^2$. In this way, the classification is conservative and the misclassification is therefore as small as possible.

Soft Margin Solution

Sometimes the above-mentioned optimization problem is not solvable because the data is not separable, or the found classification hyperplane delivers a very narrow margin. In this case, some misclassifications can be allowed by relaxing the inequality in equation 5.8, which leads to a wider classification margin. As a consequence, we introduce two new variables $C$ and $\epsilon_i$, called box constraint and slack variable, respectively, and bring them into equation 5.8:

$$
\text{minimize} \quad \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{n} \epsilon_i \quad \text{subject to} \quad y_i (w^T x_i + b) \geq 1 - \epsilon_i, \quad i = 1, \ldots, n \qquad (5.9)
$$

Here, the parameter $C$ has to be selected carefully, as a small $C$ leads to excessive relaxation of the constraints, causing the margin to become very large, and vice versa. If $0 < \epsilon_i < 1$, dots are located between the margin and the correct side of the hyperplane. But $\epsilon_i > 1$ causes a misclassification, i.e. some dots can be located on the wrong side of the hyperplane. As a result, $\epsilon_i$ can be substituted with $\epsilon_i = \max(0, 1 - y_i f(x_i))$. Now, equation 5.9 can be written as:

$$
\text{minimize} \quad \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{n} \max(0, 1 - y_i f(x_i)) \qquad (5.10)
$$

This optimization problem is unconstrained and convex and has a unique minimum [15]. Therefore, it can be solved by any (sub)gradient-based algorithm. In conclusion, it should be noted that the box constraint parameter $C$ plays an important role in finding a large margin with a low amount of misclassification. This parameter has to be adjusted such that low misclassification is ensured.
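A minimal numpy sketch of this unconstrained formulation for a linear classifier, solved by subgradient descent (the hinge term has a kink, so a subgradient is used; the learning rate and epoch count are illustrative assumptions):

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=1e-3, epochs=1000):
    """Minimize 0.5*||w||^2 + C * sum(max(0, 1 - y_i * f(x_i))),
    equation (5.10), with f(x) = w.x + b and y_i in {-1, 1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # points violating the margin
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```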

So far, we have clarified the idea of classification with a straight line, a flat plane, or an N-dimensional hyperplane. However, there are cases where a non-linear region can separate the classes more efficiently. Such cases occur when the data is not distributed linearly, figure 5.12. In such cases, non-linear support vector machines must be applied.

Figure 5.12: Non-linearly distributed data

Non-linear Support Vector Machines

When the data cannot be separated linearly, it is mapped into higher-dimensional spaces by applying a kernel function, where the mapped data can be classified linearly. This is called the kernel trick. Some kernel functions are listed below:

Linear: $K(x_i, x_j) = x_i^T x_j$

Polynomial: $K(x_i, x_j) = (1 + x_i^T x_j)^p$ for any $p \geq 1$

Gaussian: $K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2)$
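For concreteness, the three kernels as plain Python functions (the gamma and p values are illustrative):

```python
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=3):
    return (1.0 + xi @ xj) ** p

def gaussian_kernel(xi, xj, gamma=1.0):
    # exp(-gamma * ||xi - xj||^2); gamma controls the kernel width
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

xi, xj = np.array([1.0, 0.5]), np.array([0.2, 0.8])
print(linear_kernel(xi, xj), polynomial_kernel(xi, xj), gaussian_kernel(xi, xj))
```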

There are several studies which have investigated each of the above-mentioned kernel functions with respect to the size of the margins [10, 50]. In this work, the Gaussian function is applied, as it normally produces bigger margins [10]. In this case, the parameter $\gamma$ has to be adjusted such that the misclassification is still kept small.

Briefly, the non-linear data is mapped with the Gaussian kernel function into a higher-dimensional space, and a linear classifier is found for this space based on equation 5.10. As a result, we need enough data and well-adjusted $C$ and $\gamma$ parameters to obtain the trained SVM model, figure 5.13.

Figure 5.13: SVM functionality

Model Quality

The goal of classification is to separate the data into two definite classes with low misclassification. But achieving low misclassification sometimes leads to overfitting of this meta-model. Quite contrary to the neural networks, which need two groups of data, namely training and test data, it is necessary to train the SVM with all available data; otherwise the exact classification may not be reached. Accordingly, cross-validation is employed for calculating the misclassification and avoiding any overfitting. The applied cross-validation is K-Fold [69].

The available data is split into K groups. One group is employed to test the model quality. The SVM is then trained with K-1 groups of the data and the associated misclassification is computed. This procedure is done K times. Accordingly, the meta-model is finally trained with all available data, and the model quality is then the average of all misclassifications obtained from each iteration. All the calculated misclassifications depend on the parameters $C$ and $\gamma$. They have to be adjusted such that the misclassification of each iteration becomes as small as possible and a good model quality is reached. However, adjusting these two parameters manually for high-dimensional spaces is very difficult and time-consuming.
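A minimal sketch of this K-fold misclassification estimate, using scikit-learn's SVC with the Gaussian (RBF) kernel as a stand-in for the Matlab toolbox used in the dissertation:

```python
import numpy as np
from sklearn.svm import SVC

def kfold_misclassification(X, y, C, gamma, K=5):
    """Average misclassification rate over K folds for given C and gamma."""
    folds = np.array_split(np.random.permutation(len(y)), K)
    errors = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(X[train], y[train])
        errors.append(np.mean(clf.predict(X[test]) != y[test]))
    return float(np.mean(errors))  # model quality = 1 - this value
```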

Consequently, we apply optimization methods such as the genetic algorithm or the particle swarm algorithm to find the optimal $C$ and $\gamma$ which yield a minimum misclassification. In this dissertation, the genetic algorithm and the machine learning toolbox in Matlab [27, 17] are employed to obtain a trained support vector machine with a good model quality. $C$ and $\gamma$ are restricted to the range of $[10^{-5}, 10^{5}]$ in our algorithm in order to have a better-distributed population in the whole design space. Like the ANNs, we also generate one specifically trained SVM for each of the driving dynamics performance measures. The goal is to have a trained SVM for each CV with a misclassification of less than 15%.
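As a simple stand-in for the genetic-algorithm search, the sketch below draws $(C, \gamma)$ log-uniformly from $[10^{-5}, 10^{5}]$ and keeps the pair with the lowest K-fold misclassification, stopping once the 15% target is met. It reuses kfold_misclassification from the previous sketch; a GA would replace the random draws with selection and mutation:

```python
import numpy as np

def tune_svm(X, y, n_trials=200, target=0.15):
    """Random log-uniform search for (C, gamma) in [1e-5, 1e5]^2."""
    rng = np.random.default_rng(0)
    best_C, best_gamma, best_err = None, None, 1.0
    for _ in range(n_trials):
        C, gamma = 10.0 ** rng.uniform(-5, 5, size=2)
        err = kfold_misclassification(X, y, C, gamma)
        if err < best_err:
            best_C, best_gamma, best_err = C, gamma, err
        if best_err < target:  # misclassification below 15 %
            break
    return best_C, best_gamma, best_err
```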

The combination of ANN and SVM for the two variables in figure 5.10 can be seen in figure 5.14. As shown, this combination leads to no interpolation failure.

Figure 5.14: The combination of SVM and ANN for the design space of two design variables

5.3 Top-down mapping

As mentioned at the beginning of this chapter, the V-Model is an effective way to break down the qualitative requirements of a large-scale system into qualitative subsystem requirements. But the question is how we can break down the quantitative requirements formulated on a large-scale system, like a vehicle with a control system and an actuator, into quantitative requirements for components such as the control system and the actuator, especially with respect to uncertainties. Such a quantitative method must be able to deal with uncertainties. In our case, there are two types of uncertainties:

• Lack of information about the derivatives of a vehicle: In the early stage of vehicle development, we know that we do not deal with only one vehicle but rather with different derivatives. They are distinguished mainly by their mass, moment of inertia, rear axle ratio, and center of gravity height.

• Discarded subjective driving dynamics performance measures: So far we have just introduced the objective driving dynamics performance measures. But in the late stages of vehicle development, each vehicle must be driven by an experienced driver, tuned, and assessed subjectively. As a consequence, there are still subjective perceptions which cannot be formulated as objective driving dynamics performance measures. For example, a parametrization of the logic of a control system based virtually on only the objective driving dynamics performance measures may not be able to deal with the subjective driving experience. Most of these subjective challenges are referred to as the steering feeling, which cannot be formulated objectively. In other words, a virtual parametrization of the control system can satisfy all the objective driving dynamics performance measures but still fail to deliver a good steering feeling when assessed subjectively.

Even if we had no uncertainty in the design procedure, it would still not be desirable to find one single optimal solution regarding all considered requirements, because the realization of only one optimal design of a component may be impossible, or highly expensive if possible.

The approach to keep uncertainties under control and decompose the requirements quantitatively is the so-called solution space [77], which finds a target region of all good designs for the design variables which fulfill all objective driving dynamics performance measures (CVs). This method is also applicable to arbitrary non-linear and high-dimensional problems and does not demand specific