
5.2.4 Evaluation of the Numerical Methods

Problem Formulation

The presented approaches and variations are evaluated for the example given in section 5.2.3 using random initial configurations $q_0$ and task space goal positions $w_\mathrm{end}$. For this evaluation, the cost function is modified as follows:

$$
l(q, \dot{q}, \hat{u}) = \zeta_\mathrm{cmf}\, l_\mathrm{cmf}(q) + \zeta_\mathrm{jla}\, l_\mathrm{jla}(q) + \zeta_{\dot{q}}\, \underbrace{l_{\dot{q}}(q, \dot{q})}_{\frac{1}{2}\dot{q}^T \dot{q}} + \zeta_{\hat{u}}\, \underbrace{l_{\hat{u}}(\hat{u})}_{\frac{1}{2}\hat{u}^T \hat{u}}
\tag{5.42}
$$
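As a minimal sketch of how the modified cost (5.42) can be evaluated numerically, the following Python function (illustrative names, not the original implementation) combines the four terms; the comfort-pose and joint-limit terms are passed in as callables standing in for the definitions from section 5.2.3:

```python
import numpy as np

def stage_cost(q, q_dot, u_hat, zeta_cmf, zeta_jla, zeta_qdot, zeta_uhat,
               l_cmf, l_jla):
    """Modified stage cost l(q, q_dot, u_hat) according to eq. (5.42)."""
    l_vel = 0.5 * q_dot @ q_dot    # l_qdot(q, q_dot) = 1/2 * q_dot^T q_dot
    l_damp = 0.5 * u_hat @ u_hat   # l_uhat(u_hat)    = 1/2 * u_hat^T u_hat
    return (zeta_cmf * l_cmf(q) + zeta_jla * l_jla(q)
            + zeta_qdot * l_vel + zeta_uhat * l_damp)

# Example with the weights of table 5.2 and placeholder cost terms:
cost = stage_cost(np.zeros(4), np.ones(4), np.zeros(4),
                  zeta_cmf=0.5, zeta_jla=1.0, zeta_qdot=2.0, zeta_uhat=2.0,
                  l_cmf=lambda q: 0.5 * q @ q,   # placeholder comfort-pose term
                  l_jla=lambda q: 0.0)           # placeholder joint-limit term
```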

| Parameter | Symbol | Value |
|---|---|---|
| **Evaluation** | | |
| Number of experiments | $n_\mathrm{exp}$ | 100 |
| **Varied Parameters** | | |
| Initial configuration | $q_0$ | $q_{0,i} \in \mathbb{R} \cap [-\tfrac{2}{3}\pi, \tfrac{2}{3}\pi]$ |
| Task space goal | $w_\mathrm{end}$ | $w_{\mathrm{end},x} \in \mathbb{R} \cap [-1.5, 1.5]$, $w_{\mathrm{end},y} \in \mathbb{R} \cap [0, 3]$ |
| **Cost Function** | | |
| Scaling factor for collision avoidance | $\zeta_\mathrm{coll}$ | 0 |
| Joint limits | $[q_{\mathrm{min},i}, q_{\mathrm{max},i}]$ | $[-\pi, \pi]$ |
| **Comfort Pose** | | |
| Scaling factor | $\zeta_\mathrm{cmf}$ | 0.5 |
| Comfort pose | $q_\mathrm{cmf}$ | $(0\;0\;0\;0)^T$ |
| **Joint Limit Avoidance** | | |
| Scaling factor | $\zeta_\mathrm{jla}$ | 1 |
| Lower soft limit | $q_{\mathrm{min,soft},i}$ | $q_{\mathrm{min},i} + \pi/2$ |
| Upper soft limit | $q_{\mathrm{max,soft},i}$ | $q_{\mathrm{max},i} - \pi/2$ |
| Scaling factor for velocity penalty | $\zeta_{\dot{q}}$ | 2 |
| Scaling factor for input damping | $\zeta_{\hat{u}}$ | 2 |

Table 5.2: Adaptation of parameters from table 5.1 for the statistical evaluation presented in section 5.2.4 of the 4-DOF robot following a straight-line task space path while optimizing joint velocities and its kinematic configuration.

$$
l_\mathrm{cmf}(q) = \tfrac{1}{2}\,(q - q_\mathrm{cmf})^T (q - q_\mathrm{cmf})
\tag{5.43}
$$

For the quantitative comparison of the numerical approaches in many random settings, the term for collision avoidance is omitted. The parameters shown in table 5.1 are retained, except for modifications listed in table 5.2.
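Purely as an illustration, the modified parameters of table 5.2 could be gathered in a small configuration structure for the batch evaluation; all field names below are hypothetical and not taken from the original MATLAB implementation:

```python
import numpy as np

# Hypothetical configuration for the statistical evaluation (table 5.2);
# all remaining parameters are taken over unchanged from table 5.1.
eval_params = {
    "n_exp": 100,                                          # number of experiments
    "q0_range": (-2.0 / 3.0 * np.pi, 2.0 / 3.0 * np.pi),   # per joint, eq. (5.44)
    "w_end_x_range": (-1.5, 1.5),
    "w_end_y_range": (0.0, 3.0),
    "zeta_coll": 0.0,                                      # collision avoidance disabled
    "joint_limits": (-np.pi, np.pi),
    "zeta_cmf": 0.5,
    "q_cmf": np.zeros(4),                                  # comfort pose (0 0 0 0)^T
    "zeta_jla": 1.0,
    "soft_limit_margin": np.pi / 2,                        # q_min + pi/2, q_max - pi/2
    "zeta_qdot": 2.0,                                      # velocity penalty
    "zeta_uhat": 2.0,                                      # input damping
}
```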

The following results are obtained by solving the dynamic optimization problem for $n_\mathrm{exp} = 100$ random settings. The initial configuration $q_0$ and the goal position $w_\mathrm{end}$ are uniformly distributed within the intervals

$$
q_{0,i} \in \mathbb{R} \cap \left[-\tfrac{2}{3}\pi, \tfrac{2}{3}\pi\right], \qquad
w_{\mathrm{end},x} \in \mathbb{R} \cap [-1.5, 1.5], \qquad
w_{\mathrm{end},y} \in \mathbb{R} \cap [0, 3].
\tag{5.44}
$$
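A possible way to draw these random settings, sketched in Python/NumPy under the assumption of independent uniform sampling per joint and coordinate (the helper name is hypothetical):

```python
import numpy as np

def sample_settings(n_exp=100, n_joints=4, seed=None):
    """Draw uniformly distributed initial configurations q0 and task space
    goals w_end within the intervals of eq. (5.44)."""
    rng = np.random.default_rng(seed)
    q0 = rng.uniform(-2.0 / 3.0 * np.pi, 2.0 / 3.0 * np.pi,
                     size=(n_exp, n_joints))
    w_end = np.column_stack((rng.uniform(-1.5, 1.5, size=n_exp),   # w_end,x
                             rng.uniform(0.0, 3.0, size=n_exp)))   # w_end,y
    return q0, w_end
```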

Evaluation Criteria

For the evaluation, the following aspects are considered:

- Improvement: Integral of the optimal costs $l^\star(\cdot, t)$ over time compared to the instantaneous costs $l^0(\cdot, t)$:

$$
\text{Improvement} := \frac{\int_{t_0}^{t_\mathrm{end}} l^\star(\cdot)\,\mathrm{d}t - \int_{t_0}^{t_\mathrm{end}} l^0(\cdot)\,\mathrm{d}t}{\int_{t_0}^{t_\mathrm{end}} l^0(\cdot)\,\mathrm{d}t}
\tag{5.45}
$$

A negative value means an improvement w.r.t. the costs, while a positive value means a deterioration (which is considered a failure).

- Relative Improvement: Difference between the improvement obtained by the considered method and the improvement obtained by the method yielding the minimal costs. A value of 0 means that the method yields the minimal costs among the considered solutions.

- Success Rate: A solution is considered successful when its cost improvement differs by less than 5% from that of the method yielding the minimal costs. A value of 1 means 100% success, while a value of 0 means no success.

- Computation Time: The computation time of the gradient-based methods $T_\mathrm{calc,GM}$ is compared relative to the computation time needed by the TPBVP solution ($T_\mathrm{calc,BVP}$):

$$
\text{Rel. Computation Time} := \frac{T_\mathrm{calc,GM} - T_\mathrm{calc,BVP}}{T_\mathrm{calc,BVP}}
\tag{5.46}
$$

A sketch of how all four criteria can be computed follows after this list.
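The following sketch shows how these four criteria could be computed from recorded cost trajectories and timings; it assumes sampled cost signals, uses simple trapezoidal integration, and is not the original evaluation code:

```python
import numpy as np

def improvement(t, l_opt, l_inst):
    """Improvement (5.45): relative change of the integrated optimal costs
    w.r.t. the integrated instantaneous costs; negative values are better."""
    return (np.trapz(l_opt, t) - np.trapz(l_inst, t)) / np.trapz(l_inst, t)

def relative_improvement(impr_method, impr_all):
    """Difference to the best (most negative) improvement among all methods."""
    return impr_method - np.min(impr_all)

def is_success(impr_method, impr_all, tol=0.05):
    """Success: within 5% of the method yielding the minimal costs."""
    return relative_improvement(impr_method, impr_all) < tol

def rel_computation_time(t_calc_gm, t_calc_bvp):
    """Relative computation time (5.46) of a gradient method w.r.t. the TPBVP."""
    return (t_calc_gm - t_calc_bvp) / t_calc_bvp
```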

Absolute Results

In order to evaluate the expected improvement and computation time of the optimization, a statistical evaluation of the reference TPBVP solution is performed.

The cumulative costs for joint velocities $L_{\dot{q}}$, comfort pose $L_\mathrm{cmf}$ and joint limit avoidance $L_\mathrm{jla}$ resulting from the optimization are compared to those of the initial guess. The costs for input damping $L_{\hat{u}}$ are neglected since they are not taken into account by the instantaneous solution. As depicted in fig. 5.5, the costs are reduced on average by 16%, while the distribution is skewed. It should be pointed out that this quantification of the improvement relies on the absolute weighting of the cost function in this example.

[Boxplots: improvement [-] (top) and computation time $T_\mathrm{calc,BVP}$ [s] (bottom)]

Figure 5.5: Top: Improvement of the cumulative costs for joint velocities $L_{\dot{q}}$, comfort pose $L_\mathrm{cmf}$ and joint limit avoidance $L_\mathrm{jla}$ of the TPBVP solution relative to the initial guess (instantaneous solution). Bottom: Absolute computation time needed for solving the TPBVP. Time measured using MATLAB R2016a under Ubuntu 15.10 on an Intel® i5-4310U @ 2.00 GHz. The average value is denoted by a marker and the median by the vertical line in the box. Tukey boxplot, whiskers at ±1.5 × interquartile range.

[Bar chart: success rate (0 to 1) for the fixed step size, backtracking, adaptive and explicit line search strategies, each combined with the Polak/Ribière, Fletcher/Reeves, Hestenes/Stiefel and steepest descent variants; TPBVP success rate shown as a dashed line]

Figure 5.6: Success Rate of the various combinations of conjugate gradient and line search algorithms. For comparison, the success rate of the TPBVP solution is drawn as a dashed line.

In order to quantify the computational effort, the computation time $T_\mathrm{calc,BVP}$ is measured as well: the average computation time is 1.57 s, while half of the experiments converge after 1.14 s using the MATLAB R2016a routine bvp4c on an Intel® i5-4310U @ 2.00 GHz.
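The boxplot statistics used throughout this section (mean, median, Tukey whiskers at ±1.5 × interquartile range) can be reproduced from the per-experiment samples with a few NumPy calls; this is a generic sketch, not the original evaluation script:

```python
import numpy as np

def tukey_summary(samples):
    """Mean, median and whisker bounds (1.5 x IQR) as shown in the boxplots
    of figs. 5.5, 5.7 and 5.8."""
    x = np.asarray(samples)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    in_range = x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]
    return {"mean": float(x.mean()), "median": float(med),
            "whisker_low": float(in_range.min()),
            "whisker_high": float(in_range.max())}
```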

Relative Results

Results obtained by the different methods are shown relative to the TPBVP solution (→ computation time) or to the minimal-cost solution (→ relative improvement). In the following, the approaches presented in section 5.2.2 for calculating the conjugate gradient and for determining the optimal step size are combined with each other.

Success Rate In fig. 5.6 the success rate of the conjugate gradient/line search combinations is shown. As defined in section 5.2.4, an optimization is considered as successful when the resulting improvement compared to the instantaneous solution differs by less than 5% from that of the solution yielding the minimal costs. While the TPBVP solution is successful in 84% of the experiments, the highest success rate is achieved by the adaptive line search algorithm and the Fletcher/Reeves conjugate gradient. Stable results are also obtained using a fixed step size. The explicit line search algorithm performs worst for all combinations. However, the convergence depends heavily on the parametrization of the cost function, the initial step sizes and the problem itself.
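To relate this criterion to fig. 5.6, the success rates of all conjugate gradient/line search combinations could be aggregated as sketched below; the array layout and names are assumptions for illustration only:

```python
import numpy as np

def success_rates(impr, tol=0.05):
    """Success rate per method combination.

    impr[i, j, k] is the improvement of experiment i for conjugate gradient
    variant j and line search strategy k (layout chosen for illustration).
    The method with the minimal costs in each experiment is the reference."""
    best = impr.min(axis=(1, 2), keepdims=True)   # best improvement per experiment
    return ((impr - best) < tol).mean(axis=0)     # success rate per (j, k) pair
```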

A more detailed illustration of the convergence is given in fig. 5.7. This diagram shows the difference between the improvement achieved by the respective method and the method yielding the minimal costs ("most optimal" solution). In particular, a poor convergence is obtained by the backtracking line search with the Fletcher/Reeves and Polak/Ribière conjugate gradients and by the explicit line search (all CG versions). By contrast, the versions with fixed step size and with adaptive line search yield the best results.

Computation Time Apart from the quality of the results as investigated in the previous paragraph, the computational effort is of particular importance for real-world applications. In fig. 5.8 the relative computation times of the different methods are given w.r.t. the computation time of the TPBVP solution. In doing so, only successful experiments are considered. While the quality of the results is high for the fixed step size algorithms, their computation times are longer. However, using the conjugate gradient methods, in particular the Fletcher/Reeves algorithm, the convergence can be accelerated significantly. A similar behavior can be observed using the backtracking line search, which shows even faster convergence but worse reliability. The best results at a high quality level are achieved by the adaptive line search strategy in combination with the Fletcher/Reeves conjugate gradient. The lowest computational effort is observed for the explicit line search methods. However, their optimization process is often stopped prematurely, before the solution is optimal.

[Boxplots: relative difference to the optimal solution [-] for the explicit, adaptive, backtracking and fixed step size line searches and the TPBVP, grouped by conjugate gradient variant (Polak/Ribière, Fletcher/Reeves, Hestenes/Stiefel, steepest descent); the success region (≤ 5%) is shaded gray]

Figure 5.7: Relative Improvement: Difference of the improvement relative to the best solution. A value of 0 corresponds to the solution with minimum costs. All samples within the range of 5% are considered as a success in section 5.2.4 (gray). The average value is denoted by a marker and the median by the vertical line in the box. Tukey boxplot, whiskers at ±1.5 × interquartile range.

[Boxplots: relative computation time [-] for the explicit, adaptive, backtracking and fixed step size line searches, grouped by conjugate gradient variant (Polak/Ribière, Fletcher/Reeves, Hestenes/Stiefel, steepest descent)]

Figure 5.8: Comparison of the computation time $T_\mathrm{calc}$ needed for the solution of the dynamic optimization problem relative to the solution as a TPBVP. The average value is denoted by a marker and the median by the vertical line in the box. Tukey boxplot, whiskers at ±1.5 × interquartile range.