

On the application of sequential quadratic programming to

state-constrained optimal control problems

Citation for published version (APA):

Jong, de, J. L., & Machielsen, K. C. P. (1985). On the application of sequential quadratic programming to state-constrained optimal control problems. (Memorandum COSOR; Vol. 8507). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1985

Document Version:

Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)



EINDHOVEN UNIVERSITY OF TECHNOLOGY Department of Mathematics

PROBABILITY THEORY, STATISTICS, OPERATIONS RESEARCH AND SYSTEMS THEORY GROUP

Memorandum COSOR 85-07

ON THE APPLICATION OF SEQUENTIAL QUADRATIC PROGRAMMING TO STATE-CONSTRAINED OPTIMAL CONTROL PROBLEMS

by

J.L. de Jong
K.C.P. Machielsen

Paper prepared for presentation at the Fifth IFAC Workshop on Control Applications of Nonlinear Programming and Optimization,

June 11-14, 1985, Capri, Italy.

May 1985

CONTENTS

ABSTRACT

CHAPTERS
1 INTRODUCTION
2 SEQUENTIAL QUADRATIC PROGRAMMING IN BANACH SPACES
3 APPLICATION TO OPTIMAL CONTROL PROBLEMS
4 EXPERIMENTAL NUMERICAL IMPLEMENTATION OF THE METHOD
5 EXAMPLES
6 FINAL REMARKS
7 REFERENCES

APPENDICES
A EXPLICIT EXPRESSION FOR η_2 ON BOUNDARY ARCS
B BOUNDARY VALUE PROBLEM IN CASE OF SLACK VARIABLE TECHNIQUE
C RELATION BETWEEN BOUNDARY VALUE PROBLEMS


ON THE APPLICATION OF SEQUENTIAL QUADRATIC PROGRAMMING TO STATE-CONSTRAINED OPTIMAL CONTROL PROBLEMS

by

J.L. de Jong
Department of Mathematics
Eindhoven University of Technology

and

K.C.P. Machielsen
CAM-Centre CFT Philips Eindhoven

The Netherlands

Abstract

In this paper a numerical method for the solution of state-constrained optimal control problems is presented. The method is derived from an infinite dimensional analogue of sequential quadratic programming. The main purpose of the paper is to present some theoretical aspects of the method. An experimental numerical implementation of the method is also discussed.

An analogue of finite dimensional sequential quadratic programming is developed in Banach spaces. Its application to state-constrained optimal control problems follows lines similar to the derivation of the minimum principle from the abstract necessary conditions for optimality.

In the setting of optimal control problems, the analogue of the inversion of the Hessian matrix of the Lagrangian is the solution of a linear multipoint boundary value problem. Each iteration of the method therefore mainly involves the solution of a linear multipoint boundary value problem.

Numerically, collocation with piecewise polynomial functions is proposed for the solution of these linear multipoint boundary value problems. The resulting set of linear equations is solved by Gauss elimination.

The method is derived considering equality constraints only. Inequality constraints are transformed into equality constraints either by means of an active set strategy or by using slack variables.


CHAPTER 1 INTRODUCTION

An important class of practical optimal control problems is the class of state-constrained problems. The optimality conditions for the solutions of these problems are at this moment fairly well understood, thanks to Bryson et al. (1963), Jacobson et al. (1971), Norris (1973), Maurer (1979a), Kreindler (1982), etc. However, generally applicable efficient numerical methods are still lacking.

A powerful tool for the numerical solution of state-constrained optimal control problems is the code developed by Bulirsch (1983), which is based on the ideas of multiple shooting (cf. Bock (1983), Well (1983)). An important drawback of this method for practical use, however, is that a priori knowledge about the structure of the solution, in particular about the sequence in which the various constraints are active, c.q. passive, is required.

Another alternative is the Sequential Gradient-Restoration Algorithm (SGRA) of Miele (1980), which treats state constraints by means of a slack-variable technique. A drawback for the practical use of this method is its low rate of convergence, which is mainly due to the fact that SGRA is a first order gradient method. Consideration of possible alternative methods not having these drawbacks therefore seems useful. At present one of the major topics in the area of numerical nonlinear programming is the development of sequential quadratic programming methods. Inspired by the many promising results in this area (cf. Tapia (1974a,b; 1977; 1978), Han (1976), Powell (1978), Gill et al. (1981), Schittkowski (1981), Stoer (1982), Bertsekas (1982)) a translation of these methods towards application to state-constrained optimal control problems was initiated. In this paper the first preliminary results of this translation effort are reported.

In section 2 the sequential quadratic programming method is discussed in the context of an abstract vector space. In section 3 the application of the method to state-constrained optimal control problems is considered. An experimental implementation is discussed in section 4, whereas two worked-out examples are given in section 5. Some final remarks are given in section 6.


CHAPTER 2

SEQUENTIAL QUADRATIC PROGRAMMING IN BANACH SPACES.

Consider the solution of the following abstract problem:

Problem (P) :

Given Banach spaces X, Y and Z, twice continuously Fréchet differentiable mappings f : X -> R, g : X -> Y and h : X -> Z, and a closed convex cone B ⊂ Y with nonempty interior and vertex at zero, find an x̂ ∈ g⁻¹(B) ∩ h⁻¹(0), if such a point exists, such that

    f(x̂) ≤ f(x)    for all x ∈ g⁻¹(B) ∩ h⁻¹(0)

Sequential quadratic programming methods are based on the observation that 'near' the solution x̂ the original problem may be replaced by a suitable quadratic programming problem. They consist of the sequential solution of quadratic subproblems which yield directions of search; along these directions better approximations to the solution are generated. Alternatively, they may be considered as methods which calculate directions of search by applying Newton's method to the necessary conditions for optimality.

In order to formally derive the subproblems, we need a brief review of the optimality conditions for problem (P). These are given in the following lemma, which is taken from Maurer et al. (1979b). A central role in this lemma is played by the Lagrangian

    L(x,y*,z*) := f(x) - y*.g(x) - z*.h(x)                          (2.1)

(Variables with an asterisk are elements of the corresponding dual spaces, cf. Luenberger (1969).)

Lemma 2.1 : If x̂ ∈ X is a solution to problem (P),

    R(h'(x̂)) = Z                                                    (2.3)

and there exists a d ∈ X such that

    h'(x̂)(d) = 0  and  g(x̂) + g'(x̂)(d) ∈ int(B),                    (2.4)

then there exist multipliers ŷ* ∈ Y* and ẑ* ∈ Z* such that

    L'_x(x̂,ŷ*,ẑ*)(d) = 0    for all d ∈ X                           (2.5)

    ŷ*.g(x̂) = 0                                                     (2.6)

    ŷ* ∈ B⁺,  with  B⁺ := { y* ∈ Y* : <y*,y> ≥ 0 for all y ∈ B }    (2.7)

    L''_xx(x̂,ŷ*,ẑ*)(d)(d) ≥ 0    for all d ∈ X with
    ŷ*.(g(x̂) + g'(x̂)(d)) = 0  and  h'(x̂)(d) = 0                     (2.8)

Conversely, if there exist multipliers ŷ* and ẑ* satisfying (2.5), (2.6) and (2.7) and numbers δ > 0 and β > 0 such that

    L''_xx(x̂,ŷ*,ẑ*)(d)(d) ≥ β.||d||²    for all d ∈ X with
    ŷ*.(g(x̂) + g'(x̂)(d)) ≤ δ.||d||  and  h'(x̂)(d) = 0,              (2.9)

then x̂ is a local solution of problem (P).

We note that (2.4) is a Slater-type constraint qualification (cf. Maurer et al. (1979b)). In the following part of this paper we will assume that this condition holds.

Of importance for the sequel is to note that the lemma states that the Lagrangian L(x,ŷ*,ẑ*) has a local minimum at x = x̂ on the subspace spanned by the linearized constraints. This fact motivates the idea to calculate a direction of search for the improvement of the current estimate x_i of the solution by solving the linearly constrained subproblem

    minimize over Δx_i :  L(x_i + Δx_i, y_i*, z_i*)                 (2.10)

    subject to :  g(x_i) + g'(x_i)(Δx_i) ∈ B                        (2.11)

                  h(x_i) + h'(x_i)(Δx_i) = 0                        (2.12)

In general this is a problem with a nonlinear objective function, which may be approximated by a second order expansion at x = x_i :

    L(x_i + Δx_i, y_i*, z_i*) ≈ L(x_i,y_i*,z_i*) + f'(x_i)(Δx_i)
        - y_i*.g'(x_i)(Δx_i) - z_i*.h'(x_i)(Δx_i)
        + L''_xx(x_i,y_i*,z_i*)(Δx_i)(Δx_i)/2                       (2.13)

Based on this expression we may construct the following linearly constrained quadratic subproblem for the calculation of a direction of search Δx_i :

Problem (QPEI) :

    minimize over Δx_i :  f'(x_i)(Δx_i)
                          + L''_xx(x_i,y_i*,z_i*)(Δx_i)(Δx_i)/2     (2.14)

    subject to :  g(x_i) + g'(x_i)(Δx_i) ∈ B                        (2.15)

                  h(x_i) + h'(x_i)(Δx_i) = 0                        (2.16)

In this problem formulation, the linear term (y_i*.g'(x_i) + z_i*.h'(x_i))(Δx_i) is omitted. The reason for this is that we want to obtain a quadratic subproblem which has, in the optimal point x̂, the same Lagrange multipliers ŷ* and ẑ* as the original problem. The Lagrange multipliers of the subproblem at x = x̂ would otherwise have been ŷ* - y_i* and ẑ* - z_i*, which would have meant that the Lagrange multipliers of the subproblem converge to zero as x_i -> x̂. Especially in the case of inequality constraints this is an undesirable phenomenon. With this modification the Lagrange multipliers obtained via the solution of problem (QPEI) may be used as new estimates of the Lagrange multipliers ŷ* and ẑ* of the original problem.

One of the difficulties of the solution of the original problem along these lines is the way in which the inequality constraints g(x) ∈ B are handled. One way is to solve problem (QPEI) as a quadratic programming problem with linear equality and inequality constraints. Another way is to first transform problem (P) into an equality constrained problem and then solve a quadratic subproblem with only equality constraints. In this paper we will follow the latter road and will almost exclusively deal with equality constrained subproblems.

The transformation of the constraint g(x) ∈ B into an equality constraint can be done either by an active set strategy (this is called a preassigned active set strategy) or by means of slack variables. The details of the transformation are strongly dependent on the actual spaces X and Y involved. The transformation used by us will be considered in the next section.

Based on the sequential solution of equality constrained subproblems we are led to the following two, slightly different algorithms. In these algorithms use is made of a bounded, linear operator G, which may be interpreted as a mapping used to imitate an inner product on the Banach space X, of the form

    (x|y) := <G.x, y>                                               (2.17)

In Hilbert spaces the mapping G becomes the identity operator.

Algorithm :

Given the invertible (normalisation) map G ∈ B[X,X*] :

(0)  x_0 given, z_0* = 0, i = 0.

(i)  Calculate a first order Lagrange multiplier estimate z_1* from :

         G.d_1 - h'(x_i)*.z_1* = - f'(x_i)
         h'(x_i).d_1           = - h(x_i)                           (2.18)

(ii)  Calculate an approximation to the Hessian of the Lagrangian at x_i :

         W(x_i,z_1*) := f''(x_i) - z_1*.h''(x_i)                    (2.19)

(iii)  Calculate a second order Lagrange multiplier estimate z_2* and the Newton direction d_2 from :

         W(x_i,z_1*).d_2 - h'(x_i)*.z_2* = - f'(x_i)                (2.20)
         h'(x_i).d_2                     = - h(x_i)                 (2.21)

(iv)  If ||d_2|| ≤ eps then ready, else goto (v).

(v)  Calculate a steplength α using bisection on the interval (0,1], starting with α = 1, using the merit function :

         M(x,z*) := f(x) - z*.h(x) + ρ_1.Q(L'_x(x,z*))/2
                    + ρ_2.P(h(x))/2                                 (2.22)

     Here Q(.) denotes a mapping X* -> R satisfying

         Q(x*) = 0  <=>  x* = 0

     and P(.) is a mapping Z -> R satisfying

         P(z) = 0  <=>  z = 0

     ρ_1 and ρ_2 are penalty constants.

(vi)  Set :

         x_{i+1}  := x_i + α.d_2                                    (2.23)
         z_{i+1}* := z_i* + α.(z_2* - z_i*)                         (2.24)

(vii)  Either
     (a)  z_1* := z_{i+1}* and goto (ii),
     or
     (b)  goto (i).

The difference between the two algorithms is in step (vii) : when (vii-a) is used, step (i) is only executed as part of the initialization, whereas when (vii-b) is used, step (i) is executed in every iteration. In the sequel we will refer to these different algorithms as algorithm a and algorithm b.

We note that steps (i) and (iii) are equivalent to the solution of the following quadratic subproblem :

Problem (QPE) :

    minimize over Δx_i :  f'(x_i)(Δx_i) + W̄(Δx_i)(Δx_i)/2          (2.25)

    subject to :  h(x_i) + h'(x_i)(Δx_i) = 0                        (2.26)

where

    W̄ = G              in step (i)
    W̄ = W(x_i,z_1*)    in step (iii)

A more detailed analysis of the algorithms presented will be a topic of our future research. We will end this section with the following remarks :

*  Because algorithms a and b are Newton-like methods, each iteration involves calculations with second order (Fréchet) derivatives. For large problems this may be a serious handicap. Therefore we intend also to consider the application of quasi-Newton and discrete Newton techniques to algorithms a and b in the future.

*  In algorithm b each iteration involves the solution of two sets of linear equations, whereas in algorithm a each iteration involves the solution of only one such set. For algorithm b a slightly stronger convergence result can be derived (cf. Tapia (1977)). However, this difference may be regarded as insignificant for many practical problems. Hence algorithm a may be considered superior to algorithm b.

*  We note, however, in favour of algorithm b that it provides a suitable initial estimate of the Lagrange multiplier z* for algorithm a, and we also expect that algorithm b will behave 'better' away from the solution. We intend to verify this in the future using numerical results.

*  The algorithms discussed are in fact infinite dimensional analogues of existing algorithms for nonlinear programming. As to the connection between algorithms a and b we refer to Bertsekas (1982), who shows that algorithm a is in fact a Newton-like method with variables in the space X, whereas algorithm b is a Newton-like method in the product space X x Z*.

*  We expect that the use of inequality constrained subproblems (i.e. problem (QPEI)) may be more favourable than the use of equality constrained subproblems as presented in this section. This will be a topic of our future research.

CHAPTER 3

APPLICATION TO OPTIMAL CONTROL PROBLEMS.

We consider the application of the algorithm given in the previous section to the following state-constrained optimal control problem :

Problem (OCP) :

Determine a control function û ∈ L_∞^m[0,T] and a state trajectory x̂ ∈ W_{1,∞}^n[0,T] which minimize the functional :

    h_0(x(0)) + ∫_0^T f_0(x,u).dt + g_0(x(T))                       (3.1)

subject to the constraints :

    ẋ = f(x,u)    a.e. 0 ≤ t ≤ T                                    (3.2)

    D(x(0)) = 0                                                     (3.3)

    E(x(T)) = 0                                                     (3.4)

    S_1(x,u) ≤ 0                                                    (3.5)

    S_2(x) ≤ 0                                                      (3.6)

where T is the fixed final time; h_0 ∈ C²(R^n -> R); f_0 ∈ C²(R^n x R^m -> R); g_0 ∈ C²(R^n -> R); f ∈ C²(R^n x R^m -> R^n); D ∈ C²(R^n -> R^c); E ∈ C²(R^n -> R^q); S_1 ∈ C²(R^n x R^m -> R^{k_1}); S_2 ∈ C²(R^n -> R^{k_2});

    W_{1,∞}^n[0,T] := { x : [0,T] -> R^n : x absolutely continuous, ẋ ∈ L_∞^n[0,T] }.

S_1 and S_2 represent the mixed and the pure state constraints respectively, i.e.

    ∂S_1/∂u ≢ 0  and  ∂S_2/∂u = 0    for all (x,u) considered.

The algorithm to be proposed for this problem can be derived in a rigorous fashion, in a way closely related to the derivation of the optimality conditions for problem (OCP) from lemma 2.1 (cf. Jacobson et al. (1971); Norris (1973); Maurer (1981)). For the sake of brevity we will limit ourselves here to a formal treatment. We start by summarizing the necessary conditions for optimality for problem (OCP) in the following two lemmas, taken from Maurer (1979a) :

" "

Lemma 3.1 : Let (x,u) be a solution of problem (OCP). Then there exist

{ A C A 1

a number ",}:- 0, vectors <5' E R, I"€R,

...

'"

a vectorfunction ~:(O,T] -> R,

a vectorfunction

7,:

(O,T] ->

R~'

;..

and a vectorfunction S:[O,T] -> R, of bounded variation,

2./.

not all zero with the following properties

1\ ,.. T AT A(O) = - ~.h [0] - ~.D (0]

o

Ox

x

A II T ~T ),(T) = - A.g (T] -

f.E

[T]

'0 Ox

x

t

"

.... T A(t 1 A{t )

o

= -

t)'

~~

x (tl A +

7.5

(t]) .dt 1 lx

o

(3.7) (3.8) t . -t

J

1 ST ,.. [t] .dS(t) 2x

o

for all pairs (t ,t ) with t < t in [O,T] (3.9)

0 1 0 1

T

H [t] +

7

'"

.5 (t]

=

0 a.e. 0 ~ t ~ T

u 1 lu

where the Hamiltonian H is defined as T

H(x,u,~,~ ) := ~ .f (x,u) + A.f(x,u) 0 0 0 and J\ (li(t) i=1, •.. ,k , ~ (t).S [t] = 0 i=1, ... ,k, (li l i (3.10) (3.11) (3.12) (3.13)

(14)

1\

SiS continuous from the right except possibly at t=O. (3.14)

A

SiS

nondecreasing on [O,T], i=1, .. ,k2 (3.1S)

... l.

~,iS constant on intervals where S [t] < 0 i=I, .•• ,k~ (3.16)

.1. 21

In lemma 3.1 we used the notation [tl to show the implicit dependence

of the relevant functions of t, i.e. as (x(t),Q(t» or

(x(t),u(t),~(t».

A sufficient condition for

A.>

0 and hence Ao= 1 may be obtained by expressing the constraint qualification (2.3)-(2.4) in terms of problem (OCP) (cf. Maurer (1981». As in the previous section we assume that there is at least one solution to problem (OCP) and that

A.>

0 can be taken for all solutions.

The adjoint equation (3.9) is numerically not very tractable. It is possible, however, under additional assumptions, to transform this set of integral equations into a set of differential equations. Thereto we introduce the following terminology :

A subinterval [t_1,t_2] ⊂ [0,T], t_1 < t_2, is called an interior arc of the constraint S_{ji} (j=1,2; i=1,2,...,k_j), if

    S_{ji}[t] < 0    for all t ∈ [t_1,t_2]                          (3.17)

Similarly, such a subinterval is called a boundary interval of the constraint S_{ji} (j=1,2; i=1,2,...,k_j), if

    S_{ji}[t] = 0    for all t ∈ [t_1,t_2]                          (3.18)

Entry- resp. exit-points (also referred to as junction points) and contact points are defined in an obvious way.

The order of the state constraint S_{2i} is defined as the integer p_i corresponding to the first total time derivative of S_{2i} which contains the control explicitly, i.e.

    ∂/∂u ( d^j/dt^j  S_{2i}(x(t)) ) = 0      j = 0,1,...,p_i - 1

    ∂/∂u ( d^{p_i}/dt^{p_i}  S_{2i}(x(t)) ) ≠ 0                     (3.19)

To this definition we note that we implicitly assumed that S_{2i} is sufficiently smooth and that the order of S_{2i} is constant on [0,T]. Using time derivatives of the Hamiltonian H, it is possible to show that the multiplier functions are differentiable on boundary arcs (cf. Jacobson et al. (1971); lemma 3.2 below is taken from Maurer (1979a)). The lemma states conditions under which the adjoint equation (3.9) may be transformed into a set of differential equations.

Lemma 3.2 : Let (x̂,û) be a solution of problem (OCP) and let f and S_2 be C^{p+r} functions with p = max_{1≤i≤k_2} p_i and r ≥ 0. Let [t_{2i},t_{2i+1}] be a boundary interval with t_{2i} an entry point and t_{2i+1} an exit point, and let

    rank (S_2^{(p)})_u = k_2 *)                                     (3.20)

Assume in addition that û(t) is a C^{p+r} function for t ∈ (t_{2i},t_{2i+1}). Then the functions λ̂ and ξ̂ in the adjoint equation (3.9) are C^{r+1} functions on (t_{2i},t_{2i+1}). In particular the adjoint equation

    λ̂̇ = - f_x^T[t].λ̂ - S_{1x}^T[t].η̂_1 - S_{2x}^T[t].η̂_2 - f_{0x}^T[t]   (3.21)

holds on (t_{2i},t_{2i+1}), where η̂_2 := ξ̂̇ ≥ 0 is a C^r function, which satisfies an explicit equation of the form

    η̂_2(t) = η_2(x̂(t),û(t),λ̂(t))    for all t ∈ (t_{2i},t_{2i+1})   (3.22)

At junction and contact points the 'jump'-condition

    λ̂(t_i+) = λ̂(t_i-) - S_{2x}^T[t_i].ν̂_i                           (3.23)

holds with ν̂_i ≥ 0.

*) With (S_2^{(p)})_u we denote the matrix whose i-th row consists of the partial derivatives with respect to u of the p_i-th total time derivative S_{2i}^{(p_i)}, i=1,...,k_2.

We note that equation (3.22) gives an explicit expression for the multiplier η̂_2 on boundary arcs (cf. appendix A).

In the sequel we will assume that the optimal trajectory has no contact points. The solution of problems whose solutions do have contact points will be investigated in the future.
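The order p_i defined in (3.19) can be computed mechanically by differentiating S_{2i} along the dynamics until the control appears. A small symbolic sketch of this procedure (the double integrator dynamics and the constraints below are our illustrative choice, not an example from the paper):

```python
import sympy as sp

# Order of a pure state constraint S2(x) <= 0 per (3.19): repeatedly take
# the total time derivative along x' = f(x,u) until u appears explicitly.
x1, x2, u = sp.symbols("x1 x2 u")
states, f = [x1, x2], [x2, u]              # double integrator: x1' = x2, x2' = u

def constraint_order(S):
    p = 0
    while sp.diff(S, u) == 0:              # u not yet explicit
        # total derivative along the dynamics: dS/dt = S_x . f
        S = sum(sp.diff(S, xi) * fi for xi, fi in zip(states, f))
        p += 1
    return p

print(constraint_order(x1 - 1))            # position bound: order 2
print(constraint_order(x2 - 1))            # velocity bound: order 1
```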

It may be noted that the necessary conditions for optimality presented in lemmas 3.1 and 3.2, are the conditions originally stated by Jacobson et al. (1971). It has been shown that they are equivalent to the conditions of Bryson et al. (1963) augmented with a number of conditions on the signs of various multipliers (Kreindler 1982). The reason for using the multipliers of Jacobson et al. (1971) is that


they can be used in a rather straightforward manner for the active set strategy to be discussed. It is of course also possible to calculate these multipliers from the multipliers of Bryson et al. (1963).

We next develop an analogue of problem (QPE). Essential in this approach is the way in which the inequality constraints S are treated. We will consider two ways : using an active set strategy and using slack variables.

When an active set strategy is used, an estimate of the set of junction points is made. The constraint S_{ji} is supposed to be active at time t if :

    S_{ji}[t] > - (1 + 2.ρ_1).η_{ji}(t)/ρ_2                         (3.24)

where ρ_1 and ρ_2 are the penalty constants used in the merit function (cf. section 2) and where the notation [t] now refers to (x̄(t),ū(t)), i.e. the current estimate of the solution of problem (OCP).

Using equation (3.24) an estimate of the set of junction points is made, taking care that the function ξ is nondecreasing on boundary arcs. Let :

    t_{2i},   i=0,...,l    be entry points
    t_{2i+1}, i=0,...,l    be exit points

With this estimate we obtain the following nonlinear equality constraints :

    S_{jk}[t] = 0
    for all (j,k) ∈ I(t) and t_{2i} ≤ t ≤ t_{2i+1}, i=0,...,l       (3.25)

where I(t) is the set of indices (j,k) corresponding to the constraints which are active at time t. The linearized constraints for the calculation of the direction of search (Δx,Δu) then become :

    S_{1k}[t] + S_{1kx}[t].Δx(t) + S_{1ku}[t].Δu(t) = 0
    S_{2k}[t] + S_{2kx}[t].Δx(t) = 0
    for all (j,k) ∈ I(t) and t_{2i} ≤ t ≤ t_{2i+1}, i=0,...,l       (3.26)

We next give the analogue of problem (QPE) for the case that an active set strategy is used.

Problem (QPE/OCP) :

    minimize over (Δx,Δu) :

        h_{0x}[0].Δx(0) + ∫_0^T ( f_{0x}[t].Δx + f_{0u}[t].Δu ).dt + g_{0x}[T].Δx(T)

        + Δx(0)^T.( h_{0xx}[0] + σ̄*D_{xx}[0] ).Δx(0)/2

        + (1/2).∫_0^T (Δx^T Δu^T).[ H̄_{xx}[t]  H̄_{xu}[t] ; H̄_{ux}[t]  H̄_{uu}[t] ].(Δx ; Δu).dt

        + Σ_{i=0}^{2l+1} Δx(t_i)^T.( ν̄_i*S_{2xx}[t_i] ).Δx(t_i)/2

        + Δx(T)^T.( g_{0xx}[T] + μ̄*E_{xx}[T] ).Δx(T)/2              (3.27)

    subject to :

        Δẋ = f_x[t].Δx + f_u[t].Δu + f[t] - dx̄/dt                   (3.28)

        D[0] + D_x[0].Δx(0) = 0                                     (3.29)

        E[T] + E_x[T].Δx(T) = 0                                     (3.30)

        S_{1k}[t] + S_{1kx}[t].Δx(t) + S_{1ku}[t].Δu(t) = 0         (3.31)

        S_{2k}[t] + S_{2kx}[t].Δx(t) = 0                            (3.32)

        for all (j,k) ∈ I(t) and t_{2i} ≤ t ≤ t_{2i+1}, i=0,...,l

where H̄ is the augmented Hamiltonian, i.e.

    H̄ := f_0 + λ̄^T.f + η̄^T.S

(the variables with a bar denote the corresponding variables from the previous step c.q. iteration; σ̄*D_{xx} denotes the contraction Σ_j σ̄_j.D_{j,xx}, and similarly for the other starred products).

Application of lemmas 3.1 and 3.2 yields necessary conditions for optimality for the solution of problem (QPE/OCP). These conditions take the form of a linear multipoint boundary value problem and are the counterpart of (2.20). Combination with the analogue of (2.21) yields the following linear multipoint boundary value problem, which is to be

solved at step (iii) of the algorithm :

    Δẋ = f_x[t].Δx + f_u[t].Δu + f[t] - dx̄/dt      a.e. 0 ≤ t ≤ T   (3.33)

    λ̇ = - f_x^T[t].λ - S_x^T[t].η - f_{0x}^T[t]
         - H̄_{xx}[t].Δx - H̄_{xu}[t].Δu             a.e. 0 ≤ t ≤ T   (3.34)

    D[0] + D_x[0].Δx(0) = 0                                         (3.35)

    - ( h_{0xx}[0] + σ̄*D_{xx}[0] ).Δx(0) - λ(0)
        = h_{0x}^T[0] + D_x^T[0].σ                                  (3.36)

    E[T] + E_x[T].Δx(T) = 0                                         (3.37)

    ( g_{0xx}[T] + μ̄*E_{xx}[T] ).Δx(T) - λ(T)
        = - g_{0x}^T[T] - E_x^T[T].μ                                (3.38)

    f_{0u}^T[t] + f_u^T[t].λ + S_{1u}^T[t].η_1
        + H̄_{ux}[t].Δx + H̄_{uu}[t].Δu = 0          0 ≤ t ≤ T        (3.39)

    S_{1k}[t] + S_{1kx}[t].Δx(t) + S_{1ku}[t].Δu(t) = 0
    S_{2k}[t] + S_{2kx}[t].Δx(t) = 0
    for all (j,k) ∈ I(t) and t_{2i} ≤ t ≤ t_{2i+1}, i=0,...,l       (3.40)

    λ(t_i+) = λ(t_i-) - S_{2x}^T[t_i].ν_i - ( ν̄_i*S_{2xx}[t_i] ).Δx(t_i)
    i=0,...,2l+1                                                    (3.41)

    ν_{ij} = 0    if (2,j) ∉ I(t_i)                                 (3.42)

    η_2(t) = φ[t].λ + φ_x[t].Δx + φ_u[t].Δu                         (3.43)

    η_{jk} = 0    for (j,k) ∉ I(t), t_{2i} ≤ t ≤ t_{2i+1}           (3.44)

We note that we used S and η as shorthand for the stacked quantities

    S = ( S_1 ; S_2 ),    η = ( η_1 ; η_2 )                         (3.45)

The translation of (2.17) follows straightforwardly along the previous lines, once the mapping G is defined. As this definition is (within limits) arbitrary, we may take :

    <G.d_1,d_1> := ∫_0^T ( Δx^T(t).Δx(t) + Δu^T(t).Δu(t) ).dt       (3.46)

This yields the following modification of (3.34), (3.36), (3.38), (3.39) and (3.41) :

    λ̇ = - f_x^T[t].λ - S_x^T[t].η - f_{0x}^T[t] - Δx    0 ≤ t ≤ T   (3.47)

    λ(0) = - h_{0x}^T[0] - D_x^T[0].σ                               (3.48)

    λ(T) = g_{0x}^T[T] + E_x^T[T].μ                                 (3.49)

    f_{0u}^T[t] + f_u^T[t].λ + S_{1u}^T[t].η_1 + Δu = 0    0 ≤ t ≤ T   (3.50)

    λ(t_i+) = λ(t_i-) - S_{2x}^T[t_i].ν_i    i=0,...,2l+1           (3.51)

When slack variables are used, the constraints S are transformed into equality constraints in the following way :

    S_1[t] + diag(y_1(t)).y_1(t)/2 = 0                              (3.52)

    S_2[t] + diag(y_2(t)).y_2(t)/2 = 0                              (3.53)

where y_1 : [0,T] -> R^{k_1} and y_2 : [0,T] -> R^{k_2}. In this approach junction points are not treated explicitly.

The modification of problem (QPE/OCP) for this case is straightforward. The boundary value problem to be solved in this case is given in appendix B.
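Numerically the transform (3.52)-(3.53) is trivial to apply pointwise; a short sketch with made-up constraint values:

```python
import numpy as np

# Slack-variable transform (3.52)-(3.53): S <= 0 becomes the equality
# S + diag(y).y/2 = 0, with slack y = sqrt(-2 S) at feasible points.
# The numbers below are purely illustrative.
def slack_residual(S_val, y):
    return S_val + np.diag(y) @ y / 2

S_val = np.array([-0.5, -2.0])       # strictly feasible constraint values
y = np.sqrt(-2.0 * S_val)            # y_i = sqrt(-2 S_i)
print(slack_residual(S_val, y))      # residual vanishes at a feasible point
```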

To complete the translation of the algorithm, we consider the translation of the merit function in terms of problem (OCP). The first step is to define the mappings P and Q. We have chosen them similar to the criteria P and Q involved in the Sequential Gradient-Restoration Algorithm of Miele (cf. Miele (1980)) : P 'measures' the constraint violation, Q 'measures' the error in the optimality conditions.

For the sake of brevity we define

    N(v) := v^T.v = ||v||_2²                                        (3.54)

    P := ∫_0^T ( N(ẋ - f(x,u)) + N(S̄) ).dt + N(D(x(0))) + N(E(x(T)))   (3.55)

where :

    S̄ := ( max{ S_1(x,u), -(1 + 2.ρ_1).η_1/ρ_2 } ;
           max{ S_2(x),   -(1 + 2.ρ_1).η_2/ρ_2 } )                  (3.56)

in case that an active set strategy is used (the max-operator is applied componentwise), otherwise

    S̄ := ( S_1(x,u) + diag(y_1).y_1/2 ;
           S_2(x)   + diag(y_2).y_2/2 )                             (3.57)

in case that a slack-variable technique is used.

    Q := ∫_0^T ( N( λ̇ + f_x^T(x,u).λ + S_x^T(x,u).η + f_{0x}^T(x,u) )
               + N( f_u^T(x,u).λ + S_u^T(x,u).η + f_{0u}^T(x,u) ) ).dt
       + N( λ(0) + h_{0x}^T(x(0)) + D_x^T(x(0)).σ )
       + N( λ(T) - g_{0x}^T(x(T)) - E_x^T(x(T)).μ )
       + Σ_{i=0}^{2l+1} N( λ(t_i+) - λ(t_i-) + S_{2x}^T(x(t_i)).ν_i )   (3.58)

The merit function becomes :

    M = h_0(x(0)) + σ^T.D(x(0)) + g_0(x(T)) + μ^T.E(x(T))

      + ∫_0^T ( f_0(x,u) - λ^T.(ẋ - f(x,u)) + η^T.S̄ ).dt

      + Σ_{i=0}^{2l+1} ν_i^T.S_2(x(t_i)) + ( ρ_1.Q + ρ_2.P )/2      (3.59)

We note that the norm of the update direction ||d|| is translated as :

    ||d|| := max_{0≤t≤T} max { ||Δx(t)||_2 , ||Δẋ(t)||_2 , ||Δu(t)||_2 }

where ||.||_2 denotes the euclidean norm on the space R^n or R^m.
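For a sampled trajectory the constraint-violation measure P of (3.55) can be evaluated by quadrature. The sketch below uses the trapezoidal rule; the dynamics, constraint and boundary functions are stand-ins of our own choosing:

```python
import numpy as np

# Discrete evaluation of P (3.55) with N(v) = v.v; the time integral is
# approximated by the trapezoidal rule on the sample grid t.
def P_measure(t, x, u, xdot, f, S_bar, D, E):
    integrand = np.array([np.sum((xd - f(xi, ui))**2) + np.sum(S_bar(xi, ui)**2)
                          for xi, ui, xd in zip(x, u, xdot)])
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2
    return integral + np.sum(D(x[0])**2) + np.sum(E(x[-1])**2)

# Example: x' = u with x(t) = t, u = 1 satisfies the dynamics, the boundary
# conditions x(0) = 0 and x(1) = 1, and the (inactive) constraint x - 2 <= 0.
t = np.linspace(0.0, 1.0, 11)
x = t.reshape(-1, 1)
u = np.ones((11, 1))
xdot = np.ones((11, 1))
P = P_measure(t, x, u, xdot,
              f=lambda xi, ui: ui,
              S_bar=lambda xi, ui: np.maximum(xi - 2.0, 0.0),
              D=lambda x0: x0,
              E=lambda xT: xT - 1.0)
print(P)   # 0.0 for this exactly feasible trajectory
```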

CHAPTER 4

EXPERIMENTAL NUMERICAL IMPLEMENTATION OF THE METHOD.

An experimental implementation of the method discussed in the previous sections has been developed. This implementation is based on the ideas of solving boundary value problems using collocation with piecewise polynomials (cf. de Boor et al. (1973) and Weiss (1974)). In an earlier stage of our investigations we developed a code based on collocation using piecewise cubic Hermite polynomials (cf. Dickmans et al. (1975)). However, the nature of the solution (discontinuities in the control) yielded rather large errors, and this approach has been abandoned.

One of the major parts of the method is the solution of the linear multipoint boundary value problem (3.33)-(3.44), which can be translated into the following problem :

    v̇ = A(t).v + B(t).w + e(t)    a.e. 0 ≤ t ≤ T                    (4.1)

    0 = C(t).v + D(t).w + g(t)    0 ≤ t ≤ T                         (4.2)

    0 = K_0.v(0) + L_0.β + l_0                                      (4.3)

    0 = v(t_i-) + F_i.v(t_i+) + G_i.γ_i    i=0,...,2l+1             (4.4)

    0 = H_i.v(t_i+) + l_i    i=0,...,2l+1                           (4.5)

    0 = K_p.v(T) + L_p.β + l_p                                      (4.6)

The relation between the boundary value problems (3.33)-(3.44) and (4.1)-(4.6) is given in appendix C.

Following the concept of collocation with piecewise polynomials we introduce the following quantities

..

..

The breakpoint sequence ;

6:= (t ,t , ....

t_1

O=t <t < ...•. <t_=T) (4.n

o

1 p O L P

The spaces consisting of piecewise polynomials : n n

p

s ,Gl

.-

(

.-

all piecewise polynomials v:[O,T] -> R with breakpoint sequence d, where If is an s-order polynomial on

the open intervals ( t , t ,i=O, .. p-l}

The numbers

and time differences

h := t - t

j j+l j

and time instants

t + j 1:' := (t + t i+j.s j t j+l j+l

+t

.h. )/2 1. ) i i+1 if

?

= -1 1. (4.8) j=O,l ... p-l (4.9) if -1

<f,<

1 i=1,2 .. s (4.10) 1. j=O, 1 •• p-1

The collocation procedure is based on the requirement that (4.1) and

(4.2) must hold at all timepoints (j=l, .•.. s.p), which are referred to

as collocation points. Furthermore the functions v and ware supposed to be piecewise polynomials. More specifically

2n m+k

v € P and w ~ P (4.11)

s+l,A s,A

The functions ware allowed to be discontinuous at the breakpoints whereas the functions v must be continuous at these points, except in the case that such a point coincides with a junction point, in which case (4.4) must hold.

In our procedure, we allow the junction points only to coincide with some of the breakpoints. In the sequel we shall treat every breakpoint as if it is a junction point. (When this is not the case, the column dimension of the matrix G and the row dimension of the matrix H . will be zero. Taking F=-I (the 2n-identymatrix), the function v will automatically be continuous in these points.)


de Boor et al. (1973) show that the Gaussian points are a suitable choice for the ρ_i. (The Gaussian points are the points corresponding to the Gauss integration formulas.)

For v and w we select two different representations, which are based on the observation that for w we only need its values at the collocation points, whereas for v we also need the values at the breakpoints (cf. equations (4.1)-(4.6)).

This leads us to take the truncated power basis for the representation of v, i.e.

    v(t) = Σ_{j=0}^{s} c_{ij}.(t - t_i)^j,    t_i ≤ t ≤ t_{i+1}      (4.12)

For w we use the Lagrange form of polynomial interpolation (cf. de Boor 1978), i.e.

    w(t) = Σ_{j=1}^{s} w_{j+s.i}.z_{ij}(t),    t_i ≤ t ≤ t_{i+1}     (4.13)

with

    z_{ij}(t) = Π_{k=1,k≠j}^{s} (t - τ_{k+s.i}) / (τ_{j+s.i} - τ_{k+s.i})    (4.14)
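As an illustration of (4.13)-(4.14) (our own sketch, not the paper's implementation), the Lagrange basis functions and the resulting interpolant can be written as:

```python
import numpy as np

def lagrange_basis(tau, j, t):
    """z_j(t) = prod_{k != j} (t - tau_k)/(tau_j - tau_k); equals 1 at
    tau_j and 0 at the other interpolation points tau_k, cf. (4.14)."""
    z = 1.0
    for k, tk in enumerate(tau):
        if k != j:
            z *= (t - tk) / (tau[j] - tk)
    return z

def interpolate(tau, w_vals, t):
    """w(t) = sum_j w_j.z_j(t), cf. (4.13)."""
    return sum(wj * lagrange_basis(tau, j, t)
               for j, wj in enumerate(w_vals))
```

Interpolation at s points reproduces any polynomial of degree less than s exactly, which is why only the values w_{j+s.i} at the collocation points are needed.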

Using these representations one can show that (4.1) is transformed into:

    Σ_{j=0}^{s} G_{irj}.c_{rj} - B(τ_{i+s.r}).w_{i+s.r} - e(τ_{i+s.r}) = 0
                                           i=1,...,s,  r=0,...,p-1          (4.15)

with

    G_{irj} = (j.I - A(τ_{i+s.r}).(1+ρ_i).h_r/2).((1+ρ_i).h_r/2)^{j-1}      (4.16)

And (4.2) becomes:

    Σ_{j=0}^{s} C_{irj}.c_{rj} + D(τ_{i+s.r}).w_{i+s.r} + g(τ_{i+s.r}) = 0
                                           i=1,...,s,  r=0,...,p-1          (4.17)

with

    C_{irj} = C(τ_{i+s.r}).((1+ρ_i).h_r/2)^j                                (4.18)
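To make (4.15)-(4.16) concrete, here is a small self-contained check (our own construction, not from the report) for the scalar model problem v̇ = a.v on a single interval: collocation at s = 2 Gauss points with the truncated power basis reproduces v(h) to high accuracy.

```python
import numpy as np

# Scalar model problem v' = a*v, v(0) = 1, on one interval [0, h],
# collocated at s = 2 Gauss points; unknowns are the coefficients c_j
# of the truncated power basis v(t) = sum_j c_j * t**j (cf. (4.12)).
a, h, s = -2.0, 0.1, 2
rho, _ = np.polynomial.legendre.leggauss(s)

def G(j, rho_i):
    # G_irj = (j*I - a*(1+rho_i)*h/2) * ((1+rho_i)*h/2)**(j-1), cf. (4.16)
    d = (1.0 + rho_i) * h / 2.0      # tau_i - t_0 on this interval
    return (j - a * d) * d ** (j - 1)

M = np.zeros((s + 1, s + 1))
rhs = np.zeros(s + 1)
M[0, 0], rhs[0] = 1.0, 1.0                       # v(0) = c_0 = 1
for i in range(s):                               # collocation rows (4.15)
    M[i + 1] = [G(j, rho[i]) for j in range(s + 1)]
c = np.linalg.solve(M, rhs)
v_h = sum(c[j] * h ** j for j in range(s + 1))   # v(h)
```

For this linear problem the Gauss collocation step coincides with the s-stage Gauss implicit Runge-Kutta step, so v_h approximates exp(a.h) to order 2s per step.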

Equations (4.1)-(4.6) are transformed into the following almost block-diagonal system of linear equations (4.19): the first block row contains (K_0  L_0); each interval r=0,...,p-1 contributes a collocation block Bl_r; successive intervals are linked by continuity rows of the form (I  h_r.I  h_r².I ... -I) together with the junction blocks F_i, G_i and H_i (cf. (4.4)-(4.5)); and the last block row contains (K_p  L_p). The unknowns are the coefficients c_{rj} and the values w_{i+s.r}; the right-hand side consists of the vectors -e(τ_{i+s.r}), -g(τ_{i+s.r}) and -l_i. The collocation blocks are given by

    Bl_r = [ G_{1r0}  G_{1r1} ... G_{1rs}   -B(τ_{1+s.r})               ]
           [ C_{1r0}  C_{1r1} ... C_{1rs}    D(τ_{1+s.r})               ]
           [    :                                     :                 ]
           [ G_{sr0}  G_{sr1} ... G_{srs}              -B(τ_{s+s.r})    ]
           [ C_{sr0}  C_{sr1} ... C_{srs}               D(τ_{s+s.r})    ]   (4.20)

This system is solved by means of Gauss elimination, taking the sparsity pattern of the left-hand side into account (subroutine CWIDTH from de Boor (1978)).
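The actual code uses CWIDTH from de Boor (1978); as a rough illustration of the idea of exploiting the band structure (our own simplified sketch, without the pivoting a robust routine needs), a band-limited Gaussian elimination looks like this:

```python
import numpy as np

def solve_band(A, b, bw):
    """Gaussian elimination restricted to a band of half-width bw around
    the diagonal: elimination and back substitution never touch entries
    outside the band, so the work is O(n*bw**2) instead of O(n**3)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        hi = min(k + bw + 1, n)
        for i in range(k + 1, hi):       # rows within the lower band
            m = A[i, k] / A[k, k]
            A[i, k:hi] -= m * A[k, k:hi]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):       # back substitution in the band
        hi = min(i + bw + 1, n)
        x[i] = (b[i] - A[i, i + 1:hi] @ x[i + 1:hi]) / A[i, i]
    return x
```

Without pivoting this is only safe for matrices (such as diagonally dominant ones) whose elimination is stable; the point here is merely that the fill-in stays inside the band.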

Returning to the original algorithm of section 2, we see that steps (i) and (iii) are now clear, i.e. steps (i) and (iii) require the solution of system (4.20), whereas step (ii) is straightforward.


The last translation to be made is the calculation of the merit function M, which follows straightforwardly from (3.59) once a suitable quadrature formula for the calculation of the definite integrals is selected.

Referring to Weiss (1974), we note that there is a strong correspondence between this type of collocation and implicit Runge-Kutta methods based on interpolatory quadrature formulae. The definite integrals can be estimated on the basis of this correspondence, which yields the Gaussian quadrature formulae (cf. Bulirsch et al. 1976), i.e.

    ∫_{t_i}^{t_{i+1}} f(t).dt ≈ h_i.Σ_{j=1}^{s} w_j.f(τ_{j+s.i})     (4.21)
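A sketch of (4.21) (our own illustration; note that the weights returned by NumPy's leggauss sum to 2 over (-1, 1), so the factor in front becomes h_i/2):

```python
import numpy as np

def gauss_piecewise(f, breakpoints, s):
    """Apply the s-point Gauss rule on each interval (t_i, t_{i+1}),
    evaluating f at the points tau = (t_i + t_{i+1} + rho*h)/2."""
    rho, w = np.polynomial.legendre.leggauss(s)  # nodes/weights on (-1, 1)
    t = np.asarray(breakpoints, dtype=float)
    total = 0.0
    for ti, tip1 in zip(t[:-1], t[1:]):
        h = tip1 - ti
        tau = (ti + tip1 + rho * h) / 2.0
        total += (h / 2.0) * np.dot(w, f(tau))
    return total
```

The s-point Gauss rule is exact for polynomials of degree up to 2s-1 on each interval, which matches the accuracy of the collocation discretization.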

As to the accuracy of the method discussed in this section we note that there are mainly three sources of error:

1. Discretization of the time functions.

   Convergence results from the theory of approximation indicate that this error is proportional to

       [max_i h_i]^{2s}

   assuming that all functions involved are sufficiently smooth on the open intervals (t_i, t_{i+1}), i=0,...,p-1.

2. Representation of the active set.

   The representation of the active set is done by means of the junction points. These junction points are only allowed to coincide with the breakpoints. (The breakpoint sequence is kept fixed.) An error is due to the fact that in general the true junction points are interior to the intervals between successive breakpoints.

   In the future we will investigate variable stepsize strategies, based on reducing this error, for improving a given breakpoint sequence.

3. Truncation of the iteration process.

   Because the iteration converges locally quadratically, this error can be made very small.
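The quadratic decrease of the step norms ||d|| visible in tables 5.1 and 5.2 (roughly, ||d_{k+1}|| ≤ C.||d_k||²) is the generic behaviour of a Newton-type iteration near a regular solution; a minimal one-dimensional illustration (our own, unrelated to the control problem itself):

```python
def newton(g, dg, x, steps):
    """Plain Newton iteration; returns the final iterate and the
    step norms |d_k|, which shrink quadratically near the root."""
    norms = []
    for _ in range(steps):
        d = -g(x) / dg(x)
        norms.append(abs(d))
        x += d
    return x, norms

# Solve x**2 - 2 = 0 starting from x = 2.
x, norms = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0, 5)
```

After a few steps the step norm reaches machine precision, just as in the iteration counts reported below.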


CHAPTER 5

EXAMPLES.

To perform some preliminary tests of the program we calculated the solution to the following two examples.

The breakpoint sequence was taken to be uniform. For example 1 the algorithm was started with the slack-variable technique; once the norm ||d|| was below a certain limit (the value 1.0 was taken), a switch was made to the active set strategy. The calculations for example 2 were done using only the active set strategy.

Example 1 (cf. Miele 1980)

    minimize  x_3(1)

    subject to:

        ẋ_1 = x_3.u                            (5.1)
        ẋ_2 = x_3.(u² - x_1 - 0.5)             (5.2)
        ẋ_3 = 0                                (5.3)
        x_1(0) + x_2(0) = 0                    (5.4)
        x_1² - u² ≤ 0                          (5.5)
        x_1(1) = 1                             (5.6)
        x_2(1) = -π/4                          (5.7)


Some convergence results are given in table 5.1 below:

     i    α               ||d||            Merit function
     1    0.500000E+00    0.223730E+01     0.238862E+01
     2    0.250000E+00    0.258591E+01     0.221859E+01
     3    0.500000E+00    0.121017E+01     0.221722E+01
     4    0.100000E+01    0.369991E+00     0.210637E+01
          SWITCH OVER FROM SLACK-VARIABLE TECHNIQUE TO ACTIVE SET STRATEGY
     5    0.100000E+01    0.844457E+00     0.195140E+01
     6    0.100000E+01    0.283447E+00     0.182497E+01
     7    0.100000E+01    0.220002E+00     0.182240E+01
     8    0.100000E+01    0.836837E-01     0.182223E+01
     9    0.100000E+01    0.191523E-02     0.182223E+01
    10    0.100000E+01    0.475043E-08     0.182223E+01
    11                    0.195085E-15     0.182223E+01

                          table 5.1

The penalty constants were r_1 = 1.0E-01 and r_2 = 1.0E+01.

Example 2

    subject to:

        ẋ_1 = x_2                              (5.8)
        ẋ_2 = u                                (5.9)
        |u| ≤ 5                                (5.10)
        |x_2| ≤ 1.4                            (5.11)
        x_1(0) = 0,  x_2(0) = 0,  x_1(1) = 1,  x_2(1) = 0      (5.12)


Some convergence results are given in table 5.2 below:

     i    α               ||d||            Merit function
     1    0.100000E+01    0.688235E+01     0.367874E+03
     2    0.500000E+00    0.132081E+01     0.188972E+03
     3    0.500000E+00    0.111806E+01     0.105495E+03
     4    0.500000E+00    0.101895E+01     0.394670E+02
     5    0.100000E+01    0.700380E+00     0.345637E+02
     6    0.100000E+01    0.358886E+00     0.646068E+01
     7    0.100000E+01    0.118366E-01     0.642857E+01
     8    0.100000E+01    0.140097E-04     0.642857E+01
     9    0.100000E+01    0.196261E-10     0.642857E+01
    10                    0.277556E-16     0.642857E+01

                          table 5.2


CHAPTER 6 FINAL REMARKS.

Using the experimental implementation outlined in section 4, we were able to solve two problems, indicating that the method is feasible. It will be clear that there is still a lot of work to be done to obtain a fully tested operational code for solving state-constrained optimal control problems with the method presented here. We summarize the following topics:

*  Solution of subproblems with inequality constraints.
*  Solution of problems with contact points.
*  More efficient implementation of the collocation method.
*  Variable stepsize strategy (breakpoint sequence).
*  Application of quasi-Newton and discrete Newton techniques.
*  Choice of merit function, penalty constants, linesearch algorithm.
*  Strategies in case of defective subproblems.

We hope to present results of this research in the future.


CHAPTER 7 REFERENCES

1. Bertsekas D.P. (1982). Constrained optimization and Lagrange multiplier methods. Academic Press, New York, 1982.

2. Bock H.G. (1983). Numerische Behandlung von zustandsbeschraenkten und Chebychef-Steuerungsproblemen. Syllabus of the course 'Optimierungsverfahren' of the Carl Cranz Gesellschaft, Manuskript 9, Oberpfaffenhofen FRG, October 1983.

3. de Boor C.; B. Swartz (1973). Collocation at Gaussian points. SIAM Journal on Numerical Analysis, Vol. 10, No. 4, Sept. 1973.

4. de Boor C. (1978). A practical guide to splines. Series in Applied Mathematical Sciences, Vol. 27, Springer Verlag, New York, 1978.

5. Bryson A.E. Jr.; Denham W.F.; Dreyfus S.E. (1963). Optimal programming problems with inequality constraints I: necessary conditions for extremal solutions. AIAA Journal, Vol. 1, No. 11, pp. 2544-2551, Nov. 1963.

6. Bulirsch R.; J. Stoer (1976). Introduction to numerical analysis. Springer Verlag, New York, 1976.

7. Bulirsch R. (1983). Die Mehrzielmethode zur numerischen Loesung von nichtlinearen Randwertproblemen und Aufgaben der optimalen Steuerung. Syllabus of the course 'Optimierungsverfahren' of the Carl Cranz Gesellschaft, Manuskript 8, Oberpfaffenhofen FRG, October 1983.

8. Dickmanns E.D.; K.H. Well (1975). Approximate solution of optimal control problems using third order Hermite polynomial functions. Proc. IFIP-TC 7, VI Techn. Conf. on Optimization Techniques, held at Novosibirsk, 1974. Proceedings Springer, 1975.

9. Gill P.E.; W. Murray; M.H. Wright (1981). Practical optimization. Academic Press, New York, 1981.

10. Hamilton W.E. (1972). On nonexistence of boundary arcs in control problems with bounded state variables. IEEE Trans. Automatic Control, AC-17, pp. 338-343, 1972.

11. Han S.P. (1976). Superlinearly convergent variable metric algorithms for general nonlinear programming problems. Math. Programming 11, pp. 263-282, 1976.

12. Jacobson D.H.; Lele M.M.; Speyer J.L. (1971). New necessary conditions of optimality for control problems with state-variable inequality constraints. Journal of Mathematical Analysis and Applications 35, pp. 255-284, 1971.

13. Kreindler E. (1982). Additional necessary conditions for optimal control with state-variable inequality constraints. Journal of Optimization Theory and Applications, Vol. 38, No. 2, Oct. 1982.

14. Luenberger D.G. (1969). Optimization by vector space methods. Wiley, New York, 1969.

15. Maurer H. (1979a). On the minimum principle for optimal control problems with state constraints. Schriftenreihe des Rechenzentrums der Universitaet Muenster, ISSN 0344-0842, Nr. 41, Oct. 1979.

16. Maurer H.; J. Zowe (1979b). First and second order necessary and sufficient optimality conditions for infinite-dimensional programming problems. Mathematical Programming (16), pp. 98-110, North-Holland Publishing Company, 1979.

17. Maurer H. (1981). First and second order necessary and sufficient optimality conditions in mathematical programming and optimal control. Mathematical Programming Study 14, pp. 163-177, North-Holland Publishing Company, 1981.

18. Miele A.; A.K. Wu (1980). Sequential conjugate gradient-restoration algorithm for optimal control problems with non-differential constraints and general boundary conditions, parts 1 and 2. Optimal Control Applications and Methods, Vol. 1, pp. 69-88 and 119-130, 1980.

19. Norris D.O. (1973). Nonlinear programming applied to state-constrained optimization problems. Journal of Mathematical Analysis and Applications 43, pp. 261-272, 1973.

20. Powell M.J.D. (1978). A fast algorithm for nonlinearly constrained optimization calculations. In: Numerical Analysis, Proceedings of the Biennial Conference held at Dundee, June 1977 (G.A. Watson ed.), Lecture Notes in Mathematics, Vol. 630, Springer, New York, 1978.

21. Schittkowski K. (1981). The nonlinear programming method of Wilson, Han and Powell with an augmented Lagrangian type line search function. Numer. Math. 38, pp. 83-114, 1981.

22. Stoer J. (1984). Foundations of recursive quadratic programming methods for solving nonlinear programs. Paper presented at the NATO Advanced Study Institute on "Computational Mathematical Programming", Bad Windsheim, FRG, July 23 - August 2, 1984.

23. Tapia R.A. (1974b). A stable approach to Newton's method for general mathematical programming problems in R^n. Journal of Optimization Theory and Applications, Vol. 14, No. 5, 1974.

24. Tapia R.A. (1974a). Newton's method for optimization problems with equality constraints. SIAM Journal on Numerical Analysis, Vol. 11, No. 5, Oct. 1974.

25. Tapia R.A. (1977). Diagonalized multiplier methods and quasi-Newton methods for constrained optimization. Journal of Optimization Theory and Applications, Vol. 22, No. 2, June 1977.

26. Tapia R.A. (1978). Quasi-Newton methods for equality constrained optimization: equivalence of existing methods and a new implementation. In: Nonlinear Programming 3 (O.L. Mangasarian, R.R. Meyer and S.M. Robinson eds.), pp. 125-164.

27. Weiss R. (1974). The application of implicit Runge-Kutta and collocation methods to boundary value problems. Mathematics of Computation, Vol. 28, No. 126, pp. 449-464, April 1974.

28. Well K.H. (1983). Uebungen zu den optimalen Steuerungen. Syllabus of the course 'Optimierungsverfahren' of the Carl Cranz Gesellschaft, Manuskript 7, Oberpfaffenhofen FRG, October 1983.


APPENDIX A

EXPLICIT EXPRESSION FOR μ ON BOUNDARY ARCS.

First we define the n×m matrix functions f_u^i : [0,T] -> R^{n×m} (cf. Hamilton (1972)):

    f_u^0 := f_u                                                      (A1)

    f_u^{i+1} := f_x.f_u^i - (d/dt)f_u^i                              (A2)

In the sequel we will assume that k_1 = 0, k_2 = 1 and m = 1 for simplicity (i.e. we have only a single pure state constraint). Using (A1) and (A2) it is possible to give an explicit expression for the time derivatives of H_u, i.e.

    H_u^{(i)} = λ^T.f_u^i                              i=0,1,...,p-1
                                                                      (A3)
    H_u^{(p)} = λ^T.f_u^p - (-1)^{p-1}.μ.S_u^{(p)}     i=p

where p is the order of the state constraint S. Because

    H_u^{(i)} = 0     i=0,1,...,p     a.e. 0 ≤ t ≤ T,

(A3) implies

    μ.S_u^{(p)} = (-1)^{p-1}.λ^T.f_u^p                                (A4)

and hence

    μ = (-1)^{p-1}.λ^T.f_u^p / S_u^{(p)}                              (A5)

in this case. In the general case, μ can be derived as the result of an elimination process, which is possible because of the regularity condition (3.20) (cf. Maurer 1979a).

APPENDIX B

BOUNDARY VALUE PROBLEM IN CASE OF SLACK-VARIABLE TECHNIQUE.

The boundary value problem to be solved in case of the slack-variable technique is:

    Δẋ = f_x[t].Δx + f_u[t].Δu + f[t] - ẋ                      0 ≤ t ≤ T    (B1)

    λ̇ = -f_{0x}[t]^T - f_x[t]^T.λ - S_x[t]^T.diag(γ).e
         - H_xx[t].Δx - H_xu[t].Δu                              0 ≤ t ≤ T    (B2)

    D[0] + D_x[0].Δx(0) = 0                                                  (B3)

    (h_{0xx}[0] + σ*D_{xx}[0]).Δx(0) - λ(0) = -h_{0x}[0]^T - D_x[0]^T.σ      (B4)

    E[T] + E_x[T].Δx(T) = 0                                                  (B5)

    (g_{0xx}[T] + ρ*E_{xx}[T]).Δx(T) - λ(T) = -g_{0x}[T]^T - E_x[T]^T.ρ      (B6)

    0 = f_{0u}[t]^T + f_u[t]^T.λ + S_u[t]^T.diag(γ).e
        + H_ux[t].Δx + H_uu[t].Δu                               0 ≤ t ≤ T    (B7)

    S_x[t].Δx + S_u[t].Δu + 2.diag(y).Δy = -S[t] - diag(y).y                 (B8)

Following the suggestions of Tapia (1977), the change in the slack variable in the i-th iteration is:

'away from convergence':

    dy_j(t) = |S_j(t)| - y_j(t)      j=1,...,k, for all t                    (B9)

'near convergence':

    dy_j(t) = θ_j(t).y_j(t)          if |θ_j(t)| ≤ 1/10
                                                     j=1,...,k               (B10)
    dy_j(t) = |S_j(t)| - y_j(t)      otherwise

Here θ(t) is defined through

    dy(t) = ((I - diag(θ(t)))² - I).y(t)                                     (B11)

The condition 'near'/'away from' convergence is based on the value of the norm ||d||.

APPENDIX C

RELATION BETWEEN BOUNDARY VALUE PROBLEMS.

    v = [ Δx ]            w = [ Δu ]                                  (C1)
        [ λ  ]                [ η  ]

    A(t) = [  f_x[t]        0       ]     B(t) = [  f_u[t]       0        ]
           [ -H_xx[t]   -f_x[t]^T   ]            [ -H_xu[t]  -S_x[t]^T    ]

    e(t) = [ f[t] - ẋ      ]                                          (C2)
           [ -f_{0x}[t]^T  ]

    C(t) = [ H_ux[t]   f_u[t]^T ]         D(t) = [ H_uu[t]   S_u[t]^T ]
           [ S_x[t]       0     ]                [ S_u[t]       0     ]

    g(t) = [ f_{0u}[t]^T ]                                            (C3)
           [ S[t]        ]

    F_i = [ -I                 0  ]       G_i = [  0             ]    (C4)
          [ ν_i*S_2xx[t_i]    -I  ]             [ -S_2x[t_i]^T   ]

    H_i = [ S_2x[t_i]   0 ]               l_i = [ S_2[t_i] ]          (C5)

    K_0 = [ D_x[0]   0  ]     L_0 = [    0      ]     l_0 = [ D[0]        ]   (C6)
          [ M_1     -I  ]           [ D_x[0]^T  ]           [ h_{0x}[0]^T ]

    K_p = [ E_x[T]   0  ]     L_p = [    0      ]     l_p = [ E[T]        ]   (C7)
          [ M_2     -I  ]           [ E_x[T]^T  ]           [ g_{0x}[T]^T ]

where

    M_1 = h_{0xx}[0] + σ*D_{xx}[0]    and    M_2 = g_{0xx}[T] + ρ*E_{xx}[T]   (C8)
