Faculty of Sciences and Mathematics, University of Niš, Serbia

Available at: http://www.pmf.ni.ac.yu/filomat

Filomat 21:2 (2007), 285–290

A TRUST REGION ALGORITHM FOR LC1 UNCONSTRAINED OPTIMIZATION

Nada Đuranović-Miličić

Abstract: In this paper we consider a trust region algorithm for the minimization of locally Lipschitzian functions which uses the second order Dini upper directional derivative. A convergence proof is given, as well as an estimate of the rate of convergence.

Keywords and phrases: Directional derivative, second order Dini upper directional derivative, uniformly convex functions, trust region algorithm.

2000 Mathematics Subject Classification: 65K05, 90C30.

Received: November 21, 2006.


1. INTRODUCTION

We shall consider the following $LC^1$ problem of unconstrained optimization

$$\min_{x \in D} f(x), \qquad (1)$$

where $f: D \to \mathbb{R}$ is an $LC^1$ function on the open convex set $D \subseteq \mathbb{R}^n$; that means the objective function we want to minimize is continuously differentiable and its gradient is locally Lipschitzian, i.e.

$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \text{for } x, y \in D$$

for some $L > 0$.
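To make the Lipschitz condition concrete, the sketch below (an illustrative example of ours, not from the paper) samples gradient difference quotients of an $LC^1$ function that is not twice differentiable; the sampled maximum is a lower estimate of the local Lipschitz constant $L$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example LC1 function: f(x) = max(x1, 0)^2 + x2^2. Its gradient is
# Lipschitzian (L = 2 works globally here) but not differentiable at x1 = 0,
# so f is LC1 without being twice continuously differentiable.
grad = lambda x: np.array([2.0 * max(x[0], 0.0), 2.0 * x[1]])

# Sample ||grad(x) - grad(y)|| / ||x - y|| over random pairs in a box;
# the maximum quotient is a lower estimate of the Lipschitz constant L.
pairs = rng.uniform(-1.0, 1.0, size=(1000, 2, 2))
L_est = max(np.linalg.norm(grad(x) - grad(y)) / np.linalg.norm(x - y)
            for x, y in pairs)
print(L_est)  # close to 2, and never above it
```

Finite sampling can only bound $L$ from below; the analytical constant here is $L = 2$.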

We shall present an iterative algorithm, based on the results from [2] and [3], for finding an optimal solution to problem (1), generating a sequence of points $\{x_k\}$ of the following form:

$$x_{k+1} = x_k + d_k, \quad k = 0, 1, \ldots, \qquad (2)$$

where the directional vector $d_k$ is defined by the particular algorithm.

The algorithm which we are going to present is a trust region algorithm which uses the second order Dini upper directional derivative instead of the Hessian in the quadratic approximation model.

The convergence of trust region algorithms has usually been proved for $C^2$ functions, i.e. for twice continuously differentiable functions (see for example [2]). The aim of this paper is to establish convergence under the weaker assumption that $f \in LC^1$.

2. PRELIMINARIES

We shall give some preliminaries that will be used for the remainder of the paper.

Definition (see [3]). The second order Dini upper directional derivative of the function $f \in LC^1$ at $x_k$ in the direction $d$ is defined to be

$$f''_D(x_k; d) = \limsup_{\lambda \to 0^+} \frac{[\nabla f(x_k + \lambda d) - \nabla f(x_k)]^T d}{\lambda}.$$

If $\nabla f$ is directionally differentiable at $x_k$, we have

$$f''_D(x_k; d) = (\nabla f)'(x_k; d)^T d \quad \text{for all } d \in \mathbb{R}^n.$$
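Numerically, the lim sup in this definition can be probed by evaluating the difference quotient for a few decreasing step sizes $\lambda$ and taking the largest value. The sketch below (our own illustration, with a hypothetical example function) does exactly that:

```python
import numpy as np

def dini_second_derivative(grad, x, d, lambdas=None):
    """Finite proxy for f''_D(x; d) = limsup_{l -> 0+}
    [grad f(x + l d) - grad f(x)]^T d / l: take the maximum of the
    difference quotient over a few small step sizes."""
    if lambdas is None:
        lambdas = [10.0 ** (-k) for k in range(3, 8)]
    g0 = grad(x)
    quotients = [(grad(x + lam * d) - g0) @ d / lam for lam in lambdas]
    return max(quotients)

# Example: f(x) = max(x1, 0)^2 is LC1 but not C2 at x1 = 0; the second
# order Dini derivative at 0 depends on the sign of the direction.
grad = lambda x: np.array([2.0 * max(x[0], 0.0), 0.0])
x0 = np.array([0.0, 0.0])
print(dini_second_derivative(grad, x0, np.array([1.0, 0.0])))   # -> 2.0
print(dini_second_derivative(grad, x0, np.array([-1.0, 0.0])))  # -> 0.0
```

The direction dependence at the kink is exactly what distinguishes $f''_D$ from an ordinary Hessian quadratic form.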

Lemma 1 (see [3]). Let $f$ be an $LC^1$ function on $D$, where $D \subseteq \mathbb{R}^n$ is an open subset. If $x^*$ is a solution of optimization problem (1), then

$$\nabla f(x^*) = 0 \quad \text{and} \quad f''_D(x^*; d) \ge 0 \ \text{for all } d \in \mathbb{R}^n.$$

Lemma 2 (see [3]). Let $f$ be an $LC^1$ function on $D$, where $D \subseteq \mathbb{R}^n$ is an open subset. If $x^* \in D$ satisfies

$$\nabla f(x^*) = 0 \quad \text{and} \quad f''_D(x^*; d) > 0 \ \text{for all } d \in \mathbb{R}^n,\ d \ne 0,$$

then $x^*$ is a strict local minimizer of (1).

3. THE OPTIMIZATION ALGORITHM

At the $k$-th iteration we want to find a solution $d_k$ of the following direction finding subproblem:

$$\min_d \ \Phi_k(d) = \nabla f(x_k)^T d + \tfrac{1}{2} f''_D(x_k; d) \qquad (3)$$

$$\text{s.t. } \|d\| \le \Delta_k$$

for some bound $\Delta_k > 0$, where the norm is arbitrary but is usually chosen as the $l_2$-norm. We choose the radius $\Delta_k$ to be as large as possible, depending on the agreement between $f(x_k + d)$ and the model $f(x_k) + \Phi_k(d)$. We can measure this agreement by comparing the actual reduction in $f$ on the $k$-th step,

$$\mathrm{Ared}_k = f(x_k) - f(x_k + d_k),$$

and the corresponding predicted reduction,

$$\mathrm{Pred}_k = \Phi_k(0) - \Phi_k(d_k) = -\Phi_k(d_k).$$

Hence, the closer the ratio $r_k = \mathrm{Ared}_k / \mathrm{Pred}_k$ is to unity, the better is the agreement between the quadratic model and $f$.

The trust region algorithm for $LC^1$ optimization

Step 0. Given $x_0 \in \mathbb{R}^n$ and $\Delta_0 > 0$, set $k = 0$;

Step 1. If $\nabla f(x_k) = 0$, STOP; otherwise find a solution $d_k$ to the subproblem (3);

Step 2. Compute $\mathrm{Ared}_k$ and $\mathrm{Pred}_k$;

Step 3. Compute $r_k = \mathrm{Ared}_k / \mathrm{Pred}_k$; if $r_k < \tfrac{1}{4}$, set $\Delta_{k+1} = \|d_k\|/4$; if $r_k > \tfrac{3}{4}$ and $\|d_k\| = \Delta_k$, set $\Delta_{k+1} = 2\Delta_k$; otherwise set $\Delta_{k+1} = \Delta_k$;

Step 4. If $r_k \le 0$, set $x_{k+1} = x_k$; otherwise set $x_{k+1} = x_k + d_k$;

Step 5. Set $k = k + 1$ and go to Step 1.
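The steps above can be sketched in runnable code. In the sketch below (our own simplification, not the paper's exact method) the subproblem (3) is solved only approximately by a Cauchy-point step along the negative gradient, the curvature $f''_D(x_k; d)$ is estimated by a finite difference, and Fletcher-style constants $\tfrac14$ and $\tfrac34$ are assumed for the radius update:

```python
import numpy as np

def trust_region_lc1(f, grad, x0, delta0=1.0, tol=1e-8, max_iter=500):
    """Trust region iteration sketch for LC1 functions. The subproblem (3)
    is replaced by a Cauchy-point step along -grad f(x_k), with the
    curvature f''_D(x_k; d) estimated by a one-sided finite difference."""
    x, delta = np.asarray(x0, dtype=float), float(delta0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:                        # Step 1: stationarity test
            break
        d = -g / gnorm                         # unit steepest descent direction
        lam = 1e-6                             # finite-difference proxy for f''_D(x; d)
        curv = (grad(x + lam * d) - g) @ d / lam
        # Cauchy point: minimize -gnorm*t + 0.5*curv*t^2 over 0 < t <= delta
        t = delta if curv <= 0 else min(delta, gnorm / curv)
        s = t * d
        pred = gnorm * t - 0.5 * curv * t * t  # predicted reduction Pred_k > 0
        ared = f(x) - f(x + s)                 # actual reduction Ared_k
        r = ared / pred                        # agreement ratio r_k
        if r < 0.25:                           # Step 3: radius update
            delta = np.linalg.norm(s) / 4.0
        elif r > 0.75 and abs(t - delta) < 1e-12:
            delta = 2.0 * delta
        if r > 0:                              # Step 4: accept or reject the step
            x = x + s
    return x

# Usage on a uniformly convex LC1 test function: f(x) = ||x||^2 + max(x1, 0)^2
f = lambda x: x @ x + max(x[0], 0.0) ** 2
grad = lambda x: 2.0 * x + np.array([2.0 * max(x[0], 0.0), 0.0])
x_star = trust_region_lc1(f, grad, np.array([3.0, -2.0]))
```

On this test function the iteration drives the gradient to zero at the unique minimizer $x^* = 0$; a full implementation would solve (3) exactly over the whole trust region rather than along a single direction.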

Proposition. If the function $f \in LC^1$ satisfies the condition

$$f''_D(x; d) \ge m \|d\|^2 \quad \text{for all } x \in D,\ d \in \mathbb{R}^n, \ \text{for some } m > 0, \qquad (4)$$

then: 1) the function $f$ is uniformly and, hence, strictly convex; 2) consequently, the level set $L(x_0) = \{x \in D : f(x) \le f(x_0)\}$ is a compact convex set for any $x_0 \in D$; 3) there exists a unique point $x^* \in D$ such that $f(x^*) = \min_{x \in D} f(x)$.

Proof. See [1].

Lemma 3. Let $d_k$ be a solution of the subproblem (3). Then

$$\mathrm{Pred}_k = -\Phi_k(d_k) \ge \tfrac{1}{2} \|\nabla f(x_k)\| \min\{\Delta_k, \|\nabla f(x_k)\|/L\},$$

where $L$ is the Lipschitz constant of $\nabla f$.

The proof is analogous to the proof of Lemma 5.2 in [3].

Convergence theorem. If the function $f \in LC^1$ satisfies the condition (4) and if any level set of $f$ is bounded, then for any initial point $x_0 \in D$ we have $x_k \to x^*$ and $f(x_k) \to f(x^*)$ as $k \to \infty$, where $x^*$ is the unique minimal point of $f$.

Proof. The algorithm generates a sequence $\{x_k\}$ for which at each iteration either

a) $r_k \le 0$ and $x_{k+1} = x_k$ (and hence $f(x_{k+1}) = f(x_k)$), or

b) $r_k > 0$ and $x_{k+1} = x_k + d_k$.

From the boundedness of level sets it follows that $\{x_k\}$ belongs to a bounded set. Hence there exist accumulation points of $\{x_k\}$. Let $\bar{x}$ be any accumulation point of such a subsequence. We only have to prove that $\nabla f(\bar{x}) = 0$ since, according to the Proposition, the stationary point is a unique minimizer.

We shall show that $\bar{x}$ cannot arise from case a) but only from case b). Namely, suppose that case a) occurs for all sufficiently large $k$, so that $\Delta_k \to 0$ and hence $\|d_k\| \to 0$, while $\nabla f(\bar{x}) \ne 0$. Since $d_k$ solves (3), it follows that there exists a sequence $\{\varepsilon_k\}$ (see [3]) converging to zero such that

$$f(x_k + d_k) - f(x_k) \le \nabla f(x_k)^T d_k + \tfrac{1}{2} f''_D(x_k; d_k) + \varepsilon_k \|d_k\|, \quad \text{i.e.}$$

$$\mathrm{Ared}_k \ge \mathrm{Pred}_k - \varepsilon_k \|d_k\|. \qquad (5)$$

Dividing (5) by $\|d_k\|$ and taking the limit as $k \to \infty$, Lemma 3 gives $\mathrm{Pred}_k \ge \tfrac{1}{2} \|\nabla f(x_k)\| \, \|d_k\|$ for all sufficiently large $k$, hence $r_k \to 1$. This contradicts $r_k \le 0$; so $\bar{x}$ can arise only from case b). In case b), $f(x_{k+1}) < f(x_k)$ implies that $\{f(x_k)\}$ converges, since it is monotonically decreasing and bounded below on the compact level set. Hence $\mathrm{Ared}_k = f(x_k) - f(x_{k+1}) \to 0$, and therefore $\mathrm{Pred}_k \to 0$, since $r_k$ is bounded away from zero on the accepted steps.

From Lemma 3 it follows that

$$\tfrac{1}{2} \|\nabla f(x_k)\| \min\{\Delta_k, \|\nabla f(x_k)\|/L\} \le \mathrm{Pred}_k \to 0.$$

Since $\mathrm{Pred}_k \to 0$ and $\Delta_k$ is bounded away from zero on the accepted steps, it follows that $\|\nabla f(x_k)\| \to 0$. From continuity of $\nabla f$ we have that $\nabla f(\bar{x}) = 0$. Following Theorem 4.4 from [3] it can be analogously proved that $\{x_k\}$ converges Q-superlinearly to $x^*$. $\Box$

4. CONCLUSION

In this paper we presented a trust region algorithm which is based on the results from [2] and [3]. We proved the convergence of the trust region method for the $LC^1$ class of functions using the second order Dini upper directional derivative instead of the Hessian in the quadratic approximation model, as it has been done in [2]. In [2] the convergence proof of the trust region algorithm is given for $f \in C^2$, hence under stronger assumptions. In [3] a trust region algorithm is also defined for $LC^1$ functions; it uses the exact solution and an inexact solution of the subproblem (3), and it requires that the decrease in the model must be at least a fraction of the optimal decrease. In [3] the convergence proof is given under slightly stronger assumptions than in this paper.

5. ACKNOWLEDGMENT

This research was supported by the Science Fund of Serbia, grant number 144015G, through the Institute of Mathematics, SANU.

REFERENCES

[1] Djuranovic-Milicic, N., On minimization of locally Lipschitzian functions, Proceedings of SYM-OP-IS 2003, H. Novi, 347–350, 2003.

[2] Fletcher, R., Practical Methods of Optimization, Vol. 1, J. Wiley, New York, 1980.

[3] Sun, W., Sampaio, R.J.B., and Yuan, J., Two algorithms for LC1 unconstrained optimization, Journal of Computational Mathematics 18(6), 621–632, 2000.

[4] Vujčić, V., Ašić, M., and Miličić, N., Mathematical Programming (in Serbian), Institute of Mathematics, Belgrade, 1980.


Department of Mathematics, Faculty of Technology and Metallurgy, University of Belgrade, Belgrade, Serbia
