A nonmonotone flexible filter method for nonlinear constrained optimization
Journal of Mathematics in Industry, volume 6, Article number: 8 (2016)
Abstract
In this paper, we present a flexible nonmonotone filter method for solving nonlinear constrained optimization problems, which are common models in industry. The new method accepts trial steps more flexibly than traditional filter methods, and requires lower computational cost than monotone-type methods. Moreover, we use a self-adaptive parameter to adjust the acceptance criteria, so that the Maratos effect can be avoided to a certain degree. Under reasonable assumptions, the proposed algorithm is globally convergent. Numerical tests are presented that confirm the efficiency of the approach.
1 Introduction
We consider the following inequality constrained nonlinear optimization problem
where \(x\in\mathbb{R}^{n}\), and the functions \(f:\mathbb{R}^{n}\to\mathbb{R}\) and \(c_{i}\ (i\in I):\mathbb {R}^{n}\to\mathbb{R}\) are all twice continuously differentiable. For convenience, let \(g(x)=\nabla f(x)\), \(c(x)=(c_{1}(x),c_{2}(x),\ldots,c_{m}(x))^{T}\) and \(A(x)=(\nabla c_{1}(x),\nabla c_{2}(x),\ldots,\nabla c_{m}(x))\). We abbreviate \(f_{k}=f(x_{k})\), \(c_{k}=c(x_{k})\), \(g_{k}=g(x_{k})\), \(A_{k}=A(x_{k})\), etc.
There are various methods for solving the inequality constrained nonlinear optimization problem (P), for example sequential quadratic programming (SQP) methods, trust region approaches [1], penalty methods and interior point methods [2]. In these works, a penalty or Lagrange function is always used to test the acceptability of the iterates. However, there are several well-known difficulties associated with the use of a penalty function, in particular the choice of the penalty parameter. In 2002, Fletcher and Leyffer [3] proposed a class of filter methods, which do not require any penalty parameter and show promising numerical results. Consequently, the filter technique has been applied to many approaches, for instance SLP methods [4], SQP methods [5, 6], interior point approaches [7] and derivative-free optimization [8, 9]. Furthermore, Fletcher et al. [5] proved the global convergence of the filter-SQP method, and Ulbrich and Ulbrich [10] later showed its superlinear local convergence. But filter methods also encounter the Maratos effect. The Maratos effect, observed by Maratos in his PhD thesis in 1978, means that some steps that make good progress toward a solution are rejected by the merit function. To overcome this drawback in filter methods, Ulbrich [11] introduced a new filter method using the Lagrangian function instead of the objective function as the acceptance criterion. After that, Nie and Ma [12] used a fixed scalar to combine the objective function and the constraint violation function into one measure in the filter entries. But both of them used a fixed criterion to decide whether to accept a trial point, that is, the criterion is invariable regardless of the improvement made by the trial point. If instead we change the criterion according to the improvement made by the current trial point, we can avoid the Maratos effect to a certain degree and decrease the computational cost as well.
On the other hand, the promising numerical results of filter methods owe much to their partial nonmonotonicity. Based on this property, some other nonmonotone-type filter methods have been proposed [13–15]. Gould and Toint [16] also introduced a new nonmonotone filter method that uses the area of a region in the \(h\)-\(f\) plane as the criterion to decide whether a trial point is acceptable, where \(h=h(x)\) is the constraint violation function and \(f=f(x)\) is the objective function at the current point x.
Motivated by the ideas and methods above, we propose a class of nonmonotone filter trust region methods with a self-adaptive parameter for solving problem (P). Our method improves on previous nonmonotone filter methods. Unlike Ulbrich [11], we do not use a Lagrangian function in the filter, but a function of the same type as that in Nie and Ma [12]. Moreover, different from Nie and Ma [12], the parameter in our method is not fixed but variable, that is, the criterion is adjusted according to the improvement achieved. To prevent the trial point from falling into a 'valley', we also add a nonmonotone technique to the criterion. Different from existing SQP-filter methods, we use a quadratic subproblem that is always feasible, which avoids the feasibility restoration phase and hence decreases the amount of computation to a certain degree.
This paper is organized as follows: in Section 2, we introduce the feasible SQP subproblem and the nonmonotone flexible filter. We propose the nonmonotone filter method with a self-adaptive parameter in Section 3. Section 4 presents the global convergence properties, and some numerical results are reported in Section 5. We end with a short conclusion in Section 6.
2 The modified SQP subproblem and the nonmonotone flexible filter method
2.1 The modified SQP subproblem
Our algorithm is an SQP method. To avoid infeasibility of the quadratic subproblem, we choose the quadratic program presented by Zhou [17]. At the kth iterate, we compute a trial step by solving the following quadratic problem,
where
and \(\overline{\Psi}(x_{k};d_{k})\) is the first order approximation to \(\Psi(x_{k}+d_{k})=\max\{c_{j}(x_{k}+d_{k}):j\in I\}\), namely
and \(\rho_{k}>0\). We notice that these convex programs have the following properties.
We can condense the above definitions into the following compact form
Lemma 1
[17]
If \(d_{k}=0\) is the solution to \(Q(x_{k},H_{k},\rho_{k})\), then \(x_{k}\) is a KKT point of the problemÂ (P).
Proof
The proof is similar to that of Lemma 4.1 in [16]. □
2.2 The nonmonotone flexible filter with a self-adaptive parameter
In the traditional filter method, originally proposed by Fletcher and Leyffer [3], the acceptability of iterates is determined by comparing the values of the constraint violation and the objective function with those of previous iterates collected in a filter. Define the violation function \(h(x)\) by \(h(x)=\|c(x)^{+}\|_{\infty}\), where \(c_{i}(x)^{+}=\max\{c_{i}(x),0\}\) for \(i\in I\). Obviously, \(h(x)=0\) if and only if x is a feasible point. So a trial point should reduce either the constraint violation or the objective function f.
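The violation measure can be sketched in a few lines (a minimal illustration; the function name and the use of NumPy are our own, not the paper's implementation):

```python
import numpy as np

def violation(c_vals):
    """h(x) = ||c(x)^+||_inf: infinity norm of the componentwise
    positive part of the constraint values c_i(x)."""
    c_plus = np.maximum(np.asarray(c_vals, dtype=float), 0.0)
    return float(np.linalg.norm(c_plus, ord=np.inf))
```

For a feasible point all \(c_{i}(x)\le0\), so the positive parts vanish and \(h(x)=0\).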
The definition of the filter set is based on the notion of dominance, as follows.
Definition 1
A pair \((h_{k},f_{k})\) is dominated by \((h_{j}, f_{j})\), \(j\neq k\), if and only if \(h_{j}\leq h_{k}\) and \(f_{j}\leq f_{k}\).
Definition 2
A filter set \(\mathcal{F}\) is a set of pairs \((h,f)\) such that no pair dominates any other.
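Definitions 1 and 2 can be made concrete with a small sketch (the names and the list representation of the filter are our own, not the paper's data structure):

```python
def dominates(pair_j, pair_i):
    """(h_j, f_j) dominates (h_i, f_i) iff it is no worse in both measures."""
    hj, fj = pair_j
    hi, fi = pair_i
    return hj <= hi and fj <= fi

def add_to_filter(filter_set, pair):
    """Insert `pair` and discard the entries it dominates, so the filter
    stays a set of mutually nondominated (h, f) pairs."""
    if any(dominates(p, pair) for p in filter_set):
        return filter_set            # pair is dominated: filter unchanged
    kept = [p for p in filter_set if not dominates(pair, p)]
    kept.append(pair)
    return kept
```

A new entry is kept only if no existing entry dominates it, and it in turn evicts every entry it dominates.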
To ensure convergence, some additional conditions are required to decide whether to accept a trial point into the filter. The traditional acceptance criterion is as follows.
Definition 3
A trial point x is called acceptable to the filter if and only if
where \(0<\gamma<\beta<1\) are constants. In practice, β is close to 1 and γ close to 0.
Actually, in the traditional filter method, some good points, such as superlinearly convergent steps, may be rejected because both the objective function value and the constraint violation value increase compared to other entries in the filter. That is the reason why the Maratos effect occurs. Motivated by [12], we therefore replace the original objective function \(f(x_{k})\) at the kth iterate with the following function
where \(c_{i}(x_{k})^{+}=\max\{c_{i}(x_{k}),0\}\) for \(i=1,2,\ldots,m\). Here \(\delta_{k}\) is a self-adaptive parameter at the kth iterate; it can be changed according to the improvement made by the current trial point. Note that the traditional filter methods are the special case \(\delta_{k}=0\), and we hope to overcome the Maratos effect with a suitable \(\delta_{k}\).
We aim to reduce the values of both \(h(x)\) and \(l(x)\). By the original criterion, the trial point is acceptable if and only if (6) holds. Nie and Ma [12] proposed a trust region filter method with a given negative penalty parameter; in this paper, by contrast, the parameter \(\delta_{k}\) is a variable scalar that changes according to the improvement achieved by the trial point. Specifically, at the beginning we let \(\delta_{0}=0\), which is what the traditional filter method does, so that \(f(x_{k})=l(x_{k})\) (see Figure 1).
There are four regions I, II, III, IV in the right-hand half plane. At the current iterate k, if the trial point \(x_{k}\) moves into region IV, that is, the pair \((h_{k},l_{k})\) is located in region IV, the trial point is rejected according to our criterion. If \(x_{k}\) moves into region I, II, or III, we accept it, but need to adjust the parameter \(\delta_{k}\) in the criterion. For region III, we say that the algorithm does not make a good improvement, since we do not want to accept points with larger constraint violation. Thus we intend to impose a stricter acceptance criterion, that is, to increase the value of \(\delta_{k}\), which results in a bigger rejection area and a smaller acceptance area (see Figure 2). So \(\delta_{k}\) is updated as follows:
If \(x_{k}\) moves into region II, we say that the algorithm makes a good improvement, since it reduces not only the objective function \(l(x_{k})\) but also the constraint violation \(h(x_{k})\); so we intend to loosen the acceptance criterion in the hope of further improvements. That means decreasing the value of \(\delta_{k}\), so that the rejection area becomes smaller and the acceptance area bigger (see Figure 3). So \(\delta_{k}\) is updated as follows:
If \(x_{k}\) moves into region I, we also accept it, because the constraint violation does decrease, and we can tolerate an increase of \(l(x_{k})\) for finitely many steps. Meanwhile, the value of \(\delta_{k}\) is not changed. If \(x_{k}\) moves into region IV, the trial point is rejected, and \(\delta_{k}\) also remains unchanged in the next iterate.
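The measure \(l(x)\) and the region-based adjustment of \(\delta_{k}\) can be sketched as follows. The exact update rules (8) and (9) did not survive in this copy, so simple multiplicative factors (and a small floor so \(\delta_{k}\) can grow from \(\delta_{0}=0\)) stand in for them; these constants are our own illustrative assumptions:

```python
import numpy as np

def l_value(f_x, c_x, delta):
    """l(x) = f(x) + delta * sum_i max(c_i(x), 0); delta = 0 recovers
    the traditional filter entry f(x)."""
    return f_x + delta * np.maximum(np.asarray(c_x, dtype=float), 0.0).sum()

def update_delta(delta, h_old, l_old, h_new, l_new, grow=2.0, shrink=0.5):
    """Self-adaptive parameter update; `grow`/`shrink` are hypothetical
    stand-ins for the paper's rules (8) and (9).
    Region II (both measures improve): loosen the criterion (decrease delta).
    Region III (violation increases): tighten it (increase delta).
    Region I (violation decreases, l increases): leave delta unchanged."""
    if h_new < h_old and l_new < l_old:          # region II
        return shrink * delta
    if h_new > h_old:                            # region III
        return max(grow * delta, 1e-8)           # floor lets delta leave 0
    return delta                                 # region I
```

A rejected point (region IV) never reaches the update, matching the text: \(\delta_{k}\) is kept unchanged in that case.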
As noted above, the good numerical behavior of filter methods owes much to their partial nonmonotonicity. Su and Pu [13] proposed a modified nonmonotone filter method exhibiting a further nonmonotone technique. Motivated by this, we loosen the acceptance criterion by a nonmonotone technique and give the following criterion.
Definition 4
A point x is acceptable to the filter if and only if
where \((h_{k-r},l_{k-r})\in\mathcal{F}\) for \(0\le r\le m(k)-1\), \(0\le m(k)\le\min\{m(k-1)+1,M\}\), \(M\geq1\) is a given positive constant, \(\sum_{r=0}^{m(k)-1}\lambda_{k-r}=1\), \(\lambda_{k-r}\in(0,1)\), and there exists a positive constant λ such that \(\lambda_{k-r}\geq\lambda\).
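Since the displayed inequality (10) did not survive in this copy, the sketch below reconstructs the acceptance test from the two cases \(K_{1}\) and \(K_{2}\) used later in the convergence analysis (Lemma 4); equal weights \(\lambda_{k-r}=1/m(k)\) are an illustrative assumption, not the paper's choice:

```python
def acceptable(h_trial, l_trial, h_k, l_k, recent_hs, recent_ls,
               beta=0.9, gamma=0.1):
    """Nonmonotone acceptance test, reconstructed from the sets K1 and K2
    of Lemma 4: accept the trial point if it sufficiently reduces the worst
    recent violation, or reduces l below a reference value built from the
    last m(k) filter entries. Equal weights 1/m(k) are assumed."""
    m = len(recent_hs)
    weights = [1.0 / m] * m
    # reference value max[l_k, sum_r lambda_{k-r} l_{k-r}]
    ref_l = max(l_k, sum(w * v for w, v in zip(weights, recent_ls)))
    return (h_trial <= beta * max(recent_hs)
            or l_trial <= ref_l - gamma * h_k)
```

With \(m(k)=1\) and \(\beta,\gamma\) as above, this reduces to the traditional criterion of Definition 3.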
Similar to traditional filter methods, we also need to update the filter set \(\mathcal{F}\) at each successful iteration; the technique is the same as in the traditional method, with the modified acceptance rule (10).
To control the infeasibility, an upper bound on the violation function is needed, namely \(h(x)\leq u\), where u is a positive scalar; this can be implemented in the algorithm by initializing the filter with the pair \((u, \infty)\).
3 A nonmonotone flexible filter algorithm
At the current (kth) iterate, the trial point \(x_{k}\) is accepted by our algorithm if it satisfies two conditions: first, it is accepted by the filter set; second, it achieves sufficient reduction. We define the sufficient reduction condition as follows:
where \(\alpha_{1}\), \(\alpha_{2}\) are constants, the relaxed actual reduction \(\mathrm{rared}_{k}^{l}\) and the predicted reduction \(\mathrm{pred}_{k}^{f}\) are defined as
and the matrix \(H_{k}\) is the Hessian matrix \(\nabla^{2} f(x_{k})\) or an approximation to it, \(\sum_{r=0}^{m(k)-1}\lambda_{k-r}=1\), \(\lambda_{k-r}\in (0,1)\), \(0\le m(k)\le\min\{m(k-1)+1,M\}\), where \(M\geq1\) is a given positive constant.
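The displayed definitions of \(\mathrm{rared}_{k}^{l}\) and \(\mathrm{pred}_{k}^{f}\) were lost in this copy. The sketch below uses the standard SQP model reduction and the reference value that appears later in Lemma 3 and Lemma 5, so it should be read as a reconstruction under those assumptions, not the paper's exact formulas:

```python
import numpy as np

def pred_f(g, H, d):
    """Predicted reduction of the quadratic model of f at x_k:
    -(g_k^T d + 1/2 d^T H_k d), the standard SQP model decrease,
    consistent with how pred_k^f is used in Lemmas 3 and 5."""
    g, d = np.asarray(g, dtype=float), np.asarray(d, dtype=float)
    return -(g @ d + 0.5 * d @ (np.asarray(H, dtype=float) @ d))

def rared_l(l_k, recent_ls, l_trial, weights=None):
    """Relaxed actual reduction: max[l_k, sum_r lambda_{k-r} l_{k-r}]
    minus l at the trial point (equal weights assumed if none given)."""
    if weights is None:
        weights = [1.0 / len(recent_ls)] * len(recent_ls)
    ref = max(l_k, sum(w * v for w, v in zip(weights, recent_ls)))
    return ref - l_trial
```

The nonmonotone reference in `rared_l` is what allows a step that increases \(l\) relative to \(l_{k}\) alone to still count as a reduction.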
A formal description of the algorithm is given as follows.
Algorithm A
 Step 0.:

Let \(0<\rho_{0}<1\), \(0<\gamma<\beta<1\), \(0<\lambda\le 1\), \(0<\gamma_{0}<\gamma_{1}\le1<\gamma_{2}\), \(M\geq1\), \(u>0\), \(\alpha_{1}=\alpha _{2}=0.5\). Choose an initial point \(x_{0}\in R^{n}\), a symmetric matrix \(H_{0}\in R^{n\times n}\) and an initial region radius \(\Delta_{0}\geq\Delta_{\min}>0\), \({\mathcal {F}}_{0}=\{(u,\infty)\}\). Set \(k=0\), \(m(k)=0\).
 Step 1.:

Solve the subproblem \(Q(x_{k},H_{k},\rho_{k})\); if \(\|d_{k}\|=0\), stop.
 Step 2.:

Let \(x_{k}^{+}=x_{k}+d_{k}\), compute \(h_{k}^{+}\), \(l_{k}^{+}\).
 Step 3.:

If \(x_{k}^{+}\) is acceptable to the filter \({\mathcal {F}}_{k}\), go to step 4, otherwise go to step 5.
 Step 4.:

If \(x_{k}^{+}\) is located in region I or region IV, let \(\delta_{k+1}=\delta_{k}\); if \(x_{k}^{+}\) is located in region II, update \(\delta_{k+1}\) by (9); if \(x_{k}^{+}\) is in region III, update \(\delta_{k+1}\) by (8).
 Step 5.:

If \(\mathrm{rared}_{k}^{l}\leq\eta\, \mathrm{pred}_{k}^{f}\) and \(h_{l(k)}\leq\alpha_{1}\|d_{k}\|_{\infty}^{\alpha_{2}}\), then go to step 6, otherwise go to step 7.
 Step 6.:

Shrink the radius: replace \(\rho_{k}\) by a value in \([\gamma_{0}\rho_{k},\gamma_{1}\rho_{k}]\), and go to step 1.
 Step 7.:

Let \(x_{k+1}=x_{k}^{+}\) and update the filter set. Choose \(\rho_{k+1}\in[\rho_{k},\gamma_{2}\rho_{k}]\) with \(\rho_{k+1}\geq\rho_{\min}\), update \(H_{k}\) to \(H_{k+1}\), set \(m(k+1)=\min\{m(k)+1,M\}\) and \(k=k+1\), and go to step 1.
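In the unconstrained case (\(h\equiv0\), \(l=f\)) steps 0-7 reduce to a plain trust-region loop. The sketch below keeps only that radius management; `step_fn` stands in for solving \(Q(x_{k},H_{k},\rho_{k})\) and the model reduction is simplified to first order, so this illustrates the control flow only, not the paper's algorithm:

```python
def algorithm_a_sketch(x0, f, grad, step_fn, rho0=0.5, gamma0=0.25,
                       gamma2=2.0, rho_min=1e-6, eta=0.1,
                       max_iter=200, tol=1e-8):
    """One-dimensional illustration of steps 1-7 of Algorithm A for the
    unconstrained case: accept when the actual reduction matches a
    fraction of the predicted one, otherwise shrink the radius."""
    x, rho = float(x0), float(rho0)
    for _ in range(max_iter):
        d = step_fn(x, rho)                  # step 1: trial step for radius rho
        if abs(d) <= tol:                    # ||d_k|| = 0: stop at a KKT point
            break
        pred = -grad(x) * d                  # simplified model reduction
        ared = f(x) - f(x + d)
        if pred > 0 and ared >= eta * pred:  # step 7: accept, enlarge radius
            x, rho = x + d, max(gamma2 * rho, rho_min)
        else:                                # step 6: reject, shrink radius
            rho = max(gamma0 * rho, rho_min)
    return x
```

With \(f(x)=x^{2}\) and the radius-clipped Newton step as `step_fn`, the loop drives the iterate to the minimizer.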
Remark 1
At the beginning of each iteration we always have \(\rho_{k}\geq\rho_{\min}\), which prevents the trust region radius from becoming too small.
Remark 2
In the above algorithm, let M be a nonnegative integer. For each k, let \(m(k)\) satisfy
In fact, if \(M=1\) the algorithm is actually a monotone method; the nonmonotonicity appears for \(M>1\).
4 The convergent properties
In this section, to present a proof of the global convergence of the algorithm, we always assume that the following conditions hold.
Assumptions

A1.
The objective function f and the constraint functions \(c_{i}\) (\(i\in I=\{1,2,\ldots,m\}\)) are twice continuously differentiable.

A2.
For all k, \(x_{k}\) and \(x_{k}+d_{k}\) all remain in a closed, bounded convex subset \(S\subset R^{n}\).

A3.
The matrix sequence \(\{H_{k}\}\) is uniformly bounded.

A4.
The functions \(A=\nabla c\) are uniformly bounded on S.
By the above assumptions, we can suppose that there exist constants \(v_{1}\), \(v_{2}\), \(v_{3}\) such that \(|f(x)|\le v_{1}\), \(\|\nabla f(x)\|\le v_{1}\), \(\|\nabla^{2} f(x)\|\le v_{1}\), \(\|c(x)\|\le v_{2}\), \(\|\nabla c(x)\|\le v_{2}\), \(\|\nabla^{2} c(x)\|\le v_{2}\).
Definition 5
[2]
The Mangasarian-Fromowitz constraint qualification (MFCQ) is said to be satisfied at a point \(x\in{\mathbb{R}}^{n}\) with respect to the underlying constraint system \(c(x)\leq0\), if there is a \(z\in\mathbb{R}^{n}\) such that
Lemma 2
[12]
Let the Assumptions hold, and let x̄ be a feasible point of problem (P) at which the MFCQ holds but which is not a KKT point. Then there exist a neighborhood N of x̄ and positive constants \(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\) such that for all \(x_{k}\in N\cap S\) and all \(\rho_{k}\) for which
it follows that SQP subproblem has a feasible solution \(d_{k}\), and the predicted reduction satisfies
If \(\rho_{k}\leq(1-\eta_{3})\xi_{1}/(3nv_{2})\), then
where \(\eta<\eta_{3}\).
If \(h_{k}>0\) and \(\rho_{k}\leq\sqrt{\frac{2\beta h_{k}}{n^{2}v_{2}}}\) then \(h(x_{k}^{+})\leq\beta h_{k}\).
Lemma 3
Suppose that Assumptions hold, then Algorithm A is well defined.
Proof
We will show that the trial point \(x_{k}^{+}\) is acceptable to the filter when \(\rho_{k}\) is small enough. We consider the following two cases.
Case 1. \(h_{k}= 0\).
To prove that Algorithm A is implementable, we have to show that for all k with \(\rho_{k}\leq\delta\) it holds that \(\mathrm{rared}_{k}^{l}\geq\eta\, \mathrm {pred}_{k}^{f}\). We know \(\mathrm{ared}_{k}^{l}=l(x_{k})-l(x_{k}^{+})\).
In fact,
where \(y_{k}=x_{k}+\xi d_{k}\), \(\xi\in(0,1)\) denotes some point on the line segment from \(x_{k}\) to \(x_{k}^{+}\). By the update of \(\delta_{k}\) and the definition of \(h(x_{k}^{+})\), we know \(\delta_{k+1}\leq\rho_{k}\),
where \(s_{k}\) denotes some point in the line from \(x_{k}\) to \(x_{k}^{+}\).
Hence we obtain that
where \(b=\frac{1}{2}(\sup_{k} \|H_{k}\|+\max_{x\in S} \|\nabla^{2}f(x)\|)\); together with Lemma 2,
We have
We deduce that \(\mathrm{rared}_{k}^{l}\geq\mathrm{ared}_{k}^{f}\geq\eta\, \mathrm{pred}_{k}^{f}\) for some \(\eta\in(0,1)\), since
By \(\max\{l(x_{k}),\sum_{r=0}^{m(k)-1}\lambda_{k-r}l_{k-r}\}-l(x_{k}^{+})\geq \eta\, \mathrm{pred}_{k}^{f}>\gamma h(x_{k})\), we can see
so \(x_{k}^{+}\) is acceptable to the filter.
Case 2. \(h_{k}>0\).
There exist a constant \(\delta>0\) and an index \(k_{0}\) such that \(\rho_{k}\leq\delta\) when \(k< k_{0}\). Let \(\delta=\sqrt{\frac{2\beta h_{k}}{n^{2}v_{2}}}\); by Lemma 2, we have \(h_{k}^{+}\leq\beta h_{k}\), that is, \(h_{k}^{+}\leq\beta\max_{0\leq j\leq m(k)-1}\{h_{k-j}\}\). So \(x_{k}^{+}\) must be acceptable to the filter by the definition.
With an analysis similar to case 1, we have
Then it holds
The conclusion follows. This completes the proof. □
Lemma 4
Suppose that the Assumptions hold and Algorithm A does not terminate finitely; then \(\lim_{k\rightarrow\infty}h_{k}=0\).
Proof
If Algorithm A does not terminate finitely, then infinitely many points are accepted by the filter. We prove the result in two cases, by the definition of the filter.

(i)
\(K_{1}=\{k\mid h_{k}^{+}\le\beta\max_{0\le r\le m(k)-1}h_{k-r}\}\) is an infinite set.

(ii)
\(K_{2}=\{k\mid l_{k}^{+}\le\max[l_{k},\sum_{r=0}^{m(k)-1}\lambda_{k-r} l_{k-r}]-\gamma h_{k}\}\) is an infinite set.
For convenience, let
where \(k-m(k)+1\le l(k)\le k\).
(i) Since \(m(k+1)\le m(k)+1\), we have
which implies that \(\{h(x_{l(k)})\}\) converges. Then by \(h(x_{k+1})\le \beta\max_{0\le r\le m(k)-1}[h(x_{k-r})]\), we have
Since \(\beta\in(0,1)\), we deduce that \(h(x_{l(k)})\rightarrow 0\) (\(k\rightarrow\infty\)).
Therefore
holds by Algorithm A. That is \(\lim_{k\rightarrow\infty}h(x_{k})=0\).
(ii) We first show that for all \(k\in S\), it holds
We prove this by induction.
If \(k=1\), we have \(l_{1}\le l_{0}-\gamma h_{0}\le l_{0}-\lambda\gamma h_{0}\).
Assume that (27) holds for \(1,2,\ldots,k\); we then show that (27) holds for \(k+1\) in the following two cases.
Case 1. \(\max[l_{k},\sum_{r=0}^{m(k)-1}\lambda_{k-r} l_{k-r}]=l_{k}\),
Case 2. \(\max[l_{k},\sum_{r=0}^{m(k)-1}\lambda_{k-r} l_{k-r}]=\sum_{r=0}^{m(k)-1}\lambda_{k-r} l_{k-r}\).
Let \(p=m(k)-1\), then
By the fact that \(\sum_{t=0}^{p}\lambda_{k-t}=1\), \(\lambda_{k-t}\geq\lambda\), and \(h_{r}\geq0\), we have
Then for all \(k\in S\), (27) holds.
Moreover, since \(\{l_{k}\}\) is bounded below, letting \(k\rightarrow\infty\) we get that
It follows that \(h_{k}\rightarrow0\) (\(k\rightarrow\infty\)). □
Lemma 5
Suppose that the Assumptions hold. If Algorithm A does not terminate finitely, then \(\lim_{k\rightarrow \infty}\|d_{k}\|=0\).
Proof
Suppose by contradiction that there exist constants \(\epsilon>0 \) and \(\bar{k}>0\) such that \(\|d_{k}\|>\epsilon\) for all \(k>\bar{k}\).
Then by Lemma 2, \(\mathrm{pred}_{k}^{f}>\frac{1}{3}\xi_{1}\rho_{k}\geq\frac{1}{3}\xi_{1}\|d_{k}\|>\frac{1}{3}\xi_{1}\epsilon>0\). Because \(\mathrm{rared}_{k}^{l}\geq\eta\, \mathrm{pred}_{k}^{f}\), we have \(\max[l_{k},\sum_{r=0}^{m(k)-1}\lambda_{k-r} l_{k-r}]-l_{k+1}\geq\eta\, \mathrm{pred}_{k}^{f}\). Summing both sides and using the fact that the sequence \(\{l_{k}\}\) is bounded below, we obtain \(\eta\sum_{k}\mathrm{pred}_{k}^{f}<\infty\), so \(\mathrm{pred}_{k}^{f}\rightarrow0\) as \(k\rightarrow\infty\), which contradicts \(\mathrm{pred}_{k}^{f}>\frac{1}{3}\xi_{1}\epsilon>0\). Hence the conclusion follows. □
Theorem 1
Suppose \(\{x_{k}\}\) is an infinite sequence generated by Algorithm A. Then every cluster point of \(\{x_{k}\}\) is a KKT point of problem (P).
5 Numerical results
In this section, we present some numerical experiments to show the performance of the proposed method. All examples are chosen from [18] and [19].

(1)
The update of \(H_{k}\) is done by the damped BFGS formula [2]:
$$H_{k+1}=H_{k}+ \frac{y_{k}y_{k}^{T}}{y_{k}^{T}s_{k}}-\frac{H_{k}s_{k}s_{k}^{T}H_{k}}{s_{k}^{T}H_{k}s_{k}}, $$where \(y_{k}=\theta_{k}\hat{y}_{k}+(1-\theta_{k})H_{k}s_{k}\),
$$ \theta_{k}=\left \{ \textstyle\begin{array}{l@{\quad}l} 1,& s_{k}^{T}\hat{y}_{k}\geq0.2s_{k}^{T}H_{k}s_{k}, \\ \frac{0.8s_{k}^{T}H_{k}s_{k}}{s_{k}^{T}H_{k}s_{k}-s_{k}^{T}\hat{y}_{k}}, &\text{otherwise}, \end{array}\displaystyle \right . $$(31)and \(\hat{y}_{k}=g_{k+1}-g_{k}\), \(s_{k}=x_{k+1}-x_{k}\).
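The update (31) is Powell's damped BFGS modification; a direct transcription (with the minus signs that were lost in this copy restored) looks like:

```python
import numpy as np

def damped_bfgs(H, s, y_hat):
    """Damped BFGS update (31): blend y_hat = g_{k+1} - g_k with H s
    so that y^T s stays safely positive, then apply the BFGS formula."""
    H = np.asarray(H, dtype=float)
    s, y_hat = np.asarray(s, dtype=float), np.asarray(y_hat, dtype=float)
    Hs = H @ s
    sHs = s @ Hs
    if s @ y_hat >= 0.2 * sHs:
        theta = 1.0
    else:
        theta = 0.8 * sHs / (sHs - s @ y_hat)
    y = theta * y_hat + (1.0 - theta) * Hs
    # standard BFGS formula with the damped y
    return H + np.outer(y, y) / (y @ s) - np.outer(Hs, Hs) / sHs
```

The damping guarantees \(y_{k}^{T}s_{k}>0\) even when the raw curvature \(s_{k}^{T}\hat{y}_{k}\) is negative, so the updated \(H_{k+1}\) stays positive definite.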

(2)
We set the error tolerance to \(10^{-6}\).

(3)
The algorithm parameters were set as follows: \(H_{0}=I\in R^{n\times n}\), \(\beta=0.9\), \(\gamma=0.1\), \(\rho=0.5\), \(\alpha_{1}=\alpha_{2}=0.5\), \(\sigma_{0}=0.1\), \(\Delta_{\min}=10^{-6}\), \(\Delta_{0}=1\). The program is written in Matlab.
In Table 1, the problems are numbered in the same way as in Hock and Schittkowski [18] and Schittkowski [19]. For example, 'HS2' is problem 2 in Hock and Schittkowski [18] and 'S216' is problem 216 in Schittkowski [19]. Some equality constrained problems are also included in our test set, such as S216, S235, S252 and so on. NF and NG denote the numbers of function and gradient evaluations, respectively. In Table 1, the results in the first column are computed by Algorithm A, those in the second column by the traditional filter method of [3], and those in the third column by the Matlab function 'fmincon'. Comparing the three methods, our algorithm requires fewer function and gradient evaluations.
To show the effect of the nonmonotone method, we also list numerical results in Table 2; these tests are run for \(M=1\), \(M=3\) and \(M=10\), that is, with increasing degree of nonmonotonicity.
The numerical results show that the nonmonotone algorithm is more effective than the monotone one for most test examples, and that our algorithm is effective and satisfactory.
6 Conclusions
In our method, the criterion used to test the trial points is flexible: the rejection region varies according to the improvement made by the previous trial point, while in traditional filter methods the elements of the filter structure are fixed. The numerical results show that the new method is more effective and has lower computational cost than both the traditional methods and the Matlab routine. Moreover, the use and adjustment of the self-adaptive parameter in our method is a good way to balance the objective function value and the constraint violation. On the other hand, the nonmonotone technique in the criterion avoids the Maratos effect to a certain degree, because more trial points are accepted by the filter. We also compared the results for different degrees of nonmonotonicity; although we cannot decide which value of M is best, the results with the nonmonotone technique are at least better than those with the monotone one.
References
Zhang J. A robust trust region method for nonlinear optimization with inequality constraint. Appl Math Comput. 2006;176:688-99.
Nocedal J, Wright S. Numerical optimization. New York: Springer; 1999.
Fletcher R, Leyffer S. Nonlinear programming without a penalty function. Math Program. 2002;91:239-69.
Chin C, Fletcher R. On the global convergence of an SLP-filter algorithm that takes EQP steps. Math Program. 2003;96:161-77.
Fletcher R, Gould N, Leyffer S, Toint P, Wächter A. Global convergence of a trust-region SQP-filter algorithm for general nonlinear programming. SIAM J Optim. 2002;13:635-60.
Fletcher R, Leyffer S, Toint P. On the global convergence of a filter-SQP algorithm. SIAM J Optim. 2002;13:44-59.
Ulbrich M, Ulbrich S, Vicente L. A globally convergent primal-dual interior-point method for nonconvex nonlinear programming. Math Program. 2004;100:379-410.
Audet C, Dennis J. A pattern search filter method for nonlinear programming without derivatives. SIAM J Optim. 2004;14:980-1010.
Karas E, Ribeiro A, Sagastizábal C, Solodov M. A bundle-filter method for nonsmooth convex constrained optimization. Math Program. 2009;116:297-320.
Ulbrich M, Ulbrich S. Non-monotone trust region methods for nonlinear equality constrained optimization without a penalty function. Math Program. 2003;95:103-35.
Ulbrich S. On the superlinear local convergence of a filter-SQP method. Math Program. 2004;100:217-45.
Nie P, Ma C. A trust region filter method for general nonlinear programming. Appl Math Comput. 2006;172:1000-17.
Su K, Pu D. A nonmonotone filter trust region method for nonlinear constrained optimization. J Comput Appl Math. 2009;223:230-9.
Chen Z. A penalty-free-type nonmonotone trust region method for nonlinear constrained optimization. Appl Math Comput. 2006;173:1014-46.
Chen Z, Zhang X. A nonmonotone trust region algorithm with nonmonotone penalty parameters for constrained optimization. J Comput Appl Math. 2004;172:7-39.
Gould N, Toint P. Global convergence of a nonmonotone trust-region SQP-filter algorithm for nonlinear programming. Nonconvex Optim Appl. 2006;82:125-50.
Zhou G. A modified SQP method and its global convergence. J Glob Optim. 1997;11:193-205.
Hock W, Schittkowski K. Test examples for nonlinear programming codes. Lecture notes in economics and mathematical systems. New York: Springer; 1981.
Schittkowski K. More test examples for nonlinear mathematical programming codes. New York: Springer; 1987.
Acknowledgements
This research is supported by the National Natural Science Foundation of China (No. 11101115), the Natural Science Foundation of Hebei Province (No. 2014201033) and the Key Research Foundation of the Educational Bureau of Hebei Province (No. ZD2015069). In addition, we would like to express our deepest gratitude to the editor and the anonymous reviewers who have helped to improve the paper.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Su, K., Li, X. & Hou, R. A nonmonotone flexible filter method for nonlinear constrained optimization. J. Math. Industry 6, 8 (2016). https://doi.org/10.1186/s13362-016-0029-1