Lagrange-type Functions in Constrained Non-Convex Optimization

By Alexander M. Rubinov, Xiao-qi Yang

Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order for the minima of the penalty problems to be good approximations to those of the original constrained optimization problems. It is well known that penalty functions with too large parameters cause obstacles for numerical implementation. Thus the question arises how to generalize classical Lagrange and penalty functions so as to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that is suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints. Several approaches to such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed, where the objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to a classical Lagrange function, other kinds of nonlinear convolutions lead to interesting generalizations. We call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
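The contrast drawn above between quadratic penalties (which need ever-larger parameters) and nonlinear convolutions can be seen on a toy one-dimensional problem: minimize f(x) = -x subject to g(x) = x - 1 <= 0, whose solution is x* = 1. The sketch below is illustrative only and not taken from the book; the problem, the two penalty forms, and the grid-search minimizer are all assumptions for illustration.

```python
# Toy problem: minimize f(x) = -x  subject to  g(x) = x - 1 <= 0; solution x* = 1.

def quad_penalty(x, c):
    # Classical quadratic penalty: f(x) + c * max(0, g(x))^2.
    return -x + c * max(0.0, x - 1.0) ** 2

def l1_penalty(x, c):
    # Nonlinear (l1 / max-type) convolution of f and g: f(x) + c * max(0, g(x)).
    return -x + c * max(0.0, x - 1.0)

def grid_argmin(f, lo=0.0, hi=2.0, n=20001):
    # Crude grid search, adequate for this one-dimensional illustration.
    step = (hi - lo) / (n - 1)
    return min((lo + i * step for i in range(n)), key=f)

c = 10.0
x_quad = grid_argmin(lambda x: quad_penalty(x, c))  # analytic minimizer: 1 + 1/(2c) = 1.05
x_l1 = grid_argmin(lambda x: l1_penalty(x, c))      # exactly 1.0 once c exceeds 1
```

For this problem the quadratic penalty's minimizer is biased by 1/(2c) for every finite c, which is the "large parameter" obstacle described above, whereas the l1 convolution recovers the constrained minimizer exactly for any c > 1.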



Best linear programming books

Linear Programming and its Applications

Within the pages of this text readers will find nothing less than a unified treatment of linear programming. Without sacrificing mathematical rigor, the main emphasis of the book is on models and applications. The most important classes of problems are surveyed and presented by means of mathematical formulations, followed by solution methods and a discussion of a variety of "what-if" scenarios.

Methods of Mathematical Economics: Linear and Nonlinear Programming, Fixed-Point Theorems (Classics in Applied Mathematics, 37)

This text attempts to survey the core subjects in optimization and mathematical economics: linear and nonlinear programming, separating plane theorems, fixed-point theorems, and some of their applications.

This text covers only two subjects well: linear programming and fixed-point theorems. The sections on linear programming are centered around deriving methods based on the simplex algorithm as well as some of the standard LP problems, such as network flows and the transportation problem. I never had time to read the section on the fixed-point theorems, but I think it would prove useful to research economists who work in microeconomic theory. This section presents four different proofs of Brouwer's fixed-point theorem, a proof of Kakutani's fixed-point theorem, and concludes with a proof of Nash's theorem for n-person games.

Unfortunately, the most important mathematical tools in use by economists today, nonlinear programming and comparative statics, are barely mentioned. This text has exactly one 15-page chapter on nonlinear programming. This chapter derives the Kuhn-Tucker conditions but says nothing about the second order conditions or comparative statics results.

Most likely, the unusual selection and coverage of topics (linear programming takes up more than half the text) simply reflects the fact that the original edition came out in 1980 and also that the author is really an applied mathematician, not an economist. This text is worth a look if you'd like to understand fixed-point theorems or how the simplex algorithm works and its applications. Look elsewhere for nonlinear programming or more recent developments in linear programming.

Planning and Scheduling in Manufacturing and Services

This book focuses on planning and scheduling applications. Planning and scheduling are forms of decision-making that play an important role in most manufacturing and services industries. The planning and scheduling functions in a company typically use analytical techniques and heuristic methods to allocate its limited resources to the activities that have to be done.

Optimization with PDE Constraints

This book presents a modern introduction to PDE-constrained optimization. It provides a precise functional analytic treatment via optimality conditions and a state-of-the-art, non-smooth algorithmic framework. Furthermore, new structure-exploiting discrete techniques and large scale, practically relevant applications are presented.

Additional resources for Lagrange-type Functions in Constrained Non-Convex Optimization

Example text

If y ∈ dom h_p = {y > 0 : h_p(y) < +∞}, then (h_p(y), y) ∈ supp(p, L). Hence, if dom h_p = (0, +∞), then supp(p, L) = {(a, y) : y > 0, a ≤ h_p(y)}, so (11) holds with b = 0. Assume that there exists a point y > 0 such that h_p(y) = +∞. This means that (a, y) ∈ supp(p, L) for all a > 0. Then the normality of supp(p, L) implies that h_p(y') = +∞ for all 0 < y' ≤ y. Thus the set {y > 0 : h_p(y) = +∞}, if nonempty, is a segment. Upper semicontinuity of h_p implies that this segment is closed (in ℝ₊₊). Hence (11) holds with b = sup{y : h_p(y) = +∞}.

Then (y, h_p1(y)) ∈ supp(p1, L), (y, h_p2(y)) ∈ supp(p2, L), and (y, a) ∉ … It follows from (13) that h_p(y) = h_p1(y) = min(h_p1(y), h_p2(y)), by … and p1 ≥ p2. In the sequel we need the following simple assertion: … χ_y(λ) = min{…, yλ} for y > 0, and χ_y(λy) = yλ … as y → +0. Proof: The proof is straightforward. Let p be an IPH function defined on ℝ²₊. Then sup over y > 0 of p(1, y) = sup over y > 0 of h_p(y). Proof: It follows from the definition of h_p that supp(p, L) = {(δ, z) : z > 0, 0 < δ ≤ h_p(z)}. So, for y > 0, we have p(1, y) = sup{⟨(δ, z), (1, y)⟩ : (δ, z) ∈ supp(p, L)} = sup over z > 0, 0 < δ ≤ h_p(z) of min(δ, zy).
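The excerpt's final proposition asserts that the supremum of p(1, y) over y > 0 equals the supremum of h_p(y) over y > 0, with p(1, y) expressed as a supremum of min(δ, zy) over z > 0, 0 < δ ≤ h_p(z). A numerical sanity check of that identity is sketched below; the particular h_p and the finite grids are assumptions for illustration only, not taken from the book.

```python
# Hypothetical h_p: nonnegative and finite on (0, +inf); chosen only for illustration.
def h_p(z):
    return min(z, 2.0)

def p_1(y, zs):
    # p(1, y) = sup over z > 0, 0 < delta <= h_p(z) of min(delta, z * y);
    # the inner sup over delta is attained at delta = h_p(z).
    return max(min(h_p(z), z * y) for z in zs)

zs = [0.01 * k for k in range(1, 1001)]  # grid over z in (0, 10]
ys = [0.01 * k for k in range(1, 1001)]  # grid over y in (0, 10]

lhs = max(p_1(y, zs) for y in ys)  # approximates sup over y of p(1, y)
rhs = max(h_p(z) for z in zs)      # approximates sup over y of h_p(y)
```

As y grows, min(h_p(z), zy) tends to h_p(z) for every fixed z, which is why the two suprema agree here.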

Let H be a set of continuous functions defined on a metric space Z and z̄ ∈ Z. Assume that each nonnegative continuous function f defined on Z is abstract convex at the point z̄. Then, for each ε ∈ (0, 1) and δ > 0, there exists a function h ∈ H which is a support to an Urysohn peak corresponding to (z̄, ε, δ). Proof: To establish the result, we consider a δ-Urysohn peak f_δ, where δ is an arbitrary positive number. Since f_δ is abstract convex with respect to H at the point z̄, it follows that for each ε > 0 there exists a function h ∈ H such that h ≤ f_δ and h(z̄) > f_δ(z̄) − ε = 1 − ε.
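For intuition, a δ-Urysohn peak at a point z̄ is a continuous function with values in [0, 1] that equals 1 at z̄ and vanishes outside the δ-ball around z̄. One standard "tent" construction is sketched below; it is an assumed, illustrative choice, and the book may use a different construction.

```python
def urysohn_peak(zbar, delta, dist):
    # A "tent" delta-Urysohn peak at zbar: continuous, valued in [0, 1],
    # equal to 1 at zbar, and identically 0 at distance >= delta from zbar.
    def f(z):
        return max(0.0, 1.0 - dist(z, zbar) / delta)
    return f

# Usage on Z = R with the usual metric (an assumed, illustrative choice).
d = lambda a, b: abs(a - b)
peak = urysohn_peak(0.0, 0.5, d)
```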
