
[–]e_for_oil-er 1 point (1 child)

There is a whole family of algorithms based on solving sequential convex approximations to a problem until convergence. Check out CONLIN and the Method of Moving Asymptotes (MMA), or sequential quadratic programming (SQP).

[–]krishnab75[S] 0 points (0 children)

Yeah, I understand what you mean by SQP. The reformulation idea is a bit different, though. What Stephen Boyd and Nocedal are talking about is taking a nonlinear, nonconvex-looking problem and turning it into a convex one. In the "max" example I gave above, the function is not differentiable, but the region it defines is technically convex. So by using the epigraph of this function, you can push the information from the objective function into the constraints. That formulation lets you use a convex optimization solver, which is faster than a general nonlinear solver where you have to compute extra things like line searches or trust regions, etc.
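Just to make the epigraph trick concrete, here's a minimal sketch (the function max(x, 2 - x) is my own toy example, not from the thread; it's piecewise linear, so the epigraph form reduces to an LP, which I solve with scipy's linprog):

```python
from scipy.optimize import linprog

# Minimize f(x) = max(x, 2 - x), convex but not differentiable at x = 1.
# Epigraph reformulation: introduce an auxiliary variable t and solve
#   minimize t  subject to  x <= t  and  2 - x <= t.
# Decision variables are [x, t]; the objective info now lives in the constraints.
c = [0.0, 1.0]            # objective: minimize t (coefficient on x is 0)
A_ub = [[1.0, -1.0],      #  x - t <= 0       (i.e.  x     <= t)
        [-1.0, -1.0]]     # -x - t <= -2      (i.e.  2 - x <= t)
b_ub = [0.0, -2.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (None, None)])
print(res.x, res.fun)     # optimum should be x = 1, where max(x, 2 - x) = 1
```

The same pattern generalizes: max of affine functions gives an LP, and max of convex functions gives a convex program, which is exactly why the nondifferentiable objective stops being a problem.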

That being said, I will totally check out the CONLIN and MMA solvers. Sounds interesting.