# Interior-point method for NLP


## Revision as of 22:58, 23 May 2015

Author names: Cindy Chen

Steward: Dajun Yue and Fengqi You

# Introduction

The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The method incorporates the inequality constraints into the objective function by means of a barrier function, which keeps every iterate strictly inside the feasible region. Restricting the search to the interior in this way leads to algorithms with favorable (polynomial) time complexity.

The barrier parameter <math>\mu</math> controls how much weight is given to the barrier term: a large <math>\mu</math> keeps iterates far from the boundary. Solving the barrier problem with a large <math>\mu</math> yields the analytic center of the feasible region. As <math>\mu</math> decreases toward zero, the minimizers of the successive barrier problems trace out the central path, which leads to the optimal solution. Following the central path exactly is computationally intensive, so it is approximated using Newton's method: <math>\mu</math> is decreased slowly, with one Newton step taken after each decrease. This produces a small zig-zag convergence to the optimal point through the interior of the feasible region, and the slow decrease of <math>\mu</math> is what guarantees polynomial time complexity. In practice, <math>\mu</math> can often be decreased more rapidly, giving faster convergence.
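As a concrete illustration of the central path, consider the small LP: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0, whose optimum is (1, 0). This example problem and the code below are illustrative sketches, not from the original article. Eliminating x2 = 1 - x1 and setting the derivative of the barrier objective to zero gives a quadratic equation whose positive root is the central-path point for each value of mu:

```python
import numpy as np

# Central path for: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum at (1, 0)).
# Stationarity of x1 + 2*(1 - x1) - mu*(ln x1 + ln(1 - x1)) reduces to
#   x1^2 + (2*mu - 1)*x1 - mu = 0,
# and the positive root of this quadratic is the central-path point.
for mu in [10.0, 1.0, 0.1, 0.001]:
    x1 = ((1 - 2 * mu) + np.sqrt((2 * mu - 1) ** 2 + 4 * mu)) / 2
    print(f"mu={mu:7.3f}  x = ({x1:.4f}, {1 - x1:.4f})")
```

As mu shrinks, the point moves from near the analytic center (0.5, 0.5) toward the optimal vertex (1, 0), tracing the central path.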

# Algorithm

minimize <math>c^Tx - \mu\sum_{i=1}^{n} \ln(x_i)</math>

subject to <math>Ax=b</math>
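The path-following scheme described above can be sketched in a few lines for the linear case. The following is a minimal illustrative implementation, not the article's own code; the function names, step-size rule, and tolerances are assumptions. Each outer iteration shrinks mu and takes one damped Newton step on the equality-constrained barrier problem.

```python
import numpy as np

def barrier_newton_step(c, A, b, x, mu):
    """One Newton step for: min c^T x - mu * sum(log x)  s.t.  A x = b,
    at a strictly positive point x."""
    g = c - mu / x                      # gradient of the barrier objective
    H = np.diag(mu / x**2)              # Hessian (diagonal for the log barrier)
    m, n = A.shape
    # KKT system: [H  A^T; A  0] [dx; lam] = [-g; b - A x]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, b - A @ x])
    return np.linalg.solve(K, rhs)[:n]  # return only the primal step dx

def barrier_method(c, A, b, x0, mu=10.0, shrink=0.5, tol=1e-8):
    """Trace the central path: shrink mu, one damped Newton step per decrease."""
    x = x0.astype(float).copy()
    while mu > tol:
        dx = barrier_newton_step(c, A, b, x, mu)
        alpha = 1.0                     # damp the step to stay strictly interior
        while np.any(x + alpha * dx <= 0):
            alpha *= 0.5
        x = x + alpha * dx
        mu *= shrink                    # decrease mu, then take the next step
    return x
```

For example, on minimize x1 + 2*x2 with x1 + x2 = 1 and interior starting point (0.5, 0.5), the iterates zig-zag along the central path toward the optimal vertex (1, 0).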