
Warm start = 1 has unexpected effects #160

Open
dlcfort opened this issue Feb 14, 2024 · 1 comment

Comments

@dlcfort (Contributor) commented Feb 14, 2024

I ran some tests using the dense QCQP optimizer, setting warm_start = 0, 1, and 2 over three test cases. When I run the optimization with warm_start = 2 and set "v" equal to the solution, I get the fastest performance. However, when I set warm_start = 1 it has unpredictable effects. My question is: why does warm_start = 1 have worse timing than warm_start = 0 when we start near the solution?

My results and code from running "example_dense_qcqp_getting_started.py" are below.

START CONDITIONS EQUAL TO SOLUTION
0 Warm start: ipm iter = 20
1 Warm start: ipm iter = 21
2 Warm start: ipm iter = 3

START CONDITIONS NEAR SOLUTION
0 Warm start: ipm iter = 20
1 Warm start: ipm iter = 22
2 Warm start: ipm iter = 4

START CONDITIONS FAR FROM SOLUTION
0 Warm start: ipm iter = 20
1 Warm start: ipm iter = 12
2 Warm start: ipm iter = 30


import numpy as np
import time

from hpipm_python import *
from hpipm_python.common import *

nv = 2  # number of variables
nq = 1  # number of quadratic inequality constraints

dim = hpipm_dense_qcqp_dim()
dim.set('nv', nv)
dim.set('nq', nq)

H = np.array([[1, 0],
              [0, 1]])
g = np.array([[0], [0]])
Hq = np.array([[2, 0],
               [0, 2]])
gq = np.array([[-2], [-2]])
uq = -1

# qp
qcqp = hpipm_dense_qcqp(dim)
# data
qcqp.set('H', H)
qcqp.set('g', g)
qcqp.set('Hq', Hq)
qcqp.set('gq', gq)
qcqp.set('uq', uq)

# qp sol
qcqp_sol = hpipm_dense_qcqp_sol(dim)

# set up solver arg
# mode = 'speed'
mode = 'robust'
# create and set default arg based on mode
arg = hpipm_dense_qcqp_solver_arg(dim, mode)
arg.set('mu0', 1e4)
arg.set('iter_max', 30)
arg.set('tol_stat', 1e-5)
arg.set('tol_eq', 1e-5)
arg.set('tol_ineq', 1e-5)
arg.set('tol_comp', 1e-5)
arg.set('reg_prim', 1e-12)

# warm_start = 0: discard the primal initial guess
# warm_start = 1: take the primal initial guess from qcqp_sol
# warm_start = 2: take both the primal and the dual initial guess from qcqp_sol
warm_start = 0
arg.set('warm_start', warm_start)
qcqp_sol.set('v', np.array([-4.4929, 2.2929]))           # far from solution
# qcqp_sol.set('v', np.array([0.39289321, 0.19289321]))  # near solution
# qcqp_sol.set('v', np.array([0.29289321, 0.29289321]))  # solution

# set up solver
solver = hpipm_dense_qcqp_solver(dim, arg)
start_time = time.time()
solver.solve(qcqp, qcqp_sol)
end_time = time.time()
v = qcqp_sol.get('v')

# get solver statistics
status = solver.get('status')
iters = solver.get('iter')
stat = solver.get('stat')
print('v        = {}'.format(v.flatten()))
print('ipm iter = ', iters)
print('solve time {:e}'.format(end_time - start_time))
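
For completeness, a minimal driver to reproduce one row of each table is sketched below; it is not part of the original script, and make_qcqp() is a hypothetical helper that would return the dim, qcqp, and initial guess v0 built above:

# hypothetical driver: compare ipm iteration counts across the warm-start modes
for warm_start in (0, 1, 2):
    dim, qcqp, v0 = make_qcqp()  # hypothetical helper: problem data from the script above
    qcqp_sol = hpipm_dense_qcqp_sol(dim)
    arg = hpipm_dense_qcqp_solver_arg(dim, 'robust')
    arg.set('mu0', 1e4)
    arg.set('iter_max', 30)
    arg.set('warm_start', warm_start)
    qcqp_sol.set('v', v0)  # same initial guess in every run
    solver = hpipm_dense_qcqp_solver(dim, arg)
    solver.solve(qcqp, qcqp_sol)
    print(warm_start, 'Warm start: ipm iter =', solver.get('iter'))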



@giaf (Owner) commented Feb 20, 2024

Warm starting an IPM is notoriously difficult: at best it gives only a small benefit, and it can just as easily make convergence slower.
Therefore, in general, warm starting is not recommended.

You can have a look at the warm-starting algorithm in https://github.com/giaf/hpipm/blob/master/dense_qp/x_dense_qcqp_ipm.c#L746 ; it is quite straightforward:

  • warm_start==2 expects both the primal and the dual solution, and it simply increases any components of the dual solution that are too small up to a certain threshold.
  • warm_start==0 zeroes out the primal solution, while warm_start==1 keeps it. The two are very similar with regard to the dual solution: in the case of inequality constraints, they attempt to build a nearly feasible value for the slacks, and then initialize the corresponding Lagrange multipliers such that the product of each slack and its multiplier is equal to mu0.
    So mu0 is related to the initial value of the barrier parameter, and in a way it expresses how much the initial guess is trusted.
    In this specific example, the optimal value of the only Lagrange multiplier is ~0.21, so rather small, while with the default value of mu0 it gets initialized to a large value.
    So in this case, reducing the value of mu0 to e.g. 1 greatly reduces the number of iterations for warm_start equal to 0 or 1 (see the sketch after this list).
    In this example warm_start equal to 2 happens to work particularly well when starting close to the solution: the default dual solution is zero, but it is then increased to the minimum threshold value, which happens to be very close to the optimal value of the Lagrange multiplier.
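
To make the last points concrete: with the default mu0 = 1e4 the multiplier of the quadratic constraint starts orders of magnitude above its optimal value of ~0.21, so the IPM spends iterations driving it back down. A minimal sketch of the suggested change, reusing the names from the script above (the resulting iteration counts are not verified here):

# express more trust in the warm-started guess by starting the barrier parameter small
arg.set('mu0', 1.0)        # default in the script above was 1e4
arg.set('warm_start', 1)   # keep the primal initial guess
qcqp_sol.set('v', np.array([0.39289321, 0.19289321]))  # near the solution
solver = hpipm_dense_qcqp_solver(dim, arg)
solver.solve(qcqp, qcqp_sol)
print('ipm iter =', solver.get('iter'))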
