[pull] master from MadNLP:master #46

Merged · 63 commits · Aug 24, 2024

Changes from 1 commit
7a7d0e2
Update for julia v1.9 (#261)
sshin23 Jun 6, 2023
3895613
Update README.md for JuMP documentation (#260)
odow Jun 6, 2023
4979e39
[MadNLPTests] Remove ADNLPModels in deps (#259)
frapac Jun 6, 2023
9411a1e
[MadNLPGPU] Migrate to KernelAbstractions 0.9 (#258)
frapac Jun 6, 2023
4396718
CompatHelper: bump compat for NLPModels to 0.20 for package MadNLPTes…
github-actions[bot] Jun 7, 2023
55c5cd9
CompatHelper: bump compat for NLPModels to 0.20, (keep existing compa…
github-actions[bot] Jun 7, 2023
29e95a5
Status badge updated
sshin23 Jun 7, 2023
f0effa2
Using HSL_jll instead of custom compile (#263)
sshin23 Jun 13, 2023
182dafc
bump MadNLP version to 0.7.0 (#266)
sshin23 Jun 13, 2023
6d694cd
Add SparseCondensedSystem and several changes (#272)
sshin23 Nov 10, 2023
6c4cecb
add proper testing for KKT systems (#278)
frapac Nov 22, 2023
9a4ce89
add support for MOI.ScalarNonLinearFunction (#280)
frapac Nov 22, 2023
2c752d3
simplify implementation of kernels (#281)
frapac Nov 30, 2023
43a2f8f
add support for CUDA.jl v5 (#283)
frapac Dec 1, 2023
805018a
[MadNLPHSL] Use HSL.jl (#277)
amontoison Dec 1, 2023
3685435
fix non-deterministic behavior by forcing instantiations (#284)
frapac Dec 11, 2023
14d16c2
Simplify API of SparseCallback and DenseCallback (#285)
frapac Jan 8, 2024
dec0331
[Tests] Remove test nlp_009_010 from MINLPTests tests (#288)
frapac Jan 8, 2024
07db755
[API] Simplify arguments of create_kkt_system (#286)
frapac Jan 8, 2024
3d02618
[Options] Deactivate scaling if `nlp_scaling=false` (#289)
frapac Jan 9, 2024
7942ad5
[README] Fix Options.md typo (#291)
sshin23 Jan 12, 2024
b35bc21
[API] Expose the options for iterative refinements and quasi-Newton (…
frapac Jan 12, 2024
8a3369f
[Algorithm] Improve LBFGS performance (#290)
frapac Jan 25, 2024
7cb834f
[Linear Solver] Added undocumented cholesky solver (#292)
sshin23 Jan 26, 2024
bd5008d
[MOI] add support for MOI.Interval{Float64} (#295)
frapac Feb 13, 2024
006ed41
update documentation (#293)
frapac Mar 2, 2024
8801bef
Add supports to CUDSS.jl (#296)
sshin23 Mar 2, 2024
b8922ef
CompatHelper: add new compat entry for CUDSS at version 0.1 for packa…
github-actions[bot] Mar 4, 2024
4f41ed9
CompatHelper: add new compat entry for Metis at version 1 for package…
github-actions[bot] Mar 4, 2024
8b704e7
Improve kkt creation on GPUs (#299)
sshin23 Mar 6, 2024
6198b24
LDL factorization improvement (#300)
sshin23 Mar 7, 2024
41062e4
MOI interface moved to ext (#268)
sshin23 Mar 7, 2024
861c686
Update README, OPTIONS, CITATION, and documentation (#304)
sshin23 Mar 7, 2024
5f3efa3
auto deleted (#305)
sshin23 Mar 8, 2024
64469bf
docs ci fixed (#307)
sshin23 Mar 8, 2024
cf6e6be
bump MadNLP version to v0.8 (#306)
sshin23 Mar 8, 2024
cb18db8
Update TagBot.yml (#308)
sshin23 Mar 8, 2024
94477c2
Update README.md (#309)
sshin23 Mar 8, 2024
ed6a53e
Add logos to documentation (#310)
sshin23 Mar 9, 2024
b4d1935
Introduce linear solvers with version info (#315)
sshin23 Mar 27, 2024
de04e2a
CuDSS synchronize added (#314)
sshin23 Mar 27, 2024
fcbbbda
fix optional arguments in get_index_constraints (#316)
frapac Mar 28, 2024
9269b44
Bump MadNLP version to v0.8.1 (#319)
sshin23 Apr 10, 2024
40cced2
bump versions of MadNLPGPU and MadNLPHSL (#320)
sshin23 Apr 10, 2024
3737e62
Reexport MadNLP from MadNLP/libs (#325)
sshin23 Apr 12, 2024
bfcebae
Add an ordering for cuDSS (#317)
amontoison Apr 15, 2024
3fbe691
sarrays removed (#328)
sshin23 Apr 18, 2024
6c76830
[MadNLPGPU] Bug fix for empty Hessian (#326)
sshin23 Apr 26, 2024
d270a61
[LinearSolvers] Add support for LDL factorization in CHOLMOD (#321)
frapac Apr 26, 2024
5916b54
[MOI] Add support for nonlinear problems without Hessian (#322)
frapac Apr 27, 2024
cdc1d0a
[MOI] fix eval_constraint_jacobian_product (#337)
frapac May 14, 2024
8260519
Use GH Action Julia cache (#339)
michel2323 May 15, 2024
de9b3f0
bump MadNLP version to 0.8.2 (#341)
sshin23 May 16, 2024
9cc662a
[hotfix] LDLFactorizations.jl added as dependency (#343)
sshin23 May 17, 2024
1d93449
bump MadNLP version to v0.8.3 (#344)
sshin23 May 17, 2024
fd0dd2a
[MadNLPGPU] Refactoring - Part I (#340)
frapac May 28, 2024
9bfd7c7
[KKT] add two folders Sparse and Dense for KKT formulations (#347)
frapac Jun 13, 2024
2077d8a
[KKT] Fix symmetric K3 formulation (#345)
amontoison Jun 13, 2024
2e301da
Update kkt.md (#354)
amontoison Jul 4, 2024
faa17a8
bump compat for NLPModels to 0.21, (keep existing compat) (#331)
github-actions[bot] Jul 15, 2024
dbfc0fc
[MadNLPGPU] Upgrade CUDSS -- support iterative refinement and hybrid …
amontoison Jul 15, 2024
ff30db6
bump MadNLP version to v0.8.4 (#358)
frapac Jul 22, 2024
ed67920
remove benchmark scripts (#359)
frapac Jul 23, 2024
Simplify API of SparseCallback and DenseCallback (MadNLP#285)
* make explicit the options passed to `get_index_constraints`
* make explicit the options passed to `create_callback` and `initialize`
* add docstring for the structures in `src/nlpmodels.jl`
frapac authored Jan 8, 2024
commit 14d16c22d898ea66a322e114bc9f8d9520dfd717
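In short, the options each function needs are now passed explicitly as keyword arguments rather than through the whole `opt` structure. A minimal before/after sketch in Julia, assembled from the hunks below (`nlp` and `opt` are assumed in scope, as inside the `MadNLPSolver` constructor; indentation is illustrative):

# Before: create_callback received the full options object.
cb = create_callback(opt.callback, nlp, opt)

# After: only the options the callback actually uses, passed by keyword.
cb = create_callback(
    opt.callback,
    nlp;
    fixed_variable_treatment=opt.fixed_variable_treatment,
    equality_treatment=opt.equality_treatment,
)

# get_index_constraints and initialize! change in the same way; see the
# diffs in src/IPM/IPM.jl and src/IPM/solver.jl below.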
51 changes: 28 additions & 23 deletions src/IPM/IPM.jl
@@ -17,7 +17,7 @@ mutable struct MadNLPSolver{
IC <: AbstractInertiaCorrector,
KKTVec <: AbstractKKTVector{T, VT}
} <: AbstractMadNLPSolver{T}

nlp::Model
cb::CB
kkt::KKTSystem
@@ -103,29 +103,34 @@ mutable struct MadNLPSolver{
end

function MadNLPSolver(nlp::AbstractNLPModel{T,VT}; kwargs...) where {T, VT}

opt, opt_linear_solver, logger = load_options(nlp; kwargs...)
@assert is_supported(opt.linear_solver, T)

cnt = MadNLPCounters(start_time=time())
-    cb = create_callback(opt.callback, nlp, opt)
+    cb = create_callback(
+        opt.callback,
+        nlp;
+        fixed_variable_treatment=opt.fixed_variable_treatment,
+        equality_treatment=opt.equality_treatment,
+    )

# generic options
opt.disable_garbage_collector &&
(GC.enable(false); @warn(logger,"Julia garbage collector is temporarily disabled"))
set_blas_num_threads(opt.blas_num_threads; permanent=true)
@trace(logger,"Initializing variables.")

ind_cons = get_index_constraints(
get_lvar(nlp), get_uvar(nlp),
-        get_lcon(nlp), get_ucon(nlp),
-        opt.fixed_variable_treatment,
-        opt.equality_treatment
+        get_lcon(nlp), get_ucon(nlp);
+        fixed_variable_treatment=opt.fixed_variable_treatment,
+        equality_treatment=opt.equality_treatment
)

ind_lb = ind_cons.ind_lb
ind_ub = ind_cons.ind_ub

ns = length(ind_cons.ind_ineq)
nx = get_nvar(nlp)
n = nx+ns
@@ -148,20 +153,20 @@ function MadNLPSolver(nlp::AbstractNLPModel{T,VT}; kwargs...) where {T, VT}

x = PrimalVector(VT, nx, ns, ind_lb, ind_ub)
xl = PrimalVector(VT, nx, ns, ind_lb, ind_ub)
xu = PrimalVector(VT, nx, ns, ind_lb, ind_ub)
zl = PrimalVector(VT, nx, ns, ind_lb, ind_ub)
zu = PrimalVector(VT, nx, ns, ind_lb, ind_ub)
f = PrimalVector(VT, nx, ns, ind_lb, ind_ub)
x_trial = PrimalVector(VT, nx, ns, ind_lb, ind_ub)

d = UnreducedKKTVector(VT, n, m, nlb, nub, ind_lb, ind_ub)
p = UnreducedKKTVector(VT, n, m, nlb, nub, ind_lb, ind_ub)
_w1 = UnreducedKKTVector(VT, n, m, nlb, nub, ind_lb, ind_ub)
_w2 = UnreducedKKTVector(VT, n, m, nlb, nub, ind_lb, ind_ub)
_w3 = UnreducedKKTVector(VT, n, m, nlb, nub, ind_lb, ind_ub)
_w4 = UnreducedKKTVector(VT, n, m, nlb, nub, ind_lb, ind_ub)

jacl = VT(undef,n)
c_trial = VT(undef, m)
y = VT(undef, m)
c = VT(undef, m)
@@ -190,28 +195,28 @@ function MadNLPSolver(nlp::AbstractNLPModel{T,VT}; kwargs...) where {T, VT}
VT,
n, m, nlb, nub, ind_lb, ind_ub
)

cnt.init_time = time() - cnt.start_time

return MadNLPSolver(
nlp, cb, kkt,
opt, cnt, logger,
n, m, nlb, nub,
x, y, zl, zu, xl, xu,
zero(T), f, c,
jacl,
d, p,
_w1, _w2, _w3, _w4,
x_trial, c_trial, zero(T), c_slk, rhs,
ind_cons.ind_ineq, ind_cons.ind_fixed, ind_cons.ind_llb, ind_cons.ind_uub,
x_lr, x_ur, xl_r, xu_r, zl_r, zu_r, dx_lr, dx_ur, x_trial_lr, x_trial_ur,
iterator,
zero(T), zero(T), zero(T), zero(T), zero(T), zero(T), zero(T), zero(T), zero(T),
" ",
zero(T), zero(T), zero(T),
Tuple{T, T}[],
inertia_corrector, nothing,
INITIAL, Dict(),
)

end
68 changes: 35 additions & 33 deletions src/IPM/solver.jl
@@ -15,8 +15,8 @@ function initialize!(solver::AbstractMadNLPSolver{T}) where T

nlp = solver.nlp
opt = solver.opt
# Initializing variables
@trace(solver.logger,"Initializing variables.")
initialize!(
solver.cb,
@@ -25,13 +25,15 @@ function initialize!(solver::AbstractMadNLPSolver{T}) where T
solver.xu,
solver.y,
solver.rhs,
-        solver.ind_ineq,
-        opt
+        solver.ind_ineq;
+        tol=opt.tol,
+        bound_push=opt.bound_push,
+        bound_fac=opt.bound_fac,
)
fill!(solver.jacl, zero(T))
fill!(solver.zl_r, one(T))
fill!(solver.zu_r, one(T))

# Initializing scaling factors
set_scaling!(
solver.cb,
@@ -50,7 +52,7 @@ function initialize!(solver::AbstractMadNLPSolver{T}) where T
# Initializing jacobian and gradient
eval_jac_wrapper!(solver, solver.kkt, solver.x)
eval_grad_f_wrapper!(solver, solver.f,solver.x)

@trace(solver.logger,"Initializing constraint duals.")
if !solver.opt.dual_initialized
@@ -65,7 +67,7 @@ function initialize!(solver::AbstractMadNLPSolver{T}) where T
copyto!(solver.y, dual(solver.d))
end
end

# Initializing
solver.obj_val = eval_f_wrapper(solver, solver.x)
eval_cons_wrapper!(solver, solver.c, solver.x)
@@ -207,7 +209,7 @@ function regular!(solver::AbstractMadNLPSolver{T}) where T
)
solver.inf_compl = get_inf_compl(solver.x_lr,solver.xl_r,solver.zl_r,solver.xu_r,solver.x_ur,solver.zu_r,zero(T),sc)
inf_compl_mu = get_inf_compl(solver.x_lr,solver.xl_r,solver.zl_r,solver.xu_r,solver.x_ur,solver.zu_r,solver.mu,sc)

print_iter(solver)

# evaluate termination criteria
@@ -244,7 +246,7 @@ function regular!(solver::AbstractMadNLPSolver{T}) where T
dual_inf_perturbation!(primal(solver.p),solver.ind_llb,solver.ind_uub,solver.mu,solver.opt.kappa_d)

inertia_correction!(solver.inertia_corrector, solver) || return ROBUST

# filter start
@trace(solver.logger,"Backtracking line search initiated.")
theta = get_theta(solver.c)
@@ -278,7 +280,7 @@ function regular!(solver::AbstractMadNLPSolver{T}) where T
unsuccessful_iterate = false

while true

copyto!(full(solver.x_trial), full(solver.x))
axpy!(solver.alpha, primal(solver.d), primal(solver.x_trial))
solver.obj_val_trial = eval_f_wrapper(solver, solver.x_trial)
@@ -294,7 +296,7 @@ function regular!(solver::AbstractMadNLPSolver{T}) where T
solver.filter,theta,theta_trial,varphi,varphi_trial,switching_condition,armijo_condition,
solver.theta_min,solver.opt.obj_max_inc,solver.opt.gamma_theta,solver.opt.gamma_phi,
has_constraints(solver))

if solver.ftype in ["f","h"]
@trace(solver.logger,"Step accepted with type $(solver.ftype)")
break
@@ -308,7 +310,7 @@ function regular!(solver::AbstractMadNLPSolver{T}) where T
end
end

unsuccessful_iterate = true
solver.alpha /= 2
solver.cnt.l += 1
if solver.alpha < alpha_min
@@ -333,7 +335,7 @@ function regular!(solver::AbstractMadNLPSolver{T}) where T
empty!(solver.filter)
push!(solver.filter,(solver.theta_max,-Inf))
solver.cnt.k+=1

return REGULAR
end
end
@@ -378,7 +380,7 @@ function regular!(solver::AbstractMadNLPSolver{T}) where T
primal(solver.x),
solver.mu,solver.opt.kappa_sigma,
)

eval_grad_f_wrapper!(solver, solver.f,solver.x)

if !switching_condition || !armijo_condition
@@ -462,7 +464,7 @@ function restore!(solver::AbstractMadNLPSolver{T}) where T
end

adjust_boundary!(solver.x_lr,solver.xl_r,solver.x_ur,solver.xu_r,solver.mu)

F = F_trial

theta = get_theta(solver.c)
@@ -561,20 +563,20 @@ function robust!(solver::MadNLPSolver{T}) where T
eval_lag_hess_wrapper!(solver, solver.kkt, solver.x, solver.y; is_resto=true)
end
set_aug_RR!(solver.kkt, solver, RR)

# without inertia correction,
@trace(solver.logger,"Solving restoration phase primal-dual system.")
set_aug_rhs_RR!(solver, solver.kkt, RR, solver.opt.rho)

inertia_correction!(solver.inertia_corrector, solver) || return RESTORATION_FAILED

finish_aug_solve_RR!(
RR.dpp,RR.dnn,RR.dzp,RR.dzn,solver.y,dual(solver.d),
RR.pp,RR.nn,RR.zp,RR.zn,RR.mu_R,solver.opt.rho
)

theta_R = get_theta_R(solver.c,RR.pp,RR.nn)
varphi_R = get_varphi_R(RR.obj_val_R,solver.x_lr,solver.xl_r,solver.xu_r,solver.x_ur,RR.pp,RR.nn,RR.mu_R)
varphi_d_R = get_varphi_d_R(
@@ -623,7 +625,7 @@ function robust!(solver::MadNLPSolver{T}) where T
varphi_R_trial = get_varphi_R(
RR.obj_val_R_trial,solver.x_trial_lr,solver.xl_r,solver.xu_r,solver.x_trial_ur,RR.pp_trial,RR.nn_trial,RR.mu_R)

armijo_condition = is_armijo(varphi_R_trial,varphi_R,solver.opt.eta_phi,solver.alpha,varphi_d_R)

small_search_norm && break
solver.ftype = get_ftype(
@@ -643,7 +645,7 @@ function robust!(solver::MadNLPSolver{T}) where T
# (experimental) while giving up directly
# we give MadNLP.jl second chance to explore
# some possibility at the current iterate

fill!(solver.y, zero(T))
fill!(solver.zl_r, one(T))
fill!(solver.zu_r, one(T))
@@ -722,7 +724,7 @@ function robust!(solver::MadNLPSolver{T}) where T
else
copyto!(solver.y, dual(solver.d))
end

solver.cnt.k+=1
solver.cnt.t+=1

@@ -806,7 +808,7 @@ function inertia_correction!(
inertia_corrector::InertiaBased,
solver::MadNLPSolver{T}
) where {T}

n_trial = 0
solver.del_w = del_w_prev = zero(T)

@@ -815,14 +817,14 @@ function inertia_correction!(
factorize_wrapper!(solver)

num_pos,num_zero,num_neg = inertia(solver.kkt.linear_solver)

solve_status = !is_inertia_correct(solver.kkt, num_pos, num_zero, num_neg) ?
false : solve_refine_wrapper!(
solver.d, solver, solver.p, solver._w4,
)

while !solve_status
@debug(solver.logger,"Primal-dual perturbed.")

@@ -837,7 +839,7 @@ function inertia_correction!(
return false
end
end
solver.del_c = num_neg == 0 ? zero(T) : solver.opt.jacobian_regularization_value * solver.mu^(solver.opt.jacobian_regularization_exponent)
regularize_diagonal!(solver.kkt, solver.del_w - del_w_prev, solver.del_c)
del_w_prev = solver.del_w

@@ -850,15 +852,15 @@ function inertia_correction!(
)
n_trial += 1
end

solver.del_w != 0 && (solver.del_w_last = solver.del_w)
return true
end

function inertia_correction!(
inertia_corrector::InertiaFree,
solver::MadNLPSolver{T}
) where T

n_trial = 0
solver.del_w = del_w_prev = zero(T)
@@ -922,7 +924,7 @@ function inertia_correction!(
inertia_corrector::InertiaIgnore,
solver::MadNLPSolver{T}
) where T

n_trial = 0
solver.del_w = del_w_prev = zero(T)

@@ -946,7 +948,7 @@ function inertia_correction!(
return false
end
end
solver.del_c = solver.opt.jacobian_regularization_value * solver.mu^(solver.opt.jacobian_regularization_exponent)
regularize_diagonal!(solver.kkt, solver.del_w - del_w_prev, solver.del_c)
del_w_prev = solver.del_w
