how to control the coefficient of binaries generated by the kkt command #1282
Unanswered
rgzbsn asked this question in Problems using YALMIP
Replies: 1 comment 3 replies
-
A very small coefficient should be no problem for solvers, as they will clean it away if necessary. A very large coefficient, though, means kkt was not able to derive big-M coefficients, most likely because it did not manage to derive strong bounds on the duals (not surprising, as deriving these is typically as hard as solving the bilevel program to begin with). Hence, you have to add explicit bounds on them (details.duals), but then you have to come up with correct bounds: if they are too small you might cut away the optimal solution, and if they are too large you are still stuck with a numerically poor model.
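In code, adding explicit bounds on the duals might look roughly like this. This is a minimal sketch, not a definitive recipe: the model, variable names, and the bound value 100 are all assumptions made up for illustration; details.duals is the vector of dual variables returned by kkt, as mentioned above.

```matlab
% Hypothetical inner-level LP: x1, x2 are inner variables, p is a parameter
% (e.g. an outer-level decision in a bilevel program).
sdpvar x1 x2 p
InnerConstraints = [x1 + x2 <= p, x1 >= 0, x2 >= 0];
InnerObjective = -x1 - 2*x2;

% Derive the KKT system of the inner problem, parametrized in p.
% details.duals collects the dual variables of the inner constraints.
[KKTsystem, details] = kkt(InnerConstraints, InnerObjective, p);

% Add explicit bounds on the duals so reasonable big-M values exist.
% The bound 100 is a guess: too small may cut off the optimal solution,
% too large leaves the model numerically poor.
DualBounds = [0 <= details.duals <= 100];

% Solve the combined model with a (hypothetical) outer-level objective.
OuterObjective = -p;
optimize([KKTsystem, DualBounds, 0 <= p <= 10], OuterObjective);
```

The point of the bounds is that complementarity conditions are linearized with big-M constants proportional to the dual bounds, so tighter (but still valid) bounds directly improve the coefficient range of the resulting MILP.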
-
Hi everyone,
I'm trying the kkt command to generate the KKT conditions of a MILP model. I've noticed that the kkt command works well except for one row; here is the .lp log of this row:
R32561: 1.052631578947368 C1104 + 1.052631578947368 C1105
<= 48.12154099193583
It turns out that the coefficient of the binary (C10598) is very small, which results in a large matrix coefficient range for the model ([1e-13, 1e+04]) and thus severely slows down the solving process. However, according to the big-M method, the coefficient of the binary should just be 1 or a big constant. I'm wondering:
1. How is this coefficient generated by the kkt command of YALMIP?
2. Is there any way to improve this coefficient?
3. If it's hard to change the coefficient, can I do something to make YALMIP ignore small coefficients?
Thanks in advance!