Sometimes models can have small coefficients that may as well be zero but aren't quite. An almost-zero coefficient might have a negligible impact on the model solution but can impact run time significantly. There is value in eliminating these, as solvers make extensive use of sparse matrix methods.
Additionally, in printed outputs from Julia, I am constantly seeing things like this: 0.050000000000000003. The value specified was 0.05, but somewhere numerical precision issues are making the value slightly imprecise. We have seen this happen with zeroes/non-zeroes too: a value that should be zero somehow ends up being very small instead.
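For reference, this kind of drift is easy to reproduce in a Julia REPL, and the standard-library tools already cover both fixes: `round` with a `digits` keyword recovers the intended value, and a simple tolerance check snaps near-zeros back to exact zero (the `1e-9` threshold below is just an illustrative choice):

```julia
# 0.05 is not exactly representable in binary floating point, so
# arithmetic can surface the representation error in printed output.
x = 0.1 + 0.2 - 0.25
println(x)                        # 0.050000000000000044

# Rounding to a chosen precision restores the intended value.
println(round(x; digits = 10))    # 0.05

# The same drift can turn an intended zero into a tiny non-zero.
y = 0.1 + 0.2 - 0.3
println(y)                        # 5.551115123125783e-17
println(abs(y) < 1e-9 ? 0.0 : y)  # 0.0
```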
In summary, I think we should filter and/or round coefficients before they are sent to the solver to make maximum use of zeroes.
Perhaps two arguments could be specified in the "using_spine_db" function, as sketched below. The first would be a lowest absolute value: any float (whether a scalar or inside a timeseries or map) with a smaller absolute value would be treated as zero. The second would be a global precision argument that rounds all floats to that level of precision.
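For illustration only, here is a minimal sketch of what those two arguments might do, using hypothetical names `zero_tolerance` and `precision` and a standalone `sanitize` helper rather than the actual SpineInterface internals:

```julia
# Hypothetical sketch of the proposed cleanup, not actual SpineInterface API.
# `zero_tolerance`: floats with a smaller absolute value are snapped to 0.0.
# `precision`: number of decimal digits all remaining floats are rounded to.
sanitize(x::Float64; zero_tolerance = 1e-9, precision = 9) =
    abs(x) < zero_tolerance ? 0.0 : round(x; digits = precision)

# Apply recursively, so values inside timeseries-like arrays and map-like
# dictionaries get the same treatment as scalars.
sanitize(x::AbstractArray; kwargs...) = map(v -> sanitize(v; kwargs...), x)
sanitize(x::AbstractDict; kwargs...) =
    Dict(k => sanitize(v; kwargs...) for (k, v) in x)
sanitize(x; kwargs...) = x  # leave non-float values untouched

# Example usage:
sanitize(0.050000000000000044)                # 0.05
sanitize(1.2e-12)                             # 0.0
sanitize([0.1 + 0.2, 3.0e-15]; precision = 6) # [0.3, 0.0]
```

Applied to every parameter value before it reaches the solver, something along these lines would maximize the number of exact zeroes the solver sees.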
Thoughts @manuelma? I would at least like to test this as I suspect it could have a big impact on some models.
Here is another related issue: spine-tools/SpineOpt.jl#585