I think that this is a remnant of the way we used to do constraints with ADMM. Looking back on it (as is usually the case with old code), I would do it differently if I were to do it again, but the whole thing was built in a few stages.
For Remy, we actually flattened the data into a vector and applied constraints and transforms as matrices, so that the linear algebra interpretation was literal. That meant we needed to build a matrix to apply the constraints, and this originally returned a large sparse matrix that we applied to the model with a real dot product. But that was sssslllloooowwww, so we started creating the matrix as a scipy sparse diagonal matrix instead, by converting everything into an array of diagonals that could be used to initialize the sparse matrix. That's where the majority of this code comes from: converting data and weights into diagonals for a large sparse matrix. It's probably the last piece of the codebase that hasn't been refactored since then (because it always worked).
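For context, here is a minimal, hypothetical sketch of that pattern (not code from the repo): the shapes and names are made up, and it just illustrates building diagonal/banded operators with `scipy.sparse.diags` and applying them to a flattened model.

```python
import numpy as np
from scipy.sparse import diags

# Hypothetical small model and per-pixel weights, flattened so that
# constraints and transforms act as literal matrix products.
model = np.arange(20, dtype=float).reshape(4, 5)
weights = np.random.default_rng(0).random((4, 5))
x = model.ravel()

# Weights as the main diagonal of a sparse matrix: W @ x is just an
# elementwise reweighting of the flattened model.
W = diags(weights.ravel())
assert np.allclose(W @ x, (weights * model).ravel())

# A constraint-style operator built the same way, e.g. a forward
# difference from two diagonals (+1 on the main diagonal, -1 on the
# first superdiagonal).
n = x.size
D = diags([np.ones(n), -np.ones(n - 1)], offsets=[0, 1])
assert np.allclose(D @ x, x - np.append(x[1:], 0.0))
```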
Looking at the code now for the first time in years, none of this makes sense for what we're doing now, even if we use weights. So fixing it seems beyond the scope of this PR, and I'm fine with you just skipping it for now and fixing it as part of a new issue (or assigning it to me to clean up, since it's my mess). But if you do want to fix it, you should probably stop looking at the old code altogether and rewrite it from scratch, since what we're doing now is so much simpler (especially with your new flat weights method).
Originally posted by @fred3m in #168