Trapping 3-way merge #459

Closed
wants to merge 55 commits into from

Conversation

Jacob-Stevens-Haas (Member) commented Jan 11, 2024

Conflict resolution branch for @Jacob-Stevens-Haas @MPeng5 @akaptano.

We'll code review each merge conflict bit by bit, and as we resolve them, I'll push changes to this branch and backport them to a shadow branch to maintain testability.

Alan Kaptanoglu and others added 30 commits January 11, 2022 09:51
…s into single file (and objective function), fixed some test errors, fixed issue where trapping algorithm Qijk piece was slightly off (and reran notebooks).
…lambda machine. Rerunning the 9 mode von karman example is too intensive for my computer.
…so cleaned up the linting in trapping_sr3.py
…aping format needs to be fortran style. Added some of the examples from the dysts database, some of which dont seem to quite be behaving -- need to look closer at these. Importantly, added functionality in the code and in the example for including a bias (constant) term in the dynamical equations.
Mai Peng and others added 16 commits April 19, 2023 13:34
…ing the enstrophy in the trapping theorem and split up the dysts examples into a separate script to make things easier to use.
…ndy_examples/. Now includes four jupyter notebooks, for the local stability, for dysts and lid-cavity flow examples, for using the enstrophy with the trapping theorem, and for the original trapping SINDy paper notebook. Added all the mutual functions to the trapping_utils file.
…e system was added. More to add and polish in the future
…think we have some bugs left. The global version is still imposing the constraints on Q instead of on PQ. I think I fixed this for the local version but now not getting reasonably results, so probably there are bugs in both.
Comment on lines +32 to +38
<<<<<<< HEAD
python-version: ["3.8", "3.10"]
||||||| 5c6e9fd
python-version: [3.7, 3.8]
=======
python-version: [3.7, 3.8, 3.9]
>>>>>>> origin/trapping_extended
Jacob-Stevens-Haas (Member Author):

So what happened here is that

  • I bumped the minimum version to 3.8, as Python 3.7 reached end of life.
  • I bumped the maximum version to the max Python version in "security" maintenance support.
  • I changed the type from numeric to string, because YAML be like that: unquoted 3.10 is read as the float 3.1 (see the sketch below).

I didn't want to test intermediate versions for CI reasons, but maybe it's better to. I could go either way.

This is a case where I'd recommend just going with HEAD. Thoughts?
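
A minimal sketch of the quoting issue (assumes PyYAML is available; this is illustration only, not part of the CI config):

import yaml  # PyYAML

# Unquoted, YAML reads the versions as floats, so 3.10 collapses to 3.1.
print(yaml.safe_load("python-version: [3.8, 3.10]"))
# {'python-version': [3.8, 3.1]}

# Quoted, the versions survive as the intended strings.
print(yaml.safe_load('python-version: ["3.8", "3.10"]'))
# {'python-version': ['3.8', '3.10']}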

Jacob-Stevens-Haas (Member Author) left a comment

OK, finally taking a look at this. I've been through some of the files (everything but examples/ and trapping_sr3.py) and left a few flags to come back to later. I'll take on the other files in another review chunk.

Comment on lines +319 to +344
<<<<<<< HEAD
def quadratic_library():
||||||| 5c6e9fd
def data_quadratic_library():
=======
def data_quadratic_library_with_bias():
    library_functions = [
        lambda x: x,
        lambda x, y: x * y,
        lambda x: x**2,
    ]
    function_names = [
        lambda x: str(x),
        lambda x, y: "{} * {}".format(x, y),
        lambda x: "{}^2".format(x),
    ]
    return CustomLibrary(
        library_functions=library_functions,
        function_names=function_names,
        include_bias=True,
    )


@pytest.fixture
def data_quadratic_library():
>>>>>>> origin/trapping_extended
Jacob-Stevens-Haas (Member Author):

Makes sense to add this 👍 . Couldn't we just use PolynomialLibrary though?
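
For comparison, a sketch of what that might look like (it assumes PolynomialLibrary's degree and include_bias arguments cover the same constant, linear, quadratic, and cross terms as the hand-written CustomLibrary):

import pytest
from pysindy.feature_library import PolynomialLibrary


@pytest.fixture
def data_quadratic_library_with_bias():
    # Same terms as the custom library, without spelling out the lambdas.
    return PolynomialLibrary(degree=2, include_bias=True)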

Comment on lines +848 to +853
<<<<<<< HEAD:test/test_feature_library.py
differentiation_method=FiniteDifference(),
||||||| 5c6e9fd:test/feature_library/test_feature_library.py
differentiation_method=FiniteDifference(drop_endpoints=True),
=======
>>>>>>> origin/trapping_extended:test/feature_library/test_feature_library.py
Jacob-Stevens-Haas (Member Author) commented Jan 18, 2024:

LGTM, equivalent edits, prefer trapping_extended.

Comment on lines +862 to +867
<<<<<<< HEAD:test/test_feature_library.py
differentiation_method=FiniteDifference(),
||||||| 5c6e9fd:test/feature_library/test_feature_library.py
differentiation_method=FiniteDifference(drop_endpoints=True),
=======
>>>>>>> origin/trapping_extended:test/feature_library/test_feature_library.py
Jacob-Stevens-Haas (Member Author) commented Jan 18, 2024:

LGTM, equivalent edits, prefer trapping_extended.

Comment on lines +123 to +146
@pytest.mark.parametrize(
    "optimizer",
    [
        STLSQ(),
        SSR(),
        SSR(criteria="model_residual"),
        SR3(),
        ConstrainedSR3(),
        StableLinearSR3(),
        TrappingSR3(),
        Lasso(fit_intercept=False),
        ElasticNet(fit_intercept=False),
        MIOSR(),
    ],
)
def test_fit_2d(data_derivative_2d, optimizer):
    x, x_dot = data_derivative_2d
    opt = SINDyOptimizer(optimizer, unbias=False)
    model = SINDy(optimizer=opt)
    model.fit(x, x_dot=x_dot)

    check_is_fitted(opt)
    assert opt.complexity >= 0
    assert opt.coef_.shape == (2, 6)
Jacob-Stevens-Haas (Member Author):

I put a lot of work into cutting down test time, so sorry if I'm a bit particular here. For my thoughts on testing, see Kent Beck's test desiderata.

  • Did this test address a specific bug?
  • Tests of optimizers shouldn't need to create a full SINDy model
  • I believe we've removed the SINDyOptimizer (unbias passed directly to model)
  • Bugs in __init__ surface when this module is loaded, because the optimizers are constructed in the parametrize list at import time, which prevents running any other tests. This could be mitigated with lazy fixture loading, construction inside the test (see the sketch below this list), or by following the sklearn convention of a minimalistic __init__.
  • Why test complexity? That the default parameters here find a solution for arbitrary 2D data seems iffy.
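
A minimal sketch of the direction this points in, fitting an optimizer directly on the fixture data instead of building a full SINDy model (the optimizer choice and assertion are illustrative, not the existing test):

from pysindy.optimizers import STLSQ


def test_fit_2d_direct(data_derivative_2d):
    x, x_dot = data_derivative_2d
    # Construct inside the test so a broken __init__ fails only this test,
    # not collection of the whole module.
    opt = STLSQ()
    opt.fit(x, x_dot)
    # Coefficients map the 2D features straight to the 2D derivatives.
    assert opt.coef_.shape == (x_dot.shape[1], x.shape[1])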

For context, here's the full commit that added these lines:

6178df9 2023-03-16  Added a fix to the soft constraints functionality. Issue was the reshaping format needs to be fortran style. Added some of the examples from the dysts database, some of which dont seem to quite be behaving -- need to look closer at these. Importantly, added functionality in the code and in the example for including a bias (constant) term in the dynamical equations.  Alan Kaptanoglu
 examples/17_trapping_inequality.ipynb          | 1369 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------------
 examples/8_trapping_sindy_paper_examples.ipynb |  186 ++++++++++----------
 pysindy/optimizers/trapping_sr3.py             |  101 +++++------
 test/conftest.py                               |   19 +++
 test/optimizers/test_optimizers.py             |  147 +++++++++++-----
 test/test_pysindy.py                           |    4 +-

Comment on lines +240 to +246
<<<<<<< HEAD:test/test_optimizers.py
||||||| 5c6e9fd:test/optimizers/test_optimizers.py
dict(inequality_constraints=True, evolve_w=False),
=======
dict(inequality_constraints=True, evolve_w=False),
dict(inequality_constraints=True),
>>>>>>> origin/trapping_extended:test/optimizers/test_optimizers.py
Jacob-Stevens-Haas (Member Author):

I removed the evolve_w arg; you added the inequality-constraints option, so both commits have some value. I'd recommend approving both: removing the evolve_w param and adding your param without it.
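
A sketch of the resolved entries under that reading (resolved_cases is just an illustrative name; in the test these entries sit inside the parametrize list):

# Merged: evolve_w no longer exists on HEAD, so only the new
# inequality-constraints case from trapping_extended is kept.
resolved_cases = [
    dict(inequality_constraints=True),
]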

Comment on lines +529 to +592
@pytest.mark.parametrize(
    "trapping_sr3_params",
    [
        dict(),
        dict(accel=True),
    ],
)
@pytest.mark.parametrize(
    "params",
    [
        dict(thresholder="l1", threshold=0),
        dict(thresholder="l1", threshold=1e-5),
        dict(
            thresholder="weighted_l1",
            thresholds=np.zeros((3, 10)),
            eta=1e5,
            alpha_m=1e4,
            alpha_A=1e5,
        ),
        dict(thresholder="weighted_l1", thresholds=1e-5 * np.ones((3, 10))),
        dict(thresholder="l2", threshold=0),
        dict(thresholder="l2", threshold=1e-5),
        dict(thresholder="weighted_l2", thresholds=np.zeros((3, 10))),
        dict(thresholder="weighted_l2", thresholds=1e-5 * np.ones((3, 10))),
    ],
)
def test_trapping_sr3_quadratic_library_with_bias(
    params, trapping_sr3_params, data_quadratic_library_with_bias
):
    np.random.seed(100)
    x = np.random.standard_normal((100, 3))

    sindy_library = data_quadratic_library_with_bias
    params.update(trapping_sr3_params)

    opt = TrappingSR3(**params)
    model = SINDy(optimizer=opt, feature_library=sindy_library)
    model.fit(x)
    assert opt.PL_unsym_.shape == (3, 3, 3, 10)
    assert opt.PL_.shape == (3, 3, 3, 10)
    assert opt.PQ_.shape == (3, 3, 3, 3, 10)
    check_is_fitted(model)

    # Rerun with identity constraints
    r = 3
    N = 10
    p = r + r * (r - 1) + int(r * (r - 1) * (r - 2) / 6.0)
    params["constraint_rhs"] = np.zeros(p)
    params["constraint_lhs"] = np.eye(p, r * N)

    opt = TrappingSR3(**params)
    model = SINDy(optimizer=opt, feature_library=sindy_library)
    model.fit(x)
    assert opt.PL_unsym_.shape == (3, 3, 3, 10)
    assert opt.PL_.shape == (3, 3, 3, 10)
    assert opt.PQ_.shape == (3, 3, 3, 3, 10)
    check_is_fitted(model)
    # check if the solve was infeasible first
    if not np.allclose(opt.m_history_[-1], opt.m_history_[0]):
        print(model.coefficients())
        zero_inds = [0, 1, 4, 7, 10, 13, 17, 20, 23, 26]
        assert np.allclose((model.coefficients().flatten())[zero_inds], 0.0, atol=1e-5)


Jacob-Stevens-Haas (Member Author):

🏁🏁🏁🏁 for review later

We should see if this could be merely a parametrization of the test without bias. It uses different data and checks specific indices for zeros, but it's difficult to see why these terms are zero under all of the different weights and arguments in the signature.

Worst comes to worst, we can add it (though we definitely need to time this test, and maybe see if it's possible to shrink the size of the problem).
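
One possible shape for folding the two tests together, as a sketch (it uses pytest's request.getfixturevalue to pull either library fixture by name, and leaves out the shape and zero-coefficient assertions, which differ between the two libraries):

import numpy as np
import pytest
from sklearn.utils.validation import check_is_fitted

from pysindy import SINDy
from pysindy.optimizers import TrappingSR3


@pytest.mark.parametrize(
    "library_fixture",
    ["data_quadratic_library", "data_quadratic_library_with_bias"],
)
def test_trapping_sr3_quadratic_library(library_fixture, request):
    # Resolve the fixture by name so one test body covers both libraries.
    sindy_library = request.getfixturevalue(library_fixture)
    np.random.seed(100)
    x = np.random.standard_normal((100, 3))

    opt = TrappingSR3()
    model = SINDy(optimizer=opt, feature_library=sindy_library)
    model.fit(x)
    check_is_fitted(model)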

Comment on lines +862 to +872
<<<<<<< HEAD:test/test_optimizers.py
    x = x.reshape(-1, 1)
    optimizer.max_iter = 0  # normally prohibited in constructor

||||||| 5c6e9fd:test/optimizers/test_optimizers.py
    x = x.reshape(-1, 1)

=======
    if len(x.shape) == 1:
        x = x.reshape(-1, 1)
>>>>>>> origin/trapping_extended:test/optimizers/test_optimizers.py
Jacob-Stevens-Haas (Member Author):

How would we not know the shape of a test fixture?

Comment on lines +877 to +887
<<<<<<< HEAD:test/test_optimizers.py
@pytest.mark.parametrize(
"optimizer",
[(ConstrainedSR3, {"max_iter": 80}), (TrappingSR3, {"max_iter": 100}), (MIOSR, {})],
)
||||||| 5c6e9fd:test/optimizers/test_optimizers.py
@pytest.mark.parametrize("optimizer", [ConstrainedSR3, TrappingSR3, MIOSR])
=======
# @pytest.mark.parametrize("optimizer", [ConstrainedSR3, TrappingSR3, MIOSR])
@pytest.mark.parametrize("optimizer", [ConstrainedSR3, MIOSR])
>>>>>>> origin/trapping_extended:test/optimizers/test_optimizers.py
Jacob-Stevens-Haas (Member Author):

🏁 🏁 🏁 🏁

tbh this was always a not-great test that does a whole lot of work just to check whether it can accept row-format constraints. And I want to get out of the business of supporting two different constraint formats. So I'm happy to cut Trapping out of this list. But let's see if my change passes, once all is said and done.

Comment on lines +336 to +345
<<<<<<< HEAD
def test_fit_multiple_trajectores(data_multiple_trajectories):
    x, t = data_multiple_trajectories
||||||| 5c6e9fd
def test_fit_multiple_trajectores(data_multiple_trajctories):
    x, t = data_multiple_trajctories
=======
def test_fit_multiple_trajectories(data_multiple_trajctories):
    x, t = data_multiple_trajctories
>>>>>>> origin/trapping_extended
Jacob-Stevens-Haas (Member Author):

Good spelling catch :) yours is the right one here.

dict(thresholder="l1", threshold=2, expected=2.5),
dict(thresholder="weighted_l1", thresholds=0.5 * np.ones((1, 2)), expected=1.0),
dict(thresholder="l2", threshold=2, expected=1.5),
dict(
thresholder="weighted_l2", thresholds=0.5 * np.ones((1, 2)), expected=0.75
),
||||||| 5c6e9fd:test/optimizers/test_optimizers.py
Jacob-Stevens-Haas (Member Author):

🏁 🏁 🏁 🏁

This test almost deserves its own PR, given the different directions we've gone with it. Just enumerating here to come back to later:

  • trapping_extended adds threshold=0 cases
  • I shrank the size of the problem from lorenz to mini
  • You removed the second test case
  • I added testing for specific values equal to zero

But since I shifted the actual cvxpy work to ConstrainedSR3, it may make sense to remove this test, if the point of it is just to check that inequality constraints are added.

Base automatically changed from trapping-helper to master April 14, 2024 02:22
Jacob-Stevens-Haas (Member Author):

Closing this because we're not going this direction anymore. Instead backporting each change bit by bit.

Jacob-Stevens-Haas deleted the trapping-merge branch June 10, 2024 22:56