Combine event loop cleanup with other style/modernization cleanup #4
Conversation
as discussed on ib-api-reloaded/ib_async#62
Python 3.10 as a minimum seems safe (it is four years old right now). Also added more dev deps, including pytest with a concurrency runner. Tests can now be run concurrently with: `poetry run pytest -n auto`
Improves:
- uses only more modern Python versions for tests
- uses poetry for installs and running instead of pip directly
Added more blank lines between different logical sections for easier reading. Cleaned up some other loop creation to be inline when possible. Replaced `asyncio.ensure_future()` in ops/transform.py with `asyncio.create_task()` for more modern APIs, and all the tests still pass.
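For reference, the two calls differ very little at the call site; here is a minimal sketch (the `work` coroutine is illustrative, not part of eventkit):

```python
import asyncio

async def work():
    # Illustrative coroutine, not part of eventkit.
    return 42

async def main():
    # Older API: accepts any awaitable and wraps it in a Task/Future.
    fut = asyncio.ensure_future(work())
    # Modern API (Python 3.7+): explicit, coroutine-only, requires a running loop.
    task = asyncio.create_task(work())
    return await fut + await task

print(asyncio.run(main()))  # 84
```

Inside a running loop the two behave the same for coroutines, which is why the swap is safe here.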
This all used to work in older Python versions, but Python 3.12 started issuing new warnings and errors unless event loops are managed with manual steps instead of automatic ones. Fixes #2
Interestingly, ruff and µsort have different opinions on how headers should be sorted. Guess one has to win out over time (µsort prefers to have blank lines between regular "import" lines and "from X import Y" lines).
Fixes:
- Incorporates doc deps into poetry (`poetry install --with dev,docs`)
- Removes manual version.py parsing to get the version number
- Fixes "no language" error in docs spec
- Fixes doc syntax problem in docstring

Docs can now be built using: `poetry run sphinx-build -b html docs html`
Version this, version that... it worked, but we were overwriting things incorrectly even though it didn't impact the final output.
FYI
```shell
/dev/eventkit on next*
$ poetry run ruff check --select "NPY201" eventkit/
All checks passed!
```

I think there is no need to update ruff rules as described here.
@mattsta I was planning to add GitHub Actions to automate the PyPI release process as described here. I see that you use poetry, which can be used to publish to PyPI as well, so I wanted to double check this with you. With GitHub Actions we can set a rule to publish any tag that follows a certain pattern. Let me know your thoughts.
I'm good to go.
Originally slots were using just nested arrays indexed by position, but we can do better now. This project was originally written in Python 3.6, before many more modern things like dataclasses existed, so now we can use `dataclass(slots=True)` for automatic slots generation as well as easily use dataclasses for inner-encapsulated data management.
Thanks! Looks good all around. Nice adding an extra test too. The packaging via actions should be fine if you want to manage the auth tokens inside the actions environment. I haven't looked into setting all those configs up before. How about one more update before a version bump... I worked a while this week to modernize the data layout away from positional nested arrays into more self-managing objects. All tests still pass for me, but give it a look to see if anything looks out of place or if any naming should be improved anywhere. 🎯
On second look at my diff, I noticed something I left in. There's a trick with dataclasses and automatic slots and weakrefs where they don't work nicely together. The only difference from the original is the weakref handling. It's okay if we want to revert the change.
@mattsta, I'm reviewing your code. I wrote a few tests for the Slot and Slots classes. I will post them.
- dependabot
- build and test
- publish
@mattsta I added 2 new files for GitHub Actions, "build and test" and "pypi publish". I added pypy to the matrix, but it's failing. From what I can see, pypy seems to handle the garbage collector differently, so the tests that delete the event handler fail their assertions. It's not a big problem; eventkit and ib_async work with pypy (I have tested), but there might be surprises. I also added dependabot, which will keep the GitHub Actions up to date; please enable it in Settings > Code security > Dependabot (I don't have access). I'm losing my faith in PEP 514; there is a huge backlog and no progress whatsoever on any request. The only issues that are closed have been closed by the author.
oh, great idea. Nice to keep pypy verified too. I'll test more locally to see if I can get it to start working again. I did run the tests locally with pypy 3.12 recently and it worked, but maybe that was before some other recent tests/changes.
will do. thanks for playing yaml warrior with all those things.
It was nice to try at least? Honestly, I think we could also do a bit of a feature reduction for parts not necessary for basic event notifications. I'm not sure anybody has ever actually used things like the numpy-backed parts. I think we could just declare that the core usage here is only adding event handlers to named events, then running event handlers when named events receive event updates. If somebody wants to create a graph of numpy processing elements, it can easily be composed from existing built-in operations without needing to provide those internally (or people can subclass/copy locally where needed too).
Found some pypy details with weakref problems:
https://doc.pypy.org/en/latest/cpython_differences.html

The failing tests work if I manually inject a GC between the delete and the next check (also, this error has always been there going back to the last official release, so it is unrelated to any of our recent changes/refactors).

Fails:

```python
del obj2
self.assertEqual(len(event), 1)
```

Works:

```python
del obj2
gc.collect()
self.assertEqual(len(event), 1)
```
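A minimal standalone sketch of why the explicit `gc.collect()` matters (the `Handler` class is illustrative): CPython frees objects immediately via reference counting, but PyPy's GC only reclaims them at a later collection, so weakref-based handler cleanup appears "delayed" there:

```python
import gc
import weakref

class Handler:
    # Stand-in for an object holding an event callback.
    def cb(self):
        pass

h = Handler()
r = weakref.ref(h)
del h
# On CPython, h is already gone here via refcounting; on PyPy the
# weakref may still be live until a collection actually runs.
gc.collect()
assert r() is None
```

Forcing a collection before the assertion makes the test deterministic on both runtimes.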
I think this is just enabled by the action being committed. If I go to enable it in the settings, all it says is "commit this to enable the feature":

```yaml
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
  - package-ecosystem: "" # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "weekly"
```
Ok, so I went down a rabbit hole. I have solved "all" (knock on wood) the warnings in the tests, especially the warnings stating that the event loop is closed. I didn't like the workarounds. There is a mix of things causing the warnings; the main one is pytest-xdist, plus a bad combination of the unittest module and asyncio. So to summarize my changes:
All tests pass now, but I will check that ib_async still works, since I have some code that uses it.
- `functools.cache` is out
- pytest-async plugin is in
- I had to rewrite all tests so they don't use unittest, so now all tests are on pytest
- All tests pass, except that running them with pytest-xdist (`poetry run pytest -vs -n auto`) gives a warning. It's a warning, not an error:
  > task: <Task pending name='Task-18' coro=<Aiterate._looper() done, defined at ~/Documents/dev/eventkit/eventkit/ops/create.py:42> wait_for=>
- Updated `Event.repeat`, as the parameters were in the wrong position; it should be OK now.
- Updated `Aiterate` and `Wait` to use tasks. Removed the `__del__` method from `Aiterate` as it causes warnings; we can put it back and live with warnings in tests, which is not bad.
- Right now everything is task-based, except the timing module, which uses `loop.call_soon` and `loop.call_at`.
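For contrast, the callback-style scheduling still used by the timing module looks roughly like this (a sketch, not the module's actual code):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    done = loop.create_future()
    # Callback-style timing: run done.set_result at an absolute loop time,
    # roughly what the timing module does with call_at/call_soon.
    loop.call_at(loop.time() + 0.01, done.set_result, "timer fired")
    return await done

print(asyncio.run(main()))  # timer fired
```

A task-based equivalent would `await asyncio.sleep(delay)` inside a task instead of registering a callback.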
Some of this may be a bit too much now? Overall, this is a tiny abandoned project whose only real goal is to "trigger callback." If refactor attempts are introducing behavior changes and warnings and requiring workarounds, it's probably not worth it.
Amusingly, I also did the same thing and saw it just ran slower overall. It appears all the original tests are now broken because of the event-loop logic changes, so technically all the tests don't work, and we "fixed" broken tests by rewriting the tests instead of maintaining original code compatibility?
New warnings introduced where they didn't exist before seems not entirely clean.
Nice catch!
This is another one of those "it worked before, but now it doesn't work after all the changes" situations, so it doesn't work? I agree moving to tasks is the right direction.
It does seem like the right thing to do, right? I had also tried that a couple weeks ago and found it just doesn't work. I ran the original tests and they all fail because, I think, they used to work by placing tasks on the event loop before it was running, and the tasks then started after the event loop started. Addendum: also, yes, the original pattern of "allow attaching futures to a non-running event loop so they run when the loop starts" is a non-standard practice, but it's how this thing has worked since it was created, so do we want to break it now? Another alternative: if our refactors break existing features, we could just remove the existing non-core features too. Simple as. I would be surprised if anybody in the world actually used any of the "advanced operator" or time delay or numpy APIs at all. I guess one good example: if we want to keep these async/create_task update changes and break backwards usage, we could also just remove the
Hi,
The original tests are broken? No, they are not broken.
It does work; all tests pass. The warning is caused by pytest-xdist closing the loop before everything is done. I have tested it with ib_async and all works fine. The eventkit sample notebook works fine; everything is working fine. There is no impact in changing `ensure_future` to `create_task` (which is the recommended way going forward). With these changes we have tests working in a modern way (no need for workarounds for tests to pass, like the function cache, or using sync methods to test async).
Well, yes, overall the changes are great! Improvements to an abandoned project for stability and modernization are always a good thing. But we have to acknowledge the new behavior. Originally the patterns allowed were:
or:
These updates change the pattern so it no longer works with the original usage of (create events -> start loop); now everything must be (start loop -> create events). That's why all the original tests break. I think the original use case was just the way he liked to write things in notebooks and test them later all at once, but the usage of the library has now changed to remove an entire category of use cases which worked before. Does it matter? Probably not. But we broke the interface, all the original tests broke, then we changed the tests to match the new interface instead of maintaining test compatibility over time. Since the original way doesn't work anymore, we can delete the parts that relied on it.
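The two orderings discussed above can be sketched roughly as follows (the `emit` coroutine is a stand-in, not eventkit API):

```python
import asyncio

results = []

async def emit(value):
    # Stand-in for an event handler firing; not the real eventkit API.
    results.append(value)

# Original pattern: schedule work on a loop that is not yet running,
# then start the loop afterwards (create events -> start loop).
loop = asyncio.new_event_loop()
fut = asyncio.ensure_future(emit("before-start"), loop=loop)
loop.run_until_complete(fut)
loop.close()

# New pattern: the loop must already be running before work is created
# (start loop -> create events); create_task enforces this.
async def main():
    await asyncio.create_task(emit("after-start"))

asyncio.run(main())
print(results)  # ['before-start', 'after-start']
```

Calling `asyncio.create_task()` with no running loop raises `RuntimeError`, which is exactly the category of usage the refactor removes.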
Indeed
I missed this scenario; there is no test for it, because it was basically everywhere until I replaced all of the calls. I've done an analysis and the errors are coming from the "new" task-based code. I'm not sure if it's just a problem with the tests and how the event loop is started/stopped during testing, or a deeper problem. There is a note on `asyncio.get_running_loop`, and one in `asyncio.get_event_loop`. So the way I see it, the options are:
Personally, I'd rather live with the warning. If you are OK with it, let me know and I will submit a commit with the changes. And please do your own analysis; I might have missed something. Personally, I never went this deep into the event loop.
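The "legacy get event loop" fallback discussed here can be sketched as below (`get_loop` is a hypothetical helper name, not eventkit's actual code):

```python
import asyncio

def get_loop():
    """Hypothetical helper: prefer the running loop, else create one (legacy style)."""
    try:
        return asyncio.get_running_loop()
    except RuntimeError:
        # Legacy behaviour: make a loop available before it is running,
        # which asyncio.get_event_loop() used to do implicitly (and now
        # warns about / forbids in newer Python versions).
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop

loop = get_loop()
print(loop.is_running())  # False: freshly created, not yet running
loop.close()
```

This keeps the old "create objects before the loop starts" usage working while staying off the deprecated `get_event_loop()` path.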
- new test for exception handling during emit/`Slots.__call__`
- enable pytest log on console
Fix and unit test for #5; new setting on
Rolled back changes, with "legacy get event loop".
Previous nice refactorings/structure/cleanups provided by @gnzsnz, plus some additional style and technical-debt cleanups found along the way.

Easiest way of running tests is now:

```shell
poetry install --with dev; poetry run pytest -vs -n auto
```

(but some of the tests have timing conflicts, so maybe 1 out of 10 or 20 test runs shows false errors when running concurrently... we can also just run `poetry run pytest -vs` for regular sequential running)

Docs building is now:

```shell
poetry install --with docs; poetry run sphinx-build -b html docs html
```

(we probably don't need the generated docs in the repo itself; we could set up something like the `ib_async` CI docs-to-website-on-commit config here too)

Status: all tests are currently passing on 3.10 and 3.12, and the rest works for me 🤷♂️