
Lazy converter imports #1094

Open
calad0i wants to merge 6 commits into main
Conversation

calad0i (Contributor) commented Oct 26, 2024

Description

Make all frontend-specific dependencies (tf, (q)keras, torch, onnx) optional and imported lazily.
All handlers are now registered regardless of whether the corresponding frontend-specific package is present; the package is only imported when the corresponding handler is actually used. This removes hls4ml's hard dependency on qkeras and tensorflow (e.g., when converting a model that is not a tf.keras model, tensorflow no longer needs to be installed).
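
As an illustration of the pattern described above (not the exact diff in this PR), the heavy import moves from module level into the handler body, so handler registration always succeeds and the frontend package is only needed at conversion time. The handler name and signature below are hypothetical:

    # Hypothetical sketch of the lazy-import pattern: no tensorflow import at
    # module level, so this module can always be imported and registered.

    def parse_dense_layer(keras_layer, input_shapes):
        # The frontend-specific package is imported only when the handler runs,
        # i.e. only when a (q)keras model is actually being converted.
        import tensorflow as tf  # noqa: F401  (only required at conversion time)

        layer = {
            'class_name': 'Dense',
            'n_in': input_shapes[0][-1],
            'n_out': keras_layer['config']['units'],
        }
        return layer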

Type of change

  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

Tests

Test Configuration:

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@calad0i calad0i added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Oct 26, 2024
@calad0i calad0i changed the title from [DRAFT] Lazy converter imports to Lazy converter imports Oct 26, 2024
@calad0i calad0i mentioned this pull request Nov 8, 2024
@jmitrevs jmitrevs added this to the v1.1.0 milestone Nov 8, 2024
@calad0i calad0i mentioned this pull request Nov 15, 2024
@vloncar vloncar mentioned this pull request Nov 22, 2024
vloncar (Contributor) commented Nov 22, 2024

I agree with the spirit of the change but not the execution. Optional dependencies are the way to go, but not unchecked lazy imports. Having imports randomly fail in the middle of some function and leaving the user to deal with the problem themselves is not friendly.

In the original proposal for a modular hls4ml, frontends and backends would all be installable extras. In practice, this would mean a basic install of hls4ml only provides the core classes that can be used to save/load hls4ml models (a feature currently in development) and one backend (which has to be enforced somehow in setup.py). We didn't manage it before, but we could try again. For example, the converters could be wired in such a way that if hls4ml was not installed with pip install hls4ml[torch], there would be no torch converter, and the function convert_from_pytorch_model() would print that it is not available and how to install it to make it available. Same for ONNX and Keras, and same for backends. So if a user wants an installation for converting Keras models to Vitis firmware, the installation command would be pip install hls4ml[keras,vitis]. If they try to convert a pytorch model for the oneAPI backend on that installation, they should get a message that this is not what their current installation supports.
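
A minimal sketch of the guarded-converter idea, assuming extras named like hls4ml[torch]; the helper _require_extra is made up for illustration, while convert_from_pytorch_model is the existing public entry point:

    import importlib.util


    def _require_extra(module: str, extra: str):
        """Raise an informative error if an optional frontend/backend package is missing."""
        if importlib.util.find_spec(module) is None:
            raise ImportError(
                f'The "{extra}" extra is not installed. '
                f'Run "pip install hls4ml[{extra}]" to enable this converter.'
            )


    def convert_from_pytorch_model(model, **kwargs):
        _require_extra('torch', 'torch')
        import torch  # safe now; only imported when the torch frontend is requested

        ...  # proceed with the actual conversion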

To achieve that, we should make sure that our internals don't depend on big external libraries, in addition to the reordering of imports that this PR does. To that end, we should focus on removing qkeras as an internal dependency of the types and of the profiling functions. Since we're already discussing ditching official QKeras for a drop-in replacement library, we can replace the internal dependency with chang's reusable quantizers, which mimic qkeras anyway. This new replacement library (yet to be named) can then be built on top of the quantizers library. It would contain the definitions of the layers (QDense, QConv1D/2D, etc.) that rely on our quantizers. Our keras frontend could then depend on this library, if even needed (it seems we only really need qkeras' quantizers for parsing, not the layer definitions).

calad0i (Contributor, Author) commented Nov 22, 2024

Having imports randomly fail in the middle of some function and leaving the user to deal with the problem themselves is not friendly

I was thinking a missing import is rather self-explanatory. Here are a few proposals to throw an informative error without a large restructuring, for the short to mid term. Restructuring the whole project into multiple frontends and backends sounds like overkill for this particular issue at the moment.

  • For the frontend issue, I guess this is only relevant to the onnx and (q)keras frontends when the user passes a saved model path in the config dict to the framework, as otherwise passing the model object to hls4ml implies the existence of the corresponding package (e.g., torch). In this case, I think we can just wrap the *_to_hls functions in a try/except and map the corresponding ImportError to a custom error message prompting the user to install hls4ml[***] (see the sketch after this list). A possible import error could still be thrown in the future save/load part, though (likely only for qkeras, see below).
  • Dirtier but universal: wrap importlib.import_module, call it, and throw a custom error for the lazy imports. Type hinting will probably be completely broken when developing the corresponding modules, but at runtime nothing would change...
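
A rough sketch of both options; requires_extra and lazy_import are hypothetical names used only for illustration:

    import functools
    import importlib


    # Option 1: map an ImportError raised inside a *_to_hls function to an
    # actionable message telling the user which extra to install.
    def requires_extra(extra: str):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    return func(*args, **kwargs)
                except ImportError as e:
                    raise ImportError(
                        f'{func.__name__} needs an optional dependency ({e.name}). '
                        f'Install it with "pip install hls4ml[{extra}]".'
                    ) from e
            return wrapper
        return decorator


    # Option 2: a thin wrapper around importlib.import_module that raises the
    # same informative error at the point of the lazy import.
    def lazy_import(module: str, extra: str):
        try:
            return importlib.import_module(module)
        except ImportError as e:
            raise ImportError(
                f'"{module}" is required here but not installed. '
                f'Install it with "pip install hls4ml[{extra}]".'
            ) from e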

For types & quantizers, the qkeras dependency appears to be only interfacing with the qkeras models, but they persist the whole lifecycle of the hls models. In my opinion, the quantizers should end their lifecycle at the handler level for quantized models, but maybe there is some reason at the time of design.

The standalone quantizers library can be used in place of the qkeras quantizers in most cases (see the sketch after the list), with two main caveats:

  • The handling of alpha is different (auto_po2, for example), though I guess no one should really be using those.
  • Exponent types are not supported in quantizers. Setting m=0 in the minifloat quantizer could mimic them, but probably with slightly different rounding.
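
For reference, a minimal numpy sketch of the plain fixed-point case (no alpha scaling, no exponent type) that such a drop-in quantizer has to reproduce; the function name, rounding, and saturation details are illustrative, not the actual API of either library:

    import numpy as np


    def fixed_point_quantize(x, bits=8, integer=0, keep_negative=True):
        """Round-to-nearest, saturating fixed-point quantizer.

        Illustrates the behaviour a drop-in replacement for
        qkeras.quantized_bits with alpha=1 has to match; the exact rounding
        and overflow modes of the real libraries may differ.
        """
        frac_bits = bits - integer - (1 if keep_negative else 0)
        step = 2.0 ** -frac_bits
        q = np.round(np.asarray(x, dtype=np.float64) / step) * step
        lo = -(2.0 ** integer) if keep_negative else 0.0
        hi = 2.0 ** integer - step
        return np.clip(q, lo, hi)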
