Disable graph check while tracing #1103

Merged · 4 commits merged on Jan 14, 2025
Changes from 2 commits
9 changes: 8 additions & 1 deletion optimum/exporters/openvino/convert.py
@@ -23,6 +23,7 @@
from transformers.generation import GenerationMixin
from transformers.utils import is_tf_available, is_torch_available

from openvino.frontend.pytorch.ts_decoder import TorchScriptPythonDecoder
Member

I would prefer if we used something like TorchScriptPythonWrapper or TorchScriptPythonModel; the decoder part doesn't make sense, especially since we are using this wrapper on top of diffusers text_encoder/unet/transformer etc.
Will this wrapper ever change and do decoder-specific stuff? If not, why does it have Decoder in its name?

Contributor Author
@mvafin mvafin Jan 10, 2025

This is a "decoder" in the sense that it is used to decode the PyTorch model; OpenVINO uses it as an abstraction to get the graph from the model. It is an internal class, but it can be used to provide custom parameters for tracing like this.
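For illustration, a minimal sketch of the decoder used in isolation, assuming a toy torch.nn.Linear module and a single-tensor example input (the PR itself wires this up inside export_pytorch with the model's dummy inputs, see the diff below):

import torch
from openvino.frontend.pytorch.ts_decoder import TorchScriptPythonDecoder
from openvino.tools.ovc import convert_model

# Toy stand-in for a diffusers text_encoder/unet/transformer.
model = torch.nn.Linear(4, 2).eval()
example_input = torch.randn(1, 4)

# The "decoder" wraps the PyTorch module so OpenVINO can read its graph;
# trace_kwargs lets the caller pass extra arguments to the tracing step,
# here skipping the post-trace consistency check.
decoder = TorchScriptPythonDecoder(
    model, example_input=example_input, trace_kwargs={"check_trace": False}
)

# convert_model accepts the decoder in place of the raw module, as in the diff below.
ov_model = convert_model(decoder, example_input=example_input)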

Collaborator

This naming has no relation to decoder models; Decoder is an internal component of OpenVINO responsible for decoding the original framework model during conversion. Until now we have only used it internally inside convert_model; here it is created explicitly to resolve advanced conversion options.

Contributor Author

Well, we can rename it on import. Would TorchScriptPythonWrapper be a better name?

Member
@IlyasMoutawwakil IlyasMoutawwakil Jan 12, 2025

Oh okay, it's a confusing name, but it's fine with me if it's part of the OpenVINO naming conventions.

Collaborator
@AlexKoff88 AlexKoff88 Jan 13, 2025

@IlyasMoutawwakil, can we merge this then?

from openvino.runtime import Model, save_model
from openvino.runtime.exceptions import OVTypeError
from openvino.tools.ovc import convert_model
@@ -44,6 +45,7 @@
_transformers_version,
compare_versions,
is_openvino_tokenizers_version,
is_openvino_version,
is_tokenizers_version,
is_transformers_version,
)
@@ -427,15 +429,20 @@ def ts_patched_forward(*args, **kwargs):

patcher.patched_forward = ts_patched_forward

decoder_kwargs = {}
if library_name == "diffusers" and is_openvino_version(">=", "2025.0"):
decoder_kwargs["trace_kwargs"] = {"check_trace": False}

with patcher:
if patch_16bit_model:
from openvino.frontend.pytorch.patch_model import __make_16bit_traceable

__make_16bit_traceable(model)
check_dummy_inputs_are_allowed(model, dummy_inputs)
input_info = _get_input_info(model, config, dummy_inputs)
decoder = TorchScriptPythonDecoder(model, example_input=dummy_inputs, **decoder_kwargs)
ov_model = convert_model(
model,
decoder,
example_input=dummy_inputs,
input=[(item.shape, item.type) for item in input_info],
)
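For context (not part of this diff): check_trace is a torch.jit.trace argument. When True (the default), tracing re-runs the traced graph on the example inputs and compares the outputs against the eager module; check_trace=False skips that verification, which can be costly or produce spurious mismatches for large or stochastic models. A minimal standalone illustration, assuming a toy module, of the flag that the trace_kwargs above presumably forwards to torch.jit.trace:

import torch

# Toy stand-in for a diffusers submodel.
model = torch.nn.Linear(4, 2).eval()
example = torch.randn(1, 4)

# check_trace=False skips the post-trace re-run and output comparison.
traced = torch.jit.trace(model, example, check_trace=False)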