This repository has been archived by the owner on Jun 18, 2024. It is now read-only.
Is there a way to easily generate the embeddings of the test data from a fine-tuned model?

Here's what I've tried: I followed the tutorial on MRPC with these flags (all default except predict=true and export_dir=dir). This gave me a saved_model.pb file, which I wanted to load to generate embeddings for the test data for some error analysis. I tried running something similar to this code:

This worked with the base model from TensorFlow Hub, but when I replaced the URL with the location of my saved model folder (which also included assets/ and variables/), I got the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-38-6c11f4769dd0> in <module>()
          5     signature='tokens',
          6     signature_outputs_as_dict=True)
    ----> 7 encoder_inputs = preprocessor(text_input)
          8 encoder = hub.KerasLayer(
          9     "https://tfhub.dev/tensorflow/albert_en_base/3",

    1 frames
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
        690     except Exception as e:  # pylint:disable=broad-except
        691       if hasattr(e, 'ag_error_metadata'):
    --> 692         raise e.ag_error_metadata.to_exception(e)
        693       else:
        694         raise

    TypeError: Exception encountered when calling layer "keras_layer_7" (type KerasLayer).

    in user code:

        File "/usr/local/lib/python3.7/dist-packages/tensorflow_hub/keras_layer.py", line 229, in call  *
            result = f()

        TypeError: pruned(input_ids, input_mask, segment_ids) takes 0 positional arguments, got 1.

    Call arguments received:
      • inputs=tf.Tensor(shape=(None,), dtype=string)
      • training=False

This may come down to my limited knowledge of TensorFlow, but the albert code is giving me a saved_model which seems to be of a different format than other saved_models I've used. Can the saved model generated by the albert classifier be used in this way?

kelseyneis changed the title from "How to get the prediction embeddings from fine-tuned model" to "How to get the test embeddings from output of fine-tuned model (tutorial)" on Mar 7, 2022.
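For what it's worth, the key line of the traceback — `pruned(input_ids, input_mask, segment_ids) takes 0 positional arguments, got 1` — behaves like a plain Python function with keyword-only parameters: the exported signature expects three named tensors, while `hub.KerasLayer` hands it a single positional string tensor. A TensorFlow-free sketch of the same failure mode (the function name and parameter names are taken from the traceback; everything else is illustrative):

```python
# Stand-in for the exported "pruned" signature: like a SavedModel serving
# signature, it only accepts its inputs as keyword arguments.
def pruned(*, input_ids, input_mask, segment_ids):
    return {"input_ids": input_ids, "input_mask": input_mask,
            "segment_ids": segment_ids}

# What hub.KerasLayer effectively does -- pass one positional tensor:
try:
    pruned("a batch of strings")
except TypeError as err:
    print(type(err).__name__)  # TypeError

# What the signature actually expects -- the three named inputs:
out = pruned(input_ids=[101, 102], input_mask=[1, 1], segment_ids=[0, 0])
print(sorted(out))  # ['input_ids', 'input_mask', 'segment_ids']
```

This suggests the mismatch is in how the layer calls the signature, not in the saved weights themselves.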
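One way around the mismatch (a sketch, not a confirmed fix for this repo) is to skip `hub.KerasLayer` entirely, load the export directory with `tf.saved_model.load`, and call the signature with keyword tensors. The toy module below stands in for the real fine-tuned export; the signature name `tokens` and the three input names come from the code and traceback above, while the shapes and the "embedding" computation are made up for illustration:

```python
import tempfile

import tensorflow as tf

# Hypothetical stand-in for the fine-tuned export: a module whose "tokens"
# signature takes the three named int32 inputs seen in the traceback.
class ToyEncoder(tf.Module):
    @tf.function(input_signature=[
        tf.TensorSpec([None, 4], tf.int32, name="input_ids"),
        tf.TensorSpec([None, 4], tf.int32, name="input_mask"),
        tf.TensorSpec([None, 4], tf.int32, name="segment_ids"),
    ])
    def tokens(self, input_ids, input_mask, segment_ids):
        # Pretend "embedding": just cast the ids to float.
        return {"embeddings": tf.cast(input_ids, tf.float32)}

export_dir = tempfile.mkdtemp()
module = ToyEncoder()
tf.saved_model.save(module, export_dir, signatures={"tokens": module.tokens})

# Load the directory directly and call the signature with keyword tensors;
# serving signatures reject positional arguments, which is the TypeError above.
loaded = tf.saved_model.load(export_dir)
sig = loaded.signatures["tokens"]
out = sig(
    input_ids=tf.ones([1, 4], tf.int32),
    input_mask=tf.ones([1, 4], tf.int32),
    segment_ids=tf.zeros([1, 4], tf.int32),
)
print(out["embeddings"].shape)  # (1, 4)
```

The catch is that you would then need to tokenize the test data into `input_ids` / `input_mask` / `segment_ids` yourself, since the exported classifier signature takes pre-tokenized inputs rather than raw strings.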