tf-exporter
lets you create a single artifact for serving Transformers predictions, without requiring distinct steps for tokenization and model prediction.
python -m venv venv # Recommended: Create a virtual environment
source venv/bin/activate # Activate it
# Install the code from git
python -m pip install git+https://github.com/balikasg/tf-exporter.git
From Python, you can now run:
from tf_exporter import ModelConverter
converter = ModelConverter(
    model_name_or_path='sentence-transformers/nq-distilbert-base-v1',
    output_dir='/tmp/tf-model',
)
converter.convert_pytorch_to_tensorflow(input_test='This is a test')
# Persists tf model files at `/tmp/tf-model`
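Once exported, the artifact can in principle be loaded and called directly with raw strings, since tokenization is baked into the graph. A minimal sketch, assuming TensorFlow is installed; the exact input signature of the exported graph may differ, so treat the call below as illustrative rather than the definitive API:

```python
import tensorflow as tf

# Load the single-graph artifact persisted by ModelConverter above
model = tf.saved_model.load('/tmp/tf-model')

# The single graph accepts raw strings, so no separate tokenization step
# is needed before prediction (input signature is an assumption here)
predictions = model(tf.constant(['This is a test']))
```

Because tokenizer and model travel together, the serving side never has to keep the two pieces in sync.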
You can also run the conversion from the command line:
# Convert your model and save under models/ (default):
python tf_exporter/convert_to_single_graph.py --model-name sentence-transformers/nq-distilbert-base-v1
ls models/tf_model # Lists the persisted files
# assets keras_metadata.pb saved_model.pb variables
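To check what the exported graph expects as input and produces as output, you can inspect it with TensorFlow's `saved_model_cli` tool (installed alongside TensorFlow); the directory path matches the default output location above:

```shell
# Show all signatures, inputs, and outputs of the exported SavedModel
saved_model_cli show --dir models/tf_model --all
```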