Is it possible to export a custom MLX model to run anywhere other than Apple Silicon? #1730
FerusAndBeyond asked this question in Q&A
No, not really. You can build and run MLX on a Linux server and it should work (CPU only, no GPU support). There is no functionality for exporting to other frameworks such as ONNX or PyTorch. Is exporting to ONNX what you are looking for, or is it something else?
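Until such functionality exists, one manual route is to serialize the trained parameters to a plain NumPy `.npz` archive, which any framework that reads NumPy arrays can consume. A minimal sketch, where `MyModel` is a hypothetical stand-in for the custom model:

```python
import mlx.core as mx
import mlx.nn as nn
from mlx.utils import tree_flatten

class MyModel(nn.Module):
    """Hypothetical custom model; substitute your own."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 4)

    def __call__(self, x):
        return self.fc(x)

model = MyModel()
# ... train as usual on Apple Silicon ...

# tree_flatten turns the nested parameter dict into a flat list of
# (name, array) pairs, e.g. [("fc.weight", ...), ("fc.bias", ...)],
# which savez writes out as a standard NumPy .npz archive.
mx.savez("weights.npz", **dict(tree_flatten(model.parameters())))
```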
1 reply
I'm interested in using multiple physical devices to train an ML model with MLX, then exporting it to a format I can run anywhere, e.g. on a Linux server. This is not a fixed model from Hugging Face but a custom model trained from scratch on custom data.
Is that possible? I couldn't find anything at all, just discussions about ONNX.
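For the server side of that manual route, a sketch of loading the saved weights into another framework (PyTorch shown here; the layer names, shapes, and `weights.npz` file carry over from the hypothetical example above and must mirror the MLX model):

```python
import numpy as np
import torch

# .npz files written by mx.savez are standard NumPy archives.
weights = np.load("weights.npz")

# Rebuild the same architecture and copy the weights in by name.
# Both MLX and PyTorch store Linear weights as (out_features, in_features).
fc = torch.nn.Linear(16, 4)
with torch.no_grad():
    fc.weight.copy_(torch.from_numpy(weights["fc.weight"]))
    fc.bias.copy_(torch.from_numpy(weights["fc.bias"]))

x = torch.randn(1, 16)
print(fc(x))  # runs anywhere PyTorch runs; no Apple Silicon needed
```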