Integration with tract #104
base: master
Conversation
Ported GPUTensor handling code
```toml
version = "0.0.1"
license = "MIT/Apache-2.0"
authors = ["Dmitry Sharshakov <[email protected]>"]
description = "An integration crate for using GPU compiler from wonnx to accelerate tract model inference"
repository = "https://github.com/snipsco/tract"
keywords = ["TensorFlow", "NeuralNetworks"]
categories = ["science"]
```
These should be done properly for this repository.
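For illustration, here is a sketch of what corrected metadata could look like; the crate name, description, keywords, and exact repository URL below are assumptions, not values taken from the PR:

```toml
# Sketch of corrected [package] metadata (assumed values, for illustration only).
# The point is that repository, description, and keywords should describe this
# crate and this repository rather than being carried over from the tract template.
[package]
name = "wonnx-tract"            # assumed crate name
version = "0.0.1"
license = "MIT OR Apache-2.0"   # SPDX-style spelling of the dual license
authors = ["Dmitry Sharshakov <[email protected]>"]
description = "Run tract-loaded and tract-optimized models on the GPU via wonnx"
repository = "https://github.com/webonnx/wonnx"   # this repository, not snipsco/tract
keywords = ["onnx", "gpu", "wgpu", "inference"]   # suggested replacements
categories = ["science"]
```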
@haixuanTao now I definitely need to have a chat or some other quicker way of getting in touch with you. We have a lot to discuss about this PR.
Sure. Let's find a schedule: https://calendly.com/tao-xavier/60min
Set up for 2 PM UTC+3. Are you comfortable talking via Discord, or do you prefer texting?
Force-pushed from f7f4c5f to e8df736
Migrated code from sonos/tract#697; see the discussion there for reasoning. The goal is to combine powers: leverage both the model loading and optimization capabilities of tract-core and the shader compilation/execution implementations from wonnx (see the sketch after the task list).

Tasks (partially from sonos/tract#697):

- GPUTensor
- tensorflow-mobilenet-v2
- tract repo
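As a rough illustration of the "combine powers" goal, here is a minimal sketch of the tract side of the flow, assuming a recent tract-onnx API (~0.21) and a locally downloaded mobilenet-v2 ONNX file (the path and input shape are assumptions). Today the final step runs on the CPU; the GPUTensor handling ported in this PR is what would let the optimized graph execute through wonnx-compiled shaders instead.

```rust
// Minimal sketch of the tract-core side: load and optimize an ONNX model.
// Assumes tract-onnx ~0.21; "mobilenetv2-7.onnx" is a placeholder path.
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    let model = tract_onnx::onnx()
        // Load the ONNX graph.
        .model_for_path("mobilenetv2-7.onnx")?
        // Pin the input shape so the optimizer can specialize the graph.
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        // tract-core graph optimization.
        .into_optimized()?
        // Freeze inputs/outputs into an executable plan.
        .into_runnable()?;

    // Dummy NCHW input; a real client would feed a preprocessed image here.
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();

    // CPU execution today; the integration crate aims to dispatch this step
    // to wonnx shaders via GPUTensor instead.
    let outputs = model.run(tvec!(input.into()))?;
    println!("output shape: {:?}", outputs[0].shape());
    Ok(())
}
```

The design choice this implies: model loading and graph optimization stay entirely in tract, and only the execution backend is swapped, so the integration crate never has to reimplement ONNX import or optimization passes.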