Tests migration to tch #195
base: master
Conversation
@@ -838,6 +839,17 @@ impl Tensor {
        ))
    }

    #[cfg(feature = "testing")]
    pub fn to_tch<T: TensorDType + tch::kind::Element>(&self) -> anyhow::Result<tch::Tensor>
Should we take `self` and consume here instead of `deep_clone`?
We usually pass a `&Tensor` in the tests to get the ground truth, so I don't think we can consume `self` here. Although we may not need the `deep_clone`, as we already call `into_owned` in `self.to_ndarray_view().into_owned()`, which has the signature:

    pub fn into_owned(self) -> Array<A, D>
    where
        A: Clone,
        S: Data,
    {
        S::into_owned(self)
    }

I guess we are cloning elements anyway.
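For reference, a minimal sketch of the conversion being discussed (not the crate's actual `to_tch` body; it assumes a tch release that provides `Tensor::from_slice`, which older versions call `of_slice`). The point is that `from_slice` copies the element data into libtorch-owned storage anyway, so an extra `deep_clone` on top of `into_owned` shouldn't be needed:

```rust
// Illustrative only: build a tch::Tensor from a borrowed slice plus a shape.
// `from_slice` copies the data into storage owned by libtorch, so the source
// buffer only needs to be borrowed, not deep-cloned beforehand.
use tch::kind::Element;

fn slice_to_tch<T: Element>(data: &[T], shape: &[i64]) -> anyhow::Result<tch::Tensor> {
    let flat = tch::Tensor::from_slice(data); // 1-D copy of `data`
    Ok(flat.reshape(shape))                   // give it the source tensor's dimensions
}
```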
    f.write(response.content)
with zipfile.ZipFile('libtorch.zip', 'r') as zip_ref:
    zip_ref.extractall()
os.environ['LIBTORCH'] = os.path.join(os.getcwd(), f'libtorch')
If the runner is macOS, maybe we should add `export DYLD_LIBRARY_PATH=$LIBTORCH/lib:$DYLD_LIBRARY_PATH`. This fixed linking issues for me.
if 'torch==' in line:
    version = line.split('==')[1].strip()
    break
response = requests.get(f'https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-{version}.zip')
I think an arm64 version of torch 2.3.0 exists as a Python wheel, but if you need to download the zip, this one worked for me (for x86 at least):

    wget https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.3.0%2Bcpu.zip
@AmineDiro Still being a pain 😡
Ok, I think I see the issue (finally got access to the runner output). Setting environment variables in Python will not, I think, set them in the environment across steps.
I also see in the output that we successfully install torch==2.3.0 using pip, so I think we can use the already installed version without needing to download it.
I added the env variable in 87e37ef, hope this works.
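Side note, in case it's useful: from a Python step, the usual way to make a variable visible to later workflow steps is to append a `NAME=value` line to the file GitHub Actions exposes as `GITHUB_ENV`; mutating `os.environ` only affects the current process. A sketch, assuming the same `libtorch` directory as the script above:

```python
import os

# Append to the $GITHUB_ENV file so the variables are exported to subsequent
# steps; setting os.environ here would only affect this Python process.
libtorch_dir = os.path.join(os.getcwd(), 'libtorch')
with open(os.environ['GITHUB_ENV'], 'a') as env_file:
    env_file.write(f'LIBTORCH={libtorch_dir}\n')
    env_file.write(f'DYLD_LIBRARY_PATH={libtorch_dir}/lib\n')
```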
Ohh I see that's the DYLIB export 👍🏼
Code Metrics Report
===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 TOML                    1           74           62            2           10
-------------------------------------------------------------------------------
 Rust                   62        10399         8960          119         1320
 |- Markdown             30          270            0          210           60
 (Total)                          10669         8960          329         1380
===============================================================================
 Total                   63        10473         9022          121         1330
===============================================================================
Solves #191 for `softmax`.

In this PR:
- Added `tch-rs` and updated the contribution guide for how to install `tch`
- Tests under the `testing` feature (see the illustrative sketch after this list). I can add another flag if that's not appropriate
- `binary` ops support in 35af80e
- `gemm` tests in 3bb4f50
- `groupnorm`, `rms` and `layernorm` tests in ef60b16
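A minimal sketch of what this style of tch-backed ground-truth check can look like under the `testing` feature (illustrative only, not the PR's actual test code; `naive_softmax` is a hypothetical stand-in for the op under test, and `Tensor::from_slice` assumes a recent tch release):

```rust
#[cfg(all(test, feature = "testing"))]
mod tch_ground_truth {
    // Hypothetical stand-in for the operation under test.
    fn naive_softmax(xs: &[f32]) -> Vec<f32> {
        let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
        let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
        let sum: f32 = exps.iter().sum();
        exps.iter().map(|e| e / sum).collect()
    }

    #[test]
    fn softmax_matches_tch() {
        let data = [0.1f32, 2.0, -1.5, 0.7, 0.0, 3.2];
        let ours = naive_softmax(&data);

        // Ground truth from tch: softmax over the only dimension.
        let gt = tch::Tensor::from_slice(&data).softmax(0, tch::Kind::Float);

        for (i, a) in ours.iter().enumerate() {
            let b = gt.double_value(&[i as i64]) as f32;
            assert!((a - b).abs() < 1e-5, "mismatch at {i}: {a} vs {b}");
        }
    }
}
```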