Commit

Update optimum/gptq/quantizer.py
Co-authored-by: Ilyas Moutawwakil <[email protected]>
jiqing-feng and IlyasMoutawwakil authored Dec 19, 2024
1 parent f2b9688 commit c446522
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion optimum/gptq/quantizer.py
@@ -665,7 +665,7 @@ def tmp(_, input, output):
             del layer_inputs
             layer_inputs = []
             torch.cuda.empty_cache()
-            if hasattr(torch, "xpu"):
+            if hasattr(torch, "xpu") and torch.xpu.is_available():
                 torch.xpu.empty_cache()

         if self.bits == 4:
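Context for the change: recent PyTorch builds can expose the torch.xpu namespace even when no Intel XPU device is present, so hasattr(torch, "xpu") alone is not a reliable guard before calling torch.xpu.empty_cache(). The commit adds a torch.xpu.is_available() check. Below is a minimal sketch of the same guarded cache-clearing pattern; the helper name empty_device_caches is hypothetical and not part of optimum.

```python
import torch


def empty_device_caches():
    # Hypothetical helper (not in optimum) illustrating the pattern this commit applies.
    # Release cached CUDA allocator memory only if a CUDA device is actually usable.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    # torch.xpu may exist as a namespace without a usable device, so check both
    # hasattr() and is_available() before releasing cached XPU memory.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
```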
